CfnInferenceComponentPropsMixin
- class aws_cdk.mixins_preview.aws_sagemaker.mixins.CfnInferenceComponentPropsMixin(props, *, strategy=None)
Bases: Mixin

Creates an inference component, which is a SageMaker AI hosting object that you can use to deploy a model to an endpoint.
In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.
- See:
- CloudformationResource:
AWS::SageMaker::InferenceComponent
- Mixin:
true
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import CfnTag
from aws_cdk.mixins_preview import mixins
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

cfn_inference_component_props_mixin = sagemaker_mixins.CfnInferenceComponentPropsMixin(sagemaker_mixins.CfnInferenceComponentMixinProps(
    deployment_config=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentDeploymentConfigProperty(
        auto_rollback_configuration=sagemaker_mixins.CfnInferenceComponentPropsMixin.AutoRollbackConfigurationProperty(
            alarms=[sagemaker_mixins.CfnInferenceComponentPropsMixin.AlarmProperty(
                alarm_name="alarmName"
            )]
        ),
        rolling_update_policy=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRollingUpdatePolicyProperty(
            maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
                type="type",
                value=123
            ),
            maximum_execution_timeout_in_seconds=123,
            rollback_maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
                type="type",
                value=123
            ),
            wait_interval_in_seconds=123
        )
    ),
    endpoint_arn="endpointArn",
    endpoint_name="endpointName",
    inference_component_name="inferenceComponentName",
    runtime_config=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRuntimeConfigProperty(
        copy_count=123,
        current_copy_count=123,
        desired_copy_count=123
    ),
    specification=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentSpecificationProperty(
        base_inference_component_name="baseInferenceComponentName",
        compute_resource_requirements=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentComputeResourceRequirementsProperty(
            max_memory_required_in_mb=123,
            min_memory_required_in_mb=123,
            number_of_accelerator_devices_required=123,
            number_of_cpu_cores_required=123
        ),
        container=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(
            artifact_url="artifactUrl",
            deployed_image=sagemaker_mixins.CfnInferenceComponentPropsMixin.DeployedImageProperty(
                resolution_time="resolutionTime",
                resolved_image="resolvedImage",
                specified_image="specifiedImage"
            ),
            environment={
                "environment_key": "environment"
            },
            image="image"
        ),
        model_name="modelName",
        startup_parameters=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentStartupParametersProperty(
            container_startup_health_check_timeout_in_seconds=123,
            model_data_download_timeout_in_seconds=123
        )
    ),
    tags=[CfnTag(
        key="key",
        value="value"
    )],
    variant_name="variantName"
), strategy=mixins.PropertyMergeStrategy.OVERRIDE)
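The class description above notes that, once deployed, the associated model can be invoked directly with the InvokeEndpoint API action. A minimal boto3 sketch of that call path; the endpoint and component names are hypothetical placeholders:

# Sketch: invoking a deployed inference component through the
# SageMaker Runtime InvokeEndpoint API. The endpoint and component
# names are hypothetical placeholders.
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-endpoint",                       # endpoint hosting the component
    InferenceComponentName="my-inference-component",  # routes the request to this component's model
    ContentType="application/json",
    Body=b'{"inputs": "example payload"}',
)
print(response["Body"].read())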
Create a mixin to apply properties to AWS::SageMaker::InferenceComponent.

- Parameters:
  - props (Union[CfnInferenceComponentMixinProps, Dict[str, Any]]) – L1 properties to apply.
  - strategy (Optional[PropertyMergeStrategy]) – (experimental) Strategy for merging nested properties. Default: PropertyMergeStrategy.MERGE
Methods
- apply_to(construct)
Apply the mixin properties to the construct.
- Parameters:
  construct (IConstruct)
- Return type:
  None
- supports(construct)
Check if this mixin supports the given construct.
- Parameters:
  construct (IConstruct)
- Return type:
  bool
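Taken together, a typical flow is to build the mixin, check supports, then call apply_to. A minimal sketch, assuming an existing AWS::SageMaker::InferenceComponent L1 construct named raw_component (not defined here) and illustrative property values:

# Sketch (assumptions noted above): apply mixin properties to an
# existing CfnInferenceComponent construct. `raw_component` and the
# property values are hypothetical placeholders.
from aws_cdk.mixins_preview import mixins
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

props_mixin = sagemaker_mixins.CfnInferenceComponentPropsMixin(
    sagemaker_mixins.CfnInferenceComponentMixinProps(
        endpoint_name="my-endpoint",
        variant_name="AllTraffic",
    ),
    strategy=mixins.PropertyMergeStrategy.MERGE,  # default: deep-merge nested properties
)

# Only apply the mixin where it is supported.
if props_mixin.supports(raw_component):
    props_mixin.apply_to(raw_component)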
Attributes
- CFN_PROPERTY_KEYS = ['deploymentConfig', 'endpointArn', 'endpointName', 'inferenceComponentName', 'runtimeConfig', 'specification', 'tags', 'variantName']
Static Methods
- classmethod is_mixin(x)
(experimental) Checks if x is a Mixin.

- Parameters:
  x (Any) – Any object.
- Return type:
  bool
- Returns:
  true if x is an object created from a class which extends Mixin.
- Stability:
experimental
AlarmProperty
- class CfnInferenceComponentPropsMixin.AlarmProperty(*, alarm_name=None)
Bases:
object

An Amazon CloudWatch alarm configured to monitor metrics on an endpoint.
- Parameters:
  alarm_name (Optional[str]) – The name of a CloudWatch alarm in your account.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

alarm_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.AlarmProperty(
    alarm_name="alarmName"
)
Attributes
- alarm_name
The name of a CloudWatch alarm in your account.
AutoRollbackConfigurationProperty
- class CfnInferenceComponentPropsMixin.AutoRollbackConfigurationProperty(*, alarms=None)
Bases:
object

- Parameters:
  alarms (Union[IResolvable, Sequence[Union[IResolvable, AlarmProperty, Dict[str, Any]]], None])
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

auto_rollback_configuration_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.AutoRollbackConfigurationProperty(
    alarms=[sagemaker_mixins.CfnInferenceComponentPropsMixin.AlarmProperty(
        alarm_name="alarmName"
    )]
)
Attributes
DeployedImageProperty
- class CfnInferenceComponentPropsMixin.DeployedImageProperty(*, resolution_time=None, resolved_image=None, specified_image=None)
Bases:
object

Gets the Amazon EC2 Container Registry path of the docker image of the model that is hosted in this ProductionVariant.

If you used the registry/repository[:tag] form to specify the image path of the primary container when you created the model hosted in this ProductionVariant, the path resolves to a path of the form registry/repository[@digest]. A digest is a hash value that identifies a specific version of an image. For information about Amazon ECR paths, see Pulling an Image in the Amazon ECR User Guide.

- Parameters:
  - resolution_time (Optional[str]) – The date and time when the image path for the model resolved to the ResolvedImage.
  - resolved_image (Optional[str]) – The specific digest path of the image hosted in this ProductionVariant.
  - specified_image (Optional[str]) – The image path you specified when you created the model.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

deployed_image_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.DeployedImageProperty(
    resolution_time="resolutionTime",
    resolved_image="resolvedImage",
    specified_image="specifiedImage"
)
Attributes
- resolution_time
The date and time when the image path for the model resolved to the ResolvedImage.
- resolved_image
The specific digest path of the image hosted in this ProductionVariant.
- specified_image
The image path you specified when you created the model.
InferenceComponentCapacitySizeProperty
- class CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(*, type=None, value=None)
Bases:
object

Specifies the type and size of the endpoint capacity to activate for a rolling deployment or a rollback strategy.

You can specify your batches as either of the following:

- A count of inference component copies
- The overall percentage of your fleet

For a rollback strategy, if you don’t specify the fields in this object, or if you set the Value parameter to 100%, then SageMaker AI uses a blue/green rollback strategy and rolls all traffic back to the blue fleet.

- Parameters:
  - type (Optional[str]) – Specifies the endpoint capacity type. COPY_COUNT - The endpoint activates based on the number of inference component copies. CAPACITY_PERCENT - The endpoint activates based on the specified percentage of capacity.
  - value (Union[int, float, None]) – Defines the capacity size, either as a number of inference component copies or a capacity percentage.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_capacity_size_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
    type="type",
    value=123
)
Attributes
- type
Specifies the endpoint capacity type.
- COPY_COUNT - The endpoint activates based on the number of inference component copies.
- CAPACITY_PERCENT - The endpoint activates based on the specified percentage of capacity.
- value
Defines the capacity size, either as a number of inference component copies or a capacity percentage.
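To make the two capacity types concrete, here is a hedged sketch; the numbers are arbitrary examples, not recommendations:

# Sketch: the two documented capacity types for deployment batches.
# The values are arbitrary examples, not recommendations.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

# Step through the deployment two inference component copies at a time.
by_copy_count = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
    type="COPY_COUNT",
    value=2
)

# Step through the deployment 20% of the fleet at a time.
by_percent = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
    type="CAPACITY_PERCENT",
    value=20
)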
InferenceComponentComputeResourceRequirementsProperty
- class CfnInferenceComponentPropsMixin.InferenceComponentComputeResourceRequirementsProperty(*, max_memory_required_in_mb=None, min_memory_required_in_mb=None, number_of_accelerator_devices_required=None, number_of_cpu_cores_required=None)
Bases:
object

Defines the compute resources to allocate to run a model, plus any adapter models, that you assign to an inference component.
These resources include CPU cores, accelerators, and memory.
- Parameters:
  - max_memory_required_in_mb (Union[int, float, None]) – The maximum MB of memory to allocate to run a model that you assign to an inference component.
  - min_memory_required_in_mb (Union[int, float, None]) – The minimum MB of memory to allocate to run a model that you assign to an inference component.
  - number_of_accelerator_devices_required (Union[int, float, None]) – The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
  - number_of_cpu_cores_required (Union[int, float, None]) – The number of CPU cores to allocate to run a model that you assign to an inference component.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_compute_resource_requirements_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentComputeResourceRequirementsProperty(
    max_memory_required_in_mb=123,
    min_memory_required_in_mb=123,
    number_of_accelerator_devices_required=123,
    number_of_cpu_cores_required=123
)
Attributes
- max_memory_required_in_mb
The maximum MB of memory to allocate to run a model that you assign to an inference component.
- min_memory_required_in_mb
The minimum MB of memory to allocate to run a model that you assign to an inference component.
- number_of_accelerator_devices_required
The number of accelerators to allocate to run a model that you assign to an inference component.
Accelerators include GPUs and AWS Inferentia.
- number_of_cpu_cores_required
The number of CPU cores to allocate to run a model that you assign to an inference component.
InferenceComponentContainerSpecificationProperty
- class CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(*, artifact_url=None, deployed_image=None, environment=None, image=None)
Bases:
object

Defines a container that provides the runtime environment for a model that you deploy with an inference component.

- Parameters:
  - artifact_url (Optional[str]) – The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
  - deployed_image (Union[IResolvable, DeployedImageProperty, Dict[str, Any], None])
  - environment (Union[Mapping[str, str], IResolvable, None]) – The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have a length of up to 1024 characters. We support up to 16 entries in the map.
  - image (Optional[str]) – The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_container_specification_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(
    artifact_url="artifactUrl",
    deployed_image=sagemaker_mixins.CfnInferenceComponentPropsMixin.DeployedImageProperty(
        resolution_time="resolutionTime",
        resolved_image="resolvedImage",
        specified_image="specifiedImage"
    ),
    environment={
        "environment_key": "environment"
    },
    image="image"
)
Attributes
- artifact_url
The Amazon S3 path where the model artifacts, which result from model training, are stored.
This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- deployed_image
- environment
The environment variables to set in the Docker container.
Each key and value in the Environment string-to-string map can have a length of up to 1024 characters. We support up to 16 entries in the map.
- image
The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.
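A hedged, more concrete sketch than the generated placeholder above; the S3 URI and ECR image path are hypothetical:

# Sketch: a container specification with illustrative values.
# The S3 URI and ECR image path are hypothetical placeholders.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

container_spec = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(
    artifact_url="s3://my-bucket/models/my-model/model.tar.gz",  # must be a single .tar.gz archive
    image="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",
    environment={  # up to 16 entries; keys and values up to 1024 characters each
        "LOG_LEVEL": "info"
    }
)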
InferenceComponentDeploymentConfigProperty
- class CfnInferenceComponentPropsMixin.InferenceComponentDeploymentConfigProperty(*, auto_rollback_configuration=None, rolling_update_policy=None)
Bases:
object

The deployment configuration for an endpoint that hosts inference components.
The configuration includes the desired deployment strategy and rollback settings.
- Parameters:
  - auto_rollback_configuration (Union[IResolvable, AutoRollbackConfigurationProperty, Dict[str, Any], None])
  - rolling_update_policy (Union[IResolvable, InferenceComponentRollingUpdatePolicyProperty, Dict[str, Any], None]) – Specifies a rolling deployment strategy for updating a SageMaker AI endpoint.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_deployment_config_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentDeploymentConfigProperty(
    auto_rollback_configuration=sagemaker_mixins.CfnInferenceComponentPropsMixin.AutoRollbackConfigurationProperty(
        alarms=[sagemaker_mixins.CfnInferenceComponentPropsMixin.AlarmProperty(
            alarm_name="alarmName"
        )]
    ),
    rolling_update_policy=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRollingUpdatePolicyProperty(
        maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
            type="type",
            value=123
        ),
        maximum_execution_timeout_in_seconds=123,
        rollback_maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
            type="type",
            value=123
        ),
        wait_interval_in_seconds=123
    )
)
Attributes
- auto_rollback_configuration
- rolling_update_policy
Specifies a rolling deployment strategy for updating a SageMaker AI endpoint.
InferenceComponentRollingUpdatePolicyProperty
- class CfnInferenceComponentPropsMixin.InferenceComponentRollingUpdatePolicyProperty(*, maximum_batch_size=None, maximum_execution_timeout_in_seconds=None, rollback_maximum_batch_size=None, wait_interval_in_seconds=None)
Bases:
object

Specifies a rolling deployment strategy for updating a SageMaker AI inference component.

- Parameters:
  - maximum_batch_size (Union[IResolvable, InferenceComponentCapacitySizeProperty, Dict[str, Any], None]) – The batch size for each rolling step in the deployment process. For each step, SageMaker AI provisions capacity on the new endpoint fleet, routes traffic to that fleet, and terminates capacity on the old endpoint fleet. The value must be between 5% and 50% of the copy count of the inference component.
  - maximum_execution_timeout_in_seconds (Union[int, float, None]) – The time limit for the total deployment. Exceeding this limit causes a timeout.
  - rollback_maximum_batch_size (Union[IResolvable, InferenceComponentCapacitySizeProperty, Dict[str, Any], None]) – The batch size for a rollback to the old endpoint fleet. If this field is absent, the value is set to the default, which is 100% of the total capacity. When the default is used, SageMaker AI provisions the entire capacity of the old fleet at once during rollback.
  - wait_interval_in_seconds (Union[int, float, None]) – The length of the baking period, during which SageMaker AI monitors alarms for each batch on the new fleet.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_rolling_update_policy_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRollingUpdatePolicyProperty(
    maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
        type="type",
        value=123
    ),
    maximum_execution_timeout_in_seconds=123,
    rollback_maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
        type="type",
        value=123
    ),
    wait_interval_in_seconds=123
)
Attributes
- maximum_batch_size
The batch size for each rolling step in the deployment process.
For each step, SageMaker AI provisions capacity on the new endpoint fleet, routes traffic to that fleet, and terminates capacity on the old endpoint fleet. The value must be between 5% and 50% of the copy count of the inference component.
- maximum_execution_timeout_in_seconds
The time limit for the total deployment.
Exceeding this limit causes a timeout.
- rollback_maximum_batch_size
The batch size for a rollback to the old endpoint fleet.
If this field is absent, the value is set to the default, which is 100% of the total capacity. When the default is used, SageMaker AI provisions the entire capacity of the old fleet at once during rollback.
- wait_interval_in_seconds
The length of the baking period, during which SageMaker AI monitors alarms for each batch on the new fleet.
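To make the mechanics concrete, here is a hedged sketch of a rolling update policy with illustrative values, not recommendations: roll out 10% of the fleet per step, bake each batch for 5 minutes, cap the whole deployment at one hour, and roll back at full capacity:

# Sketch: a rolling update policy with illustrative values.
# 10% batches, 5-minute bake per batch, 1-hour overall limit,
# and a full-capacity rollback batch (matching the documented default).
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

CapacitySize = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty

rolling_update_policy = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRollingUpdatePolicyProperty(
    maximum_batch_size=CapacitySize(type="CAPACITY_PERCENT", value=10),  # must be 5%-50% of copy count
    wait_interval_in_seconds=300,               # baking period per batch
    maximum_execution_timeout_in_seconds=3600,  # total deployment time limit
    rollback_maximum_batch_size=CapacitySize(type="CAPACITY_PERCENT", value=100)
)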
InferenceComponentRuntimeConfigProperty
- class CfnInferenceComponentPropsMixin.InferenceComponentRuntimeConfigProperty(*, copy_count=None, current_copy_count=None, desired_copy_count=None)
Bases:
object

Runtime settings for a model that is deployed with an inference component.

- Parameters:
  - copy_count (Union[int, float, None]) – The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
  - current_copy_count (Union[int, float, None]) – The number of runtime copies of the model container that are currently deployed.
  - desired_copy_count (Union[int, float, None]) – The number of runtime copies of the model container that you requested to deploy with the inference component.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_runtime_config_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRuntimeConfigProperty(
    copy_count=123,
    current_copy_count=123,
    desired_copy_count=123
)
Attributes
- copy_count
The number of runtime copies of the model container to deploy with the inference component.
Each copy can serve inference requests.
- current_copy_count
The number of runtime copies of the model container that are currently deployed.
- desired_copy_count
The number of runtime copies of the model container that you requested to deploy with the inference component.
InferenceComponentSpecificationProperty
- class CfnInferenceComponentPropsMixin.InferenceComponentSpecificationProperty(*, base_inference_component_name=None, compute_resource_requirements=None, container=None, model_name=None, startup_parameters=None)
Bases:
object

Details about the resources to deploy with this inference component, including the model, container, and compute resources.

- Parameters:
  - base_inference_component_name (Optional[str]) – The name of an existing inference component that is to contain the inference component that you’re creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
  - compute_resource_requirements (Union[IResolvable, InferenceComponentComputeResourceRequirementsProperty, Dict[str, Any], None]) – The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
  - container (Union[IResolvable, InferenceComponentContainerSpecificationProperty, Dict[str, Any], None]) – Defines a container that provides the runtime environment for a model that you deploy with an inference component.
  - model_name (Optional[str]) – The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.
  - startup_parameters (Union[IResolvable, InferenceComponentStartupParametersProperty, Dict[str, Any], None]) – Settings that take effect while the model container starts up.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_specification_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentSpecificationProperty(
    base_inference_component_name="baseInferenceComponentName",
    compute_resource_requirements=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentComputeResourceRequirementsProperty(
        max_memory_required_in_mb=123,
        min_memory_required_in_mb=123,
        number_of_accelerator_devices_required=123,
        number_of_cpu_cores_required=123
    ),
    container=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(
        artifact_url="artifactUrl",
        deployed_image=sagemaker_mixins.CfnInferenceComponentPropsMixin.DeployedImageProperty(
            resolution_time="resolutionTime",
            resolved_image="resolvedImage",
            specified_image="specifiedImage"
        ),
        environment={
            "environment_key": "environment"
        },
        image="image"
    ),
    model_name="modelName",
    startup_parameters=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentStartupParametersProperty(
        container_startup_health_check_timeout_in_seconds=123,
        model_data_download_timeout_in_seconds=123
    )
)
Attributes
- base_inference_component_name
The name of an existing inference component that is to contain the inference component that you’re creating with your request.
Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component.
When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type.

Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt. A sketch of an adapter specification follows the attribute list below.
- compute_resource_requirements
The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component.
Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- container
Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- model_name
The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.
- startup_parameters
Settings that take effect while the model container starts up.
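As a hedged sketch of the adapter pattern described under base_inference_component_name (all names and the S3 URI are hypothetical): an adapter component names its base component, points its container at the adapter artifacts via ArtifactUrl, and omits compute_resource_requirements because it borrows the base component's resources.

# Sketch: a specification for an adapter inference component.
# All names and the S3 URI are hypothetical placeholders.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

adapter_spec = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentSpecificationProperty(
    base_inference_component_name="base-foundation-model-component",
    container=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(
        artifact_url="s3://my-bucket/adapters/my-adapter.tar.gz"  # location of the adapter artifacts
    )
    # compute_resource_requirements is omitted: an adapter inference
    # component uses the compute resources of its base inference component.
)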
InferenceComponentStartupParametersProperty
- class CfnInferenceComponentPropsMixin.InferenceComponentStartupParametersProperty(*, container_startup_health_check_timeout_in_seconds=None, model_data_download_timeout_in_seconds=None)
Bases:
object

Settings that take effect while the model container starts up.

- Parameters:
  - container_startup_health_check_timeout_in_seconds (Union[int, float, None]) – The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
  - model_data_download_timeout_in_seconds (Union[int, float, None]) – The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_startup_parameters_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentStartupParametersProperty(
    container_startup_health_check_timeout_in_seconds=123,
    model_data_download_timeout_in_seconds=123
)
Attributes
- container_startup_health_check_timeout_in_seconds
The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting.
For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests .
- model_data_download_timeout_in_seconds
The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.