AWS::SageMaker::InferenceComponent
Creates an inference component, which is a SageMaker AI hosting object that you can use to deploy a model to an endpoint. In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.
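Once an inference component is deployed, a caller routes a request to its model by naming the component in the InvokeEndpoint call. The sketch below builds the request parameters; the endpoint and component names are hypothetical placeholders, and the boto3 call itself is shown in comments because it requires configured AWS credentials:

```python
import json

# Hypothetical names -- substitute the endpoint and inference component you deployed.
request = {
    "EndpointName": "my-endpoint",
    # Routes the request to the model hosted by this specific inference component.
    "InferenceComponentName": "my-inference-component",
    "ContentType": "application/json",
    "Body": json.dumps({"inputs": "Hello"}),
}

# With boto3 (assumes credentials are configured):
#   import boto3
#   runtime = boto3.client("sagemaker-runtime")
#   response = runtime.invoke_endpoint(**request)
#   print(response["Body"].read())
```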
Syntax
To declare this entity in your AWS CloudFormation template, use the following syntax:
JSON
{
  "Type" : "AWS::SageMaker::InferenceComponent",
  "Properties" : {
    "DeploymentConfig" : InferenceComponentDeploymentConfig,
    "EndpointArn" : String,
    "EndpointName" : String,
    "InferenceComponentName" : String,
    "RuntimeConfig" : InferenceComponentRuntimeConfig,
    "Specification" : InferenceComponentSpecification,
    "Tags" : [ Tag, ... ],
    "VariantName" : String
  }
}
YAML
Type: AWS::SageMaker::InferenceComponent
Properties:
  DeploymentConfig: InferenceComponentDeploymentConfig
  EndpointArn: String
  EndpointName: String
  InferenceComponentName: String
  RuntimeConfig: InferenceComponentRuntimeConfig
  Specification: InferenceComponentSpecification
  Tags:
    - Tag
  VariantName: String
Properties
DeploymentConfig
The deployment configuration for an endpoint, which contains the desired deployment strategy and rollback configurations.
Required: No
Type: InferenceComponentDeploymentConfig
Update requires: No interruption
EndpointArn
The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
Required: No
Type: String
Minimum: 1
Maximum: 256
Update requires: No interruption
EndpointName
The name of the endpoint that hosts the inference component.
Required: Yes
Type: String
Pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9])*$
Maximum: 63
Update requires: No interruption
InferenceComponentName
The name of the inference component.
Required: No
Type: String
Pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9])*$
Maximum: 63
Update requires: No interruption
RuntimeConfig
The runtime settings for the model that the inference component hosts, such as the number of copies of the model container to deploy.
Required: No
Type: InferenceComponentRuntimeConfig
Update requires: No interruption
Specification
The details of the model to deploy with the inference component, including how it utilizes the resources that the endpoint hosts, such as CPU cores, accelerators, and memory.
Required: Yes
Type: InferenceComponentSpecification
Update requires: No interruption
Tags
An array of key-value pairs to apply to this resource.
Required: No
Type: Array of Tag
Maximum: 50
Update requires: No interruption
VariantName
The name of the production variant that hosts the inference component.
Required: No
Type: String
Pattern: ^[a-zA-Z0-9](-*[a-zA-Z0-9])*$
Maximum: 63
Update requires: No interruption
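As a concrete illustration of the properties above, the following YAML fragment declares an inference component on an existing endpoint. It is a minimal sketch: all names and resource numbers are hypothetical placeholders, and the nested structures follow the InferenceComponentSpecification and InferenceComponentRuntimeConfig property types.

```yaml
MyInferenceComponent:
  Type: AWS::SageMaker::InferenceComponent
  Properties:
    InferenceComponentName: my-inference-component
    EndpointName: my-endpoint          # the endpoint that hosts the component
    VariantName: AllTraffic            # the production variant on that endpoint
    Specification:
      ModelName: my-model              # an existing SageMaker AI model
      ComputeResourceRequirements:
        NumberOfCpuCoresRequired: 2
        MinMemoryRequiredInMb: 1024
    RuntimeConfig:
      CopyCount: 1                     # number of model container copies to deploy
```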
Return values
Ref
When you pass the logical ID of this resource to the intrinsic Ref function, Ref returns the Amazon Resource Name (ARN) of the inference component.
For more information about using the Ref function, see Ref.
Fn::GetAtt
The Fn::GetAtt intrinsic function returns a value for a specified attribute of this type. The following are the available attributes and sample return values.
For more information about using the Fn::GetAtt intrinsic function, see Fn::GetAtt.
CreationTime
The time when the inference component was created.
FailureReason
The reason the inference component is in a failed state, if applicable.
InferenceComponentArn
The Amazon Resource Name (ARN) of the inference component.
InferenceComponentStatus
The status of the inference component.
LastModifiedTime
The time when the inference component was last updated.
RuntimeConfig.CurrentCopyCount
The number of runtime copies of the model container that are currently deployed.
RuntimeConfig.DesiredCopyCount
The number of runtime copies of the model container that you requested to deploy with the inference component.
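In a template, these return values can be surfaced through an Outputs section. A minimal sketch, assuming the resource is declared with the hypothetical logical ID MyInferenceComponent:

```yaml
Outputs:
  InferenceComponentArn:
    # Ref also returns the ARN for this resource type.
    Value: !Ref MyInferenceComponent
  InferenceComponentStatus:
    Value: !GetAtt MyInferenceComponent.InferenceComponentStatus
  CurrentCopies:
    Value: !GetAtt MyInferenceComponent.RuntimeConfig.CurrentCopyCount
```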