class CfnInferenceComponentPropsMixin
| Language | Type name |
|---|---|
| .NET | Amazon.CDK.Mixins.Preview.AWS.SageMaker.Mixins.CfnInferenceComponentPropsMixin |
| Go | github.com/aws/aws-cdk-go/awscdkmixinspreview/v2/awssagemaker/mixins#CfnInferenceComponentPropsMixin |
| Java | software.amazon.awscdk.mixins.preview.services.sagemaker.mixins.CfnInferenceComponentPropsMixin |
| Python | aws_cdk.mixins_preview.aws_sagemaker.mixins.CfnInferenceComponentPropsMixin |
| TypeScript | @aws-cdk/mixins-preview » aws_sagemaker » mixins » CfnInferenceComponentPropsMixin |
Implements
IMixin
Extends
Mixin
Creates an inference component, which is a SageMaker AI hosting object that you can use to deploy a model to an endpoint.
In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.
Example
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import { mixins } from '@aws-cdk/mixins-preview';
import { mixins as sagemaker_mixins } from '@aws-cdk/mixins-preview/aws-sagemaker';
const cfnInferenceComponentPropsMixin = new sagemaker_mixins.CfnInferenceComponentPropsMixin({
deploymentConfig: {
autoRollbackConfiguration: {
alarms: [{
alarmName: 'alarmName',
}],
},
rollingUpdatePolicy: {
maximumBatchSize: {
type: 'type',
value: 123,
},
maximumExecutionTimeoutInSeconds: 123,
rollbackMaximumBatchSize: {
type: 'type',
value: 123,
},
waitIntervalInSeconds: 123,
},
},
endpointArn: 'endpointArn',
endpointName: 'endpointName',
inferenceComponentName: 'inferenceComponentName',
runtimeConfig: {
copyCount: 123,
currentCopyCount: 123,
desiredCopyCount: 123,
},
specification: {
baseInferenceComponentName: 'baseInferenceComponentName',
computeResourceRequirements: {
maxMemoryRequiredInMb: 123,
minMemoryRequiredInMb: 123,
numberOfAcceleratorDevicesRequired: 123,
numberOfCpuCoresRequired: 123,
},
container: {
artifactUrl: 'artifactUrl',
deployedImage: {
resolutionTime: 'resolutionTime',
resolvedImage: 'resolvedImage',
specifiedImage: 'specifiedImage',
},
environment: {
environmentKey: 'environment',
},
image: 'image',
},
modelName: 'modelName',
startupParameters: {
containerStartupHealthCheckTimeoutInSeconds: 123,
modelDataDownloadTimeoutInSeconds: 123,
},
},
tags: [{
key: 'key',
value: 'value',
}],
variantName: 'variantName',
}, /* all optional props */ {
strategy: mixins.PropertyMergeStrategy.OVERRIDE,
});
Initializer
new CfnInferenceComponentPropsMixin(props: CfnInferenceComponentMixinProps, options?: CfnPropertyMixinOptions)
Parameters
- props CfnInferenceComponentMixinProps — L1 properties to apply.
- options CfnPropertyMixinOptions — Mixin options.
Create a mixin to apply properties to AWS::SageMaker::InferenceComponent.
Properties
| Name | Type | Description |
|---|---|---|
| props | CfnInferenceComponentMixinProps | |
| strategy | PropertyMergeStrategy | |
| static CFN_PROPERTY_KEYS | string[] | |
props
Type:
CfnInferenceComponentMixinProps
strategy
Type:
PropertyMergeStrategy
static CFN_PROPERTY_KEYS
Type:
string[]
Methods
| Name | Description |
|---|---|
| applyTo(construct) | Apply the mixin properties to the construct. |
| supports(construct) | Check if this mixin supports the given construct. |
applyTo(construct)
public applyTo(construct: IConstruct): IConstruct
Parameters
- construct
IConstruct
Returns
IConstruct
Apply the mixin properties to the construct.
supports(construct)
public supports(construct: IConstruct): boolean
Parameters
- construct
IConstruct
Returns
boolean
Check if this mixin supports the given construct.
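The supports/applyTo contract above can be illustrated with a minimal, self-contained sketch. Everything here is a hypothetical stand-in for illustration: `FakeConstruct`, `PropsMixinSketch`, and the OVERRIDE-style merge are not the real `@aws-cdk/mixins-preview` types, which operate on `IConstruct` trees.

```typescript
// Hypothetical stand-in for a CFN L1 construct; the real mixin targets
// IConstruct instances whose underlying resource type matches.
interface FakeConstruct {
  cfnResourceType?: string;
  props: Record<string, unknown>;
}

// Sketch of the mixin contract: supports() gates applicability,
// applyTo() merges the mixin's props and returns the same construct.
class PropsMixinSketch {
  constructor(private readonly newProps: Record<string, unknown>) {}

  // Only constructs backed by AWS::SageMaker::InferenceComponent qualify.
  supports(construct: FakeConstruct): boolean {
    return construct.cfnResourceType === 'AWS::SageMaker::InferenceComponent';
  }

  // Mirrors the IConstruct-in / IConstruct-out signature of applyTo().
  applyTo(construct: FakeConstruct): FakeConstruct {
    if (!this.supports(construct)) {
      throw new Error('Mixin does not support this construct');
    }
    // OVERRIDE-style merge: mixin props win over existing values.
    construct.props = { ...construct.props, ...this.newProps };
    return construct;
  }
}
```

In the real library, the strategy option (e.g. `PropertyMergeStrategy.OVERRIDE`, as shown in the example above) controls how the mixin's props combine with any already present on the construct.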
