
Class CfnInferenceComponentPropsMixin

Creates an inference component, which is a SageMaker AI hosting object that you can use to deploy a model to an endpoint.

Inheritance
object
Mixin
CfnInferenceComponentPropsMixin
Implements
IMixin
Inherited Members
Mixin.IsMixin(object)
Namespace: Amazon.CDK.Mixins.Preview.AWS.SageMaker.Mixins
Assembly: Amazon.CDK.Mixins.Preview.dll
Syntax (csharp)
public class CfnInferenceComponentPropsMixin : Mixin, IMixin
Syntax (vb)
Public Class CfnInferenceComponentPropsMixin
    Inherits Mixin
    Implements IMixin
Remarks

In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.

See: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-inferencecomponent.html

CloudformationResource: AWS::SageMaker::InferenceComponent

Mixin: true

ExampleMetadata: fixture=_generated

Examples
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
using System.Collections.Generic;
using Amazon.CDK.Mixins.Preview.Mixins;
using Amazon.CDK.Mixins.Preview.AWS.SageMaker.Mixins;

var cfnInferenceComponentPropsMixin = new CfnInferenceComponentPropsMixin(new CfnInferenceComponentMixinProps {
    DeploymentConfig = new InferenceComponentDeploymentConfigProperty {
        AutoRollbackConfiguration = new AutoRollbackConfigurationProperty {
            Alarms = new [] { new AlarmProperty {
                AlarmName = "alarmName"
            } }
        },
        RollingUpdatePolicy = new InferenceComponentRollingUpdatePolicyProperty {
            MaximumBatchSize = new InferenceComponentCapacitySizeProperty {
                Type = "type",
                Value = 123
            },
            MaximumExecutionTimeoutInSeconds = 123,
            RollbackMaximumBatchSize = new InferenceComponentCapacitySizeProperty {
                Type = "type",
                Value = 123
            },
            WaitIntervalInSeconds = 123
        }
    },
    EndpointArn = "endpointArn",
    EndpointName = "endpointName",
    InferenceComponentName = "inferenceComponentName",
    RuntimeConfig = new InferenceComponentRuntimeConfigProperty {
        CopyCount = 123,
        CurrentCopyCount = 123,
        DesiredCopyCount = 123
    },
    Specification = new InferenceComponentSpecificationProperty {
        BaseInferenceComponentName = "baseInferenceComponentName",
        ComputeResourceRequirements = new InferenceComponentComputeResourceRequirementsProperty {
            MaxMemoryRequiredInMb = 123,
            MinMemoryRequiredInMb = 123,
            NumberOfAcceleratorDevicesRequired = 123,
            NumberOfCpuCoresRequired = 123
        },
        Container = new InferenceComponentContainerSpecificationProperty {
            ArtifactUrl = "artifactUrl",
            DeployedImage = new DeployedImageProperty {
                ResolutionTime = "resolutionTime",
                ResolvedImage = "resolvedImage",
                SpecifiedImage = "specifiedImage"
            },
            Environment = new Dictionary<string, string> {
                { "environmentKey", "environment" }
            },
            Image = "image"
        },
        ModelName = "modelName",
        StartupParameters = new InferenceComponentStartupParametersProperty {
            ContainerStartupHealthCheckTimeoutInSeconds = 123,
            ModelDataDownloadTimeoutInSeconds = 123
        }
    },
    Tags = new [] { new CfnTag {
        Key = "key",
        Value = "value"
    } },
    VariantName = "variantName"
}, new CfnPropertyMixinOptions {
    Strategy = PropertyMergeStrategy.OVERRIDE
});
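
The remarks note that, once an inference component is deployed, the associated model can be invoked through the InvokeEndpoint API. The sketch below shows one way to do that with the AWS SDK for .NET (Amazon.SageMakerRuntime); the endpoint name, component name, and payload are placeholders and are not values produced by this mixin.

// A minimal sketch of invoking the model behind an inference component using
// the AWS SDK for .NET. All names and the payload below are placeholders.
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Amazon.SageMakerRuntime;
using Amazon.SageMakerRuntime.Model;

public static class InvokeInferenceComponentExample
{
    public static async Task RunAsync()
    {
        using var client = new AmazonSageMakerRuntimeClient();

        var response = await client.InvokeEndpointAsync(new InvokeEndpointRequest
        {
            EndpointName = "endpointName",
            // Routes the request to the model hosted by this inference component.
            InferenceComponentName = "inferenceComponentName",
            ContentType = "application/json",
            Body = new MemoryStream(Encoding.UTF8.GetBytes("{\"inputs\":\"hello\"}"))
        });

        using var reader = new StreamReader(response.Body);
        Console.WriteLine(await reader.ReadToEndAsync());
    }
}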

Synopsis

Constructors

CfnInferenceComponentPropsMixin(ICfnInferenceComponentMixinProps, ICfnPropertyMixinOptions?)

Create a mixin to apply properties to AWS::SageMaker::InferenceComponent.

Properties

CFN_PROPERTY_KEYS

The CloudFormation property keys that this mixin can apply to the AWS::SageMaker::InferenceComponent resource.

Props

The L1 properties that this mixin applies.

Strategy

The merge strategy used when applying the mixin properties.

Methods

ApplyTo(IConstruct)

Apply the mixin properties to the construct.

Supports(IConstruct)

Check if this mixin supports the given construct.

Constructors

CfnInferenceComponentPropsMixin(ICfnInferenceComponentMixinProps, ICfnPropertyMixinOptions?)

Create a mixin to apply properties to AWS::SageMaker::InferenceComponent.

public CfnInferenceComponentPropsMixin(ICfnInferenceComponentMixinProps props, ICfnPropertyMixinOptions? options = null)
Parameters
props ICfnInferenceComponentMixinProps

L1 properties to apply.

options ICfnPropertyMixinOptions

Mixin options.
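
For reference, a minimal construction needs only the props argument; options defaults to null, in which case the mixin falls back to its default merge strategy (a sketch under that assumption).

// Minimal construction: only the L1 properties of interest are supplied, and
// the options argument is omitted (it defaults to null, so the mixin's
// default merge strategy applies).
var mixin = new CfnInferenceComponentPropsMixin(new CfnInferenceComponentMixinProps
{
    EndpointName = "endpointName",
    InferenceComponentName = "inferenceComponentName"
});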

Properties

CFN_PROPERTY_KEYS

The CloudFormation property keys that this mixin can apply to the AWS::SageMaker::InferenceComponent resource.

protected static string[] CFN_PROPERTY_KEYS { get; }
Property Value

string[]

Props

The L1 properties that this mixin applies.

protected virtual ICfnInferenceComponentMixinProps Props { get; }
Property Value

ICfnInferenceComponentMixinProps

Strategy

The merge strategy used when applying the mixin properties.

protected virtual PropertyMergeStrategy Strategy { get; }
Property Value

PropertyMergeStrategy

Methods

ApplyTo(IConstruct)

Apply the mixin properties to the construct.

public override IConstruct ApplyTo(IConstruct construct)
Parameters
construct IConstruct

The construct to apply the mixin properties to.
Returns

IConstruct

The construct the mixin properties were applied to.

Overrides
Mixin.ApplyTo(IConstruct)
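
A short usage sketch, assuming an existing AWS::SageMaker::InferenceComponent L1 construct named cfnInferenceComponent (hypothetical, defined elsewhere in the stack); Supports(IConstruct) is used as a guard before applying.

// Guarded application of the mixin. `cfnInferenceComponent` is assumed to be
// an existing AWS::SageMaker::InferenceComponent L1 construct in this stack.
var mixin = new CfnInferenceComponentPropsMixin(new CfnInferenceComponentMixinProps
{
    RuntimeConfig = new InferenceComponentRuntimeConfigProperty { CopyCount = 2 }
});

if (mixin.Supports(cfnInferenceComponent))
{
    mixin.ApplyTo(cfnInferenceComponent);
}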

Supports(IConstruct)

Check if this mixin supports the given construct.

public override bool Supports(IConstruct construct)
Parameters
construct IConstruct

The construct to check for compatibility.
Returns

bool

true if this mixin can be applied to the given construct; otherwise, false.

Overrides
Mixin.Supports(IConstruct)

Implements

IMixin