CfnModelPropsMixin
- class aws_cdk.mixins_preview.aws_sagemaker.mixins.CfnModelPropsMixin(props, *, strategy=None)
Bases:
Mixin

Use the AWS::SageMaker::Model resource to create a model to host at an Amazon SageMaker endpoint. For more information, see Deploying a Model on Amazon SageMaker Hosting Services in the Amazon SageMaker Developer Guide.
- See:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-model.html
- CloudformationResource:
AWS::SageMaker::Model
- Mixin:
true
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview import mixins
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

# environment: Any

cfn_model_props_mixin = sagemaker_mixins.CfnModelPropsMixin(sagemaker_mixins.CfnModelMixinProps(
    containers=[sagemaker_mixins.CfnModelPropsMixin.ContainerDefinitionProperty(
        container_hostname="containerHostname",
        environment=environment,
        image="image",
        image_config=sagemaker_mixins.CfnModelPropsMixin.ImageConfigProperty(
            repository_access_mode="repositoryAccessMode",
            repository_auth_config=sagemaker_mixins.CfnModelPropsMixin.RepositoryAuthConfigProperty(
                repository_credentials_provider_arn="repositoryCredentialsProviderArn"
            )
        ),
        inference_specification_name="inferenceSpecificationName",
        mode="mode",
        model_data_source=sagemaker_mixins.CfnModelPropsMixin.ModelDataSourceProperty(
            s3_data_source=sagemaker_mixins.CfnModelPropsMixin.S3DataSourceProperty(
                compression_type="compressionType",
                hub_access_config=sagemaker_mixins.CfnModelPropsMixin.HubAccessConfigProperty(
                    hub_content_arn="hubContentArn"
                ),
                model_access_config=sagemaker_mixins.CfnModelPropsMixin.ModelAccessConfigProperty(
                    accept_eula=False
                ),
                s3_data_type="s3DataType",
                s3_uri="s3Uri"
            )
        ),
        model_data_url="modelDataUrl",
        model_package_name="modelPackageName",
        multi_model_config=sagemaker_mixins.CfnModelPropsMixin.MultiModelConfigProperty(
            model_cache_setting="modelCacheSetting"
        )
    )],
    enable_network_isolation=False,
    execution_role_arn="executionRoleArn",
    inference_execution_config=sagemaker_mixins.CfnModelPropsMixin.InferenceExecutionConfigProperty(
        mode="mode"
    ),
    model_name="modelName",
    primary_container=sagemaker_mixins.CfnModelPropsMixin.ContainerDefinitionProperty(
        container_hostname="containerHostname",
        environment=environment,
        image="image",
        image_config=sagemaker_mixins.CfnModelPropsMixin.ImageConfigProperty(
            repository_access_mode="repositoryAccessMode",
            repository_auth_config=sagemaker_mixins.CfnModelPropsMixin.RepositoryAuthConfigProperty(
                repository_credentials_provider_arn="repositoryCredentialsProviderArn"
            )
        ),
        inference_specification_name="inferenceSpecificationName",
        mode="mode",
        model_data_source=sagemaker_mixins.CfnModelPropsMixin.ModelDataSourceProperty(
            s3_data_source=sagemaker_mixins.CfnModelPropsMixin.S3DataSourceProperty(
                compression_type="compressionType",
                hub_access_config=sagemaker_mixins.CfnModelPropsMixin.HubAccessConfigProperty(
                    hub_content_arn="hubContentArn"
                ),
                model_access_config=sagemaker_mixins.CfnModelPropsMixin.ModelAccessConfigProperty(
                    accept_eula=False
                ),
                s3_data_type="s3DataType",
                s3_uri="s3Uri"
            )
        ),
        model_data_url="modelDataUrl",
        model_package_name="modelPackageName",
        multi_model_config=sagemaker_mixins.CfnModelPropsMixin.MultiModelConfigProperty(
            model_cache_setting="modelCacheSetting"
        )
    ),
    tags=[CfnTag(
        key="key",
        value="value"
    )],
    vpc_config=sagemaker_mixins.CfnModelPropsMixin.VpcConfigProperty(
        security_group_ids=["securityGroupIds"],
        subnets=["subnets"]
    )
), strategy=mixins.PropertyMergeStrategy.OVERRIDE)
Create a mixin to apply properties to AWS::SageMaker::Model.
- Parameters:
props (Union[CfnModelMixinProps, Dict[str, Any]]) – L1 properties to apply.
strategy (Optional[PropertyMergeStrategy]) – (experimental) Strategy for merging nested properties. Default: PropertyMergeStrategy.MERGE
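The difference between the MERGE and OVERRIDE strategies can be illustrated with a minimal plain-Python sketch. The deep_merge helper below is hypothetical and only illustrates the idea of merging nested properties; it is not the CDK's implementation of PropertyMergeStrategy:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base` (MERGE-style semantics)."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            # Nested dicts are combined rather than replaced.
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

existing = {"modelName": "m1", "vpcConfig": {"subnets": ["subnet-a"]}}
incoming = {"vpcConfig": {"securityGroupIds": ["sg-1"]}}

# MERGE-style: nested properties are combined.
merged = deep_merge(existing, incoming)

# OVERRIDE-style: the incoming value replaces the nested property wholesale.
overridden = {**existing, **incoming}
```

With MERGE-style semantics, vpcConfig keeps its existing subnets and gains the new security group IDs; with OVERRIDE-style semantics, the incoming vpcConfig replaces the existing one entirely.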
Methods
- apply_to(construct)
Apply the mixin properties to the construct.
- Parameters:
construct (IConstruct)
- Return type:
None
- supports(construct)
Check if this mixin supports the given construct.
- Parameters:
construct (IConstruct)
- Return type:
bool
Attributes
- CFN_PROPERTY_KEYS = ['containers', 'enableNetworkIsolation', 'executionRoleArn', 'inferenceExecutionConfig', 'modelName', 'primaryContainer', 'tags', 'vpcConfig']
Static Methods
- classmethod is_mixin(x)
(experimental) Checks if x is a Mixin.
- Parameters:
x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Mixin.
- Stability:
experimental
ContainerDefinitionProperty
- class CfnModelPropsMixin.ContainerDefinitionProperty(*, container_hostname=None, environment=None, image=None, image_config=None, inference_specification_name=None, mode=None, model_data_source=None, model_data_url=None, model_package_name=None, multi_model_config=None)
Bases:
object

Describes the container, as part of the model definition.
- Parameters:
container_hostname (Optional[str]) – This parameter is ignored for models that contain only a PrimaryContainer. When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
environment (Any) – The environment variables to set in the Docker container. Don't include any sensitive data in your environment variables. The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.
image (Optional[str]) – The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker. The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
image_config (Union[IResolvable, ImageConfigProperty, Dict[str, Any], None]) – Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers. The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
inference_specification_name (Optional[str]) – The inference specification name in the model package version.
mode (Optional[str]) – Whether the container hosts a single model or multiple models.
model_data_source (Union[IResolvable, ModelDataSourceProperty, Dict[str, Any], None]) – Specifies the location of ML model data to deploy. Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
model_data_url (Optional[str]) – The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters. The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating. If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide. If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
model_package_name (Optional[str]) – The name or Amazon Resource Name (ARN) of the model package to use to create the model.
multi_model_config (Union[IResolvable, MultiModelConfigProperty, Dict[str, Any], None]) – Specifies additional configuration for multi-model endpoints.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

# environment: Any

container_definition_property = sagemaker_mixins.CfnModelPropsMixin.ContainerDefinitionProperty(
    container_hostname="containerHostname",
    environment=environment,
    image="image",
    image_config=sagemaker_mixins.CfnModelPropsMixin.ImageConfigProperty(
        repository_access_mode="repositoryAccessMode",
        repository_auth_config=sagemaker_mixins.CfnModelPropsMixin.RepositoryAuthConfigProperty(
            repository_credentials_provider_arn="repositoryCredentialsProviderArn"
        )
    ),
    inference_specification_name="inferenceSpecificationName",
    mode="mode",
    model_data_source=sagemaker_mixins.CfnModelPropsMixin.ModelDataSourceProperty(
        s3_data_source=sagemaker_mixins.CfnModelPropsMixin.S3DataSourceProperty(
            compression_type="compressionType",
            hub_access_config=sagemaker_mixins.CfnModelPropsMixin.HubAccessConfigProperty(
                hub_content_arn="hubContentArn"
            ),
            model_access_config=sagemaker_mixins.CfnModelPropsMixin.ModelAccessConfigProperty(
                accept_eula=False
            ),
            s3_data_type="s3DataType",
            s3_uri="s3Uri"
        )
    ),
    model_data_url="modelDataUrl",
    model_package_name="modelPackageName",
    multi_model_config=sagemaker_mixins.CfnModelPropsMixin.MultiModelConfigProperty(
        model_cache_setting="modelCacheSetting"
    )
)
Attributes
- container_hostname
This parameter is ignored for models that contain only a PrimaryContainer.
When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.
- environment
The environment variables to set in the Docker container. Don’t include any sensitive data in your environment variables.
The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.
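These size limits can be checked before deployment. The validator below is a hypothetical sketch of the documented limits (1024 bytes per key or value, 32 KB per map), not part of the CDK or SageMaker API:

```python
def validate_environment(env: dict) -> None:
    """Check the documented SageMaker limits on a container's Environment map."""
    total = 0
    for key, value in env.items():
        k = str(key).encode("utf-8")
        v = str(value).encode("utf-8")
        if len(k) > 1024 or len(v) > 1024:
            raise ValueError(f"key or value exceeds 1024 bytes: {key!r}")
        total += len(k) + len(v)
    if total > 32 * 1024:
        raise ValueError(f"Environment map is {total} bytes; the limit is 32 KB")

# Passes silently for a small map.
validate_environment({"MODEL_SERVER_WORKERS": "2"})
```

Note that when multiple containers are passed to CreateModel, the 32 KB limit applies to all of their maps combined, which a per-container check like this one does not capture.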
- image
The path where inference code is stored.
This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
The model artifacts in an Amazon S3 bucket and the Docker image for the inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
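As a rough illustration of the two supported image path formats, the regular expression below is a simplified, hypothetical check; the authoritative naming rules are in the SageMaker and Amazon ECR documentation:

```python
import re

# Simplified pattern for the two documented formats:
#   registry/repository[:tag]  and  registry/repository[@digest]
# Real ECR naming rules are stricter than this sketch.
IMAGE_PATH = re.compile(
    r"^[^/\s]+/[^\s:@]+"          # registry/repository (repository may be nested)
    r"(?::[\w][\w.\-]*"           # optional :tag
    r"|@sha256:[0-9a-f]{64})?$"   # or optional @digest
)

ok = IMAGE_PATH.match(
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-model:latest"
)
```

The account ID, region, and repository name in the example string are placeholders.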
- image_config
Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).
For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.
The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.
- inference_specification_name
The inference specification name in the model package version.
- mode
Whether the container hosts a single model or multiple models.
- model_data_source
Specifies the location of ML model data to deploy.
Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
- model_data_url
The S3 path where the model artifacts, which result from model training, are stored.
This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are creating.
If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.
If you use a built-in algorithm to create a model, SageMaker requires that you provide an S3 path to the model artifacts in ModelDataUrl.
- model_package_name
The name or Amazon Resource Name (ARN) of the model package to use to create the model.
- multi_model_config
Specifies additional configuration for multi-model endpoints.
HubAccessConfigProperty
- class CfnModelPropsMixin.HubAccessConfigProperty(*, hub_content_arn=None)
Bases:
object

The configuration for a private hub model reference that points to a public SageMaker JumpStart model.
For more information about private hubs, see Private curated hubs for foundation model access control in JumpStart .
- Parameters:
hub_content_arn (Optional[str]) – The ARN of your private model hub content. This should be a ModelReference resource type that points to a SageMaker JumpStart public hub model.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

hub_access_config_property = sagemaker_mixins.CfnModelPropsMixin.HubAccessConfigProperty(
    hub_content_arn="hubContentArn"
)
Attributes
- hub_content_arn
The ARN of your private model hub content.
This should be a ModelReference resource type that points to a SageMaker JumpStart public hub model.
ImageConfigProperty
- class CfnModelPropsMixin.ImageConfigProperty(*, repository_access_mode=None, repository_auth_config=None)
Bases:
object

Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).
- Parameters:
repository_access_mode (Optional[str]) – Set this to one of the following values: - Platform – The model image is hosted in Amazon ECR. - Vpc – The model image is hosted in a private Docker registry in your VPC.
repository_auth_config (Union[IResolvable, RepositoryAuthConfigProperty, Dict[str, Any], None]) – (Optional) Specifies an authentication configuration for the private Docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

image_config_property = sagemaker_mixins.CfnModelPropsMixin.ImageConfigProperty(
    repository_access_mode="repositoryAccessMode",
    repository_auth_config=sagemaker_mixins.CfnModelPropsMixin.RepositoryAuthConfigProperty(
        repository_credentials_provider_arn="repositoryCredentialsProviderArn"
    )
)
Attributes
- repository_access_mode
Set this to one of the following values:
Platform – The model image is hosted in Amazon ECR.
Vpc – The model image is hosted in a private Docker registry in your VPC.
- See:
- repository_auth_config
(Optional) Specifies an authentication configuration for the private Docker registry where your model image is hosted.
Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
InferenceExecutionConfigProperty
- class CfnModelPropsMixin.InferenceExecutionConfigProperty(*, mode=None)
Bases:
object

Specifies details about how containers in a multi-container endpoint are run.
- Parameters:
mode (Optional[str]) – How containers in a multi-container endpoint are run. The following values are valid: - Serial – Containers run as a serial pipeline. - Direct – Only the individual container that you specify is run.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_execution_config_property = sagemaker_mixins.CfnModelPropsMixin.InferenceExecutionConfigProperty(
    mode="mode"
)
Attributes
- mode
How containers in a multi-container endpoint are run. The following values are valid:
Serial – Containers run as a serial pipeline.
Direct – Only the individual container that you specify is run.
ModelAccessConfigProperty
- class CfnModelPropsMixin.ModelAccessConfigProperty(*, accept_eula=None)
Bases:
object

The access configuration file to control access to the ML model.
You can explicitly accept the model end-user license agreement (EULA) within the ModelAccessConfig.
If you are a JumpStart user, see the End-user license agreements section for more details on accepting the EULA.
If you are an AutoML user, see the Optional Parameters section of Create an AutoML job to fine-tune text generation models using the API for details on how to set the EULA acceptance when fine-tuning a model using the AutoML API.
- Parameters:
accept_eula (Union[bool, IResolvable, None]) – Specifies agreement to the model end-user license agreement (EULA). The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

model_access_config_property = sagemaker_mixins.CfnModelPropsMixin.ModelAccessConfigProperty(
    accept_eula=False
)
Attributes
- accept_eula
Specifies agreement to the model end-user license agreement (EULA).
The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
ModelDataSourceProperty
- class CfnModelPropsMixin.ModelDataSourceProperty(*, s3_data_source=None)
Bases:
object

Specifies the location of ML model data to deploy.
If specified, you must specify one and only one of the available data sources.
- Parameters:
s3_data_source (Union[IResolvable, S3DataSourceProperty, Dict[str, Any], None]) – Specifies the S3 location of ML model data to deploy.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

model_data_source_property = sagemaker_mixins.CfnModelPropsMixin.ModelDataSourceProperty(
    s3_data_source=sagemaker_mixins.CfnModelPropsMixin.S3DataSourceProperty(
        compression_type="compressionType",
        hub_access_config=sagemaker_mixins.CfnModelPropsMixin.HubAccessConfigProperty(
            hub_content_arn="hubContentArn"
        ),
        model_access_config=sagemaker_mixins.CfnModelPropsMixin.ModelAccessConfigProperty(
            accept_eula=False
        ),
        s3_data_type="s3DataType",
        s3_uri="s3Uri"
    )
)
Attributes
- s3_data_source
Specifies the S3 location of ML model data to deploy.
MultiModelConfigProperty
- class CfnModelPropsMixin.MultiModelConfigProperty(*, model_cache_setting=None)
Bases:
object

Specifies additional configuration for hosting multi-model endpoints.
- Parameters:
model_cache_setting (Optional[str]) – Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

multi_model_config_property = sagemaker_mixins.CfnModelPropsMixin.MultiModelConfigProperty(
    model_cache_setting="modelCacheSetting"
)
Attributes
- model_cache_setting
Whether to cache models for a multi-model endpoint.
By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
RepositoryAuthConfigProperty
- class CfnModelPropsMixin.RepositoryAuthConfigProperty(*, repository_credentials_provider_arn=None)
Bases:
object

Specifies an authentication configuration for the private Docker registry where your model image is hosted.
Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field of the ImageConfig object that you passed to a call to CreateModel, and the private Docker registry where the model image is hosted requires authentication.
- Parameters:
repository_credentials_provider_arn (Optional[str]) – The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

repository_auth_config_property = sagemaker_mixins.CfnModelPropsMixin.RepositoryAuthConfigProperty(
    repository_credentials_provider_arn="repositoryCredentialsProviderArn"
)
Attributes
- repository_credentials_provider_arn
The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted.
For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide .
S3DataSourceProperty
- class CfnModelPropsMixin.S3DataSourceProperty(*, compression_type=None, hub_access_config=None, model_access_config=None, s3_data_type=None, s3_uri=None)
Bases:
object

Describes the S3 data source.
Your input bucket must be in the same AWS region as your training job.
- Parameters:
compression_type (Optional[str])
hub_access_config (Union[IResolvable, HubAccessConfigProperty, Dict[str, Any], None]) – The configuration for a private hub model reference that points to a SageMaker JumpStart public hub model.
model_access_config (Union[IResolvable, ModelAccessConfigProperty, Dict[str, Any], None])
s3_data_type (Optional[str]) – If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training. If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training. If you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel's input mode is Pipe. If you choose Converse, S3Uri identifies an Amazon S3 location that contains data formatted according to the Converse format. This format structures conversational messages with specific roles and content types used for training and fine-tuning foundational models.
s3_uri (Optional[str]) – Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example: - A key name prefix might look like this: s3://bucketname/exampleprefix/ - A manifest might look like this: s3://bucketname/example.manifest A manifest is an S3 object which is a JSON file consisting of an array of elements. The first element is a prefix which is followed by one or more suffixes. SageMaker appends the suffix elements to the prefix to get a full set of S3Uri. Note that the prefix must be a valid non-empty S3Uri that precludes users from specifying a manifest whose individual S3Uri is sourced from different S3 buckets. The following code example shows a valid manifest format: [ {"prefix": "s3://customer_bucket/some/prefix/"}, "relative/path/to/custdata-1", "relative/path/custdata-2", ... "relative/path/custdata-N" ] This JSON is equivalent to the following S3Uri list: s3://customer_bucket/some/prefix/relative/path/to/custdata-1 s3://customer_bucket/some/prefix/relative/path/custdata-2 ... s3://customer_bucket/some/prefix/relative/path/custdata-N The complete set of S3Uri in this manifest is the input data for the channel for this data source. The object that each S3Uri points to must be readable by the IAM role that SageMaker uses to perform tasks on your behalf. Your input bucket must be located in the same AWS region as your training job.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

s3_data_source_property = sagemaker_mixins.CfnModelPropsMixin.S3DataSourceProperty(
    compression_type="compressionType",
    hub_access_config=sagemaker_mixins.CfnModelPropsMixin.HubAccessConfigProperty(
        hub_content_arn="hubContentArn"
    ),
    model_access_config=sagemaker_mixins.CfnModelPropsMixin.ModelAccessConfigProperty(
        accept_eula=False
    ),
    s3_data_type="s3DataType",
    s3_uri="s3Uri"
)
Attributes
- compression_type
- hub_access_config
The configuration for a private hub model reference that points to a SageMaker JumpStart public hub model.
- model_access_config
- s3_data_type
If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training.
If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training.
If you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel's input mode is Pipe.
If you choose Converse, S3Uri identifies an Amazon S3 location that contains data formatted according to the Converse format. This format structures conversational messages with specific roles and content types used for training and fine-tuning foundational models.
- s3_uri
Depending on the value specified for the S3DataType, identifies either a key name prefix or a manifest. For example:
A key name prefix might look like this: s3://bucketname/exampleprefix/
A manifest might look like this: s3://bucketname/example.manifest
A manifest is an S3 object which is a JSON file consisting of an array of elements. The first element is a prefix which is followed by one or more suffixes. SageMaker appends the suffix elements to the prefix to get a full set of S3Uri. Note that the prefix must be a valid non-empty S3Uri that precludes users from specifying a manifest whose individual S3Uri is sourced from different S3 buckets.
The following code example shows a valid manifest format:
[ {"prefix": "s3://customer_bucket/some/prefix/"},
"relative/path/to/custdata-1",
"relative/path/custdata-2",
...
"relative/path/custdata-N" ]
This JSON is equivalent to the following S3Uri list:
s3://customer_bucket/some/prefix/relative/path/to/custdata-1
s3://customer_bucket/some/prefix/relative/path/custdata-2
...
s3://customer_bucket/some/prefix/relative/path/custdata-N
The complete set of S3Uri in this manifest is the input data for the channel for this data source. The object that each S3Uri points to must be readable by the IAM role that SageMaker uses to perform tasks on your behalf.
Your input bucket must be located in the same AWS region as your training job.
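The prefix-and-suffix expansion that SageMaker performs on a manifest can be sketched in a few lines of plain Python. This is a hypothetical illustration of the documented manifest semantics, not SageMaker code:

```python
def expand_manifest(manifest: list) -> list:
    """Join the leading {"prefix": ...} element with each relative suffix,
    as SageMaker does to build the full set of S3Uri values."""
    prefix = manifest[0]["prefix"]
    return [prefix + suffix for suffix in manifest[1:]]

manifest = [
    {"prefix": "s3://customer_bucket/some/prefix/"},
    "relative/path/to/custdata-1",
    "relative/path/custdata-2",
]
uris = expand_manifest(manifest)
```

Because every suffix is appended to the single leading prefix, all resulting S3Uri values necessarily come from the same bucket, which is why the prefix must be a valid non-empty S3Uri.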
VpcConfigProperty
- class CfnModelPropsMixin.VpcConfigProperty(*, security_group_ids=None, subnets=None)
Bases:
object

Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to.
You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC .
- Parameters:
security_group_ids (Optional[Sequence[str]]) – The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
subnets (Optional[Sequence[str]]) – The ID of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

vpc_config_property = sagemaker_mixins.CfnModelPropsMixin.VpcConfigProperty(
    security_group_ids=["securityGroupIds"],
    subnets=["subnets"]
)
Attributes
- security_group_ids
The VPC security group IDs, in the form sg-xxxxxxxx.
Specify the security groups for the VPC that is specified in the Subnets field.
- subnets
The ID of the subnets in the VPC to which you want to connect your training job or model.
For information about the availability of specific instance types, see Supported Instance Types and Availability Zones .