AWS::SageMaker::MonitoringSchedule BatchTransformInput
Input object for the batch transform job.
Syntax
To declare this entity in your AWS CloudFormation template, use the following syntax:
JSON
{ "DataCapturedDestinationS3Uri" :String, "DatasetFormat" :DatasetFormat, "ExcludeFeaturesAttribute" :String, "LocalPath" :String, "S3DataDistributionType" :String, "S3InputMode" :String}
YAML
DataCapturedDestinationS3Uri: String
DatasetFormat:
  DatasetFormat
ExcludeFeaturesAttribute: String
LocalPath: String
S3DataDistributionType: String
S3InputMode: String
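For reference, the following is a minimal sketch of this property type with example values filled in. The bucket name, prefix, and local path are placeholder assumptions, and the DatasetFormat shape shown here (a CSV dataset with a header row) is only one of the forms described on the DatasetFormat reference page.
YAML
BatchTransformInput:
  # S3 location where data captured from the batch transform job is stored (placeholder bucket and prefix)
  DataCapturedDestinationS3Uri: s3://amzn-s3-demo-bucket/data-capture
  # Format of the captured dataset; here, CSV with a header row
  DatasetFormat:
    Csv:
      Header: true
  # Filesystem path where the data is made available to the monitoring container
  LocalPath: /opt/ml/processing/input
  # Optional properties; the default values are shown explicitly for clarity
  S3DataDistributionType: FullyReplicated
  S3InputMode: File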
Properties
DataCapturedDestinationS3Uri
The Amazon S3 location being used to capture the data.
Required: Yes
Type: String
Pattern: ^(https|s3)://([^/]+)/?(.*)$
Maximum: 512
Update requires: No interruption
DatasetFormat
The dataset format for your batch transform job.
Required: Yes
Type: DatasetFormat
Update requires: No interruption
ExcludeFeaturesAttribute
The attributes of the input data to exclude from the analysis.
Required: No
Type: String
Maximum: 100
Update requires: No interruption
LocalPath
Path to the filesystem where the batch transform data is available to the container.
Required: Yes
Type: String
Pattern: .*
Maximum: 256
Update requires: No interruption
S3DataDistributionType
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
Required: No
Type: String
Allowed values: FullyReplicated | ShardedByS3Key
Update requires: No interruption
S3InputMode
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
Required: No
Type: String
Allowed values: Pipe | File
Update requires: No interruption
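The sketch below shows where BatchTransformInput sits inside an AWS::SageMaker::MonitoringSchedule resource, as part of a MonitoringInput under MonitoringScheduleConfig and MonitoringJobDefinition. The surrounding properties (ScheduleConfig, MonitoringAppSpecification, MonitoringOutputConfig, MonitoringResources, RoleArn) are documented on their own reference pages, and every name, ARN, image URI, and bucket path shown here is a placeholder assumption rather than a working value.
YAML
Resources:
  ExampleMonitoringSchedule:
    Type: AWS::SageMaker::MonitoringSchedule
    Properties:
      MonitoringScheduleName: example-batch-transform-monitoring   # placeholder name
      MonitoringScheduleConfig:
        ScheduleConfig:
          ScheduleExpression: cron(0 * ? * * *)   # run hourly
        MonitoringJobDefinition:
          MonitoringAppSpecification:
            ImageUri: 123456789012.dkr.ecr.us-east-1.amazonaws.com/example-analyzer:latest   # placeholder image
          MonitoringInputs:
            # BatchTransformInput as documented on this page
            - BatchTransformInput:
                DataCapturedDestinationS3Uri: s3://amzn-s3-demo-bucket/data-capture
                DatasetFormat:
                  Csv:
                    Header: true
                LocalPath: /opt/ml/processing/input
                S3DataDistributionType: FullyReplicated
                S3InputMode: File
          MonitoringOutputConfig:
            MonitoringOutputs:
              - S3Output:
                  S3Uri: s3://amzn-s3-demo-bucket/monitoring-output
                  LocalPath: /opt/ml/processing/output
          MonitoringResources:
            ClusterConfig:
              InstanceCount: 1
              InstanceType: ml.m5.xlarge
              VolumeSizeInGB: 20
          RoleArn: arn:aws:iam::123456789012:role/ExampleSageMakerMonitoringRole   # placeholder role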