CfnDeliveryStreamPropsMixin
- class aws_cdk.mixins_preview.aws_kinesisfirehose.mixins.CfnDeliveryStreamPropsMixin(props, *, strategy=None)
- Bases:
Mixin
The AWS::KinesisFirehose::DeliveryStream resource specifies an Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivery stream that delivers real-time streaming data to an Amazon Simple Storage Service (Amazon S3), Amazon Redshift, or Amazon Elasticsearch Service (Amazon ES) destination. For more information, see Creating an Amazon Kinesis Data Firehose Delivery Stream in the Amazon Kinesis Data Firehose Developer Guide.
- CloudformationResource:
AWS::KinesisFirehose::DeliveryStream
- Mixin:
true
- ExampleMetadata:
fixture=_generated
Example:
Create a mixin to apply properties to AWS::KinesisFirehose::DeliveryStream.
- Parameters:
props (Union[CfnDeliveryStreamMixinProps, Dict[str, Any]]) – L1 properties to apply.
strategy (Optional[PropertyMergeStrategy]) – (experimental) Strategy for merging nested properties. Default: PropertyMergeStrategy.MERGE
Methods
- apply_to(construct)
Apply the mixin properties to the construct.
- Parameters:
construct (IConstruct)
- Return type:
None
- supports(construct)
Check if this mixin supports the given construct.
- Parameters:
construct (IConstruct)
- Return type:
bool
Attributes
- CFN_PROPERTY_KEYS = ['amazonOpenSearchServerlessDestinationConfiguration', 'amazonopensearchserviceDestinationConfiguration', 'databaseSourceConfiguration', 'deliveryStreamEncryptionConfigurationInput', 'deliveryStreamName', 'deliveryStreamType', 'directPutSourceConfiguration', 'elasticsearchDestinationConfiguration', 'extendedS3DestinationConfiguration', 'httpEndpointDestinationConfiguration', 'icebergDestinationConfiguration', 'kinesisStreamSourceConfiguration', 'mskSourceConfiguration', 'redshiftDestinationConfiguration', 's3DestinationConfiguration', 'snowflakeDestinationConfiguration', 'splunkDestinationConfiguration', 'tags']
Static Methods
- classmethod is_mixin(x)
(experimental) Checks if x is a Mixin.
- Parameters:
x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Mixin.
- Stability:
experimental
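The apply_to and supports methods above follow a simple contract: supports gates which constructs the mixin may modify, and apply_to merges the L1 properties in. A minimal pure-Python sketch of that contract, with illustrative stand-in classes (this is not the aws_cdk implementation):

```python
class Construct:
    """Stand-in for a CDK construct tree node (illustrative only)."""
    def __init__(self, cfn_type):
        self.cfn_type = cfn_type
        self.props = {}

class DeliveryStreamPropsMixin:
    """Sketch of the mixin contract: supports() gates apply_to()."""
    CFN_TYPE = "AWS::KinesisFirehose::DeliveryStream"

    def __init__(self, props):
        self.props = props

    def supports(self, construct):
        # Only delivery-stream resources can receive these properties.
        return construct.cfn_type == self.CFN_TYPE

    def apply_to(self, construct):
        if not self.supports(construct):
            raise TypeError("mixin does not support this construct")
        # Apply the L1 properties to the construct.
        construct.props.update(self.props)

stream = Construct("AWS::KinesisFirehose::DeliveryStream")
mixin = DeliveryStreamPropsMixin({"deliveryStreamName": "my-stream"})
if mixin.supports(stream):
    mixin.apply_to(stream)
```

The real mixin additionally merges nested properties according to the strategy parameter; this sketch only shows the supports/apply_to handshake.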
AmazonOpenSearchServerlessBufferingHintsProperty
- class CfnDeliveryStreamPropsMixin.AmazonOpenSearchServerlessBufferingHintsProperty(*, interval_in_seconds=None, size_in_m_bs=None)
- Bases:
object
Describes the buffering to perform before delivering data to the Serverless offering for Amazon OpenSearch Service destination.
- Parameters:
interval_in_seconds (Union[int, float, None]) – Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
size_in_m_bs (Union[int, float, None]) – Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the Firehose stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

amazon_open_search_serverless_buffering_hints_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AmazonOpenSearchServerlessBufferingHintsProperty(
    interval_in_seconds=123,
    size_in_m_bs=123
)
Attributes
- interval_in_seconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination.
The default value is 300 (5 minutes).
- size_in_m_bs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination.
The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the Firehose stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
AmazonOpenSearchServerlessDestinationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.AmazonOpenSearchServerlessDestinationConfigurationProperty(*, buffering_hints=None, cloud_watch_logging_options=None, collection_endpoint=None, index_name=None, processing_configuration=None, retry_options=None, role_arn=None, s3_backup_mode=None, s3_configuration=None, vpc_configuration=None)
- Bases:
object
Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.
- Parameters:
buffering_hints (Union[IResolvable, AmazonOpenSearchServerlessBufferingHintsProperty, Dict[str, Any], None]) – The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
cloud_watch_logging_options (Union[IResolvable, CloudWatchLoggingOptionsProperty, Dict[str, Any], None])
collection_endpoint (Optional[str]) – The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
index_name (Optional[str]) – The Serverless offering for Amazon OpenSearch Service index name.
processing_configuration (Union[IResolvable, ProcessingConfigurationProperty, Dict[str, Any], None])
retry_options (Union[IResolvable, AmazonOpenSearchServerlessRetryOptionsProperty, Dict[str, Any], None]) – The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
role_arn (Optional[str]) – The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
s3_backup_mode (Optional[str]) – Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
s3_configuration (Union[IResolvable, S3DestinationConfigurationProperty, Dict[str, Any], None])
vpc_configuration (Union[IResolvable, VpcConfigurationProperty, Dict[str, Any], None])
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

amazon_open_search_serverless_destination_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AmazonOpenSearchServerlessDestinationConfigurationProperty(
    buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AmazonOpenSearchServerlessBufferingHintsProperty(
        interval_in_seconds=123,
        size_in_m_bs=123
    ),
    cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
        enabled=False,
        log_group_name="logGroupName",
        log_stream_name="logStreamName"
    ),
    collection_endpoint="collectionEndpoint",
    index_name="indexName",
    processing_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessingConfigurationProperty(
        enabled=False,
        processors=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorProperty(
            parameters=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty(
                parameter_name="parameterName",
                parameter_value="parameterValue"
            )],
            type="type"
        )]
    ),
    retry_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AmazonOpenSearchServerlessRetryOptionsProperty(
        duration_in_seconds=123
    ),
    role_arn="roleArn",
    s3_backup_mode="s3BackupMode",
    s3_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty(
        bucket_arn="bucketArn",
        buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
            interval_in_seconds=123,
            size_in_m_bs=123
        ),
        cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
            enabled=False,
            log_group_name="logGroupName",
            log_stream_name="logStreamName"
        ),
        compression_format="compressionFormat",
        encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
            kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
                awskms_key_arn="awskmsKeyArn"
            ),
            no_encryption_config="noEncryptionConfig"
        ),
        error_output_prefix="errorOutputPrefix",
        prefix="prefix",
        role_arn="roleArn"
    ),
    vpc_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.VpcConfigurationProperty(
        role_arn="roleArn",
        security_group_ids=["securityGroupIds"],
        subnet_ids=["subnetIds"]
    )
)
Attributes
- buffering_hints
The buffering options.
If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloud_watch_logging_options
Describes the Amazon CloudWatch logging options for your delivery stream.
- collection_endpoint
The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- index_name
The Serverless offering for Amazon OpenSearch Service index name.
- processing_configuration
Describes a data processing configuration.
- retry_options
The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service.
The default value is 300 (5 minutes).
- role_arn
The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- s3_backup_mode
Defines how documents should be delivered to Amazon S3.
When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- s3_configuration
Describes the configuration of a destination in Amazon S3.
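The s3_backup_mode behavior described above can be sketched with two small hypothetical helpers (these names are illustrative only and are not part of the CDK API):

```python
def backup_key_prefix(base_prefix, failed):
    # Failed documents get 'AmazonOpenSearchService-failed/' appended
    # to the configured key prefix, per the s3_backup_mode description.
    return base_prefix + ("AmazonOpenSearchService-failed/" if failed else "")

def goes_to_s3(s3_backup_mode, indexed_ok):
    # FailedDocumentsOnly: only documents that could not be indexed
    # are written to S3. AllDocuments: every incoming record is.
    if s3_backup_mode == "AllDocuments":
        return True
    return s3_backup_mode == "FailedDocumentsOnly" and not indexed_ok
```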
AmazonOpenSearchServerlessRetryOptionsProperty
- class CfnDeliveryStreamPropsMixin.AmazonOpenSearchServerlessRetryOptionsProperty(*, duration_in_seconds=None)
- Bases:
object
Configures retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service.
- Parameters:
duration_in_seconds (Union[int, float, None]) – After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

amazon_open_search_serverless_retry_options_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AmazonOpenSearchServerlessRetryOptionsProperty(
    duration_in_seconds=123
)
Attributes
- duration_in_seconds
After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt).
After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
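The retry-window rule stated above (total retry time including the first attempt is capped by duration_in_seconds; zero disables retries) can be sketched as:

```python
def should_retry(elapsed_seconds, duration_in_seconds=300):
    """Return True if another delivery attempt would still be made.

    duration_in_seconds caps the total retry time, including the first
    attempt; the documented default is 300 s, and 0 means no retries.
    """
    if duration_in_seconds == 0:
        return False
    return elapsed_seconds < duration_in_seconds
```

Once should_retry turns False, the description above says the failed documents are written to Amazon S3 instead.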
AmazonopensearchserviceBufferingHintsProperty
- class CfnDeliveryStreamPropsMixin.AmazonopensearchserviceBufferingHintsProperty(*, interval_in_seconds=None, size_in_m_bs=None)
- Bases:
object
Describes the buffering to perform before delivering data to the Amazon OpenSearch Service destination.
- Parameters:
interval_in_seconds (Union[int, float, None]) – Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
size_in_m_bs (Union[int, float, None]) – Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

amazonopensearchservice_buffering_hints_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AmazonopensearchserviceBufferingHintsProperty(
    interval_in_seconds=123,
    size_in_m_bs=123
)
Attributes
- interval_in_seconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination.
The default value is 300 (5 minutes).
- size_in_m_bs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination.
The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
AmazonopensearchserviceDestinationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.AmazonopensearchserviceDestinationConfigurationProperty(*, buffering_hints=None, cloud_watch_logging_options=None, cluster_endpoint=None, document_id_options=None, domain_arn=None, index_name=None, index_rotation_period=None, processing_configuration=None, retry_options=None, role_arn=None, s3_backup_mode=None, s3_configuration=None, type_name=None, vpc_configuration=None)
- Bases:
object
Describes the configuration of a destination in Amazon OpenSearch Service.
- Parameters:
buffering_hints (Union[IResolvable, AmazonopensearchserviceBufferingHintsProperty, Dict[str, Any], None]) – The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
cloud_watch_logging_options (Union[IResolvable, CloudWatchLoggingOptionsProperty, Dict[str, Any], None]) – Describes the Amazon CloudWatch logging options for your delivery stream.
cluster_endpoint (Optional[str]) – The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
document_id_options (Union[IResolvable, DocumentIdOptionsProperty, Dict[str, Any], None]) – Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
domain_arn (Optional[str]) – The ARN of the Amazon OpenSearch Service domain.
index_name (Optional[str]) – The Amazon OpenSearch Service index name.
index_rotation_period (Optional[str]) – The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
processing_configuration (Union[IResolvable, ProcessingConfigurationProperty, Dict[str, Any], None]) – Describes a data processing configuration.
retry_options (Union[IResolvable, AmazonopensearchserviceRetryOptionsProperty, Dict[str, Any], None]) – The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
role_arn (Optional[str]) – The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
s3_backup_mode (Optional[str]) – Defines how documents should be delivered to Amazon S3.
s3_configuration (Union[IResolvable, S3DestinationConfigurationProperty, Dict[str, Any], None]) – Describes the configuration of a destination in Amazon S3.
type_name (Optional[str]) – The Amazon OpenSearch Service type name.
vpc_configuration (Union[IResolvable, VpcConfigurationProperty, Dict[str, Any], None]) – The details of the VPC of the Amazon OpenSearch Service destination.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

amazonopensearchservice_destination_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AmazonopensearchserviceDestinationConfigurationProperty(
    buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AmazonopensearchserviceBufferingHintsProperty(
        interval_in_seconds=123,
        size_in_m_bs=123
    ),
    cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
        enabled=False,
        log_group_name="logGroupName",
        log_stream_name="logStreamName"
    ),
    cluster_endpoint="clusterEndpoint",
    document_id_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DocumentIdOptionsProperty(
        default_document_id_format="defaultDocumentIdFormat"
    ),
    domain_arn="domainArn",
    index_name="indexName",
    index_rotation_period="indexRotationPeriod",
    processing_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessingConfigurationProperty(
        enabled=False,
        processors=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorProperty(
            parameters=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty(
                parameter_name="parameterName",
                parameter_value="parameterValue"
            )],
            type="type"
        )]
    ),
    retry_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AmazonopensearchserviceRetryOptionsProperty(
        duration_in_seconds=123
    ),
    role_arn="roleArn",
    s3_backup_mode="s3BackupMode",
    s3_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty(
        bucket_arn="bucketArn",
        buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
            interval_in_seconds=123,
            size_in_m_bs=123
        ),
        cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
            enabled=False,
            log_group_name="logGroupName",
            log_stream_name="logStreamName"
        ),
        compression_format="compressionFormat",
        encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
            kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
                awskms_key_arn="awskmsKeyArn"
            ),
            no_encryption_config="noEncryptionConfig"
        ),
        error_output_prefix="errorOutputPrefix",
        prefix="prefix",
        role_arn="roleArn"
    ),
    type_name="typeName",
    vpc_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.VpcConfigurationProperty(
        role_arn="roleArn",
        security_group_ids=["securityGroupIds"],
        subnet_ids=["subnetIds"]
    )
)
Attributes
- buffering_hints
The buffering options.
If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloud_watch_logging_options
Describes the Amazon CloudWatch logging options for your delivery stream.
- cluster_endpoint
The endpoint to use when communicating with the cluster.
Specify either this ClusterEndpoint or the DomainARN field.
- document_id_options
Indicates the method for setting up document ID.
The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domain_arn
The ARN of the Amazon OpenSearch Service domain.
- index_name
The Amazon OpenSearch Service index name.
- index_rotation_period
The Amazon OpenSearch Service index rotation period.
Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- processing_configuration
Describes a data processing configuration.
- retry_options
The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service.
The default value is 300 (5 minutes).
- role_arn
The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- s3_backup_mode
Defines how documents should be delivered to Amazon S3.
- s3_configuration
Describes the configuration of a destination in Amazon S3.
- type_name
The Amazon OpenSearch Service type name.
- vpc_configuration
The details of the VPC of the Amazon OpenSearch Service destination.
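The index_rotation_period attribute above says rotation appends a timestamp to the IndexName. A small sketch of what a rotated name could look like; the suffix formats in this mapping are an assumption for illustration, not taken from this reference:

```python
from datetime import datetime, timezone

# Illustrative suffix formats keyed by rotation period. The exact
# timestamp layout Firehose appends is an assumption here.
_ROTATION_FORMATS = {
    "NoRotation": None,
    "OneHour": "%Y-%m-%d-%H",
    "OneDay": "%Y-%m-%d",
    "OneMonth": "%Y-%m",
}

def rotated_index_name(index_name, period, now):
    """Append a period-dependent timestamp to the index name, as the
    index_rotation_period description above outlines."""
    fmt = _ROTATION_FORMATS[period]
    if fmt is None:
        return index_name
    return f"{index_name}-{now.strftime(fmt)}"
```

Rotation like this lets old data expire by dropping whole time-stamped indices.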
AmazonopensearchserviceRetryOptionsProperty
- class CfnDeliveryStreamPropsMixin.AmazonopensearchserviceRetryOptionsProperty(*, duration_in_seconds=None)
- Bases:
object
Configures retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service.
- Parameters:
duration_in_seconds (Union[int, float, None]) – After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

amazonopensearchservice_retry_options_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AmazonopensearchserviceRetryOptionsProperty(
    duration_in_seconds=123
)
Attributes
- duration_in_seconds
After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt).
After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
AuthenticationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.AuthenticationConfigurationProperty(*, connectivity=None, role_arn=None)
- Bases:
object
The authentication configuration of the Amazon MSK cluster.
- Parameters:
connectivity (Optional[str]) – The type of connectivity used to access the Amazon MSK cluster.
role_arn (Optional[str]) – The ARN of the role used to access the Amazon MSK cluster.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

authentication_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AuthenticationConfigurationProperty(
    connectivity="connectivity",
    role_arn="roleArn"
)
Attributes
- connectivity
The type of connectivity used to access the Amazon MSK cluster.
- role_arn
The ARN of the role used to access the Amazon MSK cluster.
BufferingHintsProperty
- class CfnDeliveryStreamPropsMixin.BufferingHintsProperty(*, interval_in_seconds=None, size_in_m_bs=None)
- Bases:
object
The BufferingHints property type specifies how Amazon Kinesis Data Firehose (Kinesis Data Firehose) buffers incoming data before delivering it to the destination. The first buffer condition that is satisfied triggers Kinesis Data Firehose to deliver the data.
- Parameters:
interval_in_seconds (Union[int, float, None]) – The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
size_in_m_bs (Union[int, float, None]) – The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

buffering_hints_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
    interval_in_seconds=123,
    size_in_m_bs=123
)
Attributes
- interval_in_seconds
The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination.
For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- size_in_m_bs
The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination.
For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
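The rule that the first satisfied buffer condition (size or interval) triggers delivery can be sketched as a small predicate; the function name and byte arithmetic are illustrative, not CDK API:

```python
def should_flush(buffered_bytes, buffered_seconds,
                 size_in_mbs=5, interval_in_seconds=300):
    # The first hint that is satisfied, size OR interval, triggers
    # delivery; defaults mirror the documented 5 MB / 300 s values.
    return (buffered_bytes >= size_in_mbs * 1024 * 1024
            or buffered_seconds >= interval_in_seconds)
```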
CatalogConfigurationProperty
- class CfnDeliveryStreamPropsMixin.CatalogConfigurationProperty(*, catalog_arn=None, warehouse_location=None)
- Bases:
object
Describes the containers where the destination Apache Iceberg Tables are persisted.
- Parameters:
catalog_arn (Optional[str]) – Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog.
warehouse_location (Optional[str]) – The warehouse location for Apache Iceberg tables. You must configure this when schema evolution and table creation is enabled. Amazon Data Firehose is in preview release and is subject to change.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

catalog_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CatalogConfigurationProperty(
    catalog_arn="catalogArn",
    warehouse_location="warehouseLocation"
)
Attributes
- catalog_arn
Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables.
You must specify the ARN in the format arn:aws:glue:region:account-id:catalog.
- warehouse_location
The warehouse location for Apache Iceberg tables. You must configure this when schema evolution and table creation is enabled.
Amazon Data Firehose is in preview release and is subject to change.
CloudWatchLoggingOptionsProperty
- class CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(*, enabled=None, log_group_name=None, log_stream_name=None)
- Bases:
object
The CloudWatchLoggingOptions property type specifies Amazon CloudWatch Logs (CloudWatch Logs) logging options that Amazon Kinesis Data Firehose (Kinesis Data Firehose) uses for the delivery stream.
- Parameters:
enabled (Union[bool, IResolvable, None]) – Indicates whether CloudWatch Logs logging is enabled.
log_group_name (Optional[str]) – The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
log_stream_name (Optional[str]) – The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

cloud_watch_logging_options_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
    enabled=False,
    log_group_name="logGroupName",
    log_stream_name="logStreamName"
)
Attributes
- enabled
Indicates whether CloudWatch Logs logging is enabled.
- log_group_name
The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use.
Conditional. If you enable logging, you must specify this property.
- log_stream_name
The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery.
Conditional. If you enable logging, you must specify this property.
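The conditional noted above (log group and log stream are required once logging is enabled) can be captured in a hypothetical validator; the CDK and CloudFormation perform their own checks, this is only a sketch of the rule:

```python
def validate_logging_options(enabled, log_group_name=None, log_stream_name=None):
    # If logging is enabled, both the log group name and the
    # log stream name must be specified, per the conditionals above.
    if enabled and not (log_group_name and log_stream_name):
        raise ValueError(
            "log_group_name and log_stream_name are required when logging is enabled"
        )
```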
CopyCommandProperty
- class CfnDeliveryStreamPropsMixin.CopyCommandProperty(*, copy_options=None, data_table_columns=None, data_table_name=None)
- Bases:
object
The CopyCommand property type configures the Amazon Redshift COPY command that Amazon Kinesis Data Firehose (Kinesis Data Firehose) uses to load data into an Amazon Redshift cluster from an Amazon S3 bucket.
- Parameters:
copy_options (Optional[str]) – Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
data_table_columns (Optional[str]) – A comma-separated list of column names.
data_table_name (Optional[str]) – The name of the target table. The table must already exist in the database.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

copy_command_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CopyCommandProperty(
    copy_options="copyOptions",
    data_table_columns="dataTableColumns",
    data_table_name="dataTableName"
)
Attributes
- copy_options
Parameters to use with the Amazon Redshift COPY command.
For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- data_table_columns
A comma-separated list of column names.
- data_table_name
The name of the target table.
The table must already exist in the database.
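The three CopyCommand fields map onto parts of a Redshift COPY statement. A hypothetical sketch of how such a statement could be assembled (Firehose builds the real command internally; this helper is not part of the CDK):

```python
def build_copy_statement(data_table_name, s3_uri, iam_role_arn,
                         data_table_columns=None, copy_options=None):
    # data_table_columns is the documented comma-separated column list;
    # copy_options (e.g. a format specifier) is appended verbatim.
    columns = f" ({data_table_columns})" if data_table_columns else ""
    options = f" {copy_options}" if copy_options else ""
    return (
        f"COPY {data_table_name}{columns} FROM '{s3_uri}' "
        f"CREDENTIALS 'aws_iam_role={iam_role_arn}'{options}"
    )
```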
DataFormatConversionConfigurationProperty
- class CfnDeliveryStreamPropsMixin.DataFormatConversionConfigurationProperty(*, enabled=None, input_format_configuration=None, output_format_configuration=None, schema_configuration=None)
- Bases:
object
Specifies that you want Kinesis Data Firehose to convert data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
Kinesis Data Firehose uses the serializer and deserializer that you specify, in addition to the column information from the AWS Glue table, to deserialize your input data from JSON and then serialize it to the Parquet or ORC format. For more information, see Kinesis Data Firehose Record Format Conversion.
- Parameters:
enabled (Union[bool, IResolvable, None]) – Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
input_format_configuration (Union[IResolvable, InputFormatConfigurationProperty, Dict[str, Any], None]) – Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
output_format_configuration (Union[IResolvable, OutputFormatConfigurationProperty, Dict[str, Any], None]) – Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
schema_configuration (Union[IResolvable, SchemaConfigurationProperty, Dict[str, Any], None]) – Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

data_format_conversion_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DataFormatConversionConfigurationProperty(
    enabled=False,
    input_format_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.InputFormatConfigurationProperty(
        deserializer=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DeserializerProperty(
            hive_json_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HiveJsonSerDeProperty(
                timestamp_formats=["timestampFormats"]
            ),
            open_x_json_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OpenXJsonSerDeProperty(
                case_insensitive=False,
                column_to_json_key_mappings={
                    "column_to_json_key_mappings_key": "columnToJsonKeyMappings"
                },
                convert_dots_in_json_keys_to_underscores=False
            )
        )
    ),
    output_format_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OutputFormatConfigurationProperty(
        serializer=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SerializerProperty(
            orc_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OrcSerDeProperty(
                block_size_bytes=123,
                bloom_filter_columns=["bloomFilterColumns"],
                bloom_filter_false_positive_probability=123,
                compression="compression",
                dictionary_key_threshold=123,
                enable_padding=False,
                format_version="formatVersion",
                padding_tolerance=123,
                row_index_stride=123,
                stripe_size_bytes=123
            ),
            parquet_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ParquetSerDeProperty(
                block_size_bytes=123,
                compression="compression",
                enable_dictionary_compression=False,
                max_padding_bytes=123,
                page_size_bytes=123,
                writer_version="writerVersion"
            )
        )
    ),
    schema_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SchemaConfigurationProperty(
        catalog_id="catalogId",
        database_name="databaseName",
        region="region",
        role_arn="roleArn",
        table_name="tableName",
        version_id="versionId"
    )
)
Attributes
- enabled
Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- input_format_configuration
Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON.
This parameter is required if Enabled is set to true.
- output_format_configuration
Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format.
This parameter is required if Enabled is set to true.
- schema_configuration
Specifies the AWS Glue Data Catalog table that contains the column information.
This parameter is required if Enabled is set to true.
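The required-when-enabled rules above can be sketched with a small check over the equivalent CloudFormation property shape. The dicts and the validate_conversion_config helper below are illustrative assumptions, not part of the generated API:

```python
# Illustrative sketch: which DataFormatConversionConfiguration keys become
# required once format conversion is enabled (plain dicts, not the mixin API).
def validate_conversion_config(config):
    """Raise if Enabled is true but a required sub-configuration is missing."""
    if config.get("Enabled"):
        for key in ("InputFormatConfiguration", "OutputFormatConfiguration", "SchemaConfiguration"):
            if key not in config:
                raise ValueError(f"{key} is required when Enabled is true")

# A disabled configuration may keep its details for later re-enabling.
disabled = {"Enabled": False, "SchemaConfiguration": {"TableName": "events"}}
validate_conversion_config(disabled)  # passes: nothing is required while disabled

enabled = {
    "Enabled": True,
    "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
    "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
    "SchemaConfiguration": {"DatabaseName": "analytics", "TableName": "events"},
}
validate_conversion_config(enabled)  # passes: all required keys present
```

The names "analytics" and "events" are placeholder values in the same spirit as the generated example above.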
DatabaseColumnsProperty
- class CfnDeliveryStreamPropsMixin.DatabaseColumnsProperty(*, exclude=None, include=None)
Bases:
object
- Parameters:
exclude (Optional[Sequence[str]])
include (Optional[Sequence[str]])
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

database_columns_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DatabaseColumnsProperty(
    exclude=["exclude"],
    include=["include"]
)
Attributes
- exclude
-
- Type:
see
DatabaseSourceAuthenticationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.DatabaseSourceAuthenticationConfigurationProperty(*, secrets_manager_configuration=None)
Bases:
object
The structure to configure the authentication methods for Firehose to connect to the source database endpoint.
Amazon Data Firehose is in preview release and is subject to change.
- Parameters:
secrets_manager_configuration (Union[IResolvable, SecretsManagerConfigurationProperty, Dict[str, Any], None])
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

database_source_authentication_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DatabaseSourceAuthenticationConfigurationProperty(
    secrets_manager_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SecretsManagerConfigurationProperty(
        enabled=False,
        role_arn="roleArn",
        secret_arn="secretArn"
    )
)
Attributes
- secrets_manager_configuration
-
- Type:
see
DatabaseSourceConfigurationProperty
- class CfnDeliveryStreamPropsMixin.DatabaseSourceConfigurationProperty(*, columns=None, databases=None, database_source_authentication_configuration=None, database_source_vpc_configuration=None, digest=None, endpoint=None, port=None, public_certificate=None, snapshot_watermark_table=None, ssl_mode=None, surrogate_keys=None, tables=None, type=None)
Bases:
object
The top-level object for configuring streams with a database as a source.
Amazon Data Firehose is in preview release and is subject to change.
- Parameters:
columns (Union[IResolvable, DatabaseColumnsProperty, Dict[str, Any], None]) – The list of column patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
databases (Union[IResolvable, DatabasesProperty, Dict[str, Any], None]) – The list of database patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
database_source_authentication_configuration (Union[IResolvable, DatabaseSourceAuthenticationConfigurationProperty, Dict[str, Any], None]) – The structure to configure the authentication methods for Firehose to connect to the source database endpoint. Amazon Data Firehose is in preview release and is subject to change.
database_source_vpc_configuration (Union[IResolvable, DatabaseSourceVPCConfigurationProperty, Dict[str, Any], None]) – The details of the VPC Endpoint Service which Firehose uses to create a PrivateLink to the database. Amazon Data Firehose is in preview release and is subject to change.
digest (Optional[str])
endpoint (Optional[str]) – The endpoint of the database server. Amazon Data Firehose is in preview release and is subject to change.
port (Union[int, float, None]) – The port of the database. This can be 3306 for the MySQL database type or 5432 for the PostgreSQL database type. Amazon Data Firehose is in preview release and is subject to change.
public_certificate (Optional[str])
snapshot_watermark_table (Optional[str]) – The fully qualified name of the table in the source database endpoint that Firehose uses to track snapshot progress. Amazon Data Firehose is in preview release and is subject to change.
ssl_mode (Optional[str]) – The mode to enable or disable SSL when Firehose connects to the database endpoint. Amazon Data Firehose is in preview release and is subject to change.
surrogate_keys (Optional[Sequence[str]]) – The optional list of table and column names used as unique key columns when taking a snapshot, if the tables don't have primary keys configured. Amazon Data Firehose is in preview release and is subject to change.
tables (Union[IResolvable, DatabaseTablesProperty, Dict[str, Any], None]) – The list of table patterns in the source database endpoint for Firehose to read from. Amazon Data Firehose is in preview release and is subject to change.
type (Optional[str]) – The type of database engine. This can be MySQL or PostgreSQL. Amazon Data Firehose is in preview release and is subject to change.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

database_source_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DatabaseSourceConfigurationProperty(
    columns=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DatabaseColumnsProperty(
        exclude=["exclude"],
        include=["include"]
    ),
    databases=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DatabasesProperty(
        exclude=["exclude"],
        include=["include"]
    ),
    database_source_authentication_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DatabaseSourceAuthenticationConfigurationProperty(
        secrets_manager_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SecretsManagerConfigurationProperty(
            enabled=False,
            role_arn="roleArn",
            secret_arn="secretArn"
        )
    ),
    database_source_vpc_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DatabaseSourceVPCConfigurationProperty(
        vpc_endpoint_service_name="vpcEndpointServiceName"
    ),
    digest="digest",
    endpoint="endpoint",
    port=123,
    public_certificate="publicCertificate",
    snapshot_watermark_table="snapshotWatermarkTable",
    ssl_mode="sslMode",
    surrogate_keys=["surrogateKeys"],
    tables=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DatabaseTablesProperty(
        exclude=["exclude"],
        include=["include"]
    ),
    type="type"
)
Attributes
- columns
The list of column patterns in the source database endpoint for Firehose to read from.
Amazon Data Firehose is in preview release and is subject to change.
- database_source_authentication_configuration
The structure to configure the authentication methods for Firehose to connect to the source database endpoint.
Amazon Data Firehose is in preview release and is subject to change.
- database_source_vpc_configuration
The details of the VPC Endpoint Service which Firehose uses to create a PrivateLink to the database.
Amazon Data Firehose is in preview release and is subject to change.
- databases
The list of database patterns in the source database endpoint for Firehose to read from.
Amazon Data Firehose is in preview release and is subject to change.
- digest
-
- Type:
see
- endpoint
The endpoint of the database server.
Amazon Data Firehose is in preview release and is subject to change.
- port
The port of the database. This can be one of the following values:
- 3306 for the MySQL database type
- 5432 for the PostgreSQL database type
Amazon Data Firehose is in preview release and is subject to change.
- public_certificate
-
- Type:
see
- snapshot_watermark_table
The fully qualified name of the table in the source database endpoint that Firehose uses to track snapshot progress.
Amazon Data Firehose is in preview release and is subject to change.
- ssl_mode
The mode to enable or disable SSL when Firehose connects to the database endpoint.
Amazon Data Firehose is in preview release and is subject to change.
- surrogate_keys
The optional list of table and column names used as unique key columns when taking a snapshot, if the tables don't have primary keys configured.
Amazon Data Firehose is in preview release and is subject to change.
- tables
The list of table patterns in the source database endpoint for Firehose to read from.
Amazon Data Firehose is in preview release and is subject to change.
- type
The type of database engine. This can be one of the following values:
- MySQL
- PostgreSQL
Amazon Data Firehose is in preview release and is subject to change.
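The type and port attributes above travel together: the documented ports are 3306 for MySQL and 5432 for PostgreSQL. The helper below is an illustrative sketch of that pairing over the raw CloudFormation property shape, not part of the generated mixin API:

```python
# Illustrative sketch: pair the documented database engine type with its port.
DEFAULT_PORTS = {"MySQL": 3306, "PostgreSQL": 5432}

def database_source_fragment(engine, endpoint):
    """Build a minimal DatabaseSourceConfiguration fragment for a supported engine."""
    if engine not in DEFAULT_PORTS:
        raise ValueError("Type must be MySQL or PostgreSQL")
    return {"Type": engine, "Endpoint": endpoint, "Port": DEFAULT_PORTS[engine]}

cfg = database_source_fragment("PostgreSQL", "db.example.internal")
# cfg == {"Type": "PostgreSQL", "Endpoint": "db.example.internal", "Port": 5432}
```

The endpoint value is a placeholder in the same spirit as the generated examples.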
DatabaseSourceVPCConfigurationProperty
- class CfnDeliveryStreamPropsMixin.DatabaseSourceVPCConfigurationProperty(*, vpc_endpoint_service_name=None)
Bases:
object
The structure for the details of the VPC Endpoint Service which Firehose uses to create a PrivateLink to the database.
Amazon Data Firehose is in preview release and is subject to change.
- Parameters:
vpc_endpoint_service_name (Optional[str]) – The VPC endpoint service name which Firehose uses to create a PrivateLink to the database. The endpoint service must have the Firehose service principal firehose.amazonaws.com as an allowed principal on the VPC endpoint service. The VPC endpoint service name is a string that looks like com.amazonaws.vpce.<region>.<vpc-endpoint-service-id>. Amazon Data Firehose is in preview release and is subject to change.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

database_source_vpc_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DatabaseSourceVPCConfigurationProperty(
    vpc_endpoint_service_name="vpcEndpointServiceName"
)
Attributes
- vpc_endpoint_service_name
The VPC endpoint service name which Firehose uses to create a PrivateLink to the database.
The endpoint service must have the Firehose service principal firehose.amazonaws.com as an allowed principal on the VPC endpoint service. The VPC endpoint service name is a string that looks like com.amazonaws.vpce.<region>.<vpc-endpoint-service-id>.
Amazon Data Firehose is in preview release and is subject to change.
DatabaseTablesProperty
- class CfnDeliveryStreamPropsMixin.DatabaseTablesProperty(*, exclude=None, include=None)
Bases:
object
- Parameters:
exclude (Optional[Sequence[str]])
include (Optional[Sequence[str]])
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

database_tables_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DatabaseTablesProperty(
    exclude=["exclude"],
    include=["include"]
)
Attributes
- exclude
-
- Type:
see
DatabasesProperty
- class CfnDeliveryStreamPropsMixin.DatabasesProperty(*, exclude=None, include=None)
Bases:
object
- Parameters:
exclude (Optional[Sequence[str]])
include (Optional[Sequence[str]])
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

databases_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DatabasesProperty(
    exclude=["exclude"],
    include=["include"]
)
Attributes
- exclude
-
- Type:
see
DeliveryStreamEncryptionConfigurationInputProperty
- class CfnDeliveryStreamPropsMixin.DeliveryStreamEncryptionConfigurationInputProperty(*, key_arn=None, key_type=None)
Bases:
object
Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
- Parameters:
key_arn (Optional[str]) – If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
key_type (Optional[str]) – Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs; Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service Developer Guide.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

delivery_stream_encryption_configuration_input_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DeliveryStreamEncryptionConfigurationInputProperty(
    key_arn="keyArn",
    key_type="keyType"
)
Attributes
- key_arn
If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK.
If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
- key_type
Indicates the type of customer master key (CMK) to use for encryption.
The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs).
You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams.
To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs (https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-concepts.html) in the AWS Key Management Service Developer Guide.
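The KeyType/KeyARN dependency described above can be sketched over the raw CloudFormation shape of DeliveryStreamEncryptionConfigurationInput. The helper and the sample ARN below are illustrative, not part of the generated API:

```python
# Illustrative sketch: KeyARN is required only for customer-managed CMKs.
def encryption_input(key_type, key_arn=None):
    """Build a DeliveryStreamEncryptionConfigurationInput fragment as a plain dict."""
    if key_type == "CUSTOMER_MANAGED_CMK" and not key_arn:
        raise ValueError("KeyARN is required when KeyType is CUSTOMER_MANAGED_CMK")
    config = {"KeyType": key_type}
    if key_arn:
        config["KeyARN"] = key_arn
    return config

encryption_input("AWS_OWNED_CMK")  # service-account CMK; no ARN needed
encryption_input("CUSTOMER_MANAGED_CMK",
                 "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE")
```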
DeserializerProperty
- class CfnDeliveryStreamPropsMixin.DeserializerProperty(*, hive_json_ser_de=None, open_x_json_ser_de=None)
Bases:
object
The deserializer you want Kinesis Data Firehose to use for converting the input data from JSON.
Kinesis Data Firehose then serializes the data to its final format using the Serializer. Kinesis Data Firehose supports two types of deserializers: the Apache Hive JSON SerDe and the OpenX JSON SerDe.
- Parameters:
hive_json_ser_de (Union[IResolvable, HiveJsonSerDeProperty, Dict[str, Any], None]) – The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
open_x_json_ser_de (Union[IResolvable, OpenXJsonSerDeProperty, Dict[str, Any], None]) – The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

deserializer_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DeserializerProperty(
    hive_json_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HiveJsonSerDeProperty(
        timestamp_formats=["timestampFormats"]
    ),
    open_x_json_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OpenXJsonSerDeProperty(
        case_insensitive=False,
        column_to_json_key_mappings={
            "column_to_json_key_mappings_key": "columnToJsonKeyMappings"
        },
        convert_dots_in_json_keys_to_underscores=False
    )
)
Attributes
- hive_json_ser_de
The native Hive / HCatalog JsonSerDe.
Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- open_x_json_ser_de
The OpenX SerDe.
Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
DestinationTableConfigurationProperty
- class CfnDeliveryStreamPropsMixin.DestinationTableConfigurationProperty(*, destination_database_name=None, destination_table_name=None, partition_spec=None, s3_error_output_prefix=None, unique_keys=None)
Bases:
object
Describes the configuration of a destination in Apache Iceberg Tables.
This section is only needed for tables where you want to update or delete data.
- Parameters:
destination_database_name (Optional[str]) – The name of the Apache Iceberg database.
destination_table_name (Optional[str]) – Specifies the name of the Apache Iceberg table.
partition_spec (Union[IResolvable, PartitionSpecProperty, Dict[str, Any], None]) – The partition spec configuration for a table that is used by automatic table creation. Amazon Data Firehose is in preview release and is subject to change.
s3_error_output_prefix (Optional[str]) – The table-specific S3 error output prefix. All the errors that occur while delivering to this table are prefixed with this value in the S3 destination.
unique_keys (Optional[Sequence[str]]) – A list of unique keys for a given Apache Iceberg table. Firehose uses these for running Create, Update, or Delete operations on the given Iceberg table.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

destination_table_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DestinationTableConfigurationProperty(
    destination_database_name="destinationDatabaseName",
    destination_table_name="destinationTableName",
    partition_spec=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.PartitionSpecProperty(
        identity=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.PartitionFieldProperty(
            source_name="sourceName"
        )]
    ),
    s3_error_output_prefix="s3ErrorOutputPrefix",
    unique_keys=["uniqueKeys"]
)
Attributes
- destination_database_name
The name of the Apache Iceberg database.
- destination_table_name
Specifies the name of the Apache Iceberg Table.
- partition_spec
The partition spec configuration for a table that is used by automatic table creation.
Amazon Data Firehose is in preview release and is subject to change.
- s3_error_output_prefix
The table-specific S3 error output prefix.
All the errors that occur while delivering to this table are prefixed with this value in the S3 destination.
- unique_keys
A list of unique keys for a given Apache Iceberg table.
Firehose will use these for running Create, Update, or Delete operations on the given Iceberg table.
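Since this section is only needed for tables where you update or delete data, a realistic fragment pairs unique keys with the table they identify rows in. The dict below is an illustrative sketch of the underlying CloudFormation shape; the database, table, and key names are placeholders:

```python
# Illustrative fragment: unique keys let Firehose match existing rows for
# Update/Delete operations on an Apache Iceberg table.
destination_table = {
    "DestinationDatabaseName": "analytics",
    "DestinationTableName": "orders",
    "UniqueKeys": ["order_id"],               # rows are matched on these columns
    "S3ErrorOutputPrefix": "errors/orders/",  # failed records land under this prefix
}
```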
DirectPutSourceConfigurationProperty
- class CfnDeliveryStreamPropsMixin.DirectPutSourceConfigurationProperty(*, throughput_hint_in_m_bs=None)
Bases:
object
The structure that configures parameters such as ThroughputHintInMBs for a stream configured with Direct PUT as a source.
- Parameters:
throughput_hint_in_m_bs (Union[int, float, None]) – The value that you configure for this parameter is for informational purposes only and does not affect the Firehose delivery throughput limit. You can use the Firehose Limits form to request a throughput limit increase.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

direct_put_source_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DirectPutSourceConfigurationProperty(
    throughput_hint_in_m_bs=123
)
Attributes
- throughput_hint_in_m_bs
The value that you configure for this parameter is for informational purposes only and does not affect the Firehose delivery throughput limit.
You can use the Firehose Limits form to request a throughput limit increase.
DocumentIdOptionsProperty
- class CfnDeliveryStreamPropsMixin.DocumentIdOptionsProperty(*, default_document_id_format=None)
Bases:
object
Indicates the method for setting up the document ID.
The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- Parameters:
default_document_id_format (Optional[str]) – When the FIREHOSE_DEFAULT option is chosen, Firehose generates a unique document ID for each record based on a unique internal identifier. The generated document ID is stable across multiple delivery attempts, which helps prevent the same record from being indexed multiple times with different document IDs. When the NO_DOCUMENT_ID option is chosen, Firehose does not include any document IDs in the requests it sends to Amazon OpenSearch Service. This causes the Amazon OpenSearch Service domain to generate document IDs. In case of multiple delivery attempts, this may cause the same record to be indexed more than once with different document IDs. This option enables write-heavy operations, such as the ingestion of logs and observability data, to consume fewer resources in the Amazon OpenSearch Service domain, resulting in improved performance.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

document_id_options_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DocumentIdOptionsProperty(
    default_document_id_format="defaultDocumentIdFormat"
)
Attributes
- default_document_id_format
When the FIREHOSE_DEFAULT option is chosen, Firehose generates a unique document ID for each record based on a unique internal identifier.
The generated document ID is stable across multiple delivery attempts, which helps prevent the same record from being indexed multiple times with different document IDs.
When the NO_DOCUMENT_ID option is chosen, Firehose does not include any document IDs in the requests it sends to Amazon OpenSearch Service. This causes the Amazon OpenSearch Service domain to generate document IDs. In case of multiple delivery attempts, this may cause the same record to be indexed more than once with different document IDs. This option enables write-heavy operations, such as the ingestion of logs and observability data, to consume fewer resources in the Amazon OpenSearch Service domain, resulting in improved performance.
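The trade-off between the two documented formats can be sketched as a small selector over the raw CloudFormation shape. The helper name and the write_heavy flag are illustrative assumptions, not part of the generated API:

```python
# Illustrative sketch: pick a DefaultDocumentIdFormat based on the documented trade-off.
def document_id_options(write_heavy):
    """NO_DOCUMENT_ID saves OpenSearch domain resources for write-heavy workloads
    (logs, observability data) but retries may index a record twice;
    FIREHOSE_DEFAULT keeps a stable ID across delivery attempts."""
    fmt = "NO_DOCUMENT_ID" if write_heavy else "FIREHOSE_DEFAULT"
    return {"DefaultDocumentIdFormat": fmt}

document_id_options(write_heavy=True)   # {"DefaultDocumentIdFormat": "NO_DOCUMENT_ID"}
document_id_options(write_heavy=False)  # {"DefaultDocumentIdFormat": "FIREHOSE_DEFAULT"}
```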
DynamicPartitioningConfigurationProperty
- class CfnDeliveryStreamPropsMixin.DynamicPartitioningConfigurationProperty(*, enabled=None, retry_options=None)
Bases:
object
The DynamicPartitioningConfiguration property type specifies the configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- Parameters:
enabled (Union[bool, IResolvable, None]) – Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
retry_options (Union[IResolvable, RetryOptionsProperty, Dict[str, Any], None]) – Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

dynamic_partitioning_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DynamicPartitioningConfigurationProperty(
    enabled=False,
    retry_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.RetryOptionsProperty(
        duration_in_seconds=123
    )
)
Attributes
- enabled
Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
- retry_options
Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
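A typical use of the two attributes above pairs enablement with a retry window for failed S3 prefix deliveries. The fragment below is an illustrative sketch of the underlying CloudFormation shape; the 300-second duration is an assumed placeholder value:

```python
# Illustrative fragment: dynamic partitioning enabled, with a retry window
# applied when delivery to an Amazon S3 prefix fails.
dynamic_partitioning = {
    "Enabled": True,
    "RetryOptions": {"DurationInSeconds": 300},  # keep retrying for 5 minutes
}
```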
ElasticsearchBufferingHintsProperty
- class CfnDeliveryStreamPropsMixin.ElasticsearchBufferingHintsProperty(*, interval_in_seconds=None, size_in_m_bs=None)
Bases:
object
The ElasticsearchBufferingHints property type specifies how Amazon Kinesis Data Firehose (Kinesis Data Firehose) buffers incoming data while delivering it to the destination. The first buffer condition that is satisfied triggers Kinesis Data Firehose to deliver the data.
ElasticsearchBufferingHints is the property type for the BufferingHints property of the Amazon Kinesis Data Firehose DeliveryStream ElasticsearchDestinationConfiguration property type.
- Parameters:
interval_in_seconds (Union[int, float, None]) – The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
size_in_m_bs (Union[int, float, None]) – The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

elasticsearch_buffering_hints_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ElasticsearchBufferingHintsProperty(
    interval_in_seconds=123,
    size_in_m_bs=123
)
Attributes
- interval_in_seconds
The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination.
For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- size_in_m_bs
The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination.
For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
ElasticsearchDestinationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.ElasticsearchDestinationConfigurationProperty(*, buffering_hints=None, cloud_watch_logging_options=None, cluster_endpoint=None, document_id_options=None, domain_arn=None, index_name=None, index_rotation_period=None, processing_configuration=None, retry_options=None, role_arn=None, s3_backup_mode=None, s3_configuration=None, type_name=None, vpc_configuration=None)
Bases:
object
The ElasticsearchDestinationConfiguration property type specifies an Amazon Elasticsearch Service (Amazon ES) domain that Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivers data to.
- Parameters:
buffering_hints (Union[IResolvable, ElasticsearchBufferingHintsProperty, Dict[str, Any], None]) – Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon ES domain.
cloud_watch_logging_options (Union[IResolvable, CloudWatchLoggingOptionsProperty, Dict[str, Any], None]) – The Amazon CloudWatch Logs logging options for the delivery stream.
cluster_endpoint (Optional[str]) – The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
document_id_options (Union[IResolvable, DocumentIdOptionsProperty, Dict[str, Any], None]) – Indicates the method for setting up the document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
domain_arn (Optional[str]) – The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN. Specify either ClusterEndpoint or DomainARN.
index_name (Optional[str]) – The name of the Elasticsearch index to which Kinesis Data Firehose adds data for indexing.
index_rotation_period (Optional[str]) – The frequency of Elasticsearch index rotation. If you enable index rotation, Kinesis Data Firehose appends a portion of the UTC arrival timestamp to the specified index name, and rotates the appended timestamp accordingly. For more information, see Index Rotation for the Amazon ES Destination in the Amazon Kinesis Data Firehose Developer Guide.
processing_configuration (Union[IResolvable, ProcessingConfigurationProperty, Dict[str, Any], None]) – The data processing configuration for the Kinesis Data Firehose delivery stream.
retry_options (Union[IResolvable, ElasticsearchRetryOptionsProperty, Dict[str, Any], None]) – The retry behavior when Kinesis Data Firehose is unable to deliver data to Amazon ES.
role_arn (Optional[str]) – The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Controlling Access with Amazon Kinesis Data Firehose.
s3_backup_mode (Optional[str]) – The condition under which Kinesis Data Firehose delivers data to Amazon Simple Storage Service (Amazon S3). You can send Amazon S3 all documents (all data) or only the documents that Kinesis Data Firehose could not deliver to the Amazon ES destination. For more information and valid values, see the S3BackupMode content for the ElasticsearchDestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
s3_configuration (Union[IResolvable, S3DestinationConfigurationProperty, Dict[str, Any], None]) – The S3 bucket where Kinesis Data Firehose backs up incoming data.
type_name (Optional[str]) – The Elasticsearch type name that Amazon ES adds to documents when indexing data.
vpc_configuration (Union[IResolvable, VpcConfigurationProperty, Dict[str, Any], None]) – The details of the VPC of the Amazon ES destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

elasticsearch_destination_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ElasticsearchDestinationConfigurationProperty(
    buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ElasticsearchBufferingHintsProperty(
        interval_in_seconds=123,
        size_in_m_bs=123
    ),
    cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
        enabled=False,
        log_group_name="logGroupName",
        log_stream_name="logStreamName"
    ),
    cluster_endpoint="clusterEndpoint",
    document_id_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DocumentIdOptionsProperty(
        default_document_id_format="defaultDocumentIdFormat"
    ),
    domain_arn="domainArn",
    index_name="indexName",
    index_rotation_period="indexRotationPeriod",
    processing_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessingConfigurationProperty(
        enabled=False,
        processors=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorProperty(
            parameters=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty(
                parameter_name="parameterName",
                parameter_value="parameterValue"
            )],
            type="type"
        )]
    ),
    retry_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ElasticsearchRetryOptionsProperty(
        duration_in_seconds=123
    ),
    role_arn="roleArn",
    s3_backup_mode="s3BackupMode",
    s3_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty(
        bucket_arn="bucketArn",
        buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
            interval_in_seconds=123,
            size_in_m_bs=123
        ),
        cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
            enabled=False,
            log_group_name="logGroupName",
            log_stream_name="logStreamName"
        ),
        compression_format="compressionFormat",
        encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
            kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
                awskms_key_arn="awskmsKeyArn"
            ),
            no_encryption_config="noEncryptionConfig"
        ),
        error_output_prefix="errorOutputPrefix",
        prefix="prefix",
        role_arn="roleArn"
    ),
    type_name="typeName",
    vpc_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.VpcConfigurationProperty(
        role_arn="roleArn",
        security_group_ids=["securityGroupIds"],
        subnet_ids=["subnetIds"]
    )
)
Attributes
- buffering_hints
Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon ES domain.
- cloud_watch_logging_options
The Amazon CloudWatch Logs logging options for the delivery stream.
- cluster_endpoint
The endpoint to use when communicating with the cluster.
Specify either this ClusterEndpoint or the DomainARN field.
- document_id_options
Indicates the method for setting up document ID.
The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domain_arn
The ARN of the Amazon ES domain.
The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN. Specify either ClusterEndpoint or DomainARN.
- index_name
The name of the Elasticsearch index to which Kinesis Data Firehose adds data for indexing.
- index_rotation_period
The frequency of Elasticsearch index rotation.
If you enable index rotation, Kinesis Data Firehose appends a portion of the UTC arrival timestamp to the specified index name, and rotates the appended timestamp accordingly. For more information, see Index Rotation for the Amazon ES Destination in the Amazon Kinesis Data Firehose Developer Guide .
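As a rough illustration of this behavior (the exact appended timestamp portion is defined by Firehose; the mapping below is a simplified sketch, and the index name and dates are made up), daily or hourly rotation can be modeled as:

```python
from datetime import datetime, timezone

def rotated_index_name(index_name: str, rotation_period: str, arrival: datetime) -> str:
    """Approximate the index name Firehose writes to for a given UTC
    arrival time and rotation period (illustrative sketch only; the
    strftime patterns here are assumptions, not the service contract)."""
    formats = {
        "NoRotation": None,
        "OneHour": "%Y-%m-%d-%H",
        "OneDay": "%Y-%m-%d",
        "OneMonth": "%Y-%m",
    }
    fmt = formats[rotation_period]
    if fmt is None:
        return index_name
    # Firehose uses the UTC arrival timestamp for the appended portion.
    ts = arrival.astimezone(timezone.utc)
    return f"{index_name}-{ts.strftime(fmt)}"

# rotated_index_name("weblogs", "OneDay", datetime(2024, 3, 9, 13, tzinfo=timezone.utc))
# -> "weblogs-2024-03-09"
```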
- processing_configuration
The data processing configuration for the Kinesis Data Firehose delivery stream.
- retry_options
The retry behavior when Kinesis Data Firehose is unable to deliver data to Amazon ES.
- role_arn
The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon ES Configuration API and for indexing documents.
For more information, see Controlling Access with Amazon Kinesis Data Firehose .
- s3_backup_mode
The condition under which Kinesis Data Firehose delivers data to Amazon Simple Storage Service (Amazon S3).
You can send Amazon S3 all documents (all data) or only the documents that Kinesis Data Firehose could not deliver to the Amazon ES destination. For more information and valid values, see the S3BackupMode content for the ElasticsearchDestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- s3_configuration
The S3 bucket where Kinesis Data Firehose backs up incoming data.
- type_name
The Elasticsearch type name that Amazon ES adds to documents when indexing data.
- vpc_configuration
The details of the VPC of the Amazon ES destination.
ElasticsearchRetryOptionsProperty
- class CfnDeliveryStreamPropsMixin.ElasticsearchRetryOptionsProperty(*, duration_in_seconds=None)
Bases:
object

The ElasticsearchRetryOptions property type configures the retry behavior for when Amazon Kinesis Data Firehose (Kinesis Data Firehose) can’t deliver data to Amazon Elasticsearch Service (Amazon ES).
- Parameters:
duration_in_seconds (Union[int, float, None]) – After an initial failure to deliver to Amazon ES, the total amount of time during which Kinesis Data Firehose re-attempts delivery (including the first attempt). If Kinesis Data Firehose can’t deliver the data within the specified time, it writes the data to the backup S3 bucket. For valid values, see the DurationInSeconds content for the ElasticsearchRetryOptions data type in the Amazon Kinesis Data Firehose API Reference.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

elasticsearch_retry_options_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ElasticsearchRetryOptionsProperty(
    duration_in_seconds=123
)
Attributes
- duration_in_seconds
After an initial failure to deliver to Amazon ES, the total amount of time during which Kinesis Data Firehose re-attempts delivery (including the first attempt).
If Kinesis Data Firehose can’t deliver the data within the specified time, it writes the data to the backup S3 bucket. For valid values, see the DurationInSeconds content for the ElasticsearchRetryOptions data type in the Amazon Kinesis Data Firehose API Reference.
EncryptionConfigurationProperty
- class CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(*, kms_encryption_config=None, no_encryption_config=None)
Bases:
object

The EncryptionConfiguration property type specifies the encryption settings that Amazon Kinesis Data Firehose (Kinesis Data Firehose) uses when delivering data to Amazon Simple Storage Service (Amazon S3).
- Parameters:
kms_encryption_config (Union[IResolvable, KMSEncryptionConfigProperty, Dict[str, Any], None]) – The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
no_encryption_config (Optional[str]) – Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

encryption_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
    kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
        awskms_key_arn="awskmsKeyArn"
    ),
    no_encryption_config="noEncryptionConfig"
)
Attributes
- kms_encryption_config
The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- no_encryption_config
Disables encryption.
For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
ExtendedS3DestinationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.ExtendedS3DestinationConfigurationProperty(*, bucket_arn=None, buffering_hints=None, cloud_watch_logging_options=None, compression_format=None, custom_time_zone=None, data_format_conversion_configuration=None, dynamic_partitioning_configuration=None, encryption_configuration=None, error_output_prefix=None, file_extension=None, prefix=None, processing_configuration=None, role_arn=None, s3_backup_configuration=None, s3_backup_mode=None)
Bases:
object

The ExtendedS3DestinationConfiguration property type configures an Amazon S3 destination for an Amazon Kinesis Data Firehose delivery stream.
- Parameters:
bucket_arn (Optional[str]) – The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
buffering_hints (Union[IResolvable, BufferingHintsProperty, Dict[str, Any], None]) – The buffering option.
cloud_watch_logging_options (Union[IResolvable, CloudWatchLoggingOptionsProperty, Dict[str, Any], None]) – The Amazon CloudWatch logging options for your Firehose stream.
compression_format (Optional[str]) – The compression format. If no value is specified, the default is UNCOMPRESSED.
custom_time_zone (Optional[str]) – The time zone you prefer. UTC is the default.
data_format_conversion_configuration (Union[IResolvable, DataFormatConversionConfigurationProperty, Dict[str, Any], None]) – The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
dynamic_partitioning_configuration (Union[IResolvable, DynamicPartitioningConfigurationProperty, Dict[str, Any], None]) – The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
encryption_configuration (Union[IResolvable, EncryptionConfigurationProperty, Dict[str, Any], None]) – The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
error_output_prefix (Optional[str]) – A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
file_extension (Optional[str]) – Specify a file extension. It will override the default file extension.
prefix (Optional[str]) – The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
processing_configuration (Union[IResolvable, ProcessingConfigurationProperty, Dict[str, Any], None]) – The data processing configuration for the Kinesis Data Firehose delivery stream.
role_arn (Optional[str]) – The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
s3_backup_configuration (Union[IResolvable, S3DestinationConfigurationProperty, Dict[str, Any], None]) – The configuration for backup in Amazon S3.
s3_backup_mode (Optional[str]) – The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can’t update the Firehose stream to disable it.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

extended_s3_destination_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ExtendedS3DestinationConfigurationProperty(
    bucket_arn="bucketArn",
    buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
        interval_in_seconds=123,
        size_in_mBs=123
    ),
    cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
        enabled=False,
        log_group_name="logGroupName",
        log_stream_name="logStreamName"
    ),
    compression_format="compressionFormat",
    custom_time_zone="customTimeZone",
    data_format_conversion_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DataFormatConversionConfigurationProperty(
        enabled=False,
        input_format_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.InputFormatConfigurationProperty(
            deserializer=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DeserializerProperty(
                hive_json_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HiveJsonSerDeProperty(
                    timestamp_formats=["timestampFormats"]
                ),
                open_xJson_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OpenXJsonSerDeProperty(
                    case_insensitive=False,
                    column_to_json_key_mappings={
                        "column_to_json_key_mappings_key": "columnToJsonKeyMappings"
                    },
                    convert_dots_in_json_keys_to_underscores=False
                )
            )
        ),
        output_format_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OutputFormatConfigurationProperty(
            serializer=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SerializerProperty(
                orc_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OrcSerDeProperty(
                    block_size_bytes=123,
                    bloom_filter_columns=["bloomFilterColumns"],
                    bloom_filter_false_positive_probability=123,
                    compression="compression",
                    dictionary_key_threshold=123,
                    enable_padding=False,
                    format_version="formatVersion",
                    padding_tolerance=123,
                    row_index_stride=123,
                    stripe_size_bytes=123
                ),
                parquet_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ParquetSerDeProperty(
                    block_size_bytes=123,
                    compression="compression",
                    enable_dictionary_compression=False,
                    max_padding_bytes=123,
                    page_size_bytes=123,
                    writer_version="writerVersion"
                )
            )
        ),
        schema_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SchemaConfigurationProperty(
            catalog_id="catalogId",
            database_name="databaseName",
            region="region",
            role_arn="roleArn",
            table_name="tableName",
            version_id="versionId"
        )
    ),
    dynamic_partitioning_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DynamicPartitioningConfigurationProperty(
        enabled=False,
        retry_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.RetryOptionsProperty(
            duration_in_seconds=123
        )
    ),
    encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
        kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
            awskms_key_arn="awskmsKeyArn"
        ),
        no_encryption_config="noEncryptionConfig"
    ),
    error_output_prefix="errorOutputPrefix",
    file_extension="fileExtension",
    prefix="prefix",
    processing_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessingConfigurationProperty(
        enabled=False,
        processors=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorProperty(
            parameters=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty(
                parameter_name="parameterName",
                parameter_value="parameterValue"
            )],
            type="type"
        )]
    ),
    role_arn="roleArn",
    s3_backup_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty(
        bucket_arn="bucketArn",
        buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
            interval_in_seconds=123,
            size_in_mBs=123
        ),
        cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
            enabled=False,
            log_group_name="logGroupName",
            log_stream_name="logStreamName"
        ),
        compression_format="compressionFormat",
        encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
            kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
                awskms_key_arn="awskmsKeyArn"
            ),
            no_encryption_config="noEncryptionConfig"
        ),
        error_output_prefix="errorOutputPrefix",
        prefix="prefix",
        role_arn="roleArn"
    ),
    s3_backup_mode="s3BackupMode"
)
Attributes
- bucket_arn
The Amazon Resource Name (ARN) of the Amazon S3 bucket.
For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
- buffering_hints
The buffering option.
- cloud_watch_logging_options
The Amazon CloudWatch logging options for your Firehose stream.
- compression_format
The compression format.
If no value is specified, the default is UNCOMPRESSED.
- custom_time_zone
The time zone you prefer.
UTC is the default.
- data_format_conversion_configuration
The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- dynamic_partitioning_configuration
The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- encryption_configuration
The encryption configuration for the Kinesis Data Firehose delivery stream.
The default value is NoEncryption.
- error_output_prefix
A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3.
This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
- file_extension
Specify a file extension. It will override the default file extension.
- prefix
The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
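A small sketch of how that default time prefix is derived from the UTC arrival time (illustrative only; real object keys also include a delivery-stream-specific suffix, and any `prefix` you set is prepended by Firehose):

```python
from datetime import datetime, timezone

def default_s3_prefix(arrival: datetime) -> str:
    """Build the YYYY/MM/DD/HH key prefix that Firehose uses by default
    for delivered S3 objects (sketch of the documented time format)."""
    return arrival.astimezone(timezone.utc).strftime("%Y/%m/%d/%H/")

# default_s3_prefix(datetime(2024, 3, 9, 5, tzinfo=timezone.utc)) -> "2024/03/09/05/"
```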
- processing_configuration
The data processing configuration for the Kinesis Data Firehose delivery stream.
- role_arn
The Amazon Resource Name (ARN) of the AWS credentials.
For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference .
- s3_backup_configuration
The configuration for backup in Amazon S3.
- s3_backup_mode
The Amazon S3 backup mode.
After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can’t update the Firehose stream to disable it.
HiveJsonSerDeProperty
- class CfnDeliveryStreamPropsMixin.HiveJsonSerDeProperty(*, timestamp_formats=None)
Bases:
object

The native Hive / HCatalog JsonSerDe.
Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- Parameters:
timestamp_formats (Optional[Sequence[str]]) – Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime’s DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don’t specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

hive_json_ser_de_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HiveJsonSerDeProperty(
    timestamp_formats=["timestampFormats"]
)
Attributes
- timestamp_formats
Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON.
To specify these format strings, follow the pattern syntax of JodaTime’s DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don’t specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
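For intuition, the special millis value tells Firehose to interpret the field as epoch milliseconds; a Python analogue of that interpretation (illustrative only, not part of this API) would be:

```python
from datetime import datetime, timezone

def parse_epoch_millis(value: str) -> datetime:
    """Python analogue of the 'millis' timestamp format: interpret the
    JSON field as milliseconds since the Unix epoch, in UTC."""
    return datetime.fromtimestamp(int(value) / 1000, tz=timezone.utc)

# parse_epoch_millis("1700000000000") -> 2023-11-14 22:13:20 UTC
```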
HttpEndpointCommonAttributeProperty
- class CfnDeliveryStreamPropsMixin.HttpEndpointCommonAttributeProperty(*, attribute_name=None, attribute_value=None)
Bases:
object

Describes the metadata that’s delivered to the specified HTTP endpoint destination.
Kinesis Firehose supports any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers, including Datadog, MongoDB, and New Relic.
- Parameters:
attribute_name (Optional[str]) – The name of the HTTP endpoint common attribute.
attribute_value (Optional[str]) – The value of the HTTP endpoint common attribute.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

http_endpoint_common_attribute_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HttpEndpointCommonAttributeProperty(
    attribute_name="attributeName",
    attribute_value="attributeValue"
)
Attributes
- attribute_name
The name of the HTTP endpoint common attribute.
- attribute_value
The value of the HTTP endpoint common attribute.
HttpEndpointConfigurationProperty
- class CfnDeliveryStreamPropsMixin.HttpEndpointConfigurationProperty(*, access_key=None, name=None, url=None)
Bases:
object

Describes the configuration of the HTTP endpoint to which Kinesis Firehose delivers data.
Kinesis Firehose supports any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers, including Datadog, MongoDB, and New Relic.
- Parameters:
access_key (Optional[str]) – The access key required for Kinesis Firehose to authenticate with the HTTP endpoint selected as the destination.
name (Optional[str]) – The name of the HTTP endpoint selected as the destination.
url (Optional[str]) – The URL of the HTTP endpoint selected as the destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

http_endpoint_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HttpEndpointConfigurationProperty(
    access_key="accessKey",
    name="name",
    url="url"
)
Attributes
- access_key
The access key required for Kinesis Firehose to authenticate with the HTTP endpoint selected as the destination.
- name
The name of the HTTP endpoint selected as the destination.
- url
The URL of the HTTP endpoint selected as the destination.
HttpEndpointDestinationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.HttpEndpointDestinationConfigurationProperty(*, buffering_hints=None, cloud_watch_logging_options=None, endpoint_configuration=None, processing_configuration=None, request_configuration=None, retry_options=None, role_arn=None, s3_backup_mode=None, s3_configuration=None, secrets_manager_configuration=None)
Bases:
object

Describes the configuration of the HTTP endpoint destination.
Kinesis Firehose supports any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers, including Datadog, MongoDB, and New Relic.
- Parameters:
buffering_hints (Union[IResolvable, BufferingHintsProperty, Dict[str, Any], None]) – The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
cloud_watch_logging_options (Union[IResolvable, CloudWatchLoggingOptionsProperty, Dict[str, Any], None]) – Describes the Amazon CloudWatch logging options for your delivery stream.
endpoint_configuration (Union[IResolvable, HttpEndpointConfigurationProperty, Dict[str, Any], None]) – The configuration of the HTTP endpoint selected as the destination.
processing_configuration (Union[IResolvable, ProcessingConfigurationProperty, Dict[str, Any], None]) – Describes the data processing configuration.
request_configuration (Union[IResolvable, HttpEndpointRequestConfigurationProperty, Dict[str, Any], None]) – The configuration of the request sent to the HTTP endpoint specified as the destination.
retry_options (Union[IResolvable, RetryOptionsProperty, Dict[str, Any], None]) – Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn’t receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
role_arn (Optional[str]) – Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
s3_backup_mode (Optional[str]) – Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
s3_configuration (Union[IResolvable, S3DestinationConfigurationProperty, Dict[str, Any], None]) – Describes the configuration of a destination in Amazon S3.
secrets_manager_configuration (Union[IResolvable, SecretsManagerConfigurationProperty, Dict[str, Any], None]) – The configuration that defines how you access secrets for HTTP Endpoint destination.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

http_endpoint_destination_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HttpEndpointDestinationConfigurationProperty(
    buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
        interval_in_seconds=123,
        size_in_mBs=123
    ),
    cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
        enabled=False,
        log_group_name="logGroupName",
        log_stream_name="logStreamName"
    ),
    endpoint_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HttpEndpointConfigurationProperty(
        access_key="accessKey",
        name="name",
        url="url"
    ),
    processing_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessingConfigurationProperty(
        enabled=False,
        processors=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorProperty(
            parameters=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty(
                parameter_name="parameterName",
                parameter_value="parameterValue"
            )],
            type="type"
        )]
    ),
    request_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HttpEndpointRequestConfigurationProperty(
        common_attributes=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HttpEndpointCommonAttributeProperty(
            attribute_name="attributeName",
            attribute_value="attributeValue"
        )],
        content_encoding="contentEncoding"
    ),
    retry_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.RetryOptionsProperty(
        duration_in_seconds=123
    ),
    role_arn="roleArn",
    s3_backup_mode="s3BackupMode",
    s3_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty(
        bucket_arn="bucketArn",
        buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
            interval_in_seconds=123,
            size_in_mBs=123
        ),
        cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
            enabled=False,
            log_group_name="logGroupName",
            log_stream_name="logStreamName"
        ),
        compression_format="compressionFormat",
        encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
            kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
                awskms_key_arn="awskmsKeyArn"
            ),
            no_encryption_config="noEncryptionConfig"
        ),
        error_output_prefix="errorOutputPrefix",
        prefix="prefix",
        role_arn="roleArn"
    ),
    secrets_manager_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SecretsManagerConfigurationProperty(
        enabled=False,
        role_arn="roleArn",
        secret_arn="secretArn"
    )
)
Attributes
- buffering_hints
The buffering options that can be used before data is delivered to the specified destination.
Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
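That both-or-neither constraint on SizeInMBs and IntervalInSeconds can be sketched as a simple pre-deployment check (a hypothetical helper for illustration, not part of this module):

```python
def validate_buffering_hints(size_in_mbs=None, interval_in_seconds=None):
    """Illustrative check of the documented constraint: if you specify a
    value for one of SizeInMBs / IntervalInSeconds, you must also
    provide a value for the other."""
    if (size_in_mbs is None) != (interval_in_seconds is None):
        raise ValueError("SizeInMBs and IntervalInSeconds must be set together")
    return {"sizeInMBs": size_in_mbs, "intervalInSeconds": interval_in_seconds}
```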
- cloud_watch_logging_options
Describes the Amazon CloudWatch logging options for your delivery stream.
- endpoint_configuration
The configuration of the HTTP endpoint selected as the destination.
- processing_configuration
Describes the data processing configuration.
- request_configuration
The configuration of the request sent to the HTTP endpoint specified as the destination.
- retry_options
Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn’t receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- role_arn
Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- s3_backup_mode
Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination.
You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- s3_configuration
Describes the configuration of a destination in Amazon S3.
- secrets_manager_configuration
The configuration that defines how you access secrets for HTTP Endpoint destination.
HttpEndpointRequestConfigurationProperty
- class CfnDeliveryStreamPropsMixin.HttpEndpointRequestConfigurationProperty(*, common_attributes=None, content_encoding=None)
Bases:
object

The configuration of the HTTP endpoint request.
Kinesis Firehose supports any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers, including Datadog, MongoDB, and New Relic.
- Parameters:
common_attributes (Union[IResolvable, Sequence[Union[IResolvable, HttpEndpointCommonAttributeProperty, Dict[str, Any]]], None]) – Describes the metadata sent to the HTTP endpoint destination.
content_encoding (Optional[str]) – Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

http_endpoint_request_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HttpEndpointRequestConfigurationProperty(
    common_attributes=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HttpEndpointCommonAttributeProperty(
        attribute_name="attributeName",
        attribute_value="attributeValue"
    )],
    content_encoding="contentEncoding"
)
Attributes
- common_attributes
Describes the metadata sent to the HTTP endpoint destination.
- content_encoding
Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination.
For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
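As an illustration of what a GZIP content encoding implies for the request body (a sketch using Python's standard gzip module, not Firehose's actual implementation; the value names mirror the API's NONE/GZIP choices):

```python
import gzip

def encode_body(payload: bytes, content_encoding: str = "NONE") -> bytes:
    """Sketch of how a ContentEncoding of GZIP compresses the HTTP
    request body before delivery; NONE leaves the body unchanged."""
    if content_encoding == "GZIP":
        return gzip.compress(payload)
    return payload
```

The receiving endpoint is expected to honor the matching Content-Encoding request header and decompress accordingly.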
IcebergDestinationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.IcebergDestinationConfigurationProperty(*, append_only=None, buffering_hints=None, catalog_configuration=None, cloud_watch_logging_options=None, destination_table_configuration_list=None, processing_configuration=None, retry_options=None, role_arn=None, s3_backup_mode=None, s3_configuration=None, schema_evolution_configuration=None, table_creation_configuration=None)
Bases:
object

Specifies the destination configuration settings for Apache Iceberg Tables.
- Parameters:
append_only (
Union[bool,IResolvable,None]) – Describes whether all incoming data for this delivery stream will be append only (inserts only and not for updates and deletes) for Iceberg delivery. This feature is only applicable for Apache Iceberg Tables. The default value is false. If you set this value to true, Firehose automatically increases the throughput limit of a stream based on the throttling levels of the stream. If you set this parameter to true for a stream with updates and deletes, you will see out of order delivery.buffering_hints (
Union[IResolvable,BufferingHintsProperty,Dict[str,Any],None])catalog_configuration (
Union[IResolvable,CatalogConfigurationProperty,Dict[str,Any],None]) – Configuration describing where the destination Apache Iceberg Tables are persisted.cloud_watch_logging_options (
Union[IResolvable,CloudWatchLoggingOptionsProperty,Dict[str,Any],None])destination_table_configuration_list (
Union[IResolvable,Sequence[Union[IResolvable,DestinationTableConfigurationProperty,Dict[str,Any]]],None]) – Provides a list ofDestinationTableConfigurationswhich Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if table specific configuration is not provided here.processing_configuration (
Union[IResolvable,ProcessingConfigurationProperty,Dict[str,Any],None])retry_options (
Union[IResolvable,RetryOptionsProperty,Dict[str,Any],None])role_arn (
Optional[str]) – The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables.s3_backup_mode (
Optional[str]) – Describes how Firehose will back up records. Currently, S3 backup only supports
Union[IResolvable,S3DestinationConfigurationProperty,Dict[str,Any],None])schema_evolution_configuration (
Union[IResolvable,SchemaEvolutionConfigurationProperty,Dict[str,Any],None]) – The configuration to enable automatic schema evolution. Amazon Data Firehose is in preview release and is subject to change.table_creation_configuration (
Union[IResolvable,TableCreationConfigurationProperty,Dict[str,Any],None]) – The configuration to enable automatic table creation. Amazon Data Firehose is in preview release and is subject to change.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins iceberg_destination_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.IcebergDestinationConfigurationProperty( append_only=False, buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty( interval_in_seconds=123, size_in_mBs=123 ), catalog_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CatalogConfigurationProperty( catalog_arn="catalogArn", warehouse_location="warehouseLocation" ), cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty( enabled=False, log_group_name="logGroupName", log_stream_name="logStreamName" ), destination_table_configuration_list=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DestinationTableConfigurationProperty( destination_database_name="destinationDatabaseName", destination_table_name="destinationTableName", partition_spec=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.PartitionSpecProperty( identity=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.PartitionFieldProperty( source_name="sourceName" )] ), s3_error_output_prefix="s3ErrorOutputPrefix", unique_keys=["uniqueKeys"] )], processing_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessingConfigurationProperty( enabled=False, processors=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorProperty( parameters=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty( parameter_name="parameterName", parameter_value="parameterValue" )], type="type" )] ), retry_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.RetryOptionsProperty( duration_in_seconds=123 ), role_arn="roleArn", s3_backup_mode="s3BackupMode", 
s3_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty( bucket_arn="bucketArn", buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty( interval_in_seconds=123, size_in_mBs=123 ), cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty( enabled=False, log_group_name="logGroupName", log_stream_name="logStreamName" ), compression_format="compressionFormat", encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty( kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty( awskms_key_arn="awskmsKeyArn" ), no_encryption_config="noEncryptionConfig" ), error_output_prefix="errorOutputPrefix", prefix="prefix", role_arn="roleArn" ), schema_evolution_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SchemaEvolutionConfigurationProperty( enabled=False ), table_creation_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.TableCreationConfigurationProperty( enabled=False ) )
Attributes
- append_only
Describes whether all incoming data for this delivery stream will be append-only (inserts only, no updates or deletes) for Iceberg delivery.
This feature is only applicable for Apache Iceberg Tables.
The default value is false. If you set this value to true, Firehose automatically increases the throughput limit of a stream based on the throttling levels of the stream. If you set this parameter to true for a stream with updates and deletes, you will see out-of-order delivery.
- buffering_hints
-
- Type:
see
- catalog_configuration
Configuration describing where the destination Apache Iceberg Tables are persisted.
- cloud_watch_logging_options
-
- Type:
see
- destination_table_configuration_list
Provides a list of
DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables.
Firehose writes data with inserts if a table-specific configuration is not provided here.
- processing_configuration
-
- Type:
see
- retry_options
-
- Type:
see
- role_arn
The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables.
- s3_backup_mode
Describes how Firehose will back up records.
Currently, S3 backup only supports
FailedDataOnly.
- s3_configuration
-
- Type:
see
- schema_evolution_configuration
The configuration to enable automatic schema evolution.
Amazon Data Firehose is in preview release and is subject to change.
- table_creation_configuration
The configuration to enable automatic table creation.
Amazon Data Firehose is in preview release and is subject to change.
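Tying the parameters above together, a minimal keyword-argument sketch for this property type might look like the following. The ARNs are placeholders, not real resources, and the exact set of fields your destination needs may differ; this is illustrative only.

```python
# Illustrative kwargs for IcebergDestinationConfigurationProperty.
# Both ARNs below are placeholders.
iceberg_kwargs = dict(
    role_arn="arn:aws:iam::111122223333:role/firehose-iceberg",      # placeholder
    catalog_configuration=dict(
        catalog_arn="arn:aws:glue:us-east-1:111122223333:catalog",   # placeholder
    ),
    s3_backup_mode="FailedDataOnly",  # currently the only supported backup mode
    append_only=True,                 # inserts only; no updates or deletes
)
```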
InputFormatConfigurationProperty
- class CfnDeliveryStreamPropsMixin.InputFormatConfigurationProperty(*, deserializer=None)
Bases:
object
Specifies the deserializer you want to use to convert the format of the input data.
This parameter is required if
Enabled is set to true.
- Parameters:
deserializer (
Union[IResolvable,DeserializerProperty,Dict[str,Any],None]) – Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins input_format_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.InputFormatConfigurationProperty( deserializer=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.DeserializerProperty( hive_json_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.HiveJsonSerDeProperty( timestamp_formats=["timestampFormats"] ), open_xJson_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OpenXJsonSerDeProperty( case_insensitive=False, column_to_json_key_mappings={ "column_to_json_key_mappings_key": "columnToJsonKeyMappings" }, convert_dots_in_json_keys_to_underscores=False ) ) )
Attributes
- deserializer
Specifies which deserializer to use.
You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
KMSEncryptionConfigProperty
- class CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(*, awskms_key_arn=None)
Bases:
object
The KMSEncryptionConfig property type specifies the AWS Key Management Service (AWS KMS) encryption key that Amazon Simple Storage Service (Amazon S3) uses to encrypt data delivered by the Amazon Kinesis Data Firehose (Kinesis Data Firehose) stream.
- Parameters:
awskms_key_arn (
Optional[str]) – The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins k_mSEncryption_config_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty( awskms_key_arn="awskmsKeyArn" )
Attributes
- awskms_key_arn
The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream.
The key must belong to the same region as the destination S3 bucket.
KinesisStreamSourceConfigurationProperty
- class CfnDeliveryStreamPropsMixin.KinesisStreamSourceConfigurationProperty(*, kinesis_stream_arn=None, role_arn=None)
Bases:
object
The KinesisStreamSourceConfiguration property type specifies the stream and role Amazon Resource Names (ARNs) for a Kinesis stream used as the source for a delivery stream.
- Parameters:
kinesis_stream_arn (
Optional[str]) – The ARN of the source Kinesis data stream.role_arn (
Optional[str]) – The ARN of the role that provides access to the source Kinesis data stream.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins kinesis_stream_source_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KinesisStreamSourceConfigurationProperty( kinesis_stream_arn="kinesisStreamArn", role_arn="roleArn" )
Attributes
- kinesis_stream_arn
The ARN of the source Kinesis data stream.
- role_arn
The ARN of the role that provides access to the source Kinesis data stream.
MSKSourceConfigurationProperty
- class CfnDeliveryStreamPropsMixin.MSKSourceConfigurationProperty(*, authentication_configuration=None, msk_cluster_arn=None, read_from_timestamp=None, topic_name=None)
Bases:
object
The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.
- Parameters:
authentication_configuration (
Union[IResolvable,AuthenticationConfigurationProperty,Dict[str,Any],None]) – The authentication configuration of the Amazon MSK cluster.msk_cluster_arn (
Optional[str]) – The ARN of the Amazon MSK cluster.read_from_timestamp (
Optional[str]) – The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to timestamp when Firehose becomes Active. If you want to create a Firehose stream with Earliest start position from SDK or CLI, you need to set theReadFromTimestampparameter to Epoch (1970-01-01T00:00:00Z).topic_name (
Optional[str]) – The topic name within the Amazon MSK cluster.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins m_sKSource_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.MSKSourceConfigurationProperty( authentication_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.AuthenticationConfigurationProperty( connectivity="connectivity", role_arn="roleArn" ), msk_cluster_arn="mskClusterArn", read_from_timestamp="readFromTimestamp", topic_name="topicName" )
Attributes
- authentication_configuration
The authentication configuration of the Amazon MSK cluster.
- msk_cluster_arn
The ARN of the Amazon MSK cluster.
- read_from_timestamp
The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read.
By default, this is set to the timestamp at which Firehose becomes Active.
If you want to create a Firehose stream with the Earliest start position from the SDK or CLI, set the
ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).
- topic_name
The topic name within the Amazon MSK cluster.
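As described above, requesting the Earliest start position means setting ReadFromTimestamp to the Unix epoch in UTC, formatted as an ISO-8601 string with a Z suffix. A small sketch of producing that string (the helper name is ours, not part of the mixin API):

```python
from datetime import datetime, timezone

def read_from_timestamp(dt: datetime) -> str:
    """Format an aware datetime as the UTC string ReadFromTimestamp expects."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# Epoch requests the Earliest start position within the MSK topic.
earliest = read_from_timestamp(datetime(1970, 1, 1, tzinfo=timezone.utc))
# earliest is "1970-01-01T00:00:00Z"
```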
OpenXJsonSerDeProperty
- class CfnDeliveryStreamPropsMixin.OpenXJsonSerDeProperty(*, case_insensitive=None, column_to_json_key_mappings=None, convert_dots_in_json_keys_to_underscores=None)
Bases:
object
The OpenX SerDe.
Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- Parameters:
case_insensitive (
Union[bool,IResolvable,None]) – When set totrue, which is the default, Firehose converts JSON keys to lowercase before deserializing them.column_to_json_key_mappings (
Union[Mapping[str,str],IResolvable,None]) – Maps column names to JSON keys that aren’t identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example,timestampis a Hive keyword. If you have a JSON key namedtimestamp, set this parameter to{"ts": "timestamp"}to map this key to a column namedts.convert_dots_in_json_keys_to_underscores (
Union[bool,IResolvable,None]) – When set totrue, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is “a.b”, you can define the column name to be “a_b” when using this option. The default isfalse.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins open_xJson_ser_de_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OpenXJsonSerDeProperty( case_insensitive=False, column_to_json_key_mappings={ "column_to_json_key_mappings_key": "columnToJsonKeyMappings" }, convert_dots_in_json_keys_to_underscores=False )
Attributes
- case_insensitive
When set to
true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
- column_to_json_key_mappings
Maps column names to JSON keys that aren’t identical to the column names.
This is useful when the JSON contains keys that are Hive keywords. For example,
timestampis a Hive keyword. If you have a JSON key namedtimestamp, set this parameter to{"ts": "timestamp"}to map this key to a column namedts.
- convert_dots_in_json_keys_to_underscores
When set to
true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores.
This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is “a.b”, you can define the column name to be “a_b” when using this option.
The default is
false.
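The effect of the three options can be modeled locally. The sketch below is a rough approximation of how the OpenX JSON SerDe options reshape keys, not the actual deserializer; note that column_to_json_key_mappings maps a column name to the JSON key it reads from.

```python
def normalize_keys(record, case_insensitive=True,
                   dots_to_underscores=False, mappings=None):
    # Rough local model of the OpenX JSON SerDe key options (illustrative only).
    out = {}
    for key, value in record.items():
        if case_insensitive:        # default: lowercase keys before deserializing
            key = key.lower()
        if dots_to_underscores:     # "a.b" becomes "a_b"
            key = key.replace(".", "_")
        out[key] = value
    # {"ts": "timestamp"} reads the JSON key "timestamp" into column "ts".
    for column, json_key in (mappings or {}).items():
        if json_key in out:
            out[column] = out.pop(json_key)
    return out

row = normalize_keys({"Timestamp": 1, "a.b": 2},
                     dots_to_underscores=True,
                     mappings={"ts": "timestamp"})
# row == {"a_b": 2, "ts": 1}
```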
OrcSerDeProperty
- class CfnDeliveryStreamPropsMixin.OrcSerDeProperty(*, block_size_bytes=None, bloom_filter_columns=None, bloom_filter_false_positive_probability=None, compression=None, dictionary_key_threshold=None, enable_padding=None, format_version=None, padding_tolerance=None, row_index_stride=None, stripe_size_bytes=None)
Bases:
object
A serializer to use for converting data to the ORC format before storing it in Amazon S3.
For more information, see Apache ORC.
- Parameters:
block_size_bytes (
Union[int,float,None]) – The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.bloom_filter_columns (
Optional[Sequence[str]]) – The column names for which you want Firehose to create bloom filters. The default isnull.bloom_filter_false_positive_probability (
Union[int,float,None]) – The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.compression (
Optional[str]) – The compression code to use over data blocks. The default isSNAPPY.dictionary_key_threshold (
Union[int,float,None]) – Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.enable_padding (
Union[bool,IResolvable,None]) – Set this totrueto indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default isfalse.format_version (
Optional[str]) – The version of the file to write. The possible values areV0_11andV0_12. The default isV0_12.padding_tolerance (
Union[int,float,None]) – A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter whenEnablePaddingisfalse.row_index_stride (
Union[int,float,None]) – The number of rows between index entries. The default is 10,000 and the minimum is 1,000.stripe_size_bytes (
Union[int,float,None]) – The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins orc_ser_de_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OrcSerDeProperty( block_size_bytes=123, bloom_filter_columns=["bloomFilterColumns"], bloom_filter_false_positive_probability=123, compression="compression", dictionary_key_threshold=123, enable_padding=False, format_version="formatVersion", padding_tolerance=123, row_index_stride=123, stripe_size_bytes=123 )
Attributes
- block_size_bytes
The Hadoop Distributed File System (HDFS) block size.
This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- bloom_filter_columns
The column names for which you want Firehose to create bloom filters.
The default is
null.
- bloom_filter_false_positive_probability
The Bloom filter false positive probability (FPP).
The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- compression
The compression codec to use over data blocks.
The default is
SNAPPY.
- dictionary_key_threshold
Represents the fraction of the total number of non-null rows.
To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- enable_padding
Set this to
true to indicate that you want stripes to be padded to the HDFS block boundaries.
This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is
false.
- format_version
The version of the file to write.
The possible values are
V0_11 and V0_12. The default is V0_12.
- padding_tolerance
A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size.
The default value is 0.05, which means 5 percent of stripe size.
For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.
Kinesis Data Firehose ignores this parameter when
EnablePadding is false.
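The 3.2 MiB figure quoted above follows directly from the defaults:

```python
MIB = 1024 * 1024
stripe_size_bytes = 64 * MIB   # default ORC stripe size
padding_tolerance = 0.05       # default: 5 percent of stripe size

# Maximum bytes reserved for padding within a 256 MiB HDFS block.
max_padding = padding_tolerance * stripe_size_bytes
# max_padding == 3.2 MiB
```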
- row_index_stride
The number of rows between index entries.
The default is 10,000 and the minimum is 1,000.
- stripe_size_bytes
The number of bytes in each stripe.
The default is 64 MiB and the minimum is 8 MiB.
OutputFormatConfigurationProperty
- class CfnDeliveryStreamPropsMixin.OutputFormatConfigurationProperty(*, serializer=None)
Bases:
object
Specifies the serializer that you want Firehose to use to convert the format of your data before it writes it to Amazon S3.
This parameter is required if
Enabled is set to true.
- Parameters:
serializer (
Union[IResolvable,SerializerProperty,Dict[str,Any],None]) – Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins output_format_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OutputFormatConfigurationProperty( serializer=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SerializerProperty( orc_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OrcSerDeProperty( block_size_bytes=123, bloom_filter_columns=["bloomFilterColumns"], bloom_filter_false_positive_probability=123, compression="compression", dictionary_key_threshold=123, enable_padding=False, format_version="formatVersion", padding_tolerance=123, row_index_stride=123, stripe_size_bytes=123 ), parquet_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ParquetSerDeProperty( block_size_bytes=123, compression="compression", enable_dictionary_compression=False, max_padding_bytes=123, page_size_bytes=123, writer_version="writerVersion" ) ) )
Attributes
- serializer
Specifies which serializer to use.
You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
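The either-or rule above can be enforced client-side before a deployment ever reaches the service. A hypothetical helper (not part of the mixin API) mirroring that server-side check:

```python
def pick_serializer(orc_ser_de=None, parquet_ser_de=None):
    # Mirrors the server-side rule: if both are non-null, the request is rejected.
    if orc_ser_de is not None and parquet_ser_de is not None:
        raise ValueError("Specify either the ORC SerDe or the Parquet SerDe, not both")
    return orc_ser_de if orc_ser_de is not None else parquet_ser_de

chosen = pick_serializer(parquet_ser_de={"compression": "SNAPPY"})
# chosen == {"compression": "SNAPPY"}
```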
ParquetSerDeProperty
- class CfnDeliveryStreamPropsMixin.ParquetSerDeProperty(*, block_size_bytes=None, compression=None, enable_dictionary_compression=None, max_padding_bytes=None, page_size_bytes=None, writer_version=None)
Bases:
object
A serializer to use for converting data to the Parquet format before storing it in Amazon S3.
For more information, see Apache Parquet.
- Parameters:
block_size_bytes (
Union[int,float,None]) – The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.compression (
Optional[str]) – The compression code to use over data blocks. The possible values areUNCOMPRESSED,SNAPPY, andGZIP, with the default beingSNAPPY. UseSNAPPYfor higher decompression speed. UseGZIPif the compression ratio is more important than speed.enable_dictionary_compression (
Union[bool,IResolvable,None]) – Indicates whether to enable dictionary compression.max_padding_bytes (
Union[int,float,None]) – The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.page_size_bytes (
Union[int,float,None]) – The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.writer_version (
Optional[str]) – Indicates the version of row format to output. The possible values areV1andV2. The default isV1.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins parquet_ser_de_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ParquetSerDeProperty( block_size_bytes=123, compression="compression", enable_dictionary_compression=False, max_padding_bytes=123, page_size_bytes=123, writer_version="writerVersion" )
Attributes
- block_size_bytes
The Hadoop Distributed File System (HDFS) block size.
This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- compression
The compression codec to use over data blocks.
The possible values are
UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- enable_dictionary_compression
Indicates whether to enable dictionary compression.
- max_padding_bytes
The maximum amount of padding to apply.
This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- page_size_bytes
The Parquet page size.
Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- writer_version
Indicates the version of row format to output.
The possible values are
V1 and V2. The default is V1.
PartitionFieldProperty
- class CfnDeliveryStreamPropsMixin.PartitionFieldProperty(*, source_name=None)
Bases:
object
Represents a single field in a PartitionSpec.
Amazon Data Firehose is in preview release and is subject to change.
- Parameters:
source_name (
Optional[str]) – The column name to be configured in partition spec. Amazon Data Firehose is in preview release and is subject to change.- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins partition_field_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.PartitionFieldProperty( source_name="sourceName" )
Attributes
- source_name
The column name to be configured in the partition spec.
Amazon Data Firehose is in preview release and is subject to change.
PartitionSpecProperty
- class CfnDeliveryStreamPropsMixin.PartitionSpecProperty(*, identity=None)
Bases:
object
Represents how to produce partition data for a table.
Partition data is produced by transforming columns in a table. Each column transform is represented by a named
PartitionField.
Here is an example of the schema in JSON.
"partitionSpec": { "identity": [ {"sourceName": "column1"}, {"sourceName": "column2"}, {"sourceName": "column3"} ] }
Amazon Data Firehose is in preview release and is subject to change.
- Parameters:
identity (
Union[IResolvable,Sequence[Union[IResolvable,PartitionFieldProperty,Dict[str,Any]]],None]) – List of identity transforms that performs an identity transformation. The transform takes the source value, and does not modify it. Result type is the source type. Amazon Data Firehose is in preview release and is subject to change.- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins partition_spec_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.PartitionSpecProperty( identity=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.PartitionFieldProperty( source_name="sourceName" )] )
Attributes
- identity
A list of identity transforms, each of which performs an identity transformation: the transform takes the source value and does not modify it, so the result type is the source type.
Amazon Data Firehose is in preview release and is subject to change.
ProcessingConfigurationProperty
- class CfnDeliveryStreamPropsMixin.ProcessingConfigurationProperty(*, enabled=None, processors=None)
Bases:
object
The ProcessingConfiguration property configures data processing for an Amazon Kinesis Data Firehose delivery stream.
- Parameters:
enabled (
Union[bool,IResolvable,None]) – Indicates whether data processing is enabled (true) or disabled (false).processors (
Union[IResolvable,Sequence[Union[IResolvable,ProcessorProperty,Dict[str,Any]]],None]) – The data processors.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins processing_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessingConfigurationProperty( enabled=False, processors=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorProperty( parameters=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty( parameter_name="parameterName", parameter_value="parameterValue" )], type="type" )] )
Attributes
- enabled
Indicates whether data processing is enabled (true) or disabled (false).
ProcessorParameterProperty
- class CfnDeliveryStreamPropsMixin.ProcessorParameterProperty(*, parameter_name=None, parameter_value=None)
Bases:
object
The ProcessorParameter property specifies a processor parameter in a data processor for an Amazon Kinesis Data Firehose delivery stream.
- Parameters:
parameter_name (
Optional[str]) – The name of the parameter. Currently the following default values are supported: 3 forNumberOfRetriesand 60 for theBufferIntervalInSeconds. TheBufferSizeInMBsranges between 0.2 MB and up to 3MB. The default buffering hint is 1MB for all destinations, except Splunk. For Splunk, the default buffering hint is 256 KB.parameter_value (
Optional[str]) – The parameter value.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins processor_parameter_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty( parameter_name="parameterName", parameter_value="parameterValue" )
Attributes
- parameter_name
The name of the parameter.
Currently, the following default values are supported: 3 for
NumberOfRetries and 60 for BufferIntervalInSeconds. BufferSizeInMBs ranges from 0.2 MB to 3 MB. The default buffering hint is 1 MB for all destinations except Splunk, where the default buffering hint is 256 KB.
ProcessorProperty
- class CfnDeliveryStreamPropsMixin.ProcessorProperty(*, parameters=None, type=None)
Bases:
object
The Processor property specifies a data processor for an Amazon Kinesis Data Firehose delivery stream.
- Parameters:
parameters (
Union[IResolvable,Sequence[Union[IResolvable,ProcessorParameterProperty,Dict[str,Any]]],None]) – The processor parameters.type (
Optional[str]) – The type of processor. Valid values:Lambda.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type. # The values are placeholders you should change. from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins processor_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorProperty( parameters=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty( parameter_name="parameterName", parameter_value="parameterValue" )], type="type" )
Attributes
- parameters
The processor parameters.
- type
The type of processor.
Valid values:
Lambda.
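Putting Processor and ProcessorParameter together, a Lambda processor is typically expressed with a LambdaArn parameter plus the buffering hints described above. The sketch below uses plain dicts whose keys mirror this mixin's snake_case keyword arguments; the function ARN is a placeholder.

```python
lambda_arn = "arn:aws:lambda:us-east-1:111122223333:function:transform"  # placeholder

# Illustrative shape for ProcessingConfigurationProperty kwargs.
processing_configuration = dict(
    enabled=True,
    processors=[dict(
        type="Lambda",  # currently the documented valid processor type
        parameters=[
            dict(parameter_name="LambdaArn", parameter_value=lambda_arn),
            # Buffer up to 1 MB or 60 seconds before invoking the function.
            dict(parameter_name="BufferSizeInMBs", parameter_value="1"),
            dict(parameter_name="BufferIntervalInSeconds", parameter_value="60"),
        ],
    )],
)
```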
RedshiftDestinationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.RedshiftDestinationConfigurationProperty(*, cloud_watch_logging_options=None, cluster_jdbcurl=None, copy_command=None, password=None, processing_configuration=None, retry_options=None, role_arn=None, s3_backup_configuration=None, s3_backup_mode=None, s3_configuration=None, secrets_manager_configuration=None, username=None)
Bases: object

The `RedshiftDestinationConfiguration` property type specifies an Amazon Redshift cluster to which Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivers data.
- Parameters:
cloud_watch_logging_options (`Union[IResolvable, CloudWatchLoggingOptionsProperty, Dict[str, Any], None]`) – The CloudWatch logging options for your Firehose stream.
cluster_jdbcurl (`Optional[str]`) – The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
copy_command (`Union[IResolvable, CopyCommandProperty, Dict[str, Any], None]`) – Configures the Amazon Redshift `COPY` command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
password (`Optional[str]`) – The password for the Amazon Redshift user that you specified in the `Username` property.
processing_configuration (`Union[IResolvable, ProcessingConfigurationProperty, Dict[str, Any], None]`) – The data processing configuration for the Kinesis Data Firehose delivery stream.
retry_options (`Union[IResolvable, RedshiftRetryOptionsProperty, Dict[str, Any], None]`) – The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 seconds (60 minutes).
role_arn (`Optional[str]`) – The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
s3_backup_configuration (`Union[IResolvable, S3DestinationConfigurationProperty, Dict[str, Any], None]`) – The configuration for backup in Amazon S3.
s3_backup_mode (`Optional[str]`) – The Amazon S3 backup mode. After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can’t update the Firehose stream to disable it.
s3_configuration (`Union[IResolvable, S3DestinationConfigurationProperty, Dict[str, Any], None]`) – The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the `COPY` command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket’s compression format, don’t specify `SNAPPY` or `ZIP` because the Amazon Redshift `COPY` command doesn’t support them.
secrets_manager_configuration (`Union[IResolvable, SecretsManagerConfigurationProperty, Dict[str, Any], None]`) – The configuration that defines how you access secrets for Amazon Redshift.
username (`Optional[str]`) – The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have `INSERT` privileges for copying data from the Amazon S3 bucket to the cluster.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

redshift_destination_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.RedshiftDestinationConfigurationProperty(
    cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
        enabled=False,
        log_group_name="logGroupName",
        log_stream_name="logStreamName"
    ),
    cluster_jdbcurl="clusterJdbcurl",
    copy_command=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CopyCommandProperty(
        copy_options="copyOptions",
        data_table_columns="dataTableColumns",
        data_table_name="dataTableName"
    ),
    password="password",
    processing_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessingConfigurationProperty(
        enabled=False,
        processors=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorProperty(
            parameters=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty(
                parameter_name="parameterName",
                parameter_value="parameterValue"
            )],
            type="type"
        )]
    ),
    retry_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.RedshiftRetryOptionsProperty(
        duration_in_seconds=123
    ),
    role_arn="roleArn",
    s3_backup_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty(
        bucket_arn="bucketArn",
        buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
            interval_in_seconds=123,
            size_in_mBs=123
        ),
        cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
            enabled=False,
            log_group_name="logGroupName",
            log_stream_name="logStreamName"
        ),
        compression_format="compressionFormat",
        encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
            kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
                awskms_key_arn="awskmsKeyArn"
            ),
            no_encryption_config="noEncryptionConfig"
        ),
        error_output_prefix="errorOutputPrefix",
        prefix="prefix",
        role_arn="roleArn"
    ),
    s3_backup_mode="s3BackupMode",
    s3_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty(
        bucket_arn="bucketArn",
        buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
            interval_in_seconds=123,
            size_in_mBs=123
        ),
        cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
            enabled=False,
            log_group_name="logGroupName",
            log_stream_name="logStreamName"
        ),
        compression_format="compressionFormat",
        encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
            kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
                awskms_key_arn="awskmsKeyArn"
            ),
            no_encryption_config="noEncryptionConfig"
        ),
        error_output_prefix="errorOutputPrefix",
        prefix="prefix",
        role_arn="roleArn"
    ),
    secrets_manager_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SecretsManagerConfigurationProperty(
        enabled=False,
        role_arn="roleArn",
        secret_arn="secretArn"
    ),
    username="username"
)
Attributes
- cloud_watch_logging_options
The CloudWatch logging options for your Firehose stream.
- cluster_jdbcurl
The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- copy_command
Configures the Amazon Redshift `COPY` command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- password
The password for the Amazon Redshift user that you specified in the `Username` property.
- processing_configuration
The data processing configuration for the Kinesis Data Firehose delivery stream.
- retry_options
The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift.
Default value is 3600 seconds (60 minutes).
- role_arn
The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption).
For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide .
- s3_backup_configuration
The configuration for backup in Amazon S3.
- s3_backup_mode
The Amazon S3 backup mode.
After you create a Firehose stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can’t update the Firehose stream to disable it.
- s3_configuration
The S3 bucket where Kinesis Data Firehose first delivers data.
After the data is in the bucket, Kinesis Data Firehose uses the `COPY` command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket’s compression format, don’t specify `SNAPPY` or `ZIP` because the Amazon Redshift `COPY` command doesn’t support them.
- secrets_manager_configuration
The configuration that defines how you access secrets for Amazon Redshift.
- username
The Amazon Redshift user that has permission to access the Amazon Redshift cluster.
This user must have `INSERT` privileges for copying data from the Amazon S3 bucket to the cluster.
RedshiftRetryOptionsProperty
- class CfnDeliveryStreamPropsMixin.RedshiftRetryOptionsProperty(*, duration_in_seconds=None)
Bases: object

Configures retry behavior in case Firehose is unable to deliver documents to Amazon Redshift.
- Parameters:
duration_in_seconds (`Union[int, float, None]`) – The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of `DurationInSeconds` is 0 (zero) or if the first delivery attempt takes longer than the current value.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

redshift_retry_options_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.RedshiftRetryOptionsProperty(
    duration_in_seconds=123
)
Attributes
- duration_in_seconds
The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt.
The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of `DurationInSeconds` is 0 (zero) or if the first delivery attempt takes longer than the current value.
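The retry rule just described (a zero window disables retries, and the first attempt counts against the window) can be sketched as a small predicate. This helper is hypothetical, purely illustrative, and not part of the CDK or Firehose API:

```python
def firehose_will_retry(duration_in_seconds: float, first_attempt_seconds: float) -> bool:
    """Sketch of the documented rule: Firehose does not retry when
    DurationInSeconds is 0, or when the first delivery attempt already
    took longer than the retry window."""
    if duration_in_seconds == 0:
        return False
    return first_attempt_seconds < duration_in_seconds

# The default window is 3600 seconds (60 minutes).
```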
RetryOptionsProperty
- class CfnDeliveryStreamPropsMixin.RetryOptionsProperty(*, duration_in_seconds=None)
Bases: object

Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn’t receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
Kinesis Data Firehose supports any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers, including Datadog, MongoDB, and New Relic.
- Parameters:
duration_in_seconds (`Union[int, float, None]`) – The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn’t include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

retry_options_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.RetryOptionsProperty(
    duration_in_seconds=123
)
Attributes
- duration_in_seconds
The total amount of time that Kinesis Data Firehose spends on retries.
This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn’t include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
S3DestinationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty(*, bucket_arn=None, buffering_hints=None, cloud_watch_logging_options=None, compression_format=None, encryption_configuration=None, error_output_prefix=None, prefix=None, role_arn=None)
Bases: object

The `S3DestinationConfiguration` property type specifies an Amazon Simple Storage Service (Amazon S3) destination to which Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivers data.
- Parameters:
bucket_arn (`Optional[str]`) – The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
buffering_hints (`Union[IResolvable, BufferingHintsProperty, Dict[str, Any], None]`) – Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
cloud_watch_logging_options (`Union[IResolvable, CloudWatchLoggingOptionsProperty, Dict[str, Any], None]`) – The CloudWatch logging options for your Firehose stream.
compression_format (`Optional[str]`) – The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the `CompressionFormat` content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
encryption_configuration (`Union[IResolvable, EncryptionConfigurationProperty, Dict[str, Any], None]`) – Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service (AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
error_output_prefix (`Optional[str]`) – A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
prefix (`Optional[str]`) – A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
role_arn (`Optional[str]`) – The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

s3_destination_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty(
    bucket_arn="bucketArn",
    buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
        interval_in_seconds=123,
        size_in_mBs=123
    ),
    cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
        enabled=False,
        log_group_name="logGroupName",
        log_stream_name="logStreamName"
    ),
    compression_format="compressionFormat",
    encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
        kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
            awskms_key_arn="awskmsKeyArn"
        ),
        no_encryption_config="noEncryptionConfig"
    ),
    error_output_prefix="errorOutputPrefix",
    prefix="prefix",
    role_arn="roleArn"
)
Attributes
- bucket_arn
The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- buffering_hints
Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- cloud_watch_logging_options
The CloudWatch logging options for your Firehose stream.
- compression_format
The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket.
For valid values, see the `CompressionFormat` content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- encryption_configuration
Configures Amazon Simple Storage Service (Amazon S3) server-side encryption.
Kinesis Data Firehose uses AWS Key Management Service ( AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- error_output_prefix
A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3.
This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects .
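As a hedged illustration of how `prefix` and `error_output_prefix` differ, the sketch below uses Firehose’s dynamic-prefix expressions in plain camel-cased values. The bucket ARN is a placeholder, and the `!{...}` expression syntax is taken from the Custom Prefixes for Amazon S3 Objects guide; verify it there before relying on it.

```python
# Sketch: prefix vs. errorOutputPrefix, as plain camelCase values.
# The !{...} expressions follow the "Custom Prefixes for Amazon S3 Objects"
# guide; treat them as illustrative and confirm against the current docs.
s3_prefix_settings = {
    "bucketArn": "arn:aws:s3:::example-firehose-bucket",  # placeholder ARN
    # Successfully delivered records land under a date-partitioned prefix.
    "prefix": "data/!{timestamp:yyyy/MM/dd}/",
    # Failed records land under a separate prefix that records the error type.
    "errorOutputPrefix": "errors/!{firehose:error-output-type}/!{timestamp:yyyy/MM/dd}/",
}
```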
- prefix
A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket.
The prefix helps you identify the files that Kinesis Data Firehose delivered.
- role_arn
The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption).
For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide .
SchemaConfigurationProperty
- class CfnDeliveryStreamPropsMixin.SchemaConfigurationProperty(*, catalog_id=None, database_name=None, region=None, role_arn=None, table_name=None, version_id=None)
Bases: object

Specifies the schema to which you want Firehose to configure your data before it writes it to Amazon S3.
This parameter is required if `Enabled` is set to true.
- Parameters:
catalog_id (`Optional[str]`) – The ID of the AWS Glue Data Catalog. If you don’t supply this, the AWS account ID is used by default.
database_name (`Optional[str]`) – Specifies the name of the AWS Glue database that contains the schema for the output data. If the `SchemaConfiguration` request parameter is used as part of invoking the `CreateDeliveryStream` API, then the `DatabaseName` property is required and its value must be specified.
region (`Optional[str]`) – If you don’t specify an AWS Region, the default is the current Region.
role_arn (`Optional[str]`) – The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren’t allowed. If the `SchemaConfiguration` request parameter is used as part of invoking the `CreateDeliveryStream` API, then the `RoleARN` property is required and its value must be specified.
table_name (`Optional[str]`) – Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the `SchemaConfiguration` request parameter is used as part of invoking the `CreateDeliveryStream` API, then the `TableName` property is required and its value must be specified.
version_id (`Optional[str]`) – Specifies the table version for the output data schema. If you don’t specify this version ID, or if you set it to `LATEST`, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

schema_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SchemaConfigurationProperty(
    catalog_id="catalogId",
    database_name="databaseName",
    region="region",
    role_arn="roleArn",
    table_name="tableName",
    version_id="versionId"
)
Attributes
- catalog_id
The ID of the AWS Glue Data Catalog.
If you don’t supply this, the AWS account ID is used by default.
- database_name
Specifies the name of the AWS Glue database that contains the schema for the output data.
If the `SchemaConfiguration` request parameter is used as part of invoking the `CreateDeliveryStream` API, then the `DatabaseName` property is required and its value must be specified.
- region
If you don’t specify an AWS Region, the default is the current Region.
- role_arn
The role that Firehose can use to access AWS Glue.
This role must be in the same account you use for Firehose. Cross-account roles aren’t allowed. If the `SchemaConfiguration` request parameter is used as part of invoking the `CreateDeliveryStream` API, then the `RoleARN` property is required and its value must be specified.
- table_name
Specifies the AWS Glue table that contains the column information that constitutes your data schema.
If the `SchemaConfiguration` request parameter is used as part of invoking the `CreateDeliveryStream` API, then the `TableName` property is required and its value must be specified.
- version_id
Specifies the table version for the output data schema.
If you don’t specify this version ID, or if you set it to `LATEST`, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
SchemaEvolutionConfigurationProperty
- class CfnDeliveryStreamPropsMixin.SchemaEvolutionConfigurationProperty(*, enabled=None)
Bases: object

The configuration to enable schema evolution.
Amazon Data Firehose is in preview release and is subject to change.
- Parameters:
enabled (`Union[bool, IResolvable, None]`) – Specify whether you want to enable schema evolution. Amazon Data Firehose is in preview release and is subject to change.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

schema_evolution_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SchemaEvolutionConfigurationProperty(
    enabled=False
)
Attributes
- enabled
Specify whether you want to enable schema evolution.
Amazon Data Firehose is in preview release and is subject to change.
SecretsManagerConfigurationProperty
- class CfnDeliveryStreamPropsMixin.SecretsManagerConfigurationProperty(*, enabled=None, role_arn=None, secret_arn=None)
Bases: object

The structure that defines how Firehose accesses the secret.
- Parameters:
enabled (`Union[bool, IResolvable, None]`) – Specifies whether you want to use the Secrets Manager feature. When set to `True`, the Secrets Manager configuration overwrites the existing secrets in the destination configuration. When set to `False`, Firehose falls back to the credentials in the destination configuration.
role_arn (`Optional[str]`) – Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, Firehose uses the destination-specific role. This parameter is required for Splunk.
secret_arn (`Optional[str]`) – The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role, as Firehose supports cross-account secret access. This parameter is required when `Enabled` is set to `True`.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

secrets_manager_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SecretsManagerConfigurationProperty(
    enabled=False,
    role_arn="roleArn",
    secret_arn="secretArn"
)
Attributes
- enabled
Specifies whether you want to use the secrets manager feature.
When set to `True`, the Secrets Manager configuration overwrites the existing secrets in the destination configuration. When set to `False`, Firehose falls back to the credentials in the destination configuration.
- role_arn
Specifies the role that Firehose assumes when calling the Secrets Manager API operation.
When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, Firehose uses the destination-specific role. This parameter is required for Splunk.
- secret_arn
The ARN of the secret that stores your credentials.
It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the Firehose stream and role, as Firehose supports cross-account secret access. This parameter is required when `Enabled` is set to `True`.
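To illustrate the rules above, here is a hedged sketch of an enabled Secrets Manager configuration in the plain camel-cased L1 shape. All ARNs, the account ID, and the secret name are placeholders invented for the example:

```python
# Sketch: when enabled is True, the secret supplies the destination
# credentials, so inline username/password can be omitted from the
# destination configuration. All ARNs below are placeholders.
secrets_manager_configuration = {
    "enabled": True,
    # Must live in the same region as the Firehose stream and the role;
    # the secret itself may live in a different account.
    "secretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:example-firehose-creds",
    # Role Firehose assumes to call Secrets Manager (required for Splunk).
    "roleArn": "arn:aws:iam::111122223333:role/ExampleFirehoseSecretsRole",
}
```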
SerializerProperty
- class CfnDeliveryStreamPropsMixin.SerializerProperty(*, orc_ser_de=None, parquet_ser_de=None)
Bases: object

The serializer that you want Firehose to use to convert data to the target format before writing it to Amazon S3.
Firehose supports two types of serializers: the ORC SerDe and the Parquet SerDe.
- Parameters:
orc_ser_de (`Union[IResolvable, OrcSerDeProperty, Dict[str, Any], None]`) – A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
parquet_ser_de (`Union[IResolvable, ParquetSerDeProperty, Dict[str, Any], None]`) – A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

serializer_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SerializerProperty(
    orc_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.OrcSerDeProperty(
        block_size_bytes=123,
        bloom_filter_columns=["bloomFilterColumns"],
        bloom_filter_false_positive_probability=123,
        compression="compression",
        dictionary_key_threshold=123,
        enable_padding=False,
        format_version="formatVersion",
        padding_tolerance=123,
        row_index_stride=123,
        stripe_size_bytes=123
    ),
    parquet_ser_de=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ParquetSerDeProperty(
        block_size_bytes=123,
        compression="compression",
        enable_dictionary_compression=False,
        max_padding_bytes=123,
        page_size_bytes=123,
        writer_version="writerVersion"
    )
)
Attributes
- orc_ser_de
A serializer to use for converting data to the ORC format before storing it in Amazon S3.
For more information, see Apache ORC .
- parquet_ser_de
A serializer to use for converting data to the Parquet format before storing it in Amazon S3.
For more information, see Apache Parquet .
SnowflakeBufferingHintsProperty
- class CfnDeliveryStreamPropsMixin.SnowflakeBufferingHintsProperty(*, interval_in_seconds=None, size_in_m_bs=None)
Bases: object

Describes the buffering to perform before delivering data to the Snowflake destination.
If you do not specify any value, Firehose uses the default values.
- Parameters:
interval_in_seconds (`Union[int, float, None]`) – Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
size_in_m_bs (`Union[int, float, None]`) – Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

snowflake_buffering_hints_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SnowflakeBufferingHintsProperty(
    interval_in_seconds=123,
    size_in_mBs=123
)
Attributes
- interval_in_seconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination.
The default value is 0.
- size_in_m_bs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination.
The default value is 128.
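The Snowflake buffering defaults described above can be captured as plain values. This is only a restatement of the documented defaults; the dict shape and constant name are assumptions for the sketch:

```python
# Sketch: the documented Snowflake buffering defaults, applied by Firehose
# when the corresponding hint is omitted. Values restate the text above.
SNOWFLAKE_BUFFERING_DEFAULTS = {
    "intervalInSeconds": 0,  # default buffering interval, in seconds
    "sizeInMBs": 128,        # default buffer size, in MBs
}
```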
SnowflakeDestinationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.SnowflakeDestinationConfigurationProperty(*, account_url=None, buffering_hints=None, cloud_watch_logging_options=None, content_column_name=None, database=None, data_loading_option=None, key_passphrase=None, meta_data_column_name=None, private_key=None, processing_configuration=None, retry_options=None, role_arn=None, s3_backup_mode=None, s3_configuration=None, schema=None, secrets_manager_configuration=None, snowflake_role_configuration=None, snowflake_vpc_configuration=None, table=None, user=None)
Bases: object

Configure Snowflake destination.
- Parameters:
account_url (`Optional[str]`) – URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
buffering_hints (`Union[IResolvable, SnowflakeBufferingHintsProperty, Dict[str, Any], None]`) – Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
cloud_watch_logging_options (`Union[IResolvable, CloudWatchLoggingOptionsProperty, Dict[str, Any], None]`)
content_column_name (`Optional[str]`) – The name of the record content column.
database (`Optional[str]`) – All data in Snowflake is maintained in databases.
data_loading_option (`Optional[str]`) – Choose to load JSON keys mapped to table column names, or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
key_passphrase (`Optional[str]`) – Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
meta_data_column_name (`Optional[str]`) – Specify a column name in the table where the metadata information has to be loaded. When you enable this field, you will see the following column in the Snowflake table, which differs based on the source type. For Direct PUT as source: `{ "firehoseDeliveryStreamName" : "streamname", "IngestionTime" : "timestamp" }`. For Kinesis Data Stream as source: `{ "kinesisStreamName" : "streamname", "kinesisShardId" : "Id", "kinesisPartitionKey" : "key", "kinesisSequenceNumber" : "1234", "subsequenceNumber" : "2334", "IngestionTime" : "timestamp" }`.
private_key (`Optional[str]`) – The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
processing_configuration (`Union[IResolvable, ProcessingConfigurationProperty, Dict[str, Any], None]`)
retry_options (`Union[IResolvable, SnowflakeRetryOptionsProperty, Dict[str, Any], None]`) – The time period during which Firehose retries sending data to the chosen HTTP endpoint.
role_arn (`Optional[str]`) – The Amazon Resource Name (ARN) of the Snowflake role.
s3_backup_mode (`Optional[str]`) – Choose an S3 backup mode.
s3_configuration (`Union[IResolvable, S3DestinationConfigurationProperty, Dict[str, Any], None]`)
schema (`Optional[str]`) – Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
secrets_manager_configuration (`Union[IResolvable, SecretsManagerConfigurationProperty, Dict[str, Any], None]`) – The configuration that defines how you access secrets for Snowflake.
snowflake_role_configuration (`Union[IResolvable, SnowflakeRoleConfigurationProperty, Dict[str, Any], None]`) – Optionally configure a Snowflake role. Otherwise the default user role will be used.
snowflake_vpc_configuration (`Union[IResolvable, SnowflakeVpcConfigurationProperty, Dict[str, Any], None]`) – The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
table (`Optional[str]`) – All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
user (`Optional[str]`) – User login name for the Snowflake account.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

snowflake_destination_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SnowflakeDestinationConfigurationProperty(
    account_url="accountUrl",
    buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SnowflakeBufferingHintsProperty(
        interval_in_seconds=123,
        size_in_mBs=123
    ),
    cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
        enabled=False,
        log_group_name="logGroupName",
        log_stream_name="logStreamName"
    ),
    content_column_name="contentColumnName",
    database="database",
    data_loading_option="dataLoadingOption",
    key_passphrase="keyPassphrase",
    meta_data_column_name="metaDataColumnName",
    private_key="privateKey",
    processing_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessingConfigurationProperty(
        enabled=False,
        processors=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorProperty(
            parameters=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty(
                parameter_name="parameterName",
                parameter_value="parameterValue"
            )],
            type="type"
        )]
    ),
    retry_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SnowflakeRetryOptionsProperty(
        duration_in_seconds=123
    ),
    role_arn="roleArn",
    s3_backup_mode="s3BackupMode",
    s3_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty(
        bucket_arn="bucketArn",
        buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
            interval_in_seconds=123,
            size_in_mBs=123
        ),
        cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
            enabled=False,
            log_group_name="logGroupName",
            log_stream_name="logStreamName"
        ),
        compression_format="compressionFormat",
        encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
            kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
                awskms_key_arn="awskmsKeyArn"
            ),
            no_encryption_config="noEncryptionConfig"
        ),
        error_output_prefix="errorOutputPrefix",
        prefix="prefix",
        role_arn="roleArn"
    ),
    schema="schema",
    secrets_manager_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SecretsManagerConfigurationProperty(
        enabled=False,
        role_arn="roleArn",
        secret_arn="secretArn"
    ),
    snowflake_role_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SnowflakeRoleConfigurationProperty(
        enabled=False,
        snowflake_role="snowflakeRole"
    ),
    snowflake_vpc_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SnowflakeVpcConfigurationProperty(
        private_link_vpce_id="privateLinkVpceId"
    ),
    table="table",
    user="user"
)
Attributes
- account_url
URL for accessing your Snowflake account.
This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- buffering_hints
Describes the buffering to perform before delivering data to the Snowflake destination.
If you do not specify any value, Firehose uses the default values.
- cloud_watch_logging_options
The Amazon CloudWatch logging options for your Firehose stream.
- content_column_name
The name of the record content column.
- data_loading_option
Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- database
All data in Snowflake is maintained in databases.
- key_passphrase
Passphrase to decrypt the private key when the key is encrypted.
For information, see Using Key Pair Authentication & Key Rotation.
- meta_data_column_name
Specify a column name in the table where the metadata information is to be loaded.
When you enable this field, you will see the following column in the Snowflake table, which differs based on the source type.
For Direct PUT as source:
{ "firehoseDeliveryStreamName" : "streamname", "IngestionTime" : "timestamp" }
For Kinesis Data Stream as source:
{ "kinesisStreamName" : "streamname", "kinesisShardId" : "Id", "kinesisPartitionKey" : "key", "kinesisSequenceNumber" : "1234", "subsequenceNumber" : "2334", "IngestionTime" : "timestamp" }
- private_key
The private key used to encrypt your Snowflake client.
For information, see Using Key Pair Authentication & Key Rotation.
- processing_configuration
The data processing configuration.
- retry_options
The time period during which Firehose retries sending data to the chosen HTTP endpoint.
- role_arn
The Amazon Resource Name (ARN) of the Snowflake role.
- s3_backup_mode
Choose an S3 backup mode.
- s3_configuration
The configuration for the backup Amazon S3 location.
- schema
Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
- secrets_manager_configuration
The configuration that defines how you access secrets for Snowflake.
- snowflake_role_configuration
Optionally configure a Snowflake role.
Otherwise the default user role will be used.
- snowflake_vpc_configuration
The VPCE ID for Firehose to privately connect with Snowflake.
The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- table
All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
SnowflakeRetryOptionsProperty
- class CfnDeliveryStreamPropsMixin.SnowflakeRetryOptionsProperty(*, duration_in_seconds=None)
Bases:
object
Specify how long Firehose retries sending data to the Snowflake endpoint.
After sending data, Firehose first waits for an acknowledgment from the HTTP endpoint. If an error occurs or the acknowledgment doesn’t arrive within the acknowledgment timeout period, Firehose starts the retry duration counter. It keeps retrying until the retry duration expires. After that, Firehose considers it a data delivery failure and backs up the data to your Amazon S3 bucket. Every time that Firehose sends data to the HTTP endpoint (either the initial attempt or a retry), it restarts the acknowledgement timeout counter and waits for an acknowledgement from the HTTP endpoint. Even if the retry duration expires, Firehose still waits for the acknowledgment until it receives it or the acknowledgement timeout period is reached. If the acknowledgment times out, Firehose determines whether there’s time left in the retry counter. If there is time left, it retries again and repeats the logic until it receives an acknowledgment or determines that the retry time has expired. If you don’t want Firehose to retry sending data, set this value to 0.
- Parameters:
duration_in_seconds (Union[int, float, None]) – The time period during which Firehose retries sending data to the chosen HTTP endpoint.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

snowflake_retry_options_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SnowflakeRetryOptionsProperty(
    duration_in_seconds=123
)
Attributes
- duration_in_seconds
The time period during which Firehose retries sending data to the chosen HTTP endpoint.
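The retry-window semantics described for this property can be sketched in plain Python. This is an illustration only, not Firehose's actual implementation, and should_retry is a hypothetical helper name: Firehose keeps retrying while time remains in the retry window, and a duration of 0 disables retries entirely.

```python
def should_retry(elapsed_seconds: float, duration_in_seconds: float) -> bool:
    """Sketch: would Firehose attempt another delivery to the endpoint?"""
    if duration_in_seconds == 0:
        return False  # a duration of 0 means "do not retry"
    return elapsed_seconds < duration_in_seconds
```

Note that the real service also waits for an outstanding acknowledgment even after the window expires, as described above; this sketch captures only the window check.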
SnowflakeRoleConfigurationProperty
- class CfnDeliveryStreamPropsMixin.SnowflakeRoleConfigurationProperty(*, enabled=None, snowflake_role=None)
Bases:
object
Optionally configure a Snowflake role.
Otherwise the default user role will be used.
- Parameters:
enabled (Union[bool, IResolvable, None]) – Enable Snowflake role.
snowflake_role (Optional[str]) – The Snowflake role you wish to configure.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

snowflake_role_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SnowflakeRoleConfigurationProperty(
    enabled=False,
    snowflake_role="snowflakeRole"
)
Attributes
- enabled
Enable Snowflake role.
- snowflake_role
The Snowflake role you wish to configure.
SnowflakeVpcConfigurationProperty
- class CfnDeliveryStreamPropsMixin.SnowflakeVpcConfigurationProperty(*, private_link_vpce_id=None)
Bases:
object
Configure a Snowflake VPC.
- Parameters:
private_link_vpce_id (Optional[str]) – The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

snowflake_vpc_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SnowflakeVpcConfigurationProperty(
    private_link_vpce_id="privateLinkVpceId"
)
Attributes
- private_link_vpce_id
The VPCE ID for Firehose to privately connect with Snowflake.
The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
SplunkBufferingHintsProperty
- class CfnDeliveryStreamPropsMixin.SplunkBufferingHintsProperty(*, interval_in_seconds=None, size_in_m_bs=None)
Bases:
object
The buffering options.
If no value is specified, the default values for Splunk are used.
- Parameters:
interval_in_seconds (Union[int, float, None]) – Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
size_in_m_bs (Union[int, float, None]) – Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

splunk_buffering_hints_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SplunkBufferingHintsProperty(
    interval_in_seconds=123,
    size_in_mBs=123
)
Attributes
- interval_in_seconds
Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination.
The default value is 60 (1 minute).
- size_in_m_bs
Buffer incoming data to the specified size, in MBs, before delivering it to the destination.
The default value is 5.
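For quick reference, the Splunk buffering defaults stated above can be captured as a plain mapping. This is illustrative only; the key names mirror this class's parameter names, and Firehose itself applies these defaults when the hints are omitted.

```python
# Defaults Firehose uses for Splunk when no buffering hints are given
# (per the attribute documentation above; mapping is illustrative).
SPLUNK_BUFFERING_DEFAULTS = {
    "interval_in_seconds": 60,  # buffer for 1 minute by default
    "size_in_m_bs": 5,          # buffer up to 5 MB by default
}
```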
SplunkDestinationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.SplunkDestinationConfigurationProperty(*, buffering_hints=None, cloud_watch_logging_options=None, hec_acknowledgment_timeout_in_seconds=None, hec_endpoint=None, hec_endpoint_type=None, hec_token=None, processing_configuration=None, retry_options=None, s3_backup_mode=None, s3_configuration=None, secrets_manager_configuration=None)
Bases:
object
The SplunkDestinationConfiguration property type specifies the configuration of a destination in Splunk for a Kinesis Data Firehose delivery stream.
- Parameters:
buffering_hints (Union[IResolvable, SplunkBufferingHintsProperty, Dict[str, Any], None]) – The buffering options. If no value is specified, the default values for Splunk are used.
cloud_watch_logging_options (Union[IResolvable, CloudWatchLoggingOptionsProperty, Dict[str, Any], None]) – The Amazon CloudWatch logging options for your Firehose stream.
hec_acknowledgment_timeout_in_seconds (Union[int, float, None]) – The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends the data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
hec_endpoint (Optional[str]) – The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
hec_endpoint_type (Optional[str]) – This type can be either Raw or Event.
hec_token (Optional[str]) – This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
processing_configuration (Union[IResolvable, ProcessingConfigurationProperty, Dict[str, Any], None]) – The data processing configuration.
retry_options (Union[IResolvable, SplunkRetryOptionsProperty, Dict[str, Any], None]) – The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn’t receive an acknowledgment of receipt from Splunk.
s3_backup_mode (Optional[str]) – Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can’t update it from AllEvents to FailedEventsOnly.
s3_configuration (Union[IResolvable, S3DestinationConfigurationProperty, Dict[str, Any], None]) – The configuration for the backup Amazon S3 location.
secrets_manager_configuration (Union[IResolvable, SecretsManagerConfigurationProperty, Dict[str, Any], None]) – The configuration that defines how you access secrets for Splunk.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

splunk_destination_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SplunkDestinationConfigurationProperty(
    buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SplunkBufferingHintsProperty(
        interval_in_seconds=123,
        size_in_mBs=123
    ),
    cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
        enabled=False,
        log_group_name="logGroupName",
        log_stream_name="logStreamName"
    ),
    hec_acknowledgment_timeout_in_seconds=123,
    hec_endpoint="hecEndpoint",
    hec_endpoint_type="hecEndpointType",
    hec_token="hecToken",
    processing_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessingConfigurationProperty(
        enabled=False,
        processors=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorProperty(
            parameters=[kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.ProcessorParameterProperty(
                parameter_name="parameterName",
                parameter_value="parameterValue"
            )],
            type="type"
        )]
    ),
    retry_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SplunkRetryOptionsProperty(
        duration_in_seconds=123
    ),
    s3_backup_mode="s3BackupMode",
    s3_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.S3DestinationConfigurationProperty(
        bucket_arn="bucketArn",
        buffering_hints=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.BufferingHintsProperty(
            interval_in_seconds=123,
            size_in_mBs=123
        ),
        cloud_watch_logging_options=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.CloudWatchLoggingOptionsProperty(
            enabled=False,
            log_group_name="logGroupName",
            log_stream_name="logStreamName"
        ),
        compression_format="compressionFormat",
        encryption_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.EncryptionConfigurationProperty(
            kms_encryption_config=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.KMSEncryptionConfigProperty(
                awskms_key_arn="awskmsKeyArn"
            ),
            no_encryption_config="noEncryptionConfig"
        ),
        error_output_prefix="errorOutputPrefix",
        prefix="prefix",
        role_arn="roleArn"
    ),
    secrets_manager_configuration=kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SecretsManagerConfigurationProperty(
        enabled=False,
        role_arn="roleArn",
        secret_arn="secretArn"
    )
)
Attributes
- buffering_hints
The buffering options.
If no value is specified, the default values for Splunk are used.
- cloud_watch_logging_options
The Amazon CloudWatch logging options for your Firehose stream.
- hec_acknowledgment_timeout_in_seconds
The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends the data.
At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- hec_endpoint
The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- hec_endpoint_type
This type can be either
RaworEvent.
- hec_token
This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- processing_configuration
The data processing configuration.
- retry_options
The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn’t receive an acknowledgment of receipt from Splunk.
- s3_backup_mode
Defines how documents should be delivered to Amazon S3.
When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly.
You can update this backup mode from FailedEventsOnly to AllEvents. You can’t update it from AllEvents to FailedEventsOnly.
- s3_configuration
The configuration for the backup Amazon S3 location.
- secrets_manager_configuration
The configuration that defines how you access secrets for Splunk.
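The s3_backup_mode update rule described above (FailedEventsOnly may be changed to AllEvents, but not the reverse) can be sketched as a small helper. This is illustrative only; is_valid_backup_mode_update is not part of the CDK API.

```python
def is_valid_backup_mode_update(current: str, new: str) -> bool:
    """Sketch of the one-way update rule for Splunk s3_backup_mode."""
    if current == new:
        return True  # keeping the same mode is always allowed
    # Only the FailedEventsOnly -> AllEvents direction is permitted.
    return current == "FailedEventsOnly" and new == "AllEvents"
```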
SplunkRetryOptionsProperty
- class CfnDeliveryStreamPropsMixin.SplunkRetryOptionsProperty(*, duration_in_seconds=None)
Bases:
object
The SplunkRetryOptions property type specifies retry behavior in case Kinesis Data Firehose is unable to deliver documents to Splunk or if it doesn’t receive an acknowledgment from Splunk.
- Parameters:
duration_in_seconds (Union[int, float, None]) – The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn’t include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

splunk_retry_options_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.SplunkRetryOptionsProperty(
    duration_in_seconds=123
)
Attributes
- duration_in_seconds
The total amount of time that Firehose spends on retries.
This duration starts after the initial attempt to send data to Splunk fails. It doesn’t include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
TableCreationConfigurationProperty
- class CfnDeliveryStreamPropsMixin.TableCreationConfigurationProperty(*, enabled=None)
Bases:
object
The configuration to enable automatic table creation.
Amazon Data Firehose is in preview release and is subject to change.
- Parameters:
enabled (Union[bool, IResolvable, None]) – Specify whether you want to enable automatic table creation. Amazon Data Firehose is in preview release and is subject to change.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

table_creation_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.TableCreationConfigurationProperty(
    enabled=False
)
Attributes
- enabled
Specify whether you want to enable automatic table creation.
Amazon Data Firehose is in preview release and is subject to change.
VpcConfigurationProperty
- class CfnDeliveryStreamPropsMixin.VpcConfigurationProperty(*, role_arn=None, security_group_ids=None, subnet_ids=None)
Bases:
object
The details of the VPC of the Amazon ES destination.
- Parameters:
role_arn (Optional[str]) – The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions: ec2:DescribeVpcs, ec2:DescribeVpcAttribute, ec2:DescribeSubnets, ec2:DescribeSecurityGroups, ec2:DescribeNetworkInterfaces, ec2:CreateNetworkInterface, ec2:CreateNetworkInterfacePermission, and ec2:DeleteNetworkInterface. If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can’t scale out by creating more ENIs when necessary. You might therefore see a degradation in performance.
security_group_ids (Optional[Sequence[str]]) – The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain’s security group. Also ensure that the Amazon ES domain’s security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
subnet_ids (Optional[Sequence[str]]) – The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs. The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_kinesisfirehose import mixins as kinesisfirehose_mixins

vpc_configuration_property = kinesisfirehose_mixins.CfnDeliveryStreamPropsMixin.VpcConfigurationProperty(
    role_arn="roleArn",
    security_group_ids=["securityGroupIds"],
    subnet_ids=["subnetIds"]
)
Attributes
- role_arn
The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC.
You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions:
- ec2:DescribeVpcs
- ec2:DescribeVpcAttribute
- ec2:DescribeSubnets
- ec2:DescribeSecurityGroups
- ec2:DescribeNetworkInterfaces
- ec2:CreateNetworkInterface
- ec2:CreateNetworkInterfacePermission
- ec2:DeleteNetworkInterface
If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can’t scale out by creating more ENIs when necessary. You might therefore see a degradation in performance.
- security_group_ids
The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination.
You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain’s security group. Also ensure that the Amazon ES domain’s security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- subnet_ids
The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination.
Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs.
The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here.
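The quota guidance above reduces to simple arithmetic: assume up to three ENIs per subnet for this delivery stream. A hypothetical helper (not part of the CDK API) to estimate the ENI quota you should have available:

```python
def estimated_eni_quota(subnet_ids: list) -> int:
    """Sketch: worst-case ENI count, at up to three ENIs per subnet."""
    return 3 * len(subnet_ids)
```

For example, a delivery stream configured with two subnets should budget for up to six ENIs.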