DestinationS3BackupProps
- class aws_cdk.aws_kinesisfirehose.DestinationS3BackupProps(*, buffering_interval=None, buffering_size=None, compression=None, data_output_prefix=None, encryption_key=None, error_output_prefix=None, bucket=None, logging_config=None, mode=None)
Bases: CommonDestinationS3Props
Properties for defining an S3 backup destination.
S3 backup is available for all destinations, regardless of whether the final destination is S3 or not.
- Parameters:
  - buffering_interval (Optional[Duration]) – The length of time that Firehose buffers incoming data before delivering it to the S3 bucket. Minimum: Duration.seconds(0) Maximum: Duration.seconds(900) Default: Duration.seconds(300)
  - buffering_size (Optional[Size]) – The size of the buffer that Amazon Data Firehose uses for incoming data before delivering it to the S3 bucket. Minimum: Size.mebibytes(1) when record data format conversion is disabled, Size.mebibytes(64) when it is enabled Maximum: Size.mebibytes(128) Default: Size.mebibytes(5) when record data format conversion is disabled, Size.mebibytes(128) when it is enabled
  - compression (Optional[Compression]) – The type of compression that Amazon Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift destinations because they are not supported by the Amazon Redshift COPY operation that reads from the S3 bucket. Default: UNCOMPRESSED
  - data_output_prefix (Optional[str]) – A prefix that Amazon Data Firehose evaluates and adds to records before writing them to S3. This prefix appears immediately following the bucket name. Default: “YYYY/MM/DD/HH”
  - encryption_key (Optional[IKey]) – The AWS KMS key used to encrypt the data that it delivers to your Amazon S3 bucket. Default: Data is not encrypted.
  - error_output_prefix (Optional[str]) – A prefix that Amazon Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. Default: “YYYY/MM/DD/HH”
  - bucket (Optional[IBucket]) – The S3 bucket that will store data and failed records. Default: If mode is set to BackupMode.ALL or BackupMode.FAILED, a bucket will be created for you.
  - logging_config (Optional[ILoggingConfig]) – Configuration that determines whether to log errors during data transformation or delivery failures, and specifies the CloudWatch log group for storing error logs. Default: errors will be logged and a log group will be created for you.
  - mode (Optional[BackupMode]) – Indicates the mode by which incoming records should be backed up to S3, if any. If bucket is provided, this will be implicitly set to BackupMode.ALL. Default: If bucket is provided, the default will be BackupMode.ALL. Otherwise, source records are not backed up to S3.
Example:
# Enable backup of all source records (to an S3 bucket created by CDK).
# bucket: s3.Bucket
# Explicitly provide an S3 bucket to which all source records will be backed up.
# backup_bucket: s3.Bucket

firehose.DeliveryStream(self, "Delivery Stream Backup All",
    destination=firehose.S3Bucket(bucket,
        s3_backup=firehose.DestinationS3BackupProps(
            mode=firehose.BackupMode.ALL
        )
    )
)

firehose.DeliveryStream(self, "Delivery Stream Backup All Explicit Bucket",
    destination=firehose.S3Bucket(bucket,
        s3_backup=firehose.DestinationS3BackupProps(
            bucket=backup_bucket
        )
    )
)

# Explicitly provide an S3 prefix under which all source records will be backed up.
firehose.DeliveryStream(self, "Delivery Stream Backup All Explicit Prefix",
    destination=firehose.S3Bucket(bucket,
        s3_backup=firehose.DestinationS3BackupProps(
            mode=firehose.BackupMode.ALL,
            data_output_prefix="mybackup"
        )
    )
)
Attributes
- bucket
The S3 bucket that will store data and failed records.
- Default:
If mode is set to BackupMode.ALL or BackupMode.FAILED, a bucket will be created for you.
- buffering_interval
The length of time that Firehose buffers incoming data before delivering it to the S3 bucket.
Minimum: Duration.seconds(0) Maximum: Duration.seconds(900)
- Default:
Duration.seconds(300)
- buffering_size
The size of the buffer that Amazon Data Firehose uses for incoming data before delivering it to the S3 bucket.
Minimum: Size.mebibytes(1) when record data format conversion is disabled, Size.mebibytes(64) when it is enabled Maximum: Size.mebibytes(128)
- Default:
Size.mebibytes(5) when record data format conversion is disabled, Size.mebibytes(128) when it is enabled
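For example, both buffering hints can be tuned together so backed-up objects are flushed more frequently. This is a minimal sketch that reuses the firehose import and bucket from the example above; the construct name and the chosen values are illustrative:

from aws_cdk import Duration, Size

firehose.DeliveryStream(self, "Delivery Stream Tuned Backup Buffering",
    destination=firehose.S3Bucket(bucket,
        s3_backup=firehose.DestinationS3BackupProps(
            mode=firehose.BackupMode.ALL,
            # Flush backed-up records after 60 seconds or once 8 MiB have accumulated,
            # whichever comes first.
            buffering_interval=Duration.seconds(60),
            buffering_size=Size.mebibytes(8)
        )
    )
)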
- compression
The type of compression that Amazon Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket.
The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift destinations because they are not supported by the Amazon Redshift COPY operation that reads from the S3 bucket.
- Default:
UNCOMPRESSED
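As a sketch (same assumptions as the example above), GZIP compression for the backup objects could be enabled like this:

firehose.DeliveryStream(self, "Delivery Stream Compressed Backup",
    destination=firehose.S3Bucket(bucket,
        s3_backup=firehose.DestinationS3BackupProps(
            mode=firehose.BackupMode.ALL,
            # GZIP-compress backup objects; SNAPPY and ZIP are not allowed for Redshift destinations.
            compression=firehose.Compression.GZIP
        )
    )
)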
- data_output_prefix
A prefix that Amazon Data Firehose evaluates and adds to records before writing them to S3.
This prefix appears immediately following the bucket name.
- Default:
“YYYY/MM/DD/HH”
- See:
https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html
- encryption_key
The AWS KMS key used to encrypt the data that it delivers to your Amazon S3 bucket.
- Default:
Data is not encrypted.
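A minimal sketch of encrypting backup objects with a customer-managed KMS key; the key construct is illustrative and the other names follow the example above:

from aws_cdk import aws_kms as kms

backup_key = kms.Key(self, "Backup Key")

firehose.DeliveryStream(self, "Delivery Stream Encrypted Backup",
    destination=firehose.S3Bucket(bucket,
        s3_backup=firehose.DestinationS3BackupProps(
            mode=firehose.BackupMode.ALL,
            # Server-side encrypt backup objects with the customer-managed key above.
            encryption_key=backup_key
        )
    )
)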
- error_output_prefix
A prefix that Amazon Data Firehose evaluates and adds to failed records before writing them to S3.
This prefix appears immediately following the bucket name.
- Default:
“YYYY/MM/DD/HH”
- See:
https://docs.aws.amazon.com/firehose/latest/dev/s3-prefixes.html
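Both prefixes can be set together. The strings below are illustrative; any !{...} expressions should follow the S3 prefix documentation linked above:

firehose.DeliveryStream(self, "Delivery Stream Prefixed Backup",
    destination=firehose.S3Bucket(bucket,
        s3_backup=firehose.DestinationS3BackupProps(
            mode=firehose.BackupMode.ALL,
            # Successful backups land under backup/, failed records under backup-errors/.
            data_output_prefix="backup/",
            error_output_prefix="backup-errors/!{firehose:error-output-type}/"
        )
    )
)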
- logging_config
Configuration that determines whether to log errors during data transformation or delivery failures, and specifies the CloudWatch log group for storing error logs.
- Default:
errors will be logged and a log group will be created for you.
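As a sketch, error logging could be pointed at an existing CloudWatch log group. EnableLogging is assumed here to be the ILoggingConfig implementation exported by the module; verify the class name against the module version you are using:

from aws_cdk import aws_logs as logs

backup_log_group = logs.LogGroup(self, "Backup Error Logs")

firehose.DeliveryStream(self, "Delivery Stream Logged Backup",
    destination=firehose.S3Bucket(bucket,
        s3_backup=firehose.DestinationS3BackupProps(
            mode=firehose.BackupMode.ALL,
            # Assumption: EnableLogging implements ILoggingConfig and accepts a log group.
            logging_config=firehose.EnableLogging(backup_log_group)
        )
    )
)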
- mode
Indicates the mode by which incoming records should be backed up to S3, if any.
If bucket is provided, this will be implicitly set to BackupMode.ALL.
- Default:
If bucket is provided, the default will be BackupMode.ALL. Otherwise, source records are not backed up to S3.