Interface KafkaEventSourceProps
- All Superinterfaces:
BaseStreamEventSourceProps, software.amazon.jsii.JsiiSerializable
- All Known Subinterfaces:
ManagedKafkaEventSourceProps, SelfManagedKafkaEventSourceProps
- All Known Implementing Classes:
KafkaEventSourceProps.Jsii$Proxy, ManagedKafkaEventSourceProps.Jsii$Proxy, SelfManagedKafkaEventSourceProps.Jsii$Proxy
Example:
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import software.amazon.awscdk.*;
import software.amazon.awscdk.services.kms.*;
import software.amazon.awscdk.services.lambda.*;
import software.amazon.awscdk.services.lambda.eventsources.*;
import software.amazon.awscdk.services.secretsmanager.*;
IEventSourceDlq eventSourceDlq;
Object filters;
Key key;
ISchemaRegistry schemaRegistry;
Secret secret;
KafkaEventSourceProps kafkaEventSourceProps = KafkaEventSourceProps.builder()
.startingPosition(StartingPosition.TRIM_HORIZON)
.topic("topic")
// the properties below are optional
.batchSize(123)
.bisectBatchOnError(false)
.consumerGroupId("consumerGroupId")
.enabled(false)
.filterEncryption(key)
.filters(List.of(Map.of(
"filtersKey", filters)))
.maxBatchingWindow(Duration.minutes(30))
.maxRecordAge(Duration.minutes(30))
.onFailure(eventSourceDlq)
.provisionedPollerConfig(ProvisionedPollerConfig.builder()
.maximumPollers(123)
.minimumPollers(123)
// the properties below are optional
.pollerGroupName("pollerGroupName")
.build())
.reportBatchItemFailures(false)
.retryAttempts(123)
.schemaRegistryConfig(schemaRegistry)
.secret(secret)
.startingPositionTimestamp(123)
.build();
-
Nested Class Summary
Nested Classes
Modifier and Type    Interface    Description
static final class    KafkaEventSourceProps.Builder    A builder for KafkaEventSourceProps
static final class    KafkaEventSourceProps.Jsii$Proxy    An implementation for KafkaEventSourceProps
-
Method Summary
Modifier and Type    Method    Description
static KafkaEventSourceProps.Builder    builder()
default Boolean    getBisectBatchOnError()    If the function returns an error, split the batch in two and retry.
default String    getConsumerGroupId()    The identifier for the Kafka consumer group to join.
default IKey    getFilterEncryption()    Add Customer managed KMS key to encrypt Filter Criteria.
default List<Map<String, Object>>    getFilters()    Add filter criteria to Event Source.
default Duration    getMaxRecordAge()    The maximum age of a record that Lambda sends to a function for processing.
default IEventSourceDlq    getOnFailure()    Add an on Failure Destination for this Kafka event.
default Boolean    getReportBatchItemFailures()    Allow functions to return partially successful responses for a batch of records.
default Number    getRetryAttempts()    Maximum number of retry attempts.
default ISchemaRegistry    getSchemaRegistryConfig()    Specific configuration settings for a Kafka schema registry.
default ISecret    getSecret()    The secret with the Kafka credentials; required if your Kafka brokers are accessed over the Internet.
default Number    getStartingPositionTimestamp()    The time from which to start reading, in Unix time seconds.
String    getTopic()    The Kafka topic to subscribe to.

Methods inherited from interface software.amazon.awscdk.services.lambda.eventsources.BaseStreamEventSourceProps
getBatchSize, getEnabled, getMaxBatchingWindow, getProvisionedPollerConfig, getStartingPosition

Methods inherited from interface software.amazon.jsii.JsiiSerializable
$jsii$toJson
-
Method Details
-
getTopic
The Kafka topic to subscribe to.
-
getBisectBatchOnError
- If the function returns an error, split the batch in two and retry.
Default: false
-
getConsumerGroupId
The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must have a length between 1 and 200 and match the pattern '[a-zA-Z0-9-/:_+=.@-]'.
Default: - none
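As a quick illustration of the constraint above, the length and character-class rule can be checked with a plain-Java sketch (the class and method names here are hypothetical; the character class is copied from the text above):

```java
import java.util.regex.Pattern;

public class ConsumerGroupIdCheck {
    // Length 1-200 and the documented character class; illustrative only.
    private static final Pattern ALLOWED =
            Pattern.compile("[a-zA-Z0-9\\-/:_+=.@]{1,200}");

    static boolean isValidConsumerGroupId(String id) {
        return ALLOWED.matcher(id).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidConsumerGroupId("my-app.group-1")); // true
        System.out.println(isValidConsumerGroupId(""));               // false: too short
        System.out.println(isValidConsumerGroupId("bad id"));         // false: space not allowed
    }
}
```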
-
getFilterEncryption
Add Customer managed KMS key to encrypt Filter Criteria.
Default: - none
-
getFilters
Add filter criteria to Event Source.
Default: - none
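A filter entry is a map of pattern rules, as the `.filters(List.of(Map.of(...)))` line in the builder example above suggests. The raw map shape can be built with plain Java collections (a sketch; the `"status"` field and `activeOnlyFilter` name are hypothetical, and for Kafka event sources patterns typically target the record `value`):

```java
import java.util.List;
import java.util.Map;

public class KafkaFilterExample {
    // Hypothetical filter map: only deliver records whose deserialized
    // value has status == "ACTIVE". This is the raw map shape; the CDK
    // FilterCriteria/FilterRule helpers produce maps like this.
    static Map<String, Object> activeOnlyFilter() {
        return Map.of("value", Map.of("status", List.of("ACTIVE")));
    }

    public static void main(String[] args) {
        // Would be passed to the props builder as:
        // .filters(List.of(activeOnlyFilter()))
        System.out.println(activeOnlyFilter());
    }
}
```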
-
getMaxRecordAge
The maximum age of a record that Lambda sends to a function for processing. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. Records are valid until they expire in the event source.
Default: -1
-
getOnFailure
Add an on Failure Destination for this Kafka event.
Supported destinations:
- KafkaDlq - Send failed records to a Kafka topic
- SNS topics - Send failed records to an SNS topic
- SQS queues - Send failed records to an SQS queue
- S3 buckets - Send failed records to an S3 bucket
Default: - discarded records are ignored
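For instance, an SQS queue can be wrapped as the destination with the SqsDlq class from the same eventsources package (a configuration sketch, not a complete program; it assumes it runs inside a Stack, and the "KafkaFailureQueue" id is illustrative):

```java
import software.amazon.awscdk.services.lambda.eventsources.SqsDlq;
import software.amazon.awscdk.services.lambda.eventsources.IEventSourceDlq;
import software.amazon.awscdk.services.sqs.Queue;

// Inside a Stack: wrap an SQS queue as the on-failure destination,
// then pass it to the props builder via .onFailure(onFailure).
Queue dlq = new Queue(this, "KafkaFailureQueue");
IEventSourceDlq onFailure = new SqsDlq(dlq);
```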
-
getReportBatchItemFailures
- Allow functions to return partially successful responses for a batch of records.
Default: false
-
getRetryAttempts
- Maximum number of retry attempts.
Set to -1 for infinite retries (until the record expires in the event source).
Default: -1 (infinite retries)
-
getSchemaRegistryConfig
Specific configuration settings for a Kafka schema registry.
Default: - none
-
getSecret
The secret with the Kafka credentials; see https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html for details. This field is required if your Kafka brokers are accessed over the Internet.
Default: - none
-
getStartingPositionTimestamp
The time from which to start reading, in Unix time seconds.
Default: - no timestamp
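Since the property takes Unix time in seconds rather than a date object, a wall-clock instant has to be converted first. A small stdlib sketch (the class and method names are illustrative):

```java
import java.time.Instant;

public class StartingTimestamp {
    // Convert an ISO-8601 instant to the Unix-seconds value expected
    // by startingPositionTimestamp.
    static long unixSeconds(String iso8601) {
        return Instant.parse(iso8601).getEpochSecond();
    }

    public static void main(String[] args) {
        // e.g. start reading from the beginning of 2024 (UTC):
        System.out.println(unixSeconds("2024-01-01T00:00:00Z")); // 1704067200
    }
}
```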
-
builder
- Returns:
- a KafkaEventSourceProps.Builder of KafkaEventSourceProps
-