Interface KafkaEventSourceProps

All Superinterfaces:
BaseStreamEventSourceProps, software.amazon.jsii.JsiiSerializable
All Known Subinterfaces:
ManagedKafkaEventSourceProps, SelfManagedKafkaEventSourceProps
All Known Implementing Classes:
KafkaEventSourceProps.Jsii$Proxy, ManagedKafkaEventSourceProps.Jsii$Proxy, SelfManagedKafkaEventSourceProps.Jsii$Proxy

@Generated(value="jsii-pacmak/1.121.0 (build d7af9b9)", date="2025-12-18T18:20:12.859Z") @Stability(Stable) public interface KafkaEventSourceProps extends software.amazon.jsii.JsiiSerializable, BaseStreamEventSourceProps
Properties for a Kafka event source.

Example:

 // The code below shows an example of how to instantiate this type.
 // The values are placeholders you should change.
 import software.amazon.awscdk.*;
 import software.amazon.awscdk.services.kms.*;
 import software.amazon.awscdk.services.lambda.*;
 import software.amazon.awscdk.services.lambda.eventsources.*;
 import software.amazon.awscdk.services.secretsmanager.*;
 IEventSourceDlq eventSourceDlq;
 Object filters;
 Key key;
 ISchemaRegistry schemaRegistry;
 Secret secret;
 KafkaEventSourceProps kafkaEventSourceProps = KafkaEventSourceProps.builder()
         .startingPosition(StartingPosition.TRIM_HORIZON)
         .topic("topic")
         // the properties below are optional
         .batchSize(123)
         .bisectBatchOnError(false)
         .consumerGroupId("consumerGroupId")
         .enabled(false)
         .filterEncryption(key)
         .filters(List.of(Map.of(
                 "filtersKey", filters)))
         .maxBatchingWindow(Duration.minutes(30))
         .maxRecordAge(Duration.minutes(30))
         .onFailure(eventSourceDlq)
         .provisionedPollerConfig(ProvisionedPollerConfig.builder()
                 .maximumPollers(123)
                 .minimumPollers(123)
                 // the properties below are optional
                 .pollerGroupName("pollerGroupName")
                 .build())
         .reportBatchItemFailures(false)
         .retryAttempts(123)
         .schemaRegistryConfig(schemaRegistry)
         .secret(secret)
         .startingPositionTimestamp(123)
         .build();
 
  • Method Details

    • getTopic

      @Stability(Stable) @NotNull String getTopic()
      The Kafka topic to subscribe to.
    • getBisectBatchOnError

      @Stability(Stable) @Nullable default Boolean getBisectBatchOnError()
      If the function returns an error, split the batch in two and retry.

      Default: false

    • getConsumerGroupId

      @Stability(Stable) @Nullable default String getConsumerGroupId()
      The identifier for the Kafka consumer group to join.

      The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must have a length between 1 and 200 and match the pattern '[a-zA-Z0-9-/:_+=.@-]'.

      Default: - none

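As a quick illustration (plain Java, not part of the CDK API), the length and pattern constraints above can be validated before synthesizing the stack. The regex below is adapted from the documented character class, with the hyphen moved to the end so Java does not parse it as a range:

```java
import java.util.regex.Pattern;

public class ConsumerGroupIdCheck {
    // Character class from the docs above; the hyphen is placed last
    // so it is treated as a literal rather than a range operator.
    private static final Pattern VALID =
            Pattern.compile("[a-zA-Z0-9/:_+=.@-]+");

    static boolean isValidConsumerGroupId(String id) {
        return id.length() >= 1 && id.length() <= 200
                && VALID.matcher(id).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidConsumerGroupId("my-app.consumer-1")); // true
        System.out.println(isValidConsumerGroupId(""));                  // false
        System.out.println(isValidConsumerGroupId("bad group"));         // false
    }
}
```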
    • getFilterEncryption

      @Stability(Stable) @Nullable default IKey getFilterEncryption()
      Add a customer-managed KMS key to encrypt the filter criteria.

      Default: - none

    • getFilters

      @Stability(Stable) @Nullable default List<Map<String,Object>> getFilters()
      Add filter criteria to Event Source.

      Default: - none

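The shape this property expects can be sketched in plain Java as nested maps, matching the List<Map<String,Object>> return type of getFilters(). The status == "active" pattern below is purely hypothetical; in practice the lambda module's FilterCriteria and FilterRule helpers can build these maps for you:

```java
import java.util.List;
import java.util.Map;

public class KafkaFilterExample {
    // Hypothetical filter: only forward records whose JSON value has
    // status == "active". The "pattern" key holds the serialized
    // filter criteria, as in the Lambda event-filtering model.
    static List<Map<String, Object>> activeOnlyFilter() {
        return List.of(Map.of(
                "pattern", "{\"value\":{\"status\":[\"active\"]}}"));
    }

    public static void main(String[] args) {
        System.out.println(activeOnlyFilter());
    }
}
```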
    • getMaxRecordAge

      @Stability(Stable) @Nullable default Duration getMaxRecordAge()
      The maximum age of a record that Lambda sends to a function for processing.

      The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records; records are valid until they expire in the event source.

      Default: -1

    • getOnFailure

      @Stability(Stable) @Nullable default IEventSourceDlq getOnFailure()
      Add an on-failure destination for this Kafka event source.

      Supported destinations:

        • KafkaDlq - Send failed records to a Kafka topic
        • SNS topics - Send failed records to an SNS topic
        • SQS queues - Send failed records to an SQS queue
        • S3 buckets - Send failed records to an S3 bucket

      Default: - discarded records are ignored

    • getReportBatchItemFailures

      @Stability(Stable) @Nullable default Boolean getReportBatchItemFailures()
      Allow functions to return partially successful responses for a batch of records.

      Default: false
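When this flag is enabled, the handler itself must return the partial-batch-response payload so that only the reported records are retried. A minimal sketch of that payload shape in plain Java (the offset identifier is a placeholder):

```java
import java.util.List;
import java.util.Map;

public class PartialBatchResponse {
    // Builds the partial-batch-response payload: each failed record
    // is reported under "batchItemFailures" by its item identifier,
    // so Lambda retries only those records instead of the whole batch.
    static Map<String, Object> failures(List<String> failedIds) {
        return Map.of("batchItemFailures",
                failedIds.stream()
                        .map(id -> Map.of("itemIdentifier", id))
                        .toList());
    }

    public static void main(String[] args) {
        System.out.println(failures(List.of("offset-42")));
    }
}
```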

    • getRetryAttempts

      @Stability(Stable) @Nullable default Number getRetryAttempts()
      Maximum number of retry attempts.

      Set to -1 for infinite retries (until the record expires in the event source).

      Default: -1 (infinite retries)

    • getSchemaRegistryConfig

      @Stability(Stable) @Nullable default ISchemaRegistry getSchemaRegistryConfig()
      Specific configuration settings for a Kafka schema registry.

      Default: - none

    • getSecret

      @Stability(Stable) @Nullable default ISecret getSecret()
      The secret with the Kafka credentials; see https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html for details. This field is required if your Kafka brokers are accessed over the Internet.

      Default: - none

    • getStartingPositionTimestamp

      @Stability(Stable) @Nullable default Number getStartingPositionTimestamp()
      The time from which to start reading, in Unix time seconds.

      Default: - no timestamp
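Since the value is plain Unix seconds rather than a Duration, a small helper like the following can compute it (plain Java; the date is only an example). It is typically paired with StartingPosition.AT_TIMESTAMP:

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class StartingTimestamp {
    // Convert a UTC wall-clock date to the Unix-seconds value that
    // startingPositionTimestamp expects.
    static long unixSeconds(int year, int month, int day) {
        return ZonedDateTime.of(year, month, day, 0, 0, 0, 0, ZoneOffset.UTC)
                .toInstant()
                .getEpochSecond();
    }

    public static void main(String[] args) {
        System.out.println(unixSeconds(2024, 1, 1)); // 1704067200
    }
}
```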

    • builder

      @Stability(Stable) static KafkaEventSourceProps.Builder builder()
      Returns:
      a KafkaEventSourceProps.Builder of KafkaEventSourceProps