KafkaEventSourceProps

class aws_cdk.aws_lambda_event_sources.KafkaEventSourceProps(*, starting_position, batch_size=None, enabled=None, max_batching_window=None, provisioned_poller_config=None, topic, bisect_batch_on_error=None, consumer_group_id=None, filter_encryption=None, filters=None, max_record_age=None, on_failure=None, report_batch_item_failures=None, retry_attempts=None, schema_registry_config=None, secret=None, starting_position_timestamp=None)

Bases: BaseStreamEventSourceProps

Properties for a Kafka event source.

Parameters:
  • starting_position (StartingPosition) – Where to begin consuming the stream.

  • batch_size (Union[int, float, None]) – The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function. Your function receives an event with all the retrieved records. Valid Range: - Minimum value of 1 - Maximum value of: - 1000 for DynamoEventSource - 10000 for KinesisEventSource, ManagedKafkaEventSource and SelfManagedKafkaEventSource Default: 100

  • enabled (Optional[bool]) – If the stream event source mapping should be enabled. Default: true

  • max_batching_window (Optional[Duration]) – The maximum amount of time to gather records before invoking the function. Maximum of Duration.minutes(5). Default: - Duration.seconds(0) for Kinesis, DynamoDB, and SQS event sources, Duration.millis(500) for MSK, self-managed Kafka, and Amazon MQ.

  • provisioned_poller_config (Union[ProvisionedPollerConfig, Dict[str, Any], None]) – Configuration for provisioned pollers that read from the event source. When specified, allows control over the minimum and maximum number of pollers that can be provisioned to process events from the source. Default: - no provisioned pollers

  • topic (str) – The Kafka topic to subscribe to.

  • bisect_batch_on_error (Optional[bool]) – If the function returns an error, split the batch in two and retry. Default: false

  • consumer_group_id (Optional[str]) – The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must be between 1 and 200 characters long and match the pattern ‘[a-zA-Z0-9-/:_+=.@-]’. Default: - none

  • filter_encryption (Optional[IKey]) – Add a customer-managed KMS key to encrypt the filter criteria. Default: - none

  • filters (Optional[Sequence[Mapping[str, Any]]]) – Add filter criteria to the event source. Default: - none

  • max_record_age (Optional[Duration]) – The maximum age of a record that Lambda sends to a function for processing. The default value is -1, which sets the maximum age to infinite; Lambda never discards old records. Records are valid until they expire in the event source. Default: -1

  • on_failure (Optional[IEventSourceDlq]) – Add an on-failure destination for this Kafka event source. Supported destinations: - KafkaDlq - Send failed records to a Kafka topic - SNS topics - Send failed records to an SNS topic - SQS queues - Send failed records to an SQS queue - S3 buckets - Send failed records to an S3 bucket Default: - discarded records are ignored

  • report_batch_item_failures (Optional[bool]) – Allow functions to return partially successful responses for a batch of records. Default: false

  • retry_attempts (Union[int, float, None]) – Maximum number of retry attempts. Set to -1 for infinite retries (until the record expires in the event source). Default: -1 (infinite retries)

  • schema_registry_config (Optional[ISchemaRegistry]) – Specific configuration settings for a Kafka schema registry. Default: - none

  • secret (Optional[ISecret]) – The secret with the Kafka credentials; see https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html for details. This field is required if your Kafka brokers are accessed over the Internet. Default: none

  • starting_position_timestamp (Union[int, float, None]) – The time from which to start reading, in Unix time seconds. Default: - no timestamp

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
import aws_cdk as cdk
from aws_cdk import aws_kms as kms
from aws_cdk import aws_lambda as lambda_
from aws_cdk import aws_lambda_event_sources as lambda_event_sources
from aws_cdk import aws_secretsmanager as secretsmanager

# event_source_dlq: lambda.IEventSourceDlq
# filters: Any
# key: kms.Key
# schema_registry: lambda.ISchemaRegistry
# secret: secretsmanager.Secret

kafka_event_source_props = lambda_event_sources.KafkaEventSourceProps(
    starting_position=lambda_.StartingPosition.TRIM_HORIZON,
    topic="topic",

    # the properties below are optional
    batch_size=123,
    bisect_batch_on_error=False,
    consumer_group_id="consumerGroupId",
    enabled=False,
    filter_encryption=key,
    filters=[{
        "filters_key": filters
    }],
    max_batching_window=cdk.Duration.minutes(5),
    max_record_age=cdk.Duration.minutes(30),
    on_failure=event_source_dlq,
    provisioned_poller_config=lambda_event_sources.ProvisionedPollerConfig(
        maximum_pollers=123,
        minimum_pollers=123,

        # the properties below are optional
        poller_group_name="pollerGroupName"
    ),
    report_batch_item_failures=False,
    retry_attempts=123,
    schema_registry_config=schema_registry,
    secret=secret,
    starting_position_timestamp=123
)

Attributes

batch_size

The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function.

Your function receives an event with all the retrieved records.

Valid Range:

  • Minimum value of 1

  • Maximum value of:

    • 1000 for DynamoEventSource

    • 10000 for KinesisEventSource, ManagedKafkaEventSource and SelfManagedKafkaEventSource

Default:

100
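The valid range above can be captured in a small validation helper. This is a plain-Python sketch of the documented limits; the helper itself is hypothetical and not part of the CDK API:

```python
# Maximum batch sizes per event source class, as listed above.
MAX_BATCH_SIZE = {
    "DynamoEventSource": 1000,
    "KinesisEventSource": 10000,
    "ManagedKafkaEventSource": 10000,
    "SelfManagedKafkaEventSource": 10000,
}

def validate_batch_size(source_type: str, batch_size: int) -> int:
    """Return batch_size if it is within the valid range for source_type."""
    maximum = MAX_BATCH_SIZE[source_type]
    if not 1 <= batch_size <= maximum:
        raise ValueError(
            f"batch_size must be between 1 and {maximum} for {source_type}, "
            f"got {batch_size}"
        )
    return batch_size
```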

bisect_batch_on_error

If the function returns an error, split the batch in two and retry.

Default:

false

consumer_group_id

The identifier for the Kafka consumer group to join.

The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must be between 1 and 200 characters long and match the pattern ‘[a-zA-Z0-9-/:_+=.@-]’.

Default:
  • none

See:

https://docs.aws.amazon.com/lambda/latest/dg/with-msk.html#services-msk-consumer-group-id
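The length and character constraints above can be checked before deployment. A plain-Python sketch (the helper is hypothetical, not part of the CDK API, and interprets the documented character set literally):

```python
import re

# 1-200 characters drawn from the allowed set described above.
_CONSUMER_GROUP_ID = re.compile(r"[a-zA-Z0-9\-/:_+=.@]{1,200}")

def is_valid_consumer_group_id(value: str) -> bool:
    """True if value satisfies the documented consumerGroupId constraints."""
    return _CONSUMER_GROUP_ID.fullmatch(value) is not None
```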

enabled

If the stream event source mapping should be enabled.

Default:

true

filter_encryption

Add a customer-managed KMS key to encrypt the filter criteria.

Default:
  • none

See:

https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk

filters

Add filter criteria to the event source.

Default:
  • none

See:

https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html
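Each entry in filters is a plain mapping. A minimal sketch of the shape such an entry can take, assuming the JSON-encoded "pattern" key convention from Lambda's event-filtering documentation (the make_filter helper is hypothetical; in real CDK code you would typically build entries with lambda_.FilterCriteria):

```python
import json

def make_filter(pattern: dict) -> dict:
    """Wrap an event-filtering pattern as a mapping with a JSON 'pattern' key."""
    return {"pattern": json.dumps(pattern)}

# Only invoke the function for records whose deserialized value
# has a status field equal to "active".
status_filter = make_filter({"value": {"status": ["active"]}})
```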

max_batching_window

The maximum amount of time to gather records before invoking the function.

Maximum of Duration.minutes(5).

Default:
  • Duration.seconds(0) for Kinesis, DynamoDB, and SQS event sources, Duration.millis(500) for MSK, self-managed Kafka, and Amazon MQ.

See:

https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html#invocation-eventsourcemapping-batching

max_record_age

The maximum age of a record that Lambda sends to a function for processing.

The default value is -1, which sets the maximum age to infinite; Lambda never discards old records. Records are valid until they expire in the event source.

Default:

-1

on_failure

Add an on-failure destination for this Kafka event source.

Supported destinations:

  • KafkaDlq - Send failed records to a Kafka topic

  • SNS topics - Send failed records to an SNS topic

  • SQS queues - Send failed records to an SQS queue

  • S3 buckets - Send failed records to an S3 bucket

Default:
  • discarded records are ignored

provisioned_poller_config

Configuration for provisioned pollers that read from the event source.

When specified, allows control over the minimum and maximum number of pollers that can be provisioned to process events from the source.

Default:
  • no provisioned pollers

report_batch_item_failures

Allow functions to return partially successful responses for a batch of records.

Default:

false

retry_attempts

Maximum number of retry attempts.

Set to -1 for infinite retries (until the record expires in the event source).

Default:

-1 (infinite retries)

schema_registry_config

Specific configuration settings for a Kafka schema registry.

Default:
  • none

secret

The secret with the Kafka credentials; see https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html for details. This field is required if your Kafka brokers are accessed over the Internet.

Default:

none

starting_position

Where to begin consuming the stream.

starting_position_timestamp

The time from which to start reading, in Unix time seconds.

Default:
  • no timestamp
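Since the value is Unix time in seconds, a timezone-aware datetime converts directly. A small sketch (the to_unix_seconds helper is hypothetical, not part of the CDK API):

```python
from datetime import datetime, timezone

def to_unix_seconds(dt: datetime) -> int:
    """Convert a timezone-aware datetime to Unix time in seconds."""
    return int(dt.timestamp())

# Start reading from midnight UTC on 2024-01-01.
start = to_unix_seconds(datetime(2024, 1, 1, tzinfo=timezone.utc))  # 1704067200
```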

topic

The Kafka topic to subscribe to.