interface SelfManagedKafkaEventSourceProps
| Language | Type name |
|---|---|
| .NET | Amazon.CDK.AWS.Lambda.EventSources.SelfManagedKafkaEventSourceProps |
| Go | github.com/aws/aws-cdk-go/awscdk/v2/awslambdaeventsources#SelfManagedKafkaEventSourceProps |
| Java | software.amazon.awscdk.services.lambda.eventsources.SelfManagedKafkaEventSourceProps |
| Python | aws_cdk.aws_lambda_event_sources.SelfManagedKafkaEventSourceProps |
| TypeScript (source) | aws-cdk-lib » aws_lambda_event_sources » SelfManagedKafkaEventSourceProps |
Properties for a self-managed Kafka cluster event source.
If your Kafka cluster is only reachable within a VPC, make sure to configure the vpc, vpcSubnets, and securityGroup properties accordingly.
Example
import { SelfManagedKafkaEventSource, AuthenticationMethod } from 'aws-cdk-lib/aws-lambda-event-sources';
import { StartingPosition, Function } from 'aws-cdk-lib/aws-lambda';
import { ISecret } from 'aws-cdk-lib/aws-secretsmanager';
// With provisioned pollers and poller group for cost optimization
declare const myFunction: Function;
declare const kafkaCredentials: ISecret;
myFunction.addEventSource(new SelfManagedKafkaEventSource({
  bootstrapServers: ['kafka-broker1.example.com:9092', 'kafka-broker2.example.com:9092'],
  topic: 'events-topic',
  secret: kafkaCredentials,
  startingPosition: StartingPosition.LATEST,
  authenticationMethod: AuthenticationMethod.SASL_SCRAM_512_AUTH,
  provisionedPollerConfig: {
    minimumPollers: 1,
    maximumPollers: 8,
    pollerGroupName: 'self-managed-kafka-group', // Group pollers to reduce costs
  },
}));
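If the cluster is only reachable within a VPC, the vpc, vpcSubnets, and securityGroup properties are provided together. A minimal sketch, assuming an existing VPC and security group (the broker address is illustrative):
import { SelfManagedKafkaEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';
import { StartingPosition, Function } from 'aws-cdk-lib/aws-lambda';
import { ISecret } from 'aws-cdk-lib/aws-secretsmanager';
import { IVpc, ISecurityGroup, SubnetType } from 'aws-cdk-lib/aws-ec2';

declare const myFunction: Function;
declare const kafkaCredentials: ISecret;
declare const brokerVpc: IVpc;           // assumed to exist
declare const brokerSecurityGroup: ISecurityGroup; // assumed to exist

myFunction.addEventSource(new SelfManagedKafkaEventSource({
  bootstrapServers: ['kafka-broker1.example.com:9092'],
  topic: 'events-topic',
  secret: kafkaCredentials,
  startingPosition: StartingPosition.LATEST,
  // All three VPC-related properties are set together.
  vpc: brokerVpc,
  vpcSubnets: { subnetType: SubnetType.PRIVATE_WITH_EGRESS },
  securityGroup: brokerSecurityGroup,
}));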
Properties
| Name | Type | Description |
|---|---|---|
| bootstrapServers | string[] | The list of host and port pairs that are the addresses of the Kafka brokers in a "bootstrap" Kafka cluster that a Kafka client connects to initially to bootstrap itself. |
| startingPosition | StartingPosition | Where to begin consuming the stream. |
| topic | string | The Kafka topic to subscribe to. |
| authenticationMethod? | AuthenticationMethod | The authentication method for your Kafka cluster. |
| batchSize? | number | The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function. |
| bisectBatchOnError? | boolean | If the function returns an error, split the batch in two and retry. |
| consumerGroupId? | string | The identifier for the Kafka consumer group to join. |
| enabled? | boolean | If the stream event source mapping should be enabled. |
| filterEncryption? | IKey | Add a customer-managed KMS key to encrypt filter criteria. |
| filters? | { [string]: any }[] | Add filter criteria to the event source. |
| maxBatchingWindow? | Duration | The maximum amount of time to gather records before invoking the function. |
| maxRecordAge? | Duration | The maximum age of a record that Lambda sends to a function for processing. |
| onFailure? | IEventSourceDlq | Add an on-failure destination for this Kafka event source. |
| provisionedPollerConfig? | ProvisionedPollerConfig | Configuration for provisioned pollers that read from the event source. |
| reportBatchItemFailures? | boolean | Allow functions to return partially successful responses for a batch of records. |
| retryAttempts? | number | Maximum number of retry attempts. |
| rootCACertificate? | ISecret | The secret with the root CA certificate used by your Kafka brokers for TLS encryption. Required if your Kafka brokers use certificates signed by a private CA. |
| schemaRegistryConfig? | ISchemaRegistry | Specific configuration settings for a Kafka schema registry. |
| secret? | ISecret | The secret with the Kafka credentials (see https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html for details). Required if your Kafka brokers are accessed over the Internet. |
| securityGroup? | ISecurityGroup | If your Kafka brokers are only reachable via VPC, provide the security group here. |
| startingPositionTimestamp? | number | The time from which to start reading, in Unix time seconds. |
| vpc? | IVpc | If your Kafka brokers are only reachable via VPC, provide the VPC here. |
| vpcSubnets? | SubnetSelection | If your Kafka brokers are only reachable via VPC, provide the subnet selection here. |
bootstrapServers
Type:
string[]
The list of host and port pairs that are the addresses of the Kafka brokers in a "bootstrap" Kafka cluster that a Kafka client connects to initially to bootstrap itself.
They are in the format abc.xyz.com:xxxx.
startingPosition
Type:
StartingPosition
Where to begin consuming the stream.
topic
Type:
string
The Kafka topic to subscribe to.
authenticationMethod?
Type:
AuthenticationMethod
(optional, default: AuthenticationMethod.SASL_SCRAM_512_AUTH)
The authentication method for your Kafka cluster.
batchSize?
Type:
number
(optional, default: 100)
The largest number of records that AWS Lambda will retrieve from your event source at the time of invoking your function.
Your function receives an event with all the retrieved records.
Valid Range:
- Minimum value of 1
- Maximum value of:
  - 1000 for DynamoEventSource
  - 10000 for KinesisEventSource, ManagedKafkaEventSource and SelfManagedKafkaEventSource
bisectBatchOnError?
Type:
boolean
(optional, default: false)
If the function returns an error, split the batch in two and retry.
consumerGroupId?
Type:
string
(optional, default: none)
The identifier for the Kafka consumer group to join.
The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. The value must have a length between 1 and 200 and fulfill the pattern '[a-zA-Z0-9-/:_+=.@-]'.
enabled?
Type:
boolean
(optional, default: true)
If the stream event source mapping should be enabled.
filterEncryption?
Type:
IKey
(optional, default: none)
Add a customer-managed KMS key to encrypt filter criteria (see the sketch under filters below).
filters?
Type:
{ [string]: any }[]
(optional, default: none)
Add filter criteria to the event source.
See also: https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html
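A minimal sketch of event filtering using the FilterCriteria and FilterRule helpers from aws-cdk-lib/aws-lambda; the eventType field is a hypothetical attribute of the record value, and the KMS key used for filterEncryption is assumed to exist:
import { SelfManagedKafkaEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';
import { StartingPosition, Function, FilterCriteria, FilterRule } from 'aws-cdk-lib/aws-lambda';
import { ISecret } from 'aws-cdk-lib/aws-secretsmanager';
import { IKey } from 'aws-cdk-lib/aws-kms';

declare const myFunction: Function;
declare const kafkaCredentials: ISecret;
declare const filterKey: IKey; // assumed customer-managed key

myFunction.addEventSource(new SelfManagedKafkaEventSource({
  bootstrapServers: ['kafka-broker1.example.com:9092'],
  topic: 'events-topic',
  secret: kafkaCredentials,
  startingPosition: StartingPosition.LATEST,
  // Only invoke the function for records whose (hypothetical) eventType
  // field in the message value equals 'order_placed'.
  filters: [
    FilterCriteria.filter({
      value: { eventType: FilterRule.isEqual('order_placed') },
    }),
  ],
  // Optionally encrypt the filter criteria with the customer-managed key.
  filterEncryption: filterKey,
}));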
maxBatchingWindow?
Type:
Duration
(optional, default: Duration.seconds(0) for Kinesis, DynamoDB, and SQS event sources, Duration.millis(500) for MSK, self-managed Kafka, and Amazon MQ.)
The maximum amount of time to gather records before invoking the function.
Maximum of Duration.minutes(5).
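For example, a sketch that trades a little latency for larger batches (the values are illustrative, not recommendations):
import { Duration } from 'aws-cdk-lib';
import { SelfManagedKafkaEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';
import { StartingPosition, Function } from 'aws-cdk-lib/aws-lambda';
import { ISecret } from 'aws-cdk-lib/aws-secretsmanager';

declare const myFunction: Function;
declare const kafkaCredentials: ISecret;

myFunction.addEventSource(new SelfManagedKafkaEventSource({
  bootstrapServers: ['kafka-broker1.example.com:9092'],
  topic: 'events-topic',
  secret: kafkaCredentials,
  startingPosition: StartingPosition.LATEST,
  // Invoke with up to 500 records, or with whatever has accumulated
  // after 3 seconds, whichever comes first.
  batchSize: 500,
  maxBatchingWindow: Duration.seconds(3),
}));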
maxRecordAge?
Type:
Duration
(optional, default: -1)
The maximum age of a record that Lambda sends to a function for processing.
The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. Records are valid until they expire in the event source.
onFailure?
Type:
IEventSourceDlq
(optional, default: discarded records are ignored)
Add an on-failure destination for this Kafka event source.
Supported destinations:
- KafkaDlq - Send failed records to a Kafka topic
- SNS topics - Send failed records to an SNS topic
- SQS queues - Send failed records to an SQS queue
- S3 buckets - Send failed records to an S3 bucket
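As a sketch, sending failed batches to an S3 bucket with the S3OnFailureDestination class (the bucket is assumed to exist):
import { SelfManagedKafkaEventSource, S3OnFailureDestination } from 'aws-cdk-lib/aws-lambda-event-sources';
import { StartingPosition, Function } from 'aws-cdk-lib/aws-lambda';
import { ISecret } from 'aws-cdk-lib/aws-secretsmanager';
import { IBucket } from 'aws-cdk-lib/aws-s3';

declare const myFunction: Function;
declare const kafkaCredentials: ISecret;
declare const dlqBucket: IBucket; // assumed to exist

myFunction.addEventSource(new SelfManagedKafkaEventSource({
  bootstrapServers: ['kafka-broker1.example.com:9092'],
  topic: 'events-topic',
  secret: kafkaCredentials,
  startingPosition: StartingPosition.LATEST,
  // Records that cannot be processed are delivered to the S3 bucket.
  onFailure: new S3OnFailureDestination(dlqBucket),
}));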
provisionedPollerConfig?
Type:
ProvisionedPollerConfig
(optional, default: no provisioned pollers)
Configuration for provisioned pollers that read from the event source.
When specified, allows control over the minimum and maximum number of pollers that can be provisioned to process events from the source.
reportBatchItemFailures?
Type:
boolean
(optional, default: false)
Allow functions to return partially successful responses for a batch of records.
retryAttempts?
Type:
number
(optional, default: -1 (infinite retries))
Maximum number of retry attempts.
Set to -1 for infinite retries (until the record expires in the event source).
rootCACertificate?
Type:
ISecret
(optional, default: none)
The secret with the root CA certificate used by your Kafka brokers for TLS encryption. This field is required if your Kafka brokers use certificates signed by a private CA.
schemaRegistryConfig?
Type:
ISchemaRegistry
(optional, default: none)
Specific configuration settings for a Kafka schema registry.
secret?
Type:
ISecret
(optional, default: none)
The secret with the Kafka credentials; see https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html for details. This field is required if your Kafka brokers are accessed over the Internet.
securityGroup?
Type:
ISecurityGroup
(optional, default: none, required if setting vpc)
If your Kafka brokers are only reachable via VPC, provide the security group here.
startingPositionTimestamp?
Type:
number
(optional, default: no timestamp)
The time from which to start reading, in Unix time seconds.
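This takes effect only when startingPosition is set to StartingPosition.AT_TIMESTAMP; a minimal sketch with an illustrative timestamp:
import { SelfManagedKafkaEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';
import { StartingPosition, Function } from 'aws-cdk-lib/aws-lambda';
import { ISecret } from 'aws-cdk-lib/aws-secretsmanager';

declare const myFunction: Function;
declare const kafkaCredentials: ISecret;

myFunction.addEventSource(new SelfManagedKafkaEventSource({
  bootstrapServers: ['kafka-broker1.example.com:9092'],
  topic: 'events-topic',
  secret: kafkaCredentials,
  // Start reading from a fixed point in time (2024-01-01T00:00:00Z).
  startingPosition: StartingPosition.AT_TIMESTAMP,
  startingPositionTimestamp: 1704067200,
}));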
vpc?
Type:
IVpc
(optional, default: none)
If your Kafka brokers are only reachable via VPC, provide the VPC here.
vpcSubnets?
Type:
SubnetSelection
(optional, default: none, required if setting vpc)
If your Kafka brokers are only reachable via VPC, provide the subnet selection here.
