

After careful consideration, we have decided to discontinue Amazon Kinesis Data Analytics for SQL applications:

1. From **September 1, 2025**, we won't provide bug fixes for Amazon Kinesis Data Analytics for SQL applications, because support will be limited ahead of the discontinuation.

2. From **October 15, 2025**, you will not be able to create new Kinesis Data Analytics for SQL applications.

3. We will delete your applications starting **January 27, 2026**. You will not be able to start or operate your Amazon Kinesis Data Analytics for SQL applications. Support will no longer be available for Amazon Kinesis Data Analytics for SQL from that time. For more information, see [Amazon Kinesis Data Analytics for SQL Applications discontinuation](discontinuation.md).

# Configuring Application Output



In your application code, you write the output of SQL statements to one or more in-application streams. You can optionally add an output configuration to your application to persist everything written to an in-application stream to an external destination, such as an Amazon Kinesis data stream, a Firehose delivery stream, or an AWS Lambda function.

There is a limit on the number of external destinations you can use to persist an application output. For more information, see [Limits](limits.md). 

**Note**  
We recommend that you use one external destination to persist in-application error stream data so that you can investigate the errors. 



In each of these output configurations, you provide the following:
+ **In-application stream name** – The stream that you want to persist to an external destination. 

  Kinesis Data Analytics looks for the in-application stream that you specified in the output configuration. (The stream name is case sensitive and must match exactly.) Make sure that your application code creates this in-application stream. 
+ **External destination** – You can persist data to a Kinesis data stream, a Firehose delivery stream, or a Lambda function. You provide the Amazon Resource Name (ARN) of the stream or function. You also provide an IAM role that Kinesis Data Analytics can assume to write to the stream or function on your behalf. You describe the record format (JSON, CSV) to Kinesis Data Analytics to use when writing to the external destination.

If Kinesis Data Analytics can't write to the streaming or Lambda destination, the service continues to try indefinitely. This creates back pressure, causing your application to fall behind. If this issue is not resolved, your application eventually stops processing new data. You can monitor [Kinesis Data Analytics Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/aka-metricscollected.html) and set alarms for failures. For more information about metrics and alarms, see [Using Amazon CloudWatch Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) and [Creating Amazon CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html).
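To catch sustained delivery failures early, you could alarm on the failure metrics mentioned above. The following is a hedged sketch that builds `PutMetricAlarm` parameters; the metric name and the `Application` dimension are assumptions drawn from the linked CloudWatch metrics page, so verify them against the metrics your application actually emits. The API call itself is commented out because it requires AWS credentials.

```python
# A sketch, not a definitive recipe: the metric name and the "Application"
# dimension below are assumptions based on the CloudWatch metrics pages
# linked above; verify them against your application's emitted metrics.

def delivery_failure_alarm(application_name, sns_topic_arn):
    """Build PutMetricAlarm parameters that fire when any output deliveries fail."""
    return {
        "AlarmName": f"{application_name}-output-delivery-failures",
        "Namespace": "AWS/KinesisAnalytics",
        "MetricName": "LambdaDelivery.DeliveryFailedRecords",  # assumed metric name
        "Dimensions": [{"Name": "Application", "Value": application_name}],
        "Statistic": "Sum",
        "Period": 60,
        "EvaluationPeriods": 1,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }

# With AWS credentials configured, the alarm could be created with boto3:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(
#       **delivery_failure_alarm("ExampleApp",
#                                "arn:aws:sns:us-east-1:111122223333:alerts"))
```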

You can configure the application output using the AWS Management Console. The console makes the API call to save the configuration. 

## Creating an Output Using the AWS CLI


This section describes how to create the `Outputs` section of the request body for a `CreateApplication` or `AddApplicationOutput` operation.

### Creating a Kinesis Stream Output


The following JSON fragment shows the `Outputs` section in the `CreateApplication` request body for creating an Amazon Kinesis data stream destination.

```
"Outputs": [
   {
       "DestinationSchema": {
           "RecordFormatType": "string"
       },
       "KinesisStreamsOutput": {
           "ResourceARN": "string",
           "RoleARN": "string"
       },
       "Name": "string"
   }
]
```
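As a sketch, the same `Outputs` entry can be built in Python and attached with the `AddApplicationOutput` operation. The application name, stream ARN, and role ARN below are placeholders, and the boto3 call is commented out because it requires real resources and credentials.

```python
# Build one entry of the Outputs list for a Kinesis data stream destination.
# All ARNs and names in the usage sketch below are placeholders.

def kinesis_stream_output(name, stream_arn, role_arn, record_format="JSON"):
    """Return an Outputs entry matching the JSON fragment above."""
    return {
        "Name": name,  # in-application stream name; case sensitive
        "KinesisStreamsOutput": {
            "ResourceARN": stream_arn,
            "RoleARN": role_arn,
        },
        "DestinationSchema": {"RecordFormatType": record_format},
    }

# With real resources, the entry could be attached like so (requires boto3
# and AWS credentials):
#   import boto3
#   client = boto3.client("kinesisanalytics")
#   client.add_application_output(
#       ApplicationName="ExampleApp",
#       CurrentApplicationVersionId=1,
#       Output=kinesis_stream_output(
#           "DESTINATION_SQL_STREAM",
#           "arn:aws:kinesis:us-east-1:111122223333:stream/ExampleOutputStream",
#           "arn:aws:iam::111122223333:role/ExampleRole"),
#   )
```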

### Creating a Firehose Delivery Stream Output


The following JSON fragment shows the `Outputs` section in the `CreateApplication` request body for creating an Amazon Data Firehose delivery stream destination.

```
"Outputs": [
   {
       "DestinationSchema": {
           "RecordFormatType": "string"
       },
       "KinesisFirehoseOutput": {
           "ResourceARN": "string",
           "RoleARN": "string"
       },
       "Name": "string"
   }
]
```

### Creating a Lambda Function Output


The following JSON fragment shows the `Outputs` section in the `CreateApplication` request body for creating an AWS Lambda function destination.

```
"Outputs": [
   {
       "DestinationSchema": {
           "RecordFormatType": "string"
       },
       "LambdaOutput": {
           "ResourceARN": "string",
           "RoleARN": "string"
       },
       "Name": "string"
   }
]
```

# Using a Lambda Function as Output


Using AWS Lambda as a destination allows you to more easily perform post-processing of your SQL results before sending them to a final destination. Common post-processing tasks include the following:
+ Aggregating multiple rows into a single record
+ Combining current results with past results to address late-arriving data
+ Delivering to different destinations based on the type of information
+ Record format translation (such as translating to Protobuf)
+ String manipulation or transformation
+ Data enrichment after analytical processing
+ Custom processing for geospatial use cases
+ Data encryption

Lambda functions can deliver analytic information to a variety of AWS services and other destinations, including the following:
+ [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/)
+ Custom APIs
+ [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/)
+ [Amazon Aurora](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/)
+ [Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/dg/)
+ [Amazon Simple Notification Service (Amazon SNS)](https://docs.aws.amazon.com/sns/latest/dg/)
+ [Amazon Simple Queue Service (Amazon SQS)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/)
+ [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/)

For more information about creating Lambda applications, see [Getting Started with AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html).

**Topics**
+ [Lambda as Output Permissions](#how-it-works-output-lambda-perms)
+ [Lambda as Output Metrics](#how-it-works-output-lambda-metrics)
+ [Lambda as Output Event Input Data Model and Record Response Model](#how-it-works-output-lambda-model)
+ [Lambda Output Invocation Frequency](#how-it-works-output-lambda-frequency)
+ [Adding a Lambda Function for Use as an Output](#how-it-works-output-lambda-procedure)
+ [Common Lambda as Output Failures](#how-it-works-output-lambda-troubleshooting)
+ [Creating Lambda Functions for Application Destinations](how-it-works-output-lambda-functions.md)

## Lambda as Output Permissions


To use Lambda as output, the application’s Lambda output IAM role requires the following permissions policy:

```
{
   "Sid": "UseLambdaFunction",
   "Effect": "Allow",
   "Action": [
       "lambda:InvokeFunction",
       "lambda:GetFunctionConfiguration"
   ],
   "Resource": "FunctionARN"
}
```

## Lambda as Output Metrics


You use Amazon CloudWatch to monitor the number of bytes sent, successes and failures, and so on. For information about CloudWatch metrics that are emitted by Kinesis Data Analytics using Lambda as output, see [Amazon Kinesis Analytics Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/aka-metricscollected.html).

## Lambda as Output Event Input Data Model and Record Response Model


To send Kinesis Data Analytics output records, your Lambda function must be compliant with the required event input data and record response models. 

### Event Input Data Model


Kinesis Data Analytics continuously sends the output records from the application to your Lambda as-output function using the following request model. Within your function, you iterate through the list and apply your business logic to accomplish your output requirements (such as data transformation before sending to a final destination).


| Field | Description | 
| --- | --- | 
| invocationId | The Lambda invocation ID (random GUID). | 
| applicationArn | The Kinesis Data Analytics application Amazon Resource Name (ARN). | 
| records | The list of output records to deliver. Each record contains a `recordId`, a `lambdaDeliveryRecordMetadata` object, and `data`. | 
| recordId | The record ID (random GUID). | 
| lambdaDeliveryRecordMetadata | Delivery metadata for the record, containing `retryHint`. | 
| data | The Base64-encoded output record payload. | 
| retryHint | The number of times delivery of this record has been retried. | 

**Note**  
The `retryHint` is a value that increases for every delivery failure. This value is not durably persisted, and resets if the application is disrupted.

### Record Response Model


Each record sent to your Lambda as an output function (with record IDs) must be acknowledged with either `Ok` or `DeliveryFailed`, and it must contain the following parameters. Otherwise, Kinesis Data Analytics treats them as a delivery failure.


| Field | Description | 
| --- | --- | 
| records | The list of acknowledged records, one per input record. | 
| recordId | The record ID, passed from Kinesis Data Analytics to Lambda during the invocation. Any mismatch between the ID of the original record and the ID of the acknowledged record is treated as a delivery failure. | 
| result | The status of the delivery of the record: `Ok` (the record was delivered successfully) or `DeliveryFailed` (delivery failed, and Kinesis Data Analytics retries the record). | 
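The two models above can be sketched as a minimal Python handler that decodes each record's payload, applies placeholder logic, and acknowledges every record with `Ok` or `DeliveryFailed`. The `process_record` helper is hypothetical; substitute your own post-processing.

```python
import base64
import json

def process_record(payload):
    # Hypothetical placeholder business logic: here we simply parse the
    # JSON payload; replace this with your own post-processing.
    return json.loads(payload)

def lambda_handler(event, context):
    """Acknowledge every record from Kinesis Data Analytics with Ok or DeliveryFailed."""
    output = []
    for record in event["records"]:
        try:
            payload = base64.b64decode(record["data"]).decode("utf-8")
            process_record(payload)
            output.append({"recordId": record["recordId"], "result": "Ok"})
        except Exception:
            # Unhandled exceptions would stall the batch; report the
            # failure explicitly so only this record is retried.
            output.append({"recordId": record["recordId"], "result": "DeliveryFailed"})
    return {"records": output}
```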

## Lambda Output Invocation Frequency


A Kinesis Data Analytics application buffers the output records and invokes the AWS Lambda destination function frequently.
+ If records are emitted to the destination in-application stream within the data analytics application as a tumbling window, the AWS Lambda destination function is invoked per tumbling window trigger. For example, if a tumbling window of 60 seconds is used to emit the records to the destination in-application stream, the Lambda function is invoked once every 60 seconds.
+ If records are emitted to the destination in-application stream within the application as a continuous query or a sliding window, the Lambda destination function is invoked about once per second.

**Note**  
[Per-Lambda function invoke request payload size limits](https://docs.aws.amazon.com/lambda/latest/dg/limits.html) apply. Exceeding those limits results in output records being split and sent across multiple Lambda function calls.

## Adding a Lambda Function for Use as an Output


The following procedure demonstrates how to add a Lambda function as an output for a Kinesis Data Analytics application.

1. Sign in to the AWS Management Console and open the Managed Service for Apache Flink console at [ https://console.aws.amazon.com/kinesisanalytics](https://console.aws.amazon.com/kinesisanalytics).

1. Choose the application in the list, and then choose **Application details**.

1. In the **Destination** section, choose **Connect new destination**.

1. For the **Destination** item, choose **AWS Lambda function**.

1. In the **Deliver records to AWS Lambda** section, either choose an existing Lambda function and version, or choose **Create new**.

1. If you are creating a new Lambda function, do the following:

   1. Choose one of the templates provided. For more information, see [Creating Lambda Functions for Application Destinations](how-it-works-output-lambda-functions.md).

   1. The **Create Function** page opens in a new browser tab. In the **Name** box, give the function a meaningful name (for example, **myLambdaFunction**).

   1. Update the template with post-processing functionality for your application. For information about creating a Lambda function, see [Getting Started](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html) in the *AWS Lambda Developer Guide*.

   1. On the Kinesis Data Analytics console, in the **Lambda function** list, choose the Lambda function that you just created. Choose **$LATEST** for the Lambda function version.

1. In the **In-application stream** section, choose **Choose an existing in-application stream**. For **In-application stream name**, choose your application's output stream. The results from the selected output stream are sent to the Lambda output function.

1. Leave the rest of the form with the default values, and choose **Save and continue**.

Your application now sends records from the in-application stream to your Lambda function. You can see the results of the default template in the Amazon CloudWatch console. Monitor the `AWS/KinesisAnalytics/LambdaDelivery.OkRecords` metric to see the number of records being delivered to the Lambda function.

## Common Lambda as Output Failures


The following are common reasons why delivery to a Lambda function can fail.
+ Not all records (with record IDs) in a batch that are sent to the Lambda function are returned to the Kinesis Data Analytics service. 
+ The response is missing either the record ID or the status field. 
+ The Lambda function timeouts are not sufficient to accomplish the business logic within the Lambda function.
+ The business logic within the Lambda function does not catch all the errors, resulting in timeouts and backpressure due to unhandled exceptions. Such records are often referred to as "poison pill" messages.

For data delivery failures, Kinesis Data Analytics continues to retry Lambda invocations on the same set of records until successful. To gain insight into failures, you can monitor the following CloudWatch metrics: 
+ Kinesis Data Analytics application Lambda as Output CloudWatch metrics: Indicates the number of successes and failures, among other statistics. For more information, see [Amazon Kinesis Analytics Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/aka-metricscollected.html).
+ AWS Lambda function CloudWatch metrics and logs.
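The first two failure modes above can be checked mechanically before a response is returned. The following sketch validates a response against those rules; `find_response_problems` is an illustrative helper, not part of any AWS SDK.

```python
def find_response_problems(event, response):
    """Return a list of reasons Kinesis Data Analytics would treat this
    Lambda response as a delivery failure (a sketch of the rules above)."""
    problems = []
    sent_ids = {r["recordId"] for r in event["records"]}
    acked_ids = set()
    for rec in response.get("records", []):
        if "recordId" not in rec or "result" not in rec:
            problems.append("record missing recordId or result field")
            continue
        if rec["result"] not in ("Ok", "DeliveryFailed"):
            problems.append("unknown result value: " + str(rec["result"]))
        acked_ids.add(rec["recordId"])
    if acked_ids != sent_ids:
        # Every record sent in the batch must be acknowledged by ID.
        problems.append("acknowledged record IDs do not match the IDs sent")
    return problems
```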

# Creating Lambda Functions for Application Destinations


Your Kinesis Data Analytics application can use AWS Lambda functions as an output. Kinesis Data Analytics provides templates for creating Lambda functions to use as a destination for your applications. Use these templates as a starting point for post-processing output from your application. 

**Topics**
+ [Creating a Lambda Function Destination in Node.js](#how-it-works-lambda-dest-nodejs)
+ [Creating a Lambda Function Destination in Python](#how-it-works-lambda-dest-python)
+ [Creating a Lambda Function Destination in Java](#how-it-works-lambda-dest-java)
+ [Creating a Lambda Function Destination in .NET](#how-it-works-lambda-net)

## Creating a Lambda Function Destination in Node.js


The following template for creating a destination Lambda function in Node.js is available on the console:


| Lambda as Output Blueprint | Language and Version | Description | 
| --- | --- | --- | 
| kinesis-analytics-output | Node.js 12.x | Deliver output records from a Kinesis Data Analytics application to a custom destination. | 

## Creating a Lambda Function Destination in Python


The following templates for creating a destination Lambda function in Python are available on the console:


| Lambda as Output Blueprint | Language and Version | Description | 
| --- | --- | --- | 
| kinesis-analytics-output-sns | Python 2.7 | Deliver output records from a Kinesis Data Analytics application to Amazon SNS. | 
| kinesis-analytics-output-ddb | Python 2.7 | Deliver output records from a Kinesis Data Analytics application to Amazon DynamoDB. | 

## Creating a Lambda Function Destination in Java


To create a destination Lambda function in Java, use the [Java events](https://github.com/aws/aws-lambda-java-libs/tree/master/aws-lambda-java-events/src/main/java/com/amazonaws/services/lambda/runtime/events) classes.

The following code demonstrates a sample destination Lambda function using Java:

```
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsOutputDeliveryEvent;
import com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsOutputDeliveryResponse;
import com.amazonaws.services.lambda.runtime.events.KinesisAnalyticsOutputDeliveryResponse.Record;

public class LambdaFunctionHandler
        implements RequestHandler<KinesisAnalyticsOutputDeliveryEvent, KinesisAnalyticsOutputDeliveryResponse> {

    @Override
    public KinesisAnalyticsOutputDeliveryResponse handleRequest(KinesisAnalyticsOutputDeliveryEvent event,
            Context context) {
        context.getLogger().log("InvocationId is : " + event.invocationId);
        context.getLogger().log("ApplicationArn is : " + event.applicationArn);

        List<KinesisAnalyticsOutputDeliveryResponse.Record> records = new ArrayList<KinesisAnalyticsOutputDeliveryResponse.Record>();
        KinesisAnalyticsOutputDeliveryResponse response = new KinesisAnalyticsOutputDeliveryResponse(records);

        event.records.stream().forEach(record -> {
            context.getLogger().log("recordId is : " + record.recordId);
            context.getLogger().log("record retryHint is : " + record.lambdaDeliveryRecordMetadata.retryHint);
            // Add logic here to transform and send the record to the final destination of your choice.
            response.records.add(new Record(record.recordId, KinesisAnalyticsOutputDeliveryResponse.Result.Ok));
        });
        return response;
    }

}
```

## Creating a Lambda Function Destination in .NET


To create a destination Lambda function in .NET, use the [.NET events ](https://github.com/aws/aws-lambda-dotnet/tree/master/Libraries/src/Amazon.Lambda.KinesisAnalyticsEvents) classes.

The following code demonstrates a sample destination Lambda function using C#:

```
using System.Collections.Generic;

using Amazon.Lambda.Core;
using Amazon.Lambda.KinesisAnalyticsEvents;

public class Function
{
    public KinesisAnalyticsOutputDeliveryResponse FunctionHandler(KinesisAnalyticsOutputDeliveryEvent evnt, ILambdaContext context)
    {
        context.Logger.LogLine($"InvocationId: {evnt.InvocationId}");
        context.Logger.LogLine($"ApplicationArn: {evnt.ApplicationArn}");

        var response = new KinesisAnalyticsOutputDeliveryResponse
        {
            Records = new List<KinesisAnalyticsOutputDeliveryResponse.Record>()
        };

        foreach (var record in evnt.Records)
        {
            context.Logger.LogLine($"\tRecordId: {record.RecordId}");
            context.Logger.LogLine($"\tRetryHint: {record.RecordMetadata.RetryHint}");
            context.Logger.LogLine($"\tData: {record.DecodeData()}");

            // Add logic here to send the record to the final destination of your choice.

            var deliveredRecord = new KinesisAnalyticsOutputDeliveryResponse.Record
            {
                RecordId = record.RecordId,
                Result = KinesisAnalyticsOutputDeliveryResponse.OK
            };
            response.Records.Add(deliveredRecord);
        }
        return response;
    }
}
```

For more information about creating Lambda functions for pre-processing and destinations in .NET, see [https://github.com/aws/aws-lambda-dotnet/tree/master/Libraries/src/Amazon.Lambda.KinesisAnalyticsEvents](https://github.com/aws/aws-lambda-dotnet/tree/master/Libraries/src/Amazon.Lambda.KinesisAnalyticsEvents).

# Delivery Model for Persisting Application Output to an External Destination

Amazon Kinesis Data Analytics uses an "at least once" delivery model for application output to the configured destinations. When an application is running, Kinesis Data Analytics takes internal checkpoints. These checkpoints are points in time when output records have been delivered to the destinations without data loss. The service uses the checkpoints as needed to ensure that your application output is delivered at least once to the configured destinations.

In a normal situation, your application processes incoming data continuously. Kinesis Data Analytics writes the output to the configured destinations, such as a Kinesis data stream or a Firehose delivery stream. However, your application can be interrupted occasionally, for example:
+ You choose to stop your application and restart it later.
+ You delete the IAM role that Kinesis Data Analytics needs to write your application output to the configured destination. Without the IAM role, Kinesis Data Analytics doesn't have any permissions to write to the external destination on your behalf.
+ A network outage or other internal service failure causes your application to stop running momentarily. 

When your application restarts, Kinesis Data Analytics ensures that it continues to process and write output from a point before or equal to when the failure occurred. This helps ensure that it doesn't miss delivering any application output to the configured destinations. 

Suppose that you configured multiple destinations from the same in-application stream. After the application recovers from failure, Kinesis Data Analytics resumes persisting output to the configured destinations from the last record that was delivered to the slowest destination. This might result in the same output record delivered more than once to other destinations. In this case, you must handle potential duplications in the destination externally. 
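Because delivery is at least once, destinations that can't tolerate duplicates need their own de-duplication. The following is a minimal sketch, assuming a hash of the record payload uniquely identifies a record; substitute whatever key actually identifies a record in your data.

```python
import hashlib

class DedupingSink:
    """Sketch of an external destination that drops duplicate deliveries.

    The payload-hash key is an assumption for illustration; a production
    destination would key on a real unique identifier in the record.
    """

    def __init__(self):
        self._seen = set()
        self.delivered = []

    def write(self, payload: bytes) -> bool:
        """Deliver the payload once; return False for a duplicate."""
        key = hashlib.sha256(payload).hexdigest()
        if key in self._seen:
            return False  # already delivered during an earlier attempt
        self._seen.add(key)
        self.delivered.append(payload)
        return True
```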