

After careful consideration, we have decided to discontinue Amazon Kinesis Data Analytics for SQL applications:

1. Starting **September 1, 2025**, we will not provide bug fixes for Amazon Kinesis Data Analytics for SQL applications, and support will be limited ahead of the discontinuation.

2. Starting **October 15, 2025**, you will not be able to create new Kinesis Data Analytics for SQL applications.

3. We will delete your applications starting **January 27, 2026**. You will not be able to start or operate your Amazon Kinesis Data Analytics for SQL applications. Support will no longer be available for Amazon Kinesis Data Analytics for SQL from that time. For more information, see [Amazon Kinesis Data Analytics for SQL Applications discontinuation](discontinuation.md).

# Getting Started with Amazon Kinesis Data Analytics for SQL Applications

Following, you can find topics to help get you started using Amazon Kinesis Data Analytics for SQL Applications. If you are new to Kinesis Data Analytics for SQL Applications, we recommend that you review the concepts and terminology presented in [Amazon Kinesis Data Analytics for SQL Applications: How It Works](how-it-works.md) before performing the steps in the Getting Started section.

**Topics**
+ [Sign up for an AWS account](#sign-up-for-aws)
+ [Create a user with administrative access](#create-an-admin)
+ [Step 1: Set Up an Account and Create an Administrator User](setting-up.md)
+ [Step 2: Set Up the AWS Command Line Interface (AWS CLI)](setup-awscli.md)
+ [Step 3: Create Your Starter Amazon Kinesis Data Analytics Application](get-started-exercise.md)
+ [Step 4 (Optional) Edit the Schema and SQL Code Using the Console](console-feature-summary.md)

## Sign up for an AWS account


If you do not have an AWS account, complete the following steps to create one.

**To sign up for an AWS account**

1. Open [https://portal.aws.amazon.com/billing/signup](https://portal.aws.amazon.com/billing/signup).

1. Follow the online instructions.

   Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad.

   When you sign up for an AWS account, an *AWS account root user* is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform [tasks that require root user access](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#root-user-tasks).

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to [https://aws.amazon.com/](https://aws.amazon.com/) and choosing **My Account**.

## Create a user with administrative access


After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.

**Secure your AWS account root user**

1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) as the account owner by choosing **Root user** and entering your AWS account email address. On the next page, enter your password.

   For help signing in by using the root user, see [Signing in as the root user](https://docs.aws.amazon.com/signin/latest/userguide/console-sign-in-tutorials.html#introduction-to-root-user-sign-in-tutorial) in the *AWS Sign-In User Guide*.

1. Turn on multi-factor authentication (MFA) for your root user.

   For instructions, see [Enable a virtual MFA device for your AWS account root user (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/enable-virt-mfa-for-root.html) in the *IAM User Guide*.

**Create a user with administrative access**

1. Enable IAM Identity Center.

   For instructions, see [Enabling AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/get-set-up-for-idc.html) in the *AWS IAM Identity Center User Guide*.

1. In IAM Identity Center, grant administrative access to a user.

   For a tutorial about using the IAM Identity Center directory as your identity source, see [Configure user access with the default IAM Identity Center directory](https://docs.aws.amazon.com/singlesignon/latest/userguide/quick-start-default-idc.html) in the *AWS IAM Identity Center User Guide*.

**Sign in as the user with administrative access**
+ To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.

  For help signing in using an IAM Identity Center user, see [Signing in to the AWS access portal](https://docs.aws.amazon.com/signin/latest/userguide/iam-id-center-sign-in-tutorial.html) in the *AWS Sign-In User Guide*.

**Assign access to additional users**

1. In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions.

   For instructions, see [Create a permission set](https://docs.aws.amazon.com/singlesignon/latest/userguide/get-started-create-a-permission-set.html) in the *AWS IAM Identity Center User Guide*.

1. Assign users to a group, and then assign single sign-on access to the group.

   For instructions, see [Add groups](https://docs.aws.amazon.com/singlesignon/latest/userguide/addgroups.html) in the *AWS IAM Identity Center User Guide*.

# Step 1: Set Up an Account and Create an Administrator User

Before you use Amazon Kinesis Data Analytics for the first time, complete the following tasks:

1. [Sign Up for AWS](#setting-up-signup)

1. [Create an IAM User](#setting-up-iam)

## Sign Up for AWS


When you sign up for Amazon Web Services, your AWS account is automatically signed up for all services in AWS, including Amazon Kinesis Data Analytics. You are charged only for the services that you use.

With Kinesis Data Analytics, you pay only for the resources you use. If you are a new AWS customer, you can get started with Kinesis Data Analytics for free. For more information, see [AWS Free Usage Tier](https://aws.amazon.com/free/).

If you already have an AWS account, skip to the next task. If you don't have an AWS account, perform the steps in the following procedure to create one.

**To create an AWS account**

1. Open [https://portal.aws.amazon.com/billing/signup](https://portal.aws.amazon.com/billing/signup).

1. Follow the online instructions.

   Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad.

   When you sign up for an AWS account, an *AWS account root user* is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform [tasks that require root user access](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#root-user-tasks).

Note your AWS account ID because you'll need it for the next task.

## Create an IAM User


Services in AWS, such as Amazon Kinesis Data Analytics, require that you provide credentials when you access them so that the service can determine whether you have permissions to access the resources owned by that service. The console requires your password. You can create access keys for your AWS account to access the AWS CLI or API. However, we don't recommend that you access AWS using the credentials for your AWS account. Instead, we recommend that you use AWS Identity and Access Management (IAM). Create an IAM user, and then add the user to an IAM group with administrative permissions, or grant the user administrative permissions directly. You can then access AWS using a special URL and that IAM user's credentials.

If you signed up for AWS, but you haven't created an IAM user for yourself, you can create one using the IAM console.

The Getting Started exercises in this guide assume that you have a user (`adminuser`) with administrator privileges. Follow the procedure to create `adminuser` in your account.





**To create an administrator user and sign in to the console**

1. Create an administrator user called `adminuser` in your AWS account. For instructions, see [Creating Your First IAM User and Administrators Group](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html) in the *IAM User Guide*.

1. A user can sign in to the AWS Management Console using a special URL. For more information, see [How Users Sign In to Your Account](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html) in the *IAM User Guide*.

For more information about IAM, see the following:
+ [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/)
+ [Getting started with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started.html)
+ [IAM User Guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/)

## Next Step


[Step 2: Set Up the AWS Command Line Interface (AWS CLI)](setup-awscli.md)


# Step 2: Set Up the AWS Command Line Interface (AWS CLI)

Follow the steps to download and configure the AWS Command Line Interface (AWS CLI).

**Important**  
You don't need the AWS CLI to perform the steps in the Getting Started exercise. However, some of the exercises in this guide use the AWS CLI. You can skip this step and go to [Step 3: Create Your Starter Amazon Kinesis Data Analytics Application](get-started-exercise.md), and then set up the AWS CLI later when you need it.

**To set up the AWS CLI**

1. Download and configure the AWS CLI. For instructions, see the following topics in the *AWS Command Line Interface User Guide*: 
   + [Getting Set Up with the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html)
   + [Configuring the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html)

1. Add a named profile for the administrator user in the AWS CLI config file. You use this profile when executing the AWS CLI commands. For more information about named profiles, see [Named Profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-multiple-profiles) in the *AWS Command Line Interface User Guide*.

   ```
   [profile adminuser]
   aws_access_key_id = adminuser access key ID
   aws_secret_access_key = adminuser secret access key
   region = aws-region
   ```

   For a list of available AWS Regions, see [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) in the *Amazon Web Services General Reference*.

1. Verify the setup by entering the following help command at the command prompt: 

   ```
   aws help
   ```

## Next Step


[Step 3: Create Your Starter Amazon Kinesis Data Analytics Application](get-started-exercise.md)

# Step 3: Create Your Starter Amazon Kinesis Data Analytics Application

By following the steps in this section, you can create your first Kinesis Data Analytics application using the console. 

**Note**  
We suggest that you review [Amazon Kinesis Data Analytics for SQL Applications: How It Works](how-it-works.md) before trying the Getting Started exercise.

For this Getting Started exercise, you can use the console to work with either the demo stream or templates with application code.
+ If you choose to use the demo stream, the console creates a Kinesis data stream in your account that is called `kinesis-analytics-demo-stream`.

  A Kinesis Data Analytics application requires a streaming source. For this source, several SQL examples in this guide use the demo stream `kinesis-analytics-demo-stream`. The console also runs a script that continuously adds sample data (simulated stock trade records) to this stream, as shown following.  
![\[Formatted stream sample table showing stock symbols, sectors, and prices.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/gs-v2-30.png)

  You can use `kinesis-analytics-demo-stream` as the streaming source for your application in this exercise.
**Note**  
The demo stream remains in your account. You can use it to test other examples in this guide. However, when you leave the console, the script that the console uses stops populating the data. When needed, the console provides the option to start populating the stream again. 
+ If you choose to use the templates with example application code, you use template code that the console provides to perform simple analytics on the demo stream. 

You use these features to quickly set up your first application as follows:

1. **Create an application** – You only need to provide a name. The console creates the application and the service sets the application state to `READY`.

    

1. **Configure input** – First, you add a streaming source, the demo stream. You must create a demo stream in the console before you can use it. Then, the console takes a random sample of records on the demo stream and infers a schema for the in-application input stream that is created. The console names the in-application stream `SOURCE_SQL_STREAM_001`.

   The console uses the discovery API to infer the schema. If necessary, you can edit the inferred schema. For more information, see [DiscoverInputSchema](API_DiscoverInputSchema.md). Kinesis Data Analytics uses this schema to create an in-application stream.

    

   When you start the application, Kinesis Data Analytics reads the demo stream continuously on your behalf and inserts rows in the `SOURCE_SQL_STREAM_001` in-application input stream. 

    

1. **Specify application code** – You use a template (called **Continuous filter**) that provides the following code:

   ```
   CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" 
     (symbol VARCHAR(4), sector VARCHAR(12), CHANGE DOUBLE, price DOUBLE);
    
   -- Create pump to insert into output. 
   CREATE OR REPLACE PUMP "STREAM_PUMP" AS 
      INSERT INTO "DESTINATION_SQL_STREAM"  
         SELECT STREAM ticker_symbol, sector, CHANGE, price
         FROM "SOURCE_SQL_STREAM_001"
         WHERE sector SIMILAR TO '%TECH%';
   ```

   The application code queries the in-application stream `SOURCE_SQL_STREAM_001`. The code then inserts the resulting rows in another in-application stream `DESTINATION_SQL_STREAM`, using pumps. For more information about this coding pattern, see [Application Code](how-it-works-app-code.md). 

   For information about the SQL language elements that are supported by Kinesis Data Analytics, see [Amazon Kinesis Data Analytics SQL Reference](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/analytics-sql-reference.html).

    

1. **Configuring output** – In this exercise, you don't configure any output. That is, you don't persist data in the in-application stream that your application creates to any external destination. Instead, you verify query results in the console. Additional examples in this guide show how to configure output. For one example, see [Example: Creating Simple Alerts](app-simple-alerts.md).
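The filter-and-pump pattern in the **Continuous filter** template generalizes to other predicates. The following sketch is for illustration only (the `PRICE_SQL_STREAM` and `PRICE_PUMP` names are hypothetical and not part of this exercise); it filters the same demo-stream columns on price instead of sector:

```
-- Hypothetical variation: keep only rows whose price exceeds 100.
CREATE OR REPLACE STREAM "PRICE_SQL_STREAM" 
  (symbol VARCHAR(4), sector VARCHAR(12), CHANGE DOUBLE, price DOUBLE);

CREATE OR REPLACE PUMP "PRICE_PUMP" AS 
   INSERT INTO "PRICE_SQL_STREAM"
      SELECT STREAM ticker_symbol, sector, CHANGE, price
      FROM "SOURCE_SQL_STREAM_001"
      WHERE price > 100;
```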

   



**Important**  
The exercise uses the US East (N. Virginia) Region (us-east-1) to set up the application. You can use any of the supported AWS Regions.

**Next Step**  
[Step 3.1: Create an Application](get-started-create-app.md)

# Step 3.1: Create an Application


In this section, you create an Amazon Kinesis Data Analytics application. You configure application input in the next step.

**To create a data analytics application**

1. Sign in to the AWS Management Console and open the Managed Service for Apache Flink console at [https://console.aws.amazon.com/kinesisanalytics](https://console.aws.amazon.com/kinesisanalytics).

1. Choose **Create application**.

1. On the **Create application** page, type an application name, type a description, choose **SQL** for the application's **Runtime** setting, and then choose **Create application**.  
![\[Screenshot of New application page with application name and description.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/gs-v2-10.png)

   Doing this creates a Kinesis Data Analytics application with a status of `READY`. The console shows the application hub where you can configure input and output.
**Note**  
To create an application, the [CreateApplication](API_CreateApplication.md) operation requires only the application name. You can add input and output configuration after you create an application in the console.

   

   In the next step, you configure input for the application. In the input configuration, you add a streaming data source to the application and discover a schema for an in-application input stream by sampling data on the streaming source.

**Next Step**  
[Step 3.2: Configure Input](get-started-configure-input.md)

# Step 3.2: Configure Input


Your application needs a streaming source. To help you get started, the console can create a demo stream (called `kinesis-analytics-demo-stream`). The console also runs a script that populates records in the stream.

**To add a streaming source to your application**

1. On the application hub page in the console, choose **Connect streaming data**.  
![\[Screenshot of the example app and the connect to a source button.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/gs-v2-20.png)

1. On the page that appears, review the following:
   + **Source** section, where you specify a streaming source for your application. You can select an existing stream source or create one. In this exercise, you create a new stream, the demo stream. 

      

     By default, the console names the in-application input stream that is created `SOURCE_SQL_STREAM_001`. For this exercise, keep this name as it appears.

      
     + **Stream reference name** – This option shows the name of the in-application input stream that is created, `SOURCE_SQL_STREAM_001`. You can change the name, but for this exercise, keep this name.

        

       In the input configuration, you map the demo stream to an in-application input stream that is created. When you start the application, Amazon Kinesis Data Analytics continuously reads the demo stream and inserts rows in the in-application input stream. You query this in-application input stream in your application code. 

        
     + **Record pre-processing with AWS Lambda**: This option is where you specify an AWS Lambda expression that modifies the records in the input stream before your application code executes. In this exercise, leave the **Disabled** option selected. For more information about Lambda preprocessing, see [Preprocessing Data Using a Lambda Function](lambda-preprocessing.md).

   After you provide all the information on this page, the console sends an update request (see [UpdateApplication](API_UpdateApplication.md)) to add the input configuration to the application. 

1. On the **Source** page, choose **Configure a new stream**.

1. Choose **Create demo stream**. The console configures the application input by doing the following:
   + The console creates a Kinesis data stream called `kinesis-analytics-demo-stream`. 
   + The console populates the stream with sample stock ticker data.
   + Using the [DiscoverInputSchema](API_DiscoverInputSchema.md) input action, the console infers a schema by reading sample records on the stream. The schema that is inferred is the schema for the in-application input stream that is created. For more information, see [Configuring Application Input](how-it-works-input.md).
   + The console shows the inferred schema and the sample data it read from the streaming source to infer the schema.

   The console displays the sample records on the streaming source.  
![\[Formatted stream sample tab showing stock symbols, sectors, and prices in tabular format.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/gs-v2-30.png)

   The following appear on the **Stream sample** console page:
   + The **Raw stream sample** tab shows the raw stream records sampled by the [DiscoverInputSchema](API_DiscoverInputSchema.md) API action to infer the schema.
   + The **Formatted stream sample** tab shows the tabular version of the data in the **Raw stream sample** tab.
   + If you choose **Edit schema**, you can edit the inferred schema. For this exercise, don't change the inferred schema. For more information about editing a schema, see [Working with the Schema Editor](console-summary-edit-schema.md).

     If you choose **Rediscover schema**, you can request the console to run [DiscoverInputSchema](API_DiscoverInputSchema.md) again and infer the schema. 

     

   

1. Choose **Save and continue**.

   You now have an application with an input configuration added to it. In the next step, you add SQL code to perform analytics on the data in the in-application input stream.

**Next Step**  
[Step 3.3: Add Real-Time Analytics (Add Application Code)](get-started-add-realtime-analytics.md)

# Step 3.3: Add Real-Time Analytics (Add Application Code)


You can write your own SQL queries against the in-application stream, but for the following step you use one of the templates that provides sample code.

1. On the application hub page, choose **Go to SQL editor**.   
![\[Screenshot of the example application page with Go to SQL editor button.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/gs-v2-40.png)

1. In the **Would you like to start running "ExampleApp"?** dialog box, choose **Yes, start application**.

   The console sends a request to start the application (see [StartApplication](API_StartApplication.md)), and then the SQL editor page appears.

   

1. The console opens the SQL editor page. Review the page, including the buttons (**Add SQL from templates**, **Save and run SQL**) and various tabs.

1. In the SQL editor, choose **Add SQL from templates**.

1. From the available template list, choose **Continuous filter**. The sample code reads data from one in-application stream (the `WHERE` clause filters the rows) and inserts it in another in-application stream as follows:
   + It creates the in-application stream `DESTINATION_SQL_STREAM`.
   + It creates a pump `STREAM_PUMP`, and uses it to select rows from `SOURCE_SQL_STREAM_001` and insert them in the `DESTINATION_SQL_STREAM`. 

   

1. Choose **Add this SQL to editor**. 

1. Test the application code as follows:

   Remember, you already started the application (status is RUNNING). Therefore, Amazon Kinesis Data Analytics is already continuously reading from the streaming source and adding rows to the in-application stream `SOURCE_SQL_STREAM_001`.

   1. In the SQL editor, choose **Save and run SQL**. The console first sends an update request to save the application code. Then, the code runs continuously.

   1. You can see the results in the **Real-time analytics** tab.   
![\[Screenshot of the SQL editor with results shown in the real-time analytics tab.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/gs-v2-50.png)

      The SQL editor has the following tabs:
      + The **Source data** tab shows an in-application input stream that is mapped to the streaming source. Choose the in-application stream, and you can see data coming in. Note the additional columns in the in-application input stream that weren't specified in the input configuration. These include the following timestamp columns:

         
        + **ROWTIME** – Each row in an in-application stream has a special column called `ROWTIME`. This column is the timestamp when Amazon Kinesis Data Analytics inserted the row in the first in-application stream (the in-application input stream that is mapped to the streaming source).

           
        + **Approximate\_Arrival\_Time** – Each Kinesis Data Analytics record includes a value called `Approximate_Arrival_Time`. This value is the approximate arrival timestamp that is set when the streaming source successfully receives and stores the record. When Kinesis Data Analytics reads records from a streaming source, it fetches this column into the in-application input stream. 

        These timestamp values are useful in windowed queries that are time-based. For more information, see [Windowed Queries](windowed-sql.md).

         
      + The **Real-time analytics** tab shows all the other in-application streams created by your application code. It also includes the error stream. Kinesis Data Analytics sends any rows it cannot process to the error stream. For more information, see [Error Handling](error-handling.md).

         

        Choose `DESTINATION_SQL_STREAM` to view the rows your application code inserted. Note the additional columns that your application code didn't create. These columns include the `ROWTIME` timestamp column. Kinesis Data Analytics simply copies these values from the source (`SOURCE_SQL_STREAM_001`).

         
      + The **Destination** tab shows the external destination where Kinesis Data Analytics writes the query results. You haven't configured any external destination for your application output yet.
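The `ROWTIME` column described above is what time-based windowed queries key on. As a hedged sketch (the `TICKER_COUNT_STREAM` and `COUNT_PUMP` names are hypothetical and not part of this exercise, and it assumes the `STEP` function described in the [Amazon Kinesis Data Analytics SQL Reference](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/analytics-sql-reference.html)), a tumbling window over `ROWTIME` could count records per ticker symbol every 60 seconds:

```
-- Hypothetical tumbling-window query: per-ticker counts in 60-second windows.
CREATE OR REPLACE STREAM "TICKER_COUNT_STREAM" 
  (ticker_symbol VARCHAR(4), ticker_count INTEGER);

CREATE OR REPLACE PUMP "COUNT_PUMP" AS 
   INSERT INTO "TICKER_COUNT_STREAM"
      SELECT STREAM ticker_symbol, COUNT(*) AS ticker_count
      FROM "SOURCE_SQL_STREAM_001"
      GROUP BY ticker_symbol,
               STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '60' SECOND);
```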

      

**Next Step**  
[Step 3.4: (Optional) Update the Application Code](get-started-update-appcode.md)

# Step 3.4: (Optional) Update the Application Code


In this step, you explore how to update the application code. 

**To update application code**

1. Create another in-application stream as follows:
   + Create another in-application stream called `DESTINATION_SQL_STREAM_2`.
   + Create a pump, and then use it to insert rows in the newly created stream by selecting rows from the `DESTINATION_SQL_STREAM`.

   In the SQL editor, append the following code to the existing application code:

   ```
   CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM_2" 
              (ticker_symbol VARCHAR(4), 
               change        DOUBLE, 
               price         DOUBLE);
   
   CREATE OR REPLACE PUMP "STREAM_PUMP_2" AS 
      INSERT INTO "DESTINATION_SQL_STREAM_2"
         SELECT STREAM ticker_symbol, change, price 
         FROM   "DESTINATION_SQL_STREAM";
   ```

   Save and run the code. Additional in-application streams appear on the **Real-time analytics** tab.

1. Create two in-application streams. Filter rows in the `SOURCE_SQL_STREAM_001` based on the stock ticker, and then insert them into these separate streams. 

   Append the following SQL statements to your application code:

   ```
   CREATE OR REPLACE STREAM "AMZN_STREAM" 
              (ticker_symbol VARCHAR(4), 
               change        DOUBLE, 
               price         DOUBLE);
   
   CREATE OR REPLACE PUMP "AMZN_PUMP" AS 
      INSERT INTO "AMZN_STREAM"
         SELECT STREAM ticker_symbol, change, price 
         FROM   "SOURCE_SQL_STREAM_001"
         WHERE  ticker_symbol SIMILAR TO '%AMZN%';
   
   CREATE OR REPLACE STREAM "TGT_STREAM" 
              (ticker_symbol VARCHAR(4), 
               change        DOUBLE, 
               price         DOUBLE);
   
   CREATE OR REPLACE PUMP "TGT_PUMP" AS 
      INSERT INTO "TGT_STREAM"
         SELECT STREAM ticker_symbol, change, price 
         FROM   "SOURCE_SQL_STREAM_001"
         WHERE  ticker_symbol SIMILAR TO '%TGT%';
   ```

   Save and run the code. Notice additional in-application streams on the **Real-time analytics** tab.
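Conceptually, each pump above is a continuous `INSERT INTO ... SELECT` that filters rows from the source stream. The following is a rough local illustration in plain Python (not Kinesis SQL; the sample records are invented) of how the `SIMILAR TO '%AMZN%'` and `SIMILAR TO '%TGT%'` predicates select rows:

```python
# Illustrative sketch only -- mimics the AMZN_PUMP / TGT_PUMP filters locally.
# The sample records are invented; the real pumps evaluate the SIMILAR TO
# predicates continuously against SOURCE_SQL_STREAM_001.

def pump(records, pattern):
    """Return rows whose ticker matches SIMILAR TO '%<pattern>%' (a substring match)."""
    return [r for r in records if pattern in r["ticker_symbol"]]

source_stream = [
    {"ticker_symbol": "AMZN", "change": -0.35, "price": 109.98},
    {"ticker_symbol": "TGT",  "change": 0.12,  "price": 64.30},
    {"ticker_symbol": "AAPL", "change": 0.50,  "price": 171.21},
]

amzn_stream = pump(source_stream, "AMZN")
tgt_stream = pump(source_stream, "TGT")

print([r["ticker_symbol"] for r in amzn_stream])  # ['AMZN']
print([r["ticker_symbol"] for r in tgt_stream])   # ['TGT']
```

Unlike this one-shot sketch, the real pumps run continuously: every new record that arrives on `SOURCE_SQL_STREAM_001` is evaluated and, if it matches, inserted into the destination stream.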

You now have your first working Amazon Kinesis Data Analytics application. In this exercise, you did the following: 
+ Created your first Kinesis Data Analytics application.
+ Configured application input that identified the demo stream as the streaming source and mapped it to the in-application stream (`SOURCE_SQL_STREAM_001`) that Kinesis Data Analytics creates. Kinesis Data Analytics continuously reads the demo stream and inserts records into the in-application stream.
+ Wrote application code that queried `SOURCE_SQL_STREAM_001` and wrote output to another in-application stream called `DESTINATION_SQL_STREAM`. 



Now you can configure application output to write records in `DESTINATION_SQL_STREAM` to an external destination. For this exercise, this is an optional step. To learn how to configure the destination, go to the next step.

**Next Step**  
[Step 4 (Optional) Edit the Schema and SQL Code Using the Console](console-feature-summary.md)

# Step 4 (Optional) Edit the Schema and SQL Code Using the Console


Following, you can find information about how to edit an inferred schema and how to edit SQL code for Amazon Kinesis Data Analytics. You do so by working with the schema editor and SQL editor that are part of the Kinesis Data Analytics console. 

**Note**  
To access or sample data in the console, the role of your signed-in user must have the `kinesisanalytics:GetApplicationState` permission. For more information about Kinesis Data Analytics application permissions, see [Overview of Managing Access](access-control-overview.md).

**Topics**
+ [Working with the Schema Editor](console-summary-edit-schema.md)
+ [Working with the SQL Editor](console-summary-sql-editor.md)

# Working with the Schema Editor


The schema for an Amazon Kinesis Data Analytics application's input stream defines how data from the stream is made available to SQL queries in the application. 

![\[Diagram showing relationship between streaming input, source schema configuration, and in-application input streams\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/edit-schema-diagram.png)


The schema contains selection criteria that determine what part of the streaming input is transformed into a data column in the in-application input stream. Each column definition can include the following: 
+ A JSONPath expression for JSON input streams. JSONPath is a tool for querying JSON data.
+ A column number for input streams in comma-separated values (CSV) format.
+ A column name and a SQL data type for presenting the data in the in-application data stream. The data type also contains a length for character or binary data.
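For example, a JSONPath selection criterion picks one element out of each JSON record and exposes it as a named column. The following sketch illustrates the idea in plain Python, with dict traversal standing in for simple JSONPath expressions; the record and paths are invented:

```python
# Illustrative only: resolves simple dot-path expressions like "$.priceInfo.price"
# against a JSON record, mimicking how a schema column's JSONPath selection
# criterion picks data out of the streaming input. Record and paths are invented.
import json

def resolve(record, path):
    """Follow a simple '$.a.b' path through nested dicts."""
    node = record
    for key in path.lstrip("$.").split("."):
        node = node[key]
    return node

record = json.loads('{"tickerSymbol": "AMZN", "priceInfo": {"price": 109.98}}')

schema = [  # (column name, SQL data type, JSONPath selection criterion)
    ("TICKER_SYMBOL", "VARCHAR(4)", "$.tickerSymbol"),
    ("PRICE",         "DOUBLE",     "$.priceInfo.price"),
]

row = {name: resolve(record, path) for name, _type, path in schema}
print(row)  # {'TICKER_SYMBOL': 'AMZN', 'PRICE': 109.98}
```

Real JSONPath supports more than dot paths (array indexes, wildcards, and so on); this sketch only shows how a path expression maps streaming input to a typed column.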

The console attempts to generate the schema using [DiscoverInputSchema](API_DiscoverInputSchema.md). If schema discovery fails or returns an incorrect or incomplete schema, you must edit the schema manually by using the schema editor.

## Schema Editor Main Screen


The following screenshot shows the main screen for the Schema Editor.

![\[Screenshot of edit schema page.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/edit-schema-overview.png)


You can apply the following edits to the schema:
+ Add a column (1): You might need to add a data column if a data item is not detected automatically.
+ Delete a column (2): You can exclude data from the source stream if your application doesn't require it. This exclusion doesn't affect the data in the source stream. If data is excluded, that data simply isn't made available to the application.
+ Rename a column (3): A column name can't be blank, must be longer than a single character, and must not contain reserved SQL keywords. The name must also meet naming criteria for SQL ordinary identifiers: The name must start with a letter and contain only letters, underscore characters, and digits.
+ Change the data type (4) or length (5) of a column: You can specify a compatible data type for a column. If you specify an incompatible data type, the column is either populated with NULL or the in-application stream is not populated at all. In the latter case, errors are written to the error stream. If you specify a length for a column that is too small, the incoming data is truncated.
+ Change the selection criteria of a column (6): You can edit the JSONPath expression or CSV column order used to determine the source of the data in a column. To change the selection criteria for a JSON schema, enter a new value for the row path expression. A CSV schema uses the column order as selection criteria. To change the selection criteria for a CSV schema, change the order of the columns.
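Two of the behaviors above can be sketched concretely: for a CSV schema, column order is the selection criterion, and a too-small length silently truncates incoming character data. The following is an illustrative Python sketch (the sample row and schema are invented):

```python
# Illustrative only: shows how CSV column order acts as selection criteria
# and how a too-small character length truncates incoming data.
# The sample row and schema are invented.

def map_csv_row(line, schema, column_delimiter=","):
    """schema: list of (column_name, max_length or None), in column order."""
    fields = line.split(column_delimiter)
    row = {}
    for (name, max_len), value in zip(schema, fields):
        # Truncate the value if the declared length is too small.
        row[name] = value[:max_len] if max_len else value
    return row

schema = [("TICKER_SYMBOL", 4), ("SECTOR", 8), ("PRICE", None)]
row = map_csv_row("AMZN,TECHNOLOGY,109.98", schema)
print(row)  # SECTOR is truncated to 'TECHNOLO' by its 8-character length
```

Reordering the tuples in `schema` changes which field feeds each column, which is why changing the column order in the editor changes the selection criteria for a CSV schema.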

## Editing the Schema for a Streaming Source


If you need to edit a schema for a streaming source, follow these steps.

**To edit the schema for a streaming source**

1. On the **Source** page, choose **Edit schema**.  
![\[Screenshot of formatted stream sample tab containing stock data, with the edit schema button highlighted.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/edit-schema-1.png)

1. On the **Edit schema** page, edit the source schema.  
![\[Screenshot of edit schema page.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/edit-schema-0.png)

1. For **Format**, choose **JSON** or **CSV**. For either format, the supported encoding is ISO 8859-1.

For further information on editing the schema for JSON or CSV format, see the procedures in the next sections.

### Editing a JSON Schema


You can edit a JSON schema by using the following steps.

**To edit a JSON schema**

1. In the schema editor, choose **Add column** to add a column. 

   A new column appears in the first column position. To change the column order, choose the up and down arrows next to the column name. 

   For a new column, provide the following information:
   + For **Column name**, type a name. 

     A column name cannot be blank, must be longer than a single character, and must not contain reserved SQL keywords. It must also meet naming criteria for SQL ordinary identifiers: It must start with a letter and contain only letters, underscore characters, and digits.
   + For **Column type**, type an SQL data type. 

     A column type can be any supported SQL data type. If the new data type is CHAR, VARBINARY, or VARCHAR, specify a data length for **Length**. For more information, see [Data Types](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-data-types.html).
   + For **Row path**, provide a row path. A row path is a valid JSONPath expression that maps to a JSON element. 
**Note**  
The base **Row path** value is the path to the top-level parent that contains the data to be imported. This value is `$` by default. For more information, see `RecordRowPath` in [JSONMappingParameters](https://docs.aws.amazon.com/kinesisanalytics/latest/dev/API_JSONMappingParameters.html).

1. To delete a column, choose the **x** icon next to the column number.  
![\[Screenshot of schema editor showing the x icon next to the column number.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/edit-schema-delete.png)

1. To rename a column, enter a new name for **Column name**. The new column name cannot be blank, must be longer than a single character, and must not contain reserved SQL keywords. It must also meet naming criteria for SQL ordinary identifiers: It must start with a letter and contain only letters, underscore characters, and digits.

1. To change the data type of a column, choose a new data type for **Column type**. If the new data type is `CHAR`, `VARBINARY`, or `VARCHAR`, specify a data length for **Length**. For more information, see [Data Types](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-data-types.html).

1. Choose **Save schema and update stream** to save your changes.

The modified schema appears in the editor and looks similar to the following.

![\[Screenshot of schema editor showing the modified schema.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/edit-schema-2.png)


If your schema has many rows, you can filter the rows using **Filter by column name**. For example, to edit column names that start with `P`, such as a `Price` column, enter `P` in the **Filter by column name** box.
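The column-naming rules described above (non-blank, longer than one character, starting with a letter, containing only letters, digits, and underscores, and not a reserved SQL keyword) can be checked mechanically. The following is a hedged sketch; the reserved-word set here is a tiny invented sample, not the full SQL reserved-word list:

```python
import re

# Tiny invented sample only; the real SQL reserved-word list is much longer.
RESERVED = {"SELECT", "STREAM", "INSERT", "CREATE", "PUMP"}

def is_valid_column_name(name):
    """Check the ordinary-identifier naming rules described above."""
    return (
        len(name) > 1
        and re.fullmatch(r"[A-Za-z][A-Za-z0-9_]*", name) is not None
        and name.upper() not in RESERVED
    )

print(is_valid_column_name("ticker_symbol"))  # True
print(is_valid_column_name("2nd_price"))      # False: starts with a digit
print(is_valid_column_name("stream"))         # False: reserved word
```

The console enforces these rules when you save the schema; a checker like this is only useful if you generate column names programmatically before entering them.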

### Editing a CSV Schema


You can edit a CSV schema by using the following steps.

**To edit a CSV schema**

1. In the schema editor, for **Row delimiter**, choose the delimiter used by your incoming data stream. This is the delimiter between records of data in your stream, such as a newline character.

1. For **Column delimiter**, choose the delimiter used by your incoming data stream. This is the delimiter between fields of data in your stream, such as a comma.

1. To add a column, choose **Add column**. 

   A new column appears in the first column position. To change the column order, choose the up and down arrows next to the column name. 

   For a new column, provide the following information:
   + For **Column name**, enter a name. 

     A column name cannot be blank, must be longer than a single character, and must not contain reserved SQL keywords. It must also meet naming criteria for SQL ordinary identifiers: It must start with a letter and contain only letters, underscore characters, and digits.
   + For **Column type**, enter a SQL data type. 

     A column type can be any supported SQL data type. If the new data type is CHAR, VARBINARY, or VARCHAR, specify a data length for **Length**. For more information, see [Data Types](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-data-types.html).

1. To delete a column, choose the **x** icon next to the column number.  
![\[Screenshot of schema editor showing the x icon next to the column number.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/edit-schema-delete.png)

1. To rename a column, enter a new name in **Column name**. The new column name cannot be blank, must be longer than a single character, and must not contain reserved SQL keywords. It must also meet naming criteria for SQL ordinary identifiers: It must start with a letter and contain only letters, underscore characters, and digits.

1. To change the data type of a column, choose a new data type for **Column type**. If the new data type is CHAR, VARBINARY, or VARCHAR, specify a data length for **Length**. For more information, see [Data Types](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-data-types.html).

1. Choose **Save schema and update stream** to save your changes.

The modified schema appears in the editor and looks similar to the following.

![\[Screenshot of schema editor showing the modified schema.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/edit-schema-3.png)


If your schema has many rows, you can filter the rows using **Filter by column name**. For example, to edit column names that start with `P`, such as a `Price` column, enter `P` in the **Filter by column name** box.

# Working with the SQL Editor


Following, you can find information about the sections of the SQL editor and how each works. In the SQL editor, you can author your own code or choose **Add SQL from templates**. A SQL template gives you example SQL code that can help you write common Amazon Kinesis Data Analytics applications. The example applications in this guide use some of these templates. For more information, see [Kinesis Data Analytics for SQL examples](examples.md).

![\[Screenshot of the SQL editor showing the real-time analytics tab and in-application streams.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/gs-v2-50.png)


## Source Data Tab


The **Source data** tab identifies the streaming source and the in-application input stream that the source maps to, as specified by the application input configuration. 

![\[Screenshot of the SQL editor showing the source data tab with the streaming source highlighted.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/gs-v2-60.png)


Amazon Kinesis Data Analytics provides the following timestamp columns, so that you don't need to provide explicit mapping in your input configuration: 
+ **ROWTIME** – Each row in an in-application stream has a special column called `ROWTIME`. This column is the timestamp for the point when Kinesis Data Analytics inserted the row in the first in-application stream. 
+ **Approximate\_Arrival\_Time** – Each record on your streaming source includes an approximate arrival timestamp, which is set when the streaming source successfully receives and stores the record. Kinesis Data Analytics fetches this value into the in-application input stream as the `Approximate_Arrival_Time` column. Amazon Kinesis Data Analytics provides this column only in the in-application input stream that is mapped to the streaming source. 

These timestamp values are useful in windowed queries that are time-based. For more information, see [Windowed Queries](windowed-sql.md).
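In Kinesis Data Analytics SQL, a time-based window is typically expressed over `ROWTIME` (for example, a tumbling window with `GROUP BY STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '60' SECOND)`). The idea behind such a tumbling window can be sketched locally in plain Python (invented timestamps; not the service's SQL):

```python
# Illustrative only: groups records into 60-second tumbling windows by
# flooring the timestamp, roughly what a GROUP BY STEP(ROWTIME ...) clause
# does in a windowed query. Sample data is invented.
from collections import defaultdict

def tumbling_counts(records, window_seconds=60):
    """Count records per (window start, ticker) pair."""
    windows = defaultdict(int)
    for rowtime, ticker in records:
        window_start = rowtime - (rowtime % window_seconds)
        windows[(window_start, ticker)] += 1
    return dict(windows)

records = [  # (rowtime in epoch seconds, ticker) -- invented sample data
    (1000, "AMZN"), (1030, "AMZN"), (1070, "AMZN"), (1075, "TGT"),
]
print(tumbling_counts(records))
# {(960, 'AMZN'): 1, (1020, 'AMZN'): 2, (1020, 'TGT'): 1}
```

Each record falls into exactly one non-overlapping window, which is what distinguishes a tumbling window from a sliding one.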

## Real-Time Analytics Tab


The **Real-time analytics** tab shows all the in-application streams that your application code creates. This group of streams includes the error stream (`error_stream`) that Amazon Kinesis Data Analytics provides for all applications. 

![\[Screenshot of the SQL editor showing the real-time analytics tab with in-application streams highlighted.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/gs-v2-70.png)


## Destination Tab


The **Destination** tab enables you to configure application output, which persists data from any of your in-application streams to external destinations. For more information, see [Configuring Application Output](how-it-works-output.md).