

After careful consideration, we have decided to discontinue Amazon Kinesis Data Analytics for SQL applications:

1. From **September 1, 2025**, we will not provide bug fixes for Amazon Kinesis Data Analytics for SQL applications, because we will provide only limited support ahead of the discontinuation.

2. From **October 15, 2025**, you will not be able to create new Kinesis Data Analytics for SQL applications.

3. We will delete your applications starting **January 27, 2026**. You will not be able to start or operate your Amazon Kinesis Data Analytics for SQL applications. Support will no longer be available for Amazon Kinesis Data Analytics for SQL from that time. For more information, see [Amazon Kinesis Data Analytics for SQL Applications discontinuation](discontinuation.md).

# Examples: Transforming String Values
<a name="examples-transforming-strings"></a>

Amazon Kinesis Data Analytics supports formats such as JSON and CSV for records on a streaming source. For details, see [RecordFormat](API_RecordFormat.md). These records then map to rows in an in-application stream according to the input configuration. For details, see [Configuring Application Input](how-it-works-input.md). The input configuration specifies how record fields in the streaming source map to columns in an in-application stream. 

This mapping works when records on the streaming source follow the supported formats, which results in an in-application stream with normalized data. But what if data on your streaming source does not conform to a supported format? For example, what if your streaming source contains data such as clickstream data, IoT sensor readings, and application logs? 

Consider these examples:
+ Streaming source contains application logs – The application logs follow the standard Apache log format and are written to the stream in JSON format. 

  ```
  {
     "Log":"192.168.254.30 - John [24/May/2004:22:01:02 -0700] "GET /icons/apache_pb.gif HTTP/1.1" 304 0"
  }
  ```

  For more information about the standard Apache log format, see [Log Files](https://httpd.apache.org/docs/2.4/logs.html) on the Apache website. 

   
+ Streaming source contains semi-structured data – The following example shows two records. Each record has five columns: the first four have string values, and the fifth (`Col_E_Unstructured`) contains a series of comma-separated values.

  ```
  { "Col_A" : "string",
    "Col_B" : "string",
    "Col_C" : "string",
    "Col_D" : "string",
    "Col_E_Unstructured" : "value,value,value,value"}
  
  { "Col_A" : "string",
    "Col_B" : "string",
    "Col_C" : "string",
    "Col_D" : "string",
    "Col_E_Unstructured" : "value,value,value,value"}
  ```
+ Records on your streaming source contain URLs, and you need a portion of the URL domain name for analytics.

  ```
  { "referrer" : "http://www.amazon.com"}
  { "referrer" : "http://www.stackoverflow.com" }
  ```

In such cases, the following two-step process generally works for creating in-application streams that contain normalized data:

1. Configure application input to map the unstructured field to a column of the `VARCHAR(N)` type in the in-application input stream that is created.

1. In your application code, use string functions to split this single column into multiple columns, and then save the rows in another in-application stream. This in-application stream that your application code creates contains the normalized data, and you can perform analytics on it. A minimal sketch of this pattern follows.
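
For instance, here is a minimal sketch of step 2. It assumes that step 1 mapped each raw record to a single hypothetical `"raw_line"` VARCHAR(64) column in the in-application input stream `"SOURCE_SQL_STREAM_001"`, and that each value contains two comma-separated parts; the stream and column names are illustrative only.

```
-- Sketch only: split one VARCHAR column into two normalized columns.
CREATE OR REPLACE STREAM "NORMALIZED_SQL_STREAM" (
    "first_value"  VARCHAR(32),
    "second_value" VARCHAR(32));

CREATE OR REPLACE PUMP "NORMALIZE_PUMP" AS
   INSERT INTO "NORMALIZED_SQL_STREAM"
      SELECT STREAM
         -- text before the first comma
         SUBSTRING("raw_line", 1, POSITION(',' IN "raw_line") - 1),
         -- text after the first comma
         SUBSTRING("raw_line", POSITION(',' IN "raw_line") + 1, CHAR_LENGTH("raw_line"))
      FROM "SOURCE_SQL_STREAM_001";
```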

Amazon Kinesis Data Analytics provides the following string operations, standard SQL functions, and extensions to the SQL standard for working with string columns: 
+ **String operators** – Operators such as `LIKE` and `SIMILAR` are useful in comparing strings. For more information, see [String Operators](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-string-operators.html) in the *Amazon Managed Service for Apache Flink SQL Reference*.
+ **SQL functions** – The following functions are useful when manipulating individual strings; a brief sketch that uses a few of them follows this list. For more information, see [String and Search Functions](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-string-and-search-functions.html) in the *Amazon Managed Service for Apache Flink SQL Reference*.
  + `CHAR_LENGTH` – Provides the length of a string. 
  + `INITCAP` – Returns a converted version of the input string such that the first character of each space-delimited word is uppercase, and all other characters are lowercase. 
  + `LOWER/UPPER` – Converts a string to lowercase or uppercase. 
  + `OVERLAY` – Replaces a portion of the first string argument (the original string) with the second string argument (the replacement string).
  + `POSITION` – Searches for a string within another string. 
  + `REGEX_REPLACE` – Replaces a substring with an alternative substring.
  + `SUBSTRING` – Extracts a portion of a source string starting at a specific position. 
  + `TRIM` – Removes instances of the specified character from the beginning or end of the source string. 
+ **SQL extensions** – These are useful for working with unstructured strings such as logs and URIs. For more information, see [Log Parsing Functions](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-pattern-matching-functions.html) in the *Amazon Managed Service for Apache Flink SQL Reference*.
  + `FAST_REGEX_LOG_PARSER` – Works similarly to the regex parser, but takes several shortcuts to ensure faster results. For example, the fast regex parser stops at the first match that it finds (known as *lazy semantics*).
  + `FIXED_COLUMN_LOG_PARSE` – Parses fixed-width fields and automatically converts them to the given SQL types.
  + `REGEX_LOG_PARSE` – Parses a string based on default Java regular expression patterns.
  + `SYS_LOG_PARSE` – Parses entries commonly found in UNIX/Linux system logs.
  + `VARIABLE_COLUMN_LOG_PARSE` – Splits an input string into fields separated by a delimiter character or a delimiter string.
  + `W3C_LOG_PARSE` – Can be used for quickly formatting Apache logs.
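
As a quick illustration of a few of the scalar functions listed above, the following sketch assumes an in-application stream `"SOURCE_SQL_STREAM_001"` that has a single `"referrer"` VARCHAR(64) column; the stream and column names are illustrative only.

```
-- Sketch only: apply a few scalar string functions to one column.
CREATE OR REPLACE STREAM "STRING_FN_SQL_STREAM" (
    "referrer_upper" VARCHAR(64),
    "referrer_len"   INTEGER,
    "first_dot_pos"  INTEGER);

CREATE OR REPLACE PUMP "STRING_FN_PUMP" AS
   INSERT INTO "STRING_FN_SQL_STREAM"
      SELECT STREAM
         UPPER("referrer"),            -- uppercase copy of the string
         CHAR_LENGTH("referrer"),      -- number of characters in the string
         POSITION('.' IN "referrer")   -- position of the first '.' (0 if none)
      FROM "SOURCE_SQL_STREAM_001";
```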

For examples using these functions, see the following topics:

**Topics**
+ [Example: Extracting a Portion of a String (SUBSTRING Function)](examples-transforming-strings-substring.md)
+ [Example: Replacing a Substring using Regex (REGEX\_REPLACE Function)](examples-transforming-strings-regexreplace.md)
+ [Example: Parsing Log Strings Based on Regular Expressions (REGEX\_LOG\_PARSE Function)](examples-transforming-strings-regexlogparse.md)
+ [Example: Parsing Web Logs (W3C\_LOG\_PARSE Function)](examples-transforming-strings-w3clogparse.md)
+ [Example: Split Strings into Multiple Fields (VARIABLE\_COLUMN\_LOG\_PARSE Function)](examples-transforming-strings-variablecolumnlogparse.md)

# Example: Extracting a Portion of a String (SUBSTRING Function)
<a name="examples-transforming-strings-substring"></a>

This example uses the `SUBSTRING` function to transform a string in Amazon Kinesis Data Analytics. The `SUBSTRING` function extracts a portion of a source string starting at a specific position. For more information, see [SUBSTRING](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-substring.html) in the *Amazon Managed Service for Apache Flink SQL Reference*. 

In this example, you write the following records to an Amazon Kinesis data stream. 

```
{ "REFERRER" : "http://www.amazon.com" }
{ "REFERRER" : "http://www.amazon.com"}
{ "REFERRER" : "http://www.amazon.com"}
...
```



You then create a Kinesis Data Analytics application on the console, using the Kinesis data stream as the streaming source. The discovery process reads sample records on the streaming source and infers an in-application schema with one column (`REFERRER`), as shown.

![\[Console screenshot showing the in-application schema with a list of URLs in the referrer column.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/referrer-10.png)


You then use application code with the `SUBSTRING` function to parse the URL string and retrieve the company name, and you insert the resulting data into another in-application stream, as shown following: 



![\[Console screenshot showing real-time analytics tab with resulting data in the in-application stream.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/referrer-20.png)


**Topics**
+ [Step 1: Create a Kinesis Data Stream](#examples-transforming-strings-substring-1)
+ [Step 2: Create the Kinesis Data Analytics Application](#examples-transforming-strings-substring-2)

## Step 1: Create a Kinesis Data Stream
<a name="examples-transforming-strings-substring-1"></a>

Create an Amazon Kinesis data stream and populate it with log records as follows:

1. Sign in to the AWS Management Console and open the Kinesis console at [https://console.aws.amazon.com/kinesis](https://console.aws.amazon.com/kinesis).

1. Choose **Data Streams** in the navigation pane.

1. Choose **Create Kinesis stream**, and create a stream with one shard. For more information, see [Create a Stream](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html) in the *Amazon Kinesis Data Streams Developer Guide*.

1. Run the following Python code to populate sample log records. This simple code continuously writes the same log record to the stream.

   ```
   import json
   import boto3
   
   STREAM_NAME = "ExampleInputStream"
   
   
   def get_data():
       return {"REFERRER": "http://www.amazon.com"}
   
   
   def generate(stream_name, kinesis_client):
       while True:
           data = get_data()
           print(data)
           kinesis_client.put_record(
               StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey"
           )
   
   
   if __name__ == "__main__":
       generate(STREAM_NAME, boto3.client("kinesis"))
   ```

## Step 2: Create the Kinesis Data Analytics Application
<a name="examples-transforming-strings-substring-2"></a>

Next, create a Kinesis Data Analytics application as follows:

1. Open the Managed Service for Apache Flink console at [https://console.aws.amazon.com/kinesisanalytics](https://console.aws.amazon.com/kinesisanalytics).

1. Choose **Create application**, type an application name, and choose **Create application**.

1. On the application details page, choose **Connect streaming data**.

1. On the **Connect to source** page, do the following:

   1. Choose the stream that you created in the preceding section. 

   1. Choose the option to create an IAM role.

   1. Choose **Discover schema**. Wait for the console to show the inferred schema and the sample records that were used to infer the schema for the in-application stream that is created. The inferred schema has only one column.

   1. Choose **Save and continue**.

   

1. On the application details page, choose **Go to SQL editor**. To start the application, choose **Yes, start application** in the dialog box that appears.

1. In the SQL editor, write the application code, and verify the results as follows:

   1. Copy the following application code and paste it into the editor.

      ```
      -- CREATE OR REPLACE STREAM for cleaned up referrer
      CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
          "ingest_time" TIMESTAMP,
          "referrer" VARCHAR(32));
          
      CREATE OR REPLACE PUMP "myPUMP" AS 
         INSERT INTO "DESTINATION_SQL_STREAM"
            SELECT STREAM 
               "APPROXIMATE_ARRIVAL_TIME", 
               SUBSTRING("referrer", 12, (POSITION('.com' IN "referrer") - POSITION('www.' IN "referrer") - 4)) 
            FROM "SOURCE_SQL_STREAM_001";
      ```

   1. Choose **Save and run SQL**. On the **Real-time analytics** tab, you can see all the in-application streams that the application created and verify the data. 

# Example: Replacing a Substring using Regex (REGEX\_REPLACE Function)
<a name="examples-transforming-strings-regexreplace"></a>

This example uses the `REGEX_REPLACE` function to transform a string in Amazon Kinesis Data Analytics. `REGEX_REPLACE` replaces a substring with an alternative substring. For more information, see [REGEX\_REPLACE](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-regex-replace.html) in the *Amazon Managed Service for Apache Flink SQL Reference*.

In this example, you write the following records to an Amazon Kinesis data stream: 

```
{ "REFERRER" : "http://www.amazon.com" }
{ "REFERRER" : "http://www.amazon.com"}
{ "REFERRER" : "http://www.amazon.com"}
...
```



You then create a Kinesis Data Analytics application on the console, with the Kinesis data stream as the streaming source. The discovery process reads sample records on the streaming source and infers an in-application schema with one column (`REFERRER`), as shown.

![\[Console screenshot showing in-application schema with list of URLs in the referrer column.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/referrer-10.png)


Then, you use the application code with the `REGEX_REPLACE` function to convert the URL to use `https://` instead of `http://`. You insert the resulting data into another in-application stream, as shown following: 



![\[Console screenshot showing resulting data table with ROWTIME, ingest_time, and referrer columns.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/ex_regex_replace.png)


**Topics**
+ [Step 1: Create a Kinesis Data Stream](#examples-transforming-strings-regexreplace-1)
+ [Step 2: Create the Kinesis Data Analytics Application](#examples-transforming-strings-regexreplace-2)

## Step 1: Create a Kinesis Data Stream
<a name="examples-transforming-strings-regexreplace-1"></a>

Create an Amazon Kinesis data stream and populate it with log records as follows:

1. Sign in to the AWS Management Console and open the Kinesis console at [https://console.aws.amazon.com/kinesis](https://console.aws.amazon.com/kinesis).

1. Choose **Data Streams** in the navigation pane.

1. Choose **Create Kinesis stream**, and create a stream with one shard. For more information, see [Create a Stream](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html) in the *Amazon Kinesis Data Streams Developer Guide*.

1. Run the following Python code to populate the sample log records. This simple code continuously writes the same log record to the stream.

   ```
   import json
   import boto3
   
   STREAM_NAME = "ExampleInputStream"
   
   
   def get_data():
       return {"REFERRER": "http://www.amazon.com"}
   
   
   def generate(stream_name, kinesis_client):
       while True:
           data = get_data()
           print(data)
           kinesis_client.put_record(
               StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey"
           )
   
   
   if __name__ == "__main__":
       generate(STREAM_NAME, boto3.client("kinesis"))
   ```

## Step 2: Create the Kinesis Data Analytics Application
<a name="examples-transforming-strings-regexreplace-2"></a>

Next, create a Kinesis Data Analytics application as follows:

1. Open the Managed Service for Apache Flink console at [https://console.aws.amazon.com/kinesisanalytics](https://console.aws.amazon.com/kinesisanalytics).

1. Choose **Create application**, type an application name, and choose **Create application**.

1. On the application details page, choose **Connect streaming data**. 

1. On the **Connect to source** page, do the following:

   1. Choose the stream that you created in the preceding section. 

   1. Choose the option to create an IAM role.

   1. Choose **Discover schema**. Wait for the console to show the inferred schema and the sample records that were used to infer the schema for the in-application stream that is created. The inferred schema has only one column.

   1. Choose **Save and continue**.

   

1. On the application details page, choose **Go to SQL editor**. To start the application, choose **Yes, start application** in the dialog box that appears.

1. In the SQL editor, write the application code and verify the results as follows:

   1. Copy the following application code, and paste it into the editor:

      ```
      -- CREATE OR REPLACE STREAM for cleaned up referrer
      CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
          "ingest_time" TIMESTAMP,
          "referrer" VARCHAR(32));
          
      CREATE OR REPLACE PUMP "myPUMP" AS 
         INSERT INTO "DESTINATION_SQL_STREAM"
            SELECT STREAM 
               "APPROXIMATE_ARRIVAL_TIME", 
               REGEX_REPLACE("REFERRER", 'http://', 'https://', 1, 0)
            FROM "SOURCE_SQL_STREAM_001";
      ```

   1. Choose **Save and run SQL**. On the **Real-time analytics** tab, you can see all the in-application streams that the application created and verify the data. 

# Example: Parsing Log Strings Based on Regular Expressions (REGEX\_LOG\_PARSE Function)
<a name="examples-transforming-strings-regexlogparse"></a>

This example uses the `REGEX_LOG_PARSE` function to transform a string in Amazon Kinesis Data Analytics. `REGEX_LOG_PARSE` parses a string based on default Java regular expression patterns. For more information, see [REGEX\_LOG\_PARSE](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-regex-log-parse.html) in the *Amazon Managed Service for Apache Flink SQL Reference*.

In this example, you write the following records to an Amazon Kinesis stream: 

```
{"LOGENTRY": "203.0.113.24 - - [25/Mar/2018:15:25:37 -0700] \"GET /index.php HTTP/1.1\" 200 125 \"-\" \"Mozilla/5.0 [en] Gecko/20100101 Firefox/52.0\""}
{"LOGENTRY": "203.0.113.24 - - [25/Mar/2018:15:25:37 -0700] \"GET /index.php HTTP/1.1\" 200 125 \"-\" \"Mozilla/5.0 [en] Gecko/20100101 Firefox/52.0\""}
{"LOGENTRY": "203.0.113.24 - - [25/Mar/2018:15:25:37 -0700] \"GET /index.php HTTP/1.1\" 200 125 \"-\" \"Mozilla/5.0 [en] Gecko/20100101 Firefox/52.0\""}
...
```



You then create a Kinesis Data Analytics application on the console, with the Kinesis data stream as the streaming source. The discovery process reads sample records on the streaming source and infers an in-application schema with one column (`LOGENTRY`), as shown following.

![\[Console screenshot showing in-application schema with LOGENTRY column.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/ex_regex_log_parse_0.png)


Then, you use the application code with the `REGEX_LOG_PARSE` function to parse the log string to retrieve the data elements. You insert the resulting data into another in-application stream, as shown in the following screenshot: 



![\[Console screenshot showing the resulting data table with ROWTIME, LOGENTRY, MATCH1, and MATCH2 columns.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/ex_regex_log_parse_1.png)


**Topics**
+ [Step 1: Create a Kinesis Data Stream](#examples-transforming-strings-regexlogparse-1)
+ [Step 2: Create the Kinesis Data Analytics Application](#examples-transforming-strings-regexlogparse-2)

## Step 1: Create a Kinesis Data Stream
<a name="examples-transforming-strings-regexlogparse-1"></a>

Create an Amazon Kinesis data stream and populate it with log records as follows:

1. Sign in to the AWS Management Console and open the Kinesis console at [https://console.aws.amazon.com/kinesis](https://console.aws.amazon.com/kinesis).

1. Choose **Data Streams** in the navigation pane.

1. Choose **Create Kinesis stream**, and create a stream with one shard. For more information, see [Create a Stream](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html) in the *Amazon Kinesis Data Streams Developer Guide*.

1. Run the following Python code to populate sample log records. This simple code continuously writes the same log record to the stream.

   ```
   import json
   import boto3
   
   STREAM_NAME = "ExampleInputStream"
   
   
   def get_data():
       return {
           "LOGENTRY": "203.0.113.24 - - [25/Mar/2018:15:25:37 -0700] "
           '"GET /index.php HTTP/1.1" 200 125 "-" '
           '"Mozilla/5.0 [en] Gecko/20100101 Firefox/52.0"'
       }
   
   
   def generate(stream_name, kinesis_client):
       while True:
           data = get_data()
           print(data)
           kinesis_client.put_record(
               StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey"
           )
   
   
   if __name__ == "__main__":
       generate(STREAM_NAME, boto3.client("kinesis"))
   ```

## Step 2: Create the Kinesis Data Analytics Application
<a name="examples-transforming-strings-regexlogparse-2"></a>

Next, create a Kinesis Data Analytics application as follows:

1. Open the Managed Service for Apache Flink console at [https://console.aws.amazon.com/kinesisanalytics](https://console.aws.amazon.com/kinesisanalytics).

1. Choose **Create application**, and specify an application name.

1. On the application details page, choose **Connect streaming data**. 

1. On the **Connect to source** page, do the following:

   1. Choose the stream that you created in the preceding section. 

   1. Choose the option to create an IAM role.

   1. Choose **Discover schema**. Wait for the console to show the inferred schema and the sample records that were used to infer the schema for the in-application stream that is created. The inferred schema has only one column.

   1. Choose **Save and continue**.

   

1. On the application details page, choose **Go to SQL editor**. To start the application, choose **Yes, start application** in the dialog box that appears.

1. In the SQL editor, write the application code, and verify the results as follows:

   1. Copy the following application code and paste it into the editor.

      ```
      CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (logentry VARCHAR(24), match1 VARCHAR(24), match2 VARCHAR(24));
      
      CREATE OR REPLACE PUMP "STREAM_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
          SELECT STREAM T.LOGENTRY, T.REC.COLUMN1, T.REC.COLUMN2
          FROM 
               (SELECT STREAM LOGENTRY,
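                   -- REGEX_LOG_PARSE returns a row value; each parenthesized group in the pattern
                   -- becomes a field (COLUMN1, COLUMN2, ...) that the outer query reads as T.REC.COLUMNn.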
                   REGEX_LOG_PARSE(LOGENTRY, '(\w.+) (\d.+) (\w.+) (\w.+)') AS REC
                   FROM SOURCE_SQL_STREAM_001) AS T;
      ```

   1. Choose **Save and run SQL**. On the **Real-time analytics** tab, you can see all the in-application streams that the application created and verify the data.

# Example: Parsing Web Logs (W3C\_LOG\_PARSE Function)
<a name="examples-transforming-strings-w3clogparse"></a>

This example uses the `W3C_LOG_PARSE` function to transform a string in Amazon Kinesis Data Analytics. You can use `W3C_LOG_PARSE` to format Apache logs quickly. For more information, see [W3C\_LOG\_PARSE](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-w3c-log-parse.html) in the *Amazon Managed Service for Apache Flink SQL Reference*.

In this example, you write log records to an Amazon Kinesis data stream. Example logs are shown following:

```
{"Log":"192.168.254.30 - John [24/May/2004:22:01:02 -0700] "GET /icons/apache_pba.gif HTTP/1.1" 304 0"}
{"Log":"192.168.254.30 - John [24/May/2004:22:01:03 -0700] "GET /icons/apache_pbb.gif HTTP/1.1" 304 0"}
{"Log":"192.168.254.30 - John [24/May/2004:22:01:04 -0700] "GET /icons/apache_pbc.gif HTTP/1.1" 304 0"}
...
```



You then create a Kinesis Data Analytics application on the console, with the Kinesis data stream as the streaming source. The discovery process reads sample records on the streaming source and infers an in-application schema with one column (`log`), as shown following:

![\[Console screenshot showing formatted stream sample tab with the in-application schema containing the log column.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/log-10.png)


Then, you use the application code with the `W3C_LOG_PARSE` function to parse the log, and create another in-application stream with various log fields in separate columns, as shown following:

![\[Console screenshot showing real-time analytics tab with in-application stream.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/log-20.png)


**Topics**
+ [Step 1: Create a Kinesis Data Stream](#examples-transforming-strings-w3clogparse-1)
+ [Step 2: Create the Kinesis Data Analytics Application](#examples-transforming-strings-w3clogparse-2)

## Step 1: Create a Kinesis Data Stream
<a name="examples-transforming-strings-w3clogparse-1"></a>

Create an Amazon Kinesis data stream and populate it with log records as follows:

1. Sign in to the AWS Management Console and open the Kinesis console at [https://console.aws.amazon.com/kinesis](https://console.aws.amazon.com/kinesis).

1. Choose **Data Streams** in the navigation pane.

1. Choose **Create Kinesis stream**, and create a stream with one shard. For more information, see [Create a Stream](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html) in the *Amazon Kinesis Data Streams Developer Guide*.

1. Run the following Python code to populate the sample log records. This simple code continuously writes the same log record to the stream.

   ```
   import json
   import boto3
   
   STREAM_NAME = "ExampleInputStream"
   
   
   def get_data():
       return {
           "log": "192.168.254.30 - John [24/May/2004:22:01:02 -0700] "
           '"GET /icons/apache_pb.gif HTTP/1.1" 304 0'
       }
   
   
   def generate(stream_name, kinesis_client):
       while True:
           data = get_data()
           print(data)
           kinesis_client.put_record(
               StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey"
           )
   
   
   if __name__ == "__main__":
       generate(STREAM_NAME, boto3.client("kinesis"))
   ```

## Step 2: Create the Kinesis Data Analytics Application
<a name="examples-transforming-strings-w3clogparse-2"></a>

Create a Kinesis Data Analytics application as follows:

1. Open the Managed Service for Apache Flink console at [https://console.aws.amazon.com/kinesisanalytics](https://console.aws.amazon.com/kinesisanalytics).

1. Choose **Create application**, type an application name, and choose **Create application**.

1. On the application details page, choose **Connect streaming data**.

1. On the **Connect to source** page, do the following:

   1. Choose the stream that you created in the preceding section. 

   1. Choose the option to create an IAM role.

   1. Choose **Discover schema**. Wait for the console to show the inferred schema and the sample records that were used to infer the schema for the in-application stream that is created. The inferred schema has only one column.

   1. Choose **Save and continue**.

   

1. On the application details page, choose **Go to SQL editor**. To start the application, choose **Yes, start application** in the dialog box that appears.

1. In the SQL editor, write the application code, and verify the results as follows:

   1. Copy the following application code and paste it into the editor.

      ```
      CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
      column1 VARCHAR(16),
      column2 VARCHAR(16),
      column3 VARCHAR(16),
      column4 VARCHAR(16),
      column5 VARCHAR(16),
      column6 VARCHAR(16),
      column7 VARCHAR(16));
      
      CREATE OR REPLACE PUMP "myPUMP" AS 
      INSERT INTO "DESTINATION_SQL_STREAM"
              SELECT STREAM
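                  -- W3C_LOG_PARSE with the predefined 'COMMON' format splits an Apache common log
                  -- line into fields that are exposed here as l.r.COLUMN1 through l.r.COLUMN7.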
                  l.r.COLUMN1,
                  l.r.COLUMN2,
                  l.r.COLUMN3,
                  l.r.COLUMN4,
                  l.r.COLUMN5,
                  l.r.COLUMN6,
                  l.r.COLUMN7
              FROM (SELECT STREAM W3C_LOG_PARSE("log", 'COMMON')
                    FROM "SOURCE_SQL_STREAM_001") AS l(r);
      ```

   1. Choose **Save and run SQL**. On the **Real-time analytics** tab, you can see all the in-application streams that the application created and verify the data.

# Example: Split Strings into Multiple Fields (VARIABLE\_COLUMN\_LOG\_PARSE Function)
<a name="examples-transforming-strings-variablecolumnlogparse"></a>

This example uses the `VARIABLE_COLUMN_LOG_PARSE` function to manipulate strings in Kinesis Data Analytics. `VARIABLE_COLUMN_LOG_PARSE` splits an input string into fields separated by a delimiter character or a delimiter string. For more information, see [VARIABLE\_COLUMN\_LOG\_PARSE](https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-variable-column-log-parse.html) in the *Amazon Managed Service for Apache Flink SQL Reference*.

In this example, you write semi-structured records to an Amazon Kinesis data stream. The example records are as follows:

```
{ "Col_A" : "string",
  "Col_B" : "string",
  "Col_C" : "string",
  "Col_D_Unstructured" : "value,value,value,value"}
{ "Col_A" : "string",
  "Col_B" : "string",
  "Col_C" : "string",
  "Col_D_Unstructured" : "value,value,value,value"}
```



You then create a Kinesis Data Analytics application on the console, using the Kinesis data stream as the streaming source. The discovery process reads sample records on the streaming source and infers an in-application schema with four columns, as shown following:

![\[Console screenshot showing in-application schema with 4 columns.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/unstructured-10.png)


Then, you use the application code with the `VARIABLE_COLUMN_LOG_PARSE` function to parse the comma-separated values, and insert normalized rows in another in-application stream, as shown following:



![\[Console screenshot showing real-time analytics tab with in-application stream.\]](http://docs.aws.amazon.com/kinesisanalytics/latest/dev/images/unstructured-20.png)


**Topics**
+ [Step 1: Create a Kinesis Data Stream](#examples-transforming-strings-variablecolumnlogparse-1)
+ [Step 2: Create the Kinesis Data Analytics Application](#examples-transforming-strings-variablecolumnlogparse-2)

## Step 1: Create a Kinesis Data Stream
<a name="examples-transforming-strings-variablecolumnlogparse-1"></a>

Create an Amazon Kinesis data stream and populate it with log records as follows:

1. Sign in to the AWS Management Console and open the Kinesis console at [https://console.aws.amazon.com/kinesis](https://console.aws.amazon.com/kinesis).

1. Choose **Data Streams** in the navigation pane.

1. Choose **Create Kinesis stream**, and create a stream with one shard. For more information, see [Create a Stream](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html) in the *Amazon Kinesis Data Streams Developer Guide*.

1. Run the following Python code to populate the sample log records. This simple code continuously writes the same log record to the stream.

   ```
   import json
   import boto3
   
   STREAM_NAME = "ExampleInputStream"
   
   
   def get_data():
       return {"Col_A": "a", "Col_B": "b", "Col_C": "c", "Col_E_Unstructured": "x,y,z"}
   
   
   def generate(stream_name, kinesis_client):
       while True:
           data = get_data()
           print(data)
           kinesis_client.put_record(
               StreamName=stream_name, Data=json.dumps(data), PartitionKey="partitionkey"
           )
   
   
   if __name__ == "__main__":
       generate(STREAM_NAME, boto3.client("kinesis"))
   ```

   

## Step 2: Create the Kinesis Data Analytics Application
<a name="examples-transforming-strings-variablecolumnlogparse-2"></a>

Create a Kinesis Data Analytics application as follows:

1. Open the Managed Service for Apache Flink console at [https://console.aws.amazon.com/kinesisanalytics](https://console.aws.amazon.com/kinesisanalytics).

1. Choose **Create application**, type an application name, and choose **Create application**.

1. On the application details page, choose **Connect streaming data**. 

1. On the **Connect to source** page, do the following:

   1. Choose the stream that you created in the preceding section.

   1. Choose the option to create an IAM role.

   1. Choose **Discover schema**. Wait for the console to show the inferred schema and the sample records that were used to infer the schema for the in-application stream that is created. Note that the inferred schema has four columns.

   1. Choose **Save and continue**.

   

1. On the application details page, choose **Go to SQL editor**. To start the application, choose **Yes, start application** in the dialog box that appears.

1. In the SQL editor, write application code, and verify the results:

   1. Copy the following application code and paste it into the editor:

      ```
      CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM"(
                  "column_A" VARCHAR(16),
                  "column_B" VARCHAR(16),
                  "column_C" VARCHAR(16),
                  "COL_1" VARCHAR(16),             
                  "COL_2" VARCHAR(16),            
                  "COL_3" VARCHAR(16));
      
      CREATE OR REPLACE PUMP "SECOND_STREAM_PUMP" AS
      INSERT INTO "DESTINATION_SQL_STREAM"
         SELECT STREAM  t."Col_A", t."Col_B", t."Col_C",
                        t.r."COL_1", t.r."COL_2", t.r."COL_3"
         FROM (SELECT STREAM 
                 "Col_A", "Col_B", "Col_C",
                 VARIABLE_COLUMN_LOG_PARSE ("Col_E_Unstructured",
                                           'COL_1 TYPE VARCHAR(16), COL_2 TYPE VARCHAR(16), COL_3 TYPE VARCHAR(16)',
                                           ',') AS r 
               FROM "SOURCE_SQL_STREAM_001") as t;
      ```

   1. Choose **Save and run SQL**. On the **Real-time analytics** tab, you can see all the in-application streams that the application created and verify the data.