

# Application logs

Centralized Logging with OpenSearch supports ingesting application logs from the following log sources:
+  [Instance Group](instance-group.md): the solution automatically installs a log agent (Fluent Bit 1.9), collects application logs on EC2 instances, and then sends the logs to Amazon OpenSearch Service.
+  [Amazon EKS cluster](amazon-eks-cluster.md): the solution generates an all-in-one configuration file for customers to deploy the log agent (Fluent Bit 1.9) as a DaemonSet or Sidecar. After the log agent is deployed, the solution starts collecting pod logs and sends them to Amazon OpenSearch Service.
+  [Amazon S3](amazon-s3.md): the solution either ingests logs in the specified Amazon S3 location continuously or performs one-time ingestion. You can also filter logs based on Amazon S3 prefix or parse logs with a custom Log Config.
+  [Syslog](syslog.md): the solution collects syslog logs through the UDP or TCP protocol.

After creating a log analytics pipeline, you can add more log sources to the log analytics pipeline. For more information, see [add a new log source](log-sources.md#add-a-new-log-source).
**Important**  
If you are using Centralized Logging with OpenSearch to create an application log pipeline for the first time, we recommend that you review the [concepts](solution-overview.md#concepts) and the supported log formats and log sources.

# Supported log formats and log sources


The following table lists the log formats supported by each log source. For more information about how to create log ingestion for each log format, refer to [Log Config](log-config.md).


| Log Format | Instance Group | EKS Cluster | Amazon S3 | Syslog | 
| --- | --- | --- | --- | --- | 
|  NGINX  |  Yes  |  Yes  |  Yes  |  No  | 
|  Apache HTTP Server  |  Yes  |  Yes  |  Yes  |  No  | 
|  JSON  |  Yes  |  Yes  |  Yes  |  Yes  | 
|  Single-line Text  |  Yes  |  Yes  |  Yes  |  Yes  | 
|  Multi-line Text  |  Yes  |  Yes  |  Yes (not supported in Light Engine mode)  |  No  | 
|  Multi-line Text (Spring Boot)  |  Yes  |  Yes  |  Yes (not supported in Light Engine mode)  |  No  | 
|  Syslog RFC5424/RFC3164  |  No  |  No  |  No  |  Yes  | 
|  Syslog Custom  |  No  |  No  |  No  |  Yes  | 
|  Windows Event  |  Yes  |  No  |  No  |  No  | 
|  IIS  |  Yes  |  No  |  No  |  No  | 

# Instance group


An instance group represents a group of EC2 Linux instances, which enables the solution to associate a Log Config with multiple EC2 instances quickly. Centralized Logging with OpenSearch uses the [Systems Manager Agent (SSM Agent)](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) to install and configure the Fluent Bit agent, which sends log data to [Kinesis Data Streams](https://aws.amazon.com/kinesis/data-streams/).

This section guides you through creating a log pipeline that ingests logs from an Instance Group.

## Create a log analytics pipeline (OpenSearch Engine)


 **Prerequisites** 

Make sure you have imported an Amazon OpenSearch Service domain. For more information, see [Domain operations](domain-operations.md).

 **Follow these steps:** 

1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Log Analytics Pipelines**, choose **Application Log**.

1. Choose **Create a pipeline**.

1. Choose **Instance Group** as Log Source, choose **Amazon OpenSearch Service**, and choose **Next**.

1. Select an instance group. If you have no instance group yet, choose **Create Instance Group** at the top right corner, and follow the [instructions](log-sources.md#instance-group-1) to create an instance group. After that, choose **Refresh** and then select the newly created instance group.

1. (Auto Scaling group only) If your instance group is created based on an Auto Scaling group, after the ingestion status becomes "Created", you can find the generated shell script on the instance group's detail page. Copy the shell script and update the User Data of the Auto Scaling group's [launch configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-configurations.html) or [launch template](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html).

1. Keep the default **Permission grant method**.

1. (Optional) If you choose **I will manually add the below required permissions after pipeline creation**, continue to do the following:

   1. Choose **Expand to view required permissions** and copy the provided JSON policy.

   1. Go to the AWS Management Console.

   1. On the left navigation pane, choose **IAM**, and select **Policies** under **Access management**.

   1. Choose **Create Policy**, choose **JSON**, and replace all the content inside the text block. Make sure to substitute <YOUR ACCOUNT ID> with your account ID.

   1. Choose **Next**, and then enter a name for this policy.

   1. Attach the policy to your EC2 instance profile to grant the log agent permissions to send logs to the application log pipeline. If you are using an Auto Scaling group, you must update the IAM instance profile associated with the Auto Scaling group. If needed, you can follow the documentation to update your [launch template](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html#advanced-settings-for-your-launch-template) or [launch configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/change-launch-config.html).

1. Choose **Next**.

You have created a log source for the log analytics pipeline. Now you are ready to make further configurations for the log analytics pipeline with an Amazon EC2 instance group as the log source.
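If you prefer the command line for the optional manual-permission steps above, the same policy can be created and attached with the AWS CLI. This is a hedged sketch: the policy and role names below are placeholders, and `policy.json` must contain the JSON policy you copied from the console.

```
# Placeholders: names are examples; paste the console-provided JSON
# policy into policy.json before running.
aws iam create-policy \
    --policy-name CLOAppPipelinePolicy \
    --policy-document file://policy.json

# Attach the policy to the role behind your EC2 instance profile.
aws iam attach-role-policy \
    --role-name <your-instance-profile-role> \
    --policy-arn arn:aws:iam::<YOUR ACCOUNT ID>:policy/CLOAppPipelinePolicy
```

For Auto Scaling groups, attach the policy to the role referenced by the launch template or launch configuration, as described in the step above.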

1. Select a log config. If you do not find the desired log config from the dropdown list, choose **Create New**, and follow instructions in [Log Config](log-config.md).

1. Enter a **Log Path** to specify the location of logs to be collected. You can use a comma (`,`) to separate multiple paths. Choose **Next**.

1. Specify **Index name** in lowercase.

1. In the **Buffer** section, choose **S3** or **Kinesis Data Streams**. If you don’t want the buffer layer, choose **None**. Refer to the [Log Buffer](solution-overview.md#concepts) for more information about choosing the appropriate buffer layer.
   + Amazon S3 buffer parameters    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/solutions/latest/centralized-logging-with-opensearch/instance-group.html)
   + Kinesis Data Streams buffer parameters    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/solutions/latest/centralized-logging-with-opensearch/instance-group.html)
**Important**  
You may observe duplicate logs in OpenSearch if a threshold error occurs in Kinesis Data Streams (KDS). This is because the Fluent Bit log agent uploads logs in [chunks](https://docs.fluentbit.io/manual/administration/buffering-and-storage#chunks-memory-filesystem-and-backpressure) (each containing multiple records) and retries a chunk if the upload fails. Each KDS shard supports up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second. Estimate your log volume and choose an appropriate shard number.

1. Choose **Next**.

1. In the **Specify OpenSearch domain** section, select an imported domain for **Amazon OpenSearch Service domain**.

1. In the **Log Lifecycle** section, enter the number of days to manage the Amazon OpenSearch Service index lifecycle. Centralized Logging with OpenSearch will create the associated [Index State Management (ISM)](https://opensearch.org/docs/latest/im-plugin/ism/index/) policy automatically for this pipeline.

1. In the **Select log processor** section, choose the log processor.
   + When selecting Lambda as a log processor, you can configure the Lambda concurrency if needed.
   + (Optional) OSI as log processor is now supported in these [Regions](https://aws.amazon.com/about-aws/whats-new/2023/04/amazon-opensearch-service-ingestion/). When OSI is selected, enter the minimum and maximum number of OCU. For more information, see [Scaling pipelines](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ingestion.html#ingestion-scaling).

1. Choose **Next**.

1. Enable **Alarms** if needed and select an existing SNS topic. If you choose **Create a new SNS topic**, provide a name and an email address for the new SNS topic.

1. Add tags if needed.

1. Choose **Create**.

1. Wait for the application pipeline to reach the "Active" state.
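The shard-count guidance in the Kinesis Data Streams note above can be turned into quick arithmetic. A minimal sketch, assuming example peak rates (both input figures are illustrative, not measured values):

```shell
#!/bin/bash
# Back-of-envelope shard sizing for the KDS buffer, using the per-shard
# limits from the note above: 1,000 records/s and 1 MiB/s of data.

records_per_sec=2500   # expected peak log records per second (example)
kib_per_sec=1536       # expected peak data volume in KiB/s (example: 1.5 MiB/s)

# Ceiling division against each per-shard limit.
shards_by_records=$(( (records_per_sec + 999) / 1000 ))
shards_by_bytes=$(( (kib_per_sec + 1023) / 1024 ))

# The stricter of the two constraints determines the shard count.
shards=$(( shards_by_records > shards_by_bytes ? shards_by_records : shards_by_bytes ))
echo "suggested shards: $shards"   # prints: suggested shards: 3
```

Whichever limit (records or bytes) demands more shards is the binding one; add headroom on top of the result to absorb bursts and avoid the retry-induced duplicates described above.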

## Create a log analytics pipeline (Light Engine)


 **Follow these steps:** 

1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Log Analytics Pipelines**, choose **Application Log**.

1. Choose **Create a pipeline**.

1. Choose **Instance Group** as Log Source, choose **Light Engine**, and choose **Next**.

1. Select an instance group. If you have no instance group yet, choose **Create Instance Group** at the top right corner, and follow the [instructions](log-sources.md#instance-group-1) to create an instance group. After that, choose **Refresh** and then select the newly created instance group.

1. (Auto Scaling group only) If your instance group is created based on an Auto Scaling group, after the ingestion status becomes "Created", you can find the generated shell script on the instance group's detail page. Copy the shell script and update the User Data of the Auto Scaling group's [launch configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-configurations.html) or [launch template](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html).

1. Keep the default **Permission grant method**.

1. (Optional) If you choose **I will manually add the below required permissions after pipeline creation**, continue to do the following:

   1. Choose **Expand to view required permissions** and copy the provided JSON policy.

   1. Go to the AWS Management Console.

   1. On the left navigation pane, choose **IAM**, and select **Policies** under **Access management**.

   1. Choose **Create Policy**, choose **JSON**, and replace all the content inside the text block. Make sure to substitute <YOUR ACCOUNT ID> with your account ID.

   1. Choose **Next**, and then enter a name for this policy.

   1. Attach the policy to your EC2 instance profile to grant the log agent permissions to send logs to the application log pipeline. If you are using an Auto Scaling group, you must update the IAM instance profile associated with the Auto Scaling group. If needed, you can follow the documentation to update your [launch template](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html#advanced-settings-for-your-launch-template) or [launch configuration](https://docs.aws.amazon.com/autoscaling/ec2/userguide/change-launch-config.html).

1. Choose **Next**.

You have created a log source for the log analytics pipeline. Now you are ready to make further configurations for the log analytics pipeline with an Amazon EC2 instance group as the log source.

1. Select a log config. If you do not find the desired log config from the dropdown list, choose **Create New**, and follow instructions in [Log Config](log-config.md).

1. Enter a **Log Path** to specify the location of logs to be collected. You can use a comma (`,`) to separate multiple paths. Choose **Next**.

1. In the **Buffer** section, configure Amazon S3 buffer parameters.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/solutions/latest/centralized-logging-with-opensearch/instance-group.html)

1. Choose **Next**.

1. In the **Specify Light Engine Configuration** section, if you want to ingest an associated templated Grafana dashboard, select **Yes** for the sample dashboard.

1. Choose an existing Grafana, or import a new one by making configurations in Grafana.

1. Select an Amazon S3 bucket to store partitioned logs and give a name to the log table. The solution provides a predefined table name, but you can modify it according to your needs.

1. Modify the log processing frequency if needed, which is set to **5** minutes by default with a minimum processing frequency of **1** minute.

1. In the **Log Lifecycle** section, enter the log merge time and log archive time. The solution provides default values, which you can modify according to your needs.

1. Choose **Next**.

1. Enable **Alarms** if needed and select an existing SNS topic. If you choose **Create a new SNS topic**, provide a name and an email address for the new SNS topic.

1. Add tags if needed.

1. Choose **Create**.

1. Wait for the application pipeline to reach the "Active" state.

# Amazon EKS cluster


For Amazon Elastic Kubernetes Service (Amazon EKS) clusters, Centralized Logging with OpenSearch generates an all-in-one configuration file for you to deploy the [log agent](https://docs.aws.amazon.com/solutions/latest/centralized-logging-with-opensearch/application-logs.html#log-agent) (Fluent Bit 1.9) as a DaemonSet or Sidecar. After the log agent is deployed, the solution starts collecting pod logs.
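The generated all-in-one file is authoritative and should be used as-is. As a rough orientation only, a DaemonSet deployment of the agent typically has the following shape; this abridged sketch uses illustrative names and omits the ConfigMap, RBAC, and output settings that the solution's generated file contains.

```
# Abridged sketch of a Fluent Bit DaemonSet (illustrative, not the
# solution's actual output -- deploy the generated file instead).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:1.9   # version pinned by the solution
          volumeMounts:
            - name: varlog
              mountPath: /var/log       # read pod logs from the node
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```

A DaemonSet runs one agent pod per node, which is why it can collect logs for every pod scheduled on the cluster; the Sidecar option instead runs the agent inside each workload pod.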

This section guides you through creating a log pipeline that ingests logs from an Amazon EKS cluster.

## Create a log analytics pipeline (OpenSearch Engine)


 **Prerequisites** 

Make sure you have imported an Amazon OpenSearch Service domain. For more information, see [Domain operations](domain-operations.md).

 **Follow these steps:** 

1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Log Analytics Pipelines**, choose **Application Log**.

1. Choose **Create a pipeline**.

1. Choose **Amazon EKS** as Log Source, choose **Amazon OpenSearch Service**, and choose **Next**.

1. Choose the AWS account in which the logs are stored.

1. Choose an EKS Cluster. If no clusters are imported yet, choose **Import an EKS Cluster** and follow [instructions](log-sources.md#amazon-eks-cluster-1) to import an EKS cluster. After that, select the newly imported EKS cluster from the dropdown list.

1. Choose **Next**.

You have created a log source for the log analytics pipeline. Now you are ready to make further configurations for the log analytics pipeline with the Amazon EKS cluster as the log source.

1. Select a log config. If you do not find the desired log config from the dropdown list, choose **Create New** and follow instructions in [Log Config](log-config.md).

1. Enter a **Log Path** to specify the location of logs you want to collect. You can use a comma (`,`) to separate multiple paths. Choose **Next**.

1. Specify **Index name** in lowercase.

1. In the **Buffer** section, choose **S3** or **Kinesis Data Streams**. If you don’t want the buffer layer, choose **None**. Refer to the [Log Buffer](solution-overview.md#concepts) for more information about choosing the appropriate buffer layer.
   + Amazon S3 buffer parameters    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/solutions/latest/centralized-logging-with-opensearch/amazon-eks-cluster.html)
   + Kinesis Data Streams buffer parameters    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/solutions/latest/centralized-logging-with-opensearch/amazon-eks-cluster.html)
**Important**  
You may observe duplicate logs in OpenSearch if a threshold error occurs in Kinesis Data Streams (KDS). This is because the Fluent Bit log agent uploads logs in [chunks](https://docs.fluentbit.io/manual/administration/buffering-and-storage#chunks-memory-filesystem-and-backpressure) (each containing multiple records) and retries a chunk if the upload fails. Each KDS shard supports up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second. Estimate your log volume and choose an appropriate shard number.

1. Choose **Next**.

1. In the **Specify OpenSearch domain** section, select an imported domain for **Amazon OpenSearch Service domain**.

1. In the **Log Lifecycle** section, enter the number of days to manage the Amazon OpenSearch Service index lifecycle. Centralized Logging with OpenSearch will create the associated [Index State Management (ISM)](https://opensearch.org/docs/latest/im-plugin/ism/index/) policy automatically for this pipeline.

1. In the **Select log processor** section, choose the log processor.
   + When selecting Lambda as a log processor, you can configure the Lambda concurrency if needed.
   + (Optional) OSI as log processor is now supported in these [Regions](https://aws.amazon.com/about-aws/whats-new/2023/04/amazon-opensearch-service-ingestion/). When OSI is selected, enter the minimum and maximum number of OCU. For more information, see [Scaling pipelines](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ingestion.html#ingestion-scaling).

1. Choose **Next**.

1. Enable **Alarms** if needed and select an existing SNS topic. If you choose **Create a new SNS topic**, provide a name and an email address for the new SNS topic.

1. Add tags if needed.

1. Choose **Create**.

1. Wait for the application pipeline to reach the "Active" state.

## Create a log analytics pipeline (Light Engine)


 **Follow these steps:** 

1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Log Analytics Pipelines**, choose **Application Log**.

1. Choose **Create a pipeline**.

1. Choose **Amazon EKS** as Log Source, choose **Light Engine**, and choose **Next**.

1. Choose the AWS account in which the logs are stored.

1. Choose an EKS Cluster. If no clusters are imported yet, choose **Import an EKS Cluster** and follow [instructions](log-sources.md#amazon-eks-cluster-1) to import an EKS cluster. After that, select the newly imported EKS cluster from the dropdown list.

1. Choose **Next**.

You have created a log source for the log analytics pipeline. Now you are ready to make further configurations for the log analytics pipeline with the Amazon EKS cluster as the log source.

1. Select a log config. If you do not find the desired log config from the dropdown list, choose **Create New** and follow instructions in [Log Config](log-config.md).

1. Enter a **Log Path** to specify the location of logs you want to collect. You can use a comma (`,`) to separate multiple paths. Choose **Next**.

1. In the **Buffer** section, configure Amazon S3 buffer parameters.
   + Amazon S3 buffer parameters    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/solutions/latest/centralized-logging-with-opensearch/amazon-eks-cluster.html)

1. Choose **Next**.

1. In the **Specify Light Engine Configuration** section, if you want to ingest an associated templated Grafana dashboard, select **Yes** for the sample dashboard.

1. Choose an existing Grafana, or import a new one by making configurations in Grafana.

1. Select an Amazon S3 bucket to store partitioned logs and give a name to the log table. The solution provides a predefined table name, but you can modify it according to your needs.

1. Modify the log processing frequency if needed, which is set to **5** minutes by default with a minimum processing frequency of **1** minute.

1. In the **Log Lifecycle** section, enter the log merge time and log archive time. The solution provides default values, which you can modify according to your needs.

1. Choose **Next**.

1. Enable **Alarms** if needed and select an existing SNS topic. If you choose **Create a new SNS topic**, provide a name and an email address for the new SNS topic.

1. Add tags if needed.

1. Choose **Create**.

1. Wait for the application pipeline to reach the "Active" state.

# Amazon S3


For Amazon S3, Centralized Logging with OpenSearch ingests logs in a specified Amazon S3 location continuously or performs one-time ingestion. You can also filter logs based on Amazon S3 prefix or parse logs with custom Log Config.

This section guides you through creating a log pipeline that ingests logs from an Amazon S3 bucket.

## Create a log analytics pipeline (OpenSearch Engine)


 **Prerequisites** 

Make sure you have imported an Amazon OpenSearch Service domain. For more information, see [Domain operations](domain-operations.md).

 **Follow these steps:** 

1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Log Analytics Pipelines**, choose **Application Log**.

1. Choose **Create a pipeline**.

1. Choose **Amazon S3** as Log Source, choose **Amazon OpenSearch Service**, and choose **Next**.

1. Choose the Amazon S3 bucket where your logs are stored. Optionally, enter a **Prefix filter**.

1. Choose **Ingestion mode** based on your needs. If you want to ingest logs continuously, select **On-going**; if you only need to ingest logs once, select **One-time**.

1. Specify **Compression format** if your log files are compressed, and choose **Next**.

You have created a log source for the log analytics pipeline. Now you are ready to make further configurations for the log analytics pipeline with Amazon S3 as the log source.

1. Select a log config. If you do not find the desired log config from the dropdown list, choose **Create New**. Refer to [Log Config](log-config.md) for more information.

1. Choose **Next**.

1. Specify **Index name** in lowercase.

1. In the **Specify OpenSearch domain** section, select an imported domain for **Amazon OpenSearch Service domain**.

1. In the **Log Lifecycle** section, enter the number of days to manage the Amazon OpenSearch Service index lifecycle. Centralized Logging with OpenSearch creates the associated [Index State Management (ISM)](https://opensearch.org/docs/latest/im-plugin/ism/index/) policy automatically for this pipeline.

1. Choose **Next**.

1. Enable **Alarms** if needed and select an existing SNS topic. If you choose **Create a new SNS topic**, provide a name and an email address for the new SNS topic.

1. Add tags if needed.

1. Choose **Create**.

1. Wait for the application pipeline to reach the "Active" state.

## Create a log analytics pipeline (Light Engine)


 **Follow these steps:** 

1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Log Analytics Pipelines**, choose **Application Log**.

1. Choose **Create a pipeline**.

1. Choose **Amazon S3** as Log Source, choose **Light Engine**, and choose **Next**.

1. Choose the Amazon S3 bucket where your logs are stored. Optionally, enter a **Prefix filter**.

1. Choose **Ingestion mode** based on your needs. To ingest logs continuously, select **On-going**.

You have created a log source for the log analytics pipeline. Now you are ready to make further configurations for the log analytics pipeline with Amazon S3 as the log source.

1. Select a log config. If you do not find the desired log config from the dropdown list, choose **Create New**. Refer to [Log Config](log-config.md) for more information.

1. Choose **Next**.

1. In the **Specify Light Engine Configuration** section, if you want to ingest an associated templated Grafana dashboard, select **Yes** for the sample dashboard.

1. Choose an existing Grafana, or import a new one by making configurations in Grafana.

1. Select an Amazon S3 bucket to store partitioned logs and give a name to the log table. The solution provides a predefined table name, but you can modify it according to your needs.

1. Modify the log processing frequency if needed, which is set to **5** minutes by default with a minimum processing frequency of **1** minute.

1. In the **Log Lifecycle** section, enter the log merge time and log archive time. The solution provides default values, which you can modify according to your needs.

1. Choose **Next**.

1. Enable **Alarms** if needed and select an existing SNS topic. If you choose **Create a new SNS topic**, provide a name and an email address for the new SNS topic.

1. Add tags if needed.

1. Choose **Create**.

1. Wait for the application pipeline to reach the "Active" state.

# Syslog


Centralized Logging with OpenSearch collects syslog logs through UDP or TCP protocol.

This section guides you through creating a log pipeline that ingests logs from a syslog endpoint.
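After the pipeline is active, you point your syslog sources at the endpoint it exposes. As one illustration, an rsyslog forwarding rule might look like the following; the hostname and port are placeholders for your actual endpoint, and in rsyslog's legacy syntax a single `@` forwards over UDP while `@@` forwards over TCP.

```
# /etc/rsyslog.d/60-forward.conf -- placeholders; substitute your endpoint.
*.*  @logs.example.internal:514      # forward all facilities over UDP
# *.* @@logs.example.internal:514    # TCP alternative (uncomment to use)
```

Restart the rsyslog service after adding the rule so the forwarding takes effect.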

## Create a log analytics pipeline (OpenSearch Engine)


 **Prerequisites** 

Make sure you have imported an Amazon OpenSearch Service domain. For more information, see [Domain operations](domain-operations.md).

 **Follow these steps:** 

1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Log Analytics Pipelines**, choose **Application Log**.

1. Choose **Create a pipeline**.

1. Choose **Syslog Endpoint** as Log Source, choose **Amazon OpenSearch Service**, and choose **Next**.

1. Select **UDP** or **TCP** with a custom port number. Choose **Next**.

You have created a log source for the log analytics pipeline. Now you are ready to make further configurations for the log analytics pipeline with syslog as the log source.

1. Select a log config. If you do not find the desired log config from the dropdown list, choose **Create New** and follow instructions in [Log Config](log-config.md).

1. Enter a **Log Path** to specify the location of logs you want to collect.

1. Specify **Index name** in lowercase.

1. In the **Buffer** section, choose **S3** or **Kinesis Data Streams**. If you don’t want the buffer layer, choose **None**. Refer to the [Log Buffer](solution-overview.md#concepts) for more information about choosing the appropriate buffer layer.
   + Amazon S3 buffer parameters    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/solutions/latest/centralized-logging-with-opensearch/syslog.html)
   + Kinesis Data Streams buffer parameters    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/solutions/latest/centralized-logging-with-opensearch/syslog.html)
**Important**  
You may observe duplicate logs in OpenSearch if a threshold error occurs in Kinesis Data Streams (KDS). This is because the Fluent Bit log agent uploads logs in [chunks](https://docs.fluentbit.io/manual/administration/buffering-and-storage#chunks-memory-filesystem-and-backpressure) (each containing multiple records) and retries a chunk if the upload fails. Each KDS shard supports up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second. Estimate your log volume and choose an appropriate shard number.

1. Choose **Next**.

1. In the **Specify OpenSearch domain** section, select an imported domain for **Amazon OpenSearch Service domain**.

1. In the **Log Lifecycle** section, enter the number of days to manage the Amazon OpenSearch Service index lifecycle. Centralized Logging with OpenSearch will create the associated [Index State Management (ISM)](https://opensearch.org/docs/latest/im-plugin/ism/index/) policy automatically for this pipeline.

1. In the **Select log processor** section, choose the log processor.
   + When selecting Lambda as a log processor, you can configure the Lambda concurrency if needed.
   + (Optional) OSI as log processor is now supported in these [Regions](https://aws.amazon.com/about-aws/whats-new/2023/04/amazon-opensearch-service-ingestion/). When OSI is selected, enter the minimum and maximum number of OCU. For more information, see [Scaling pipelines](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ingestion.html#ingestion-scaling).

1. Choose **Next**.

1. Enable **Alarms** if needed and select an existing SNS topic. If you choose **Create a new SNS topic**, provide a name and an email address for the new SNS topic.

1. Add tags if needed.

1. Choose **Create**.

1. Wait for the application pipeline to reach the "Active" state.

## Create a log analytics pipeline (Light Engine)


 **Follow these steps:** 

1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Log Analytics Pipelines**, choose **Application Log**.

1. Choose **Create a pipeline**.

1. Choose **Syslog Endpoint** as Log Source, choose **Light Engine**, and choose **Next**.

1. Select **UDP** or **TCP** with a custom port number. Choose **Next**.

You have created a log source for the log analytics pipeline. Now you are ready to make further configurations for the log analytics pipeline with syslog as the log source.

1. Select a log config. If you do not find the desired log config from the dropdown list, choose **Create New** and follow instructions in [Log Config](log-config.md).

1. Enter a **Log Path** to specify the location of logs you want to collect.

1. In the **Buffer** section, configure Amazon S3 buffer parameters.
   + Amazon S3 buffer parameters    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/solutions/latest/centralized-logging-with-opensearch/syslog.html)

1. Choose **Next**.

1. In the **Specify Light Engine Configuration** section, if you want to ingest an associated templated Grafana dashboard, select **Yes** for the sample dashboard.

1. Choose an existing Grafana, or import a new one by making configurations in Grafana.

1. Select an Amazon S3 bucket to store partitioned logs and give a name to the log table. The solution provides a predefined table name, but you can modify it according to your needs.

1. Modify the log processing frequency if needed, which is set to **5** minutes by default with a minimum processing frequency of **1** minute.

1. In the **Log Lifecycle** section, enter the log merge time and log archive time. The solution provides default values, which you can modify according to your needs.

1. Choose **Next**.

1. Enable **Alarms** if needed and select an existing SNS topic. If you choose **Create a new SNS topic**, provide a name and an email address for the new SNS topic.

1. Add tags if needed.

1. Choose **Create**.

1. Wait for the application pipeline to reach the "Active" state.

# Pipeline resources


A log analytics pipeline can have more than one log source.

# Log sources


You must create a log source before collecting application logs. Centralized Logging with OpenSearch supports the following log sources:
+  [Instance group](#instance-group-1) 
+  [Amazon EKS cluster](#amazon-eks-cluster-1) 
+  [Amazon S3](#amazon-s3-1) 
+  [Syslog](#syslog-1) 

For more information, see [concepts](solution-overview.md#concepts).

## Instance Group


An instance group represents a group of EC2 Linux instances, which enables the solution to associate a [Log Config](instance-group.md) with multiple EC2 instances quickly. Centralized Logging with OpenSearch uses the [Systems Manager Agent (SSM Agent)](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) to install and configure the Fluent Bit agent, which sends log data to [Kinesis Data Streams](https://aws.amazon.com/kinesis/data-streams/).

 **Prerequisites** 

Make sure that the instances meet the following requirements:
+ SSM agent is installed on instances. Refer to [install SSM agent on EC2 instances for Linux](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-manual-agent-install.html) for more details.
+ The AmazonSSMManagedInstanceCore policy is associated with the instances.
+ [OpenSSL 1.1](https://www.openssl.org/source/) or later is installed. Refer to [OpenSSL Installation](additional-resources.md#openssl-1.1-installation) for more details.
+ The instances have network access to AWS Systems Manager.
+ The instances have network access to Amazon Kinesis Data Streams, if you use it as the Log Buffer.
+ The instances have network access to Amazon S3, if you use it as the Log Buffer.
+ The operating system of the instances is supported by Fluent Bit. Refer to [Supported Platform](https://docs.fluentbit.io/manual/installation/supported-platforms).

### (Option 1) Select instances to create an Instance Group


1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Log Source**, choose **Instance Group**.

1. Choose the **Create an instance group** button.

1. In the **Settings** section, specify a group name.

1. In the **Configuration** section, select **Instances**. You can use up to 5 tags to filter the instances.

1. Verify that the "Pending Status" of all the selected instances is **Online**.

1. (Optional) If an instance's "Pending Status" is empty, choose the **Install log agent** button, and wait for the "Pending Status" to become **Online**.

1. (Optional) If you want to ingest logs from another account, select a [linked account](cross-account-ingestion.md) in the **Account Settings** section to create an instance group log source from another account.

1. Choose **Create**.

**Important**  
Using the Centralized Logging with OpenSearch console to install the Fluent Bit agent on Ubuntu instances in the **Beijing (cn-north-1) and Ningxia (cn-northwest-1)** Regions causes an installation error, because the Fluent Bit assets cannot be downloaded successfully. You must install the Fluent Bit agent yourself.

### (Option 2) Select an Auto Scaling group to create an Instance Group


When you create an Instance Group with an Amazon EC2 Auto Scaling group, the solution generates a shell script that you must include in the [EC2 User Data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts).

1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Log Source**, choose **Instance Group**.

1. Choose the **Create an instance group** button.

1. In the **Settings** section, specify a group name.

1. In the **Configuration** section, select **Auto Scaling groups**.

1. In the **Auto Scaling groups** section, select the Auto Scaling group from which you want to collect logs.

1. (Optional) If you want to ingest logs from another account, select a [linked account](cross-account-ingestion.md) in the **Account Settings** section to create an instance group log source from another account.

1. Choose **Create**. After you create a log ingestion using the instance group, you can find the generated shell script on the details page.

1. Copy the shell script and update the User Data of the Auto Scaling group's [launch configurations](https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-configurations.html) or [launch template](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html). The shell script automatically installs Fluent Bit (and the SSM agent if needed) and downloads the Fluent Bit configurations.

1. Once you have updated the launch configurations or launch template, you must start an [instance refresh](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html) to update the instances within the Auto Scaling group. The newly launched instances will ingest logs to the OpenSearch cluster or the Log Buffer layer.

## Amazon EKS Cluster


The [EKS Cluster](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) in Centralized Logging with OpenSearch refers to the Amazon Elastic Kubernetes Service (Amazon EKS) from which you want to collect pod logs. Centralized Logging with OpenSearch will guide you to deploy the log agent as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) or [Sidecar](https://kubernetes.io/docs/concepts/workloads/pods/#workload-resources-for-managing-pods) in the EKS Cluster.

**Important**  
Centralized Logging with OpenSearch does not support sending logs in one EKS cluster to more than one Amazon OpenSearch Service domain at the same time.
Make sure your EKS cluster’s VPC is connected to the Amazon OpenSearch Service cluster’s VPC so that logs can be ingested. Refer to [VPC Connectivity](https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/vpc-to-vpc-connectivity.html) for more details regarding approaches to connect VPCs.

1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Log Source**, choose **EKS Cluster**.

1. Choose the **Import a Cluster** button.

1. Choose the **EKS Cluster** where Centralized Logging with OpenSearch collects logs from. (Optional) If you want to ingest logs from another account, select a [linked account](cross-account-ingestion.md) from the **Account** dropdown to import an EKS log source from another account.

1. Select **DaemonSet** or **Sidecar** as the log agent’s deployment pattern.

1. Choose **Next**.

1. Specify the **Amazon OpenSearch Service** where Centralized Logging with OpenSearch sends the logs to.

1. Follow the guidance to establish a VPC peering connection between EKS’s VPC and OpenSearch’s VPC.
   +  [Create and accept VPC peering connections](https://docs.aws.amazon.com/vpc/latest/peering/create-vpc-peering-connection.html) 
   +  [Update your route tables for a VPC peering connection](https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-routing.html) 
   +  [Update your security groups to reference peer VPC groups](https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html) 

1. Choose **Next**.

1. Add tags if needed.

1. Choose **Create**.

## Amazon S3


The [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) log source in Centralized Logging with OpenSearch refers to the Amazon S3 bucket from which you want to collect application logs. You can choose **On-going** or **One-time** to create your ingestion job.

**Important**  
On-going means that the ingestion job will run when a new file is delivered to the specified Amazon S3 location.
One-time means that the ingestion job will run at creation and will only run once to load all files in the specified location.

## Syslog


**Important**  
To ingest logs, make sure your syslog generator/sender's subnet is connected to Centralized Logging with OpenSearch's **two** private subnets. Refer to [VPC Connectivity](https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/vpc-to-vpc-connectivity.html) for more details about how to connect VPCs.

You can use a custom UDP or TCP port number to collect syslog in Centralized Logging with OpenSearch. Syslog refers to logs generated by Linux instances, routers, or network equipment. For more information, see [Syslog](https://en.wikipedia.org/wiki/Syslog) in Wikipedia.

## Add a new log source


A newly created log analytics pipeline has one log source. You can add more log sources into the log pipeline.

1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left navigation pane, under **Log Analytics Pipelines**, choose **Application Log**.

1. Choose the log pipeline’s ID.

1. Choose **Create a source**.

1. Follow the instructions in [Instance Group](#instance-group-1), [Amazon EKS cluster](#amazon-eks-cluster-1), [Amazon S3](#amazon-s3-1), or [Syslog](#syslog-1) to create a log source according to your need.

# Log config


The Centralized Logging with OpenSearch solution supports creating log configs for the following log formats:
+  [JSON](#create-a-json-config) 
+  [Apache](#create-an-apache-http-server-log-config) 
+  [NGINX](#create-a-nginx-log-config) 
+  [Syslog](#create-a-syslog-config) 
+  [Single-line text](#create-a-single-line-text-config) 
+  [Multi-line text](#create-a-multi-line-text-config) 

For more information, refer to [supported log formats and log sources](supported-log-formats-and-log-sources.md).

The following describes how to create a log config for each log format.

## Create a JSON config


1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Resources**, choose **Log Config**.

1. Choose the **Create a log config** button.

1. Specify Config Name.

1. Choose **JSON** in the log type dropdown list.

1. In the **Sample log parsing** section, paste a sample JSON log and choose **Parse log** to verify if the log parsing is successful.

   For example:

   ```
   {"host":"81.95.250.9", "user-identifier":"-", "time":"08/Mar/2022:06:28:03 +0000", "method": "PATCH", "request": "/clicks-and-mortar/24%2f7", "protocol":"HTTP/2.0", "status":502, "bytes":24337, "referer": "https://www.investorturn-key.net/functionalities/innovative/integrated"}
   ```

1. Check if each field type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see [Data Types](https://opensearch.org/docs/latest/search-plugins/sql/datatypes/).
**Note**  
You must specify the datetime of the log using the key "time". If not specified, system time will be added.

1. Specify the **Time format**. The format syntax follows [strptime](https://linux.die.net/man/3/strptime). Check [this](https://docs.fluentbit.io/manual/pipeline/parsers/configuring-parser#time-resolution-and-fractional-seconds) for details.

1. (Optional) In the **Filter** section, you can add conditions to filter logs at the log agent side. The solution ingests only the logs that match ALL the specified conditions.

1. Select **Create**.
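As a quick sanity check outside the console, you can verify that the sample's "time" key parses with a matching strptime format. This is a hedged sketch for this sample only; the format string below is chosen here for illustration and is not a solution default:

```python
import json
from datetime import datetime

# Shortened version of the sample JSON log above; "time" carries the datetime.
sample = ('{"host":"81.95.250.9", "time":"08/Mar/2022:06:28:03 +0000",'
          ' "method": "PATCH", "status":502, "bytes":24337}')

event = json.loads(sample)
# "08/Mar/2022:06:28:03 +0000" corresponds to this strptime format:
ts = datetime.strptime(event["time"], "%d/%b/%Y:%H:%M:%S %z")
print(ts.isoformat())  # 2022-03-08T06:28:03+00:00
```

If the format string does not match the sample, strptime raises a ValueError, which is the same kind of mismatch the console's **Parse log** check surfaces.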

## Create an Apache HTTP server log config


1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Resources**, choose **Log Config**.

1. Choose the **Create a log config** button.

1. Specify Config Name.

1. Choose **Apache HTTP server** in the log type dropdown menu.

1. In the **Apache Log Format** section, paste your Apache HTTP server log format configuration. It is found in /etc/httpd/conf/httpd.conf and starts with LogFormat.

   For example:

   ```
   LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
   ```

1. (Optional) In the **Sample log parsing** section, paste a sample Apache HTTP server log to verify if the log parsing is successful.

   For example:

   ```
   127.0.0.1 - - [22/Dec/2021:06:48:57 +0000] "GET /xxx HTTP/1.1" 404 196 "-" "curl/7.79.1"
   ```

1. Choose **Create**.
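To see how the combined LogFormat maps onto the sample line, here is a hedged sketch; the regex below is our own illustration, not the parser the solution generates:

```python
import re

# Regex mirroring the combined LogFormat:
# %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"
COMBINED = re.compile(
    r'(?P<host>\S+) (?P<logname>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"')

line = ('127.0.0.1 - - [22/Dec/2021:06:48:57 +0000] '
        '"GET /xxx HTTP/1.1" 404 196 "-" "curl/7.79.1"')
m = COMBINED.match(line)
print(m.group('status'), m.group('request'))  # 404 GET /xxx HTTP/1.1
```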

## Create a NGINX log config


1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Resources**, choose **Log Config**.

1. Choose the **Create a log config** button.

1. Specify Config Name.

1. Choose **NGINX** in the log type dropdown menu.

1. In the **NGINX Log Format** section, paste your NGINX log format configuration. It is found in /etc/nginx/nginx.conf and starts with log_format.

   For example:

   ```
   log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
   '$status $body_bytes_sent "$http_referer" '
   '"$http_user_agent" "$http_x_forwarded_for"';
   ```

1. (Optional) In the **Sample log parsing** section, paste a sample NGINX log to verify if the log parsing is successful.

   For example:

   ```
   127.0.0.1 - - [24/Dec/2021:01:27:11 +0000] "GET / HTTP/1.1" 200 3520 "-" "curl/7.79.1" "-"
   ```

1. (Optional) In the **Filter** section, you can add conditions to filter logs at the log agent side. The solution ingests only the logs that match ALL the specified conditions.

1. Select **Create**.
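Conceptually, each $variable in log_format becomes a named field. A hedged sketch of that mapping (our own illustration, not the solution's generator), deriving a named-group regex from the format string and applying it to the sample line:

```python
import re

# The nginx log_format body from above, with the $variables inline.
log_format = ('$remote_addr - $remote_user [$time_local] "$request" '
              '$status $body_bytes_sent "$http_referer" '
              '"$http_user_agent" "$http_x_forwarded_for"')

# Escape the literal parts, then turn each \$name into a lazy named group.
pattern = re.sub(r'\\\$(\w+)', r'(?P<\1>.*?)', re.escape(log_format))

line = ('127.0.0.1 - - [24/Dec/2021:01:27:11 +0000] "GET / HTTP/1.1" '
        '200 3520 "-" "curl/7.79.1" "-"')
m = re.fullmatch(pattern, line)
print(m.group('status'), m.group('time_local'))  # 200 24/Dec/2021:01:27:11 +0000
```

The lazy `.*?` groups work here because the surrounding separators (spaces, brackets, quotes) anchor each field; a production parser would be stricter.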

## Create a Syslog config


1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Resources**, choose **Log Config**.

1. Choose the **Create a log config** button.

1. Specify Config Name.

1. Choose **Syslog** in the log type dropdown menu. Note that Centralized Logging with OpenSearch also supports Syslog with JSON format and single-line text format.

### RFC5424


1. Paste a sample RFC5424 log. For example:

   ```
   <35>1 2013-10-11T22:14:15Z client_machine su - - - 'su root' failed for joe on /dev/pts/2
   ```

1. Choose **Parse Log**.

1. Check if each field type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see [Data Types](https://opensearch.org/docs/latest/search-plugins/sql/datatypes/).
**Note**  
You must specify the datetime of the log using the key "time". If not specified, system time will be added.

1. Specify the **Time format**. The format syntax follows [strptime](https://linux.die.net/man/3/strptime). Check [this manual](https://docs.fluentbit.io/manual/pipeline/parsers/configuring-parser#time-resolution-and-fractional-seconds) for details. For example:

   ```
   %Y-%m-%dT%H:%M:%SZ
   ```

1. (Optional) In the **Filter** section, you can add conditions to filter logs at the log agent side. The solution ingests only the logs that match ALL the specified conditions.

1. Select **Create**.
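You can confirm the time format against the sample's timestamp locally; the trailing "Z" is matched as a literal character in strptime syntax:

```python
from datetime import datetime

# "2013-10-11T22:14:15Z" from the sample RFC5424 log above.
ts = datetime.strptime('2013-10-11T22:14:15Z', '%Y-%m-%dT%H:%M:%SZ')
print(ts.isoformat())  # 2013-10-11T22:14:15
```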

### RFC3164


1. Paste a sample RFC3164 log. For example:

   ```
   <35>Oct 12 22:14:15 client_machine su: 'su root' failed for joe on /dev/pts/2
   ```

1. Choose **Parse Log**.

1. Check if each field type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see [Data Types](https://opensearch.org/docs/latest/search-plugins/sql/datatypes/).
**Note**  
Specify the datetime of the log using the key "time". If not specified, system time will be added. Since there is no year in the timestamp of RFC3164, it cannot be displayed as a time histogram in the Discover interface of Amazon OpenSearch Service.

1. Specify the **Time format**. The format syntax follows [strptime](https://linux.die.net/man/3/strptime). Check [this](https://docs.fluentbit.io/manual/pipeline/parsers/configuring-parser#time-resolution-and-fractional-seconds) for details. For example:

   ```
   %b %d %H:%M:%S
   ```

1. (Optional) In the **Filter** section, you can add conditions to filter logs at the log agent side. The solution ingests only the logs that match ALL the specified conditions.

1. Select **Create**.
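The RFC3164 sample timestamp above ("Oct 12 22:14:15") is abbreviated month name, day of month, and time, which is %b %d %H:%M:%S in strptime syntax. Because RFC3164 omits the year, strptime falls back to the default year:

```python
from datetime import datetime

# %b = abbreviated month name, %d = day of month; no year is present,
# so strptime fills in its default (1900).
ts = datetime.strptime('Oct 12 22:14:15', '%b %d %H:%M:%S')
print(ts.month, ts.day, ts.year)  # 10 12 1900
```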

### Custom


1. In the **Syslog Format** section, paste your syslog log format configuration. It is found in /etc/rsyslog.conf and starts with template or $template. The format syntax follows [Syslog Message Format](https://www.rfc-editor.org/rfc/rfc5424?spm=a2c4g.11186623.0.0.21324a0fUixMd5#:~:text=2009%0A%0A%0A6.-,Syslog%20Message%20Format,-The%20syslog%20message). For example:

   ```
   <%pri%>1 %timestamp:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% %msg%\n
   ```

1. In the **Sample log parsing** section, paste a sample syslog log to verify if the log parsing is successful. For example:

   ```
   <35>1 2013-10-11T22:14:15.003Z client_machine su - - 'su root' failed for joe on /dev/pts/2
   ```

1. Check if each field type mapping is correct. Change the type by selecting the dropdown menu in the second column. For all supported types, see [Data Types](https://opensearch.org/docs/latest/search-plugins/sql/datatypes/).
**Note**  
Specify the datetime of the log using the key "time". If not specified, system time will be added.

1. Specify the **Time format**. The format syntax follows [strptime](https://linux.die.net/man/3/strptime). Check [this manual](https://docs.fluentbit.io/manual/pipeline/parsers/configuring-parser#time-resolution-and-fractional-seconds) for details.

1. (Optional) In the **Filter** section, you can add conditions to filter logs at the log agent side. The solution ingests only the logs that match ALL the specified conditions.

1. Select **Create**.

## Create a Single-line text config


1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Resources**, choose **Log Config**.

1. Choose the **Create a log config** button.

1. Specify Config Name.

1. Choose **Single-line Text** in the log type dropdown menu.

1. Write the regular expression, validate it first in [Rubular](https://rubular.com/), and enter the value. For example:

   ```
   (?<remote_addr>\S+)\s*-\s*(?<remote_user>\S+)\s*\[(?<time_local>\d+/\S+/\d+:\d+:\d+:\d+)\s+\S+\]\s*"(?<request_method>\S+)\s+(?<request_uri>\S+)\s+\S+"\s*(?<status>\S+)\s*(?<body_bytes_sent>\S+)\s*"(?<http_referer>[^"]*)"\s*"(?<http_user_agent>[^"]*)"\s*"(?<http_x_forwarded_for>[^"]*)".*
   ```

1. In the **Sample log parsing** section, paste a sample Single-line text log and choose **Parse log** to verify if the log parsing is successful. For example:

   ```
   127.0.0.1 - - [24/Dec/2021:01:27:11 +0000] "GET / HTTP/1.1" 200 3520 "-" "curl/7.79.1" "-"
   ```

1. Check if each field type mapping is correct. Change the type by selecting the dropdown menu in the second column. For all supported types, see [Data Types](https://opensearch.org/docs/latest/search-plugins/sql/datatypes/).
**Note**  
Specify the datetime of the log using the key "time". If not specified, system time will be added.

1. Specify the **Time format**. The format syntax follows [strptime](https://linux.die.net/man/3/strptime). Check [this manual](https://docs.fluentbit.io/manual/pipeline/parsers/configuring-parser#time-resolution-and-fractional-seconds) for details.

1. (Optional) In the **Filter** section, you can add conditions to filter logs at the log agent side. The solution ingests only the logs that match ALL the specified conditions.

1. Select **Create**.

## Create a Multi-line text config


1. Sign in to the Centralized Logging with OpenSearch Console.

1. In the left sidebar, under **Resources**, choose **Log Config**.

1. Choose the **Create a log config** button.

1. Specify Config Name.

1. Choose **Multi-line Text** in the log type dropdown menu.

 **Java - Spring Boot** 

1. For Java Spring Boot logs, you could provide a simple log format. For example:

   ```
   %d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %logger : %msg%n
   ```

1. Paste a sample multi-line log. For example:

   ```
   2022-02-18 10:32:26.400 ERROR [http-nio-8080-exec-1] org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.ArithmeticException: / by zero] with root cause
   java.lang.ArithmeticException: / by zero
      at com.springexamples.demo.web.LoggerController.logs(LoggerController.java:22)
      at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke
   ```

1. Choose **Parse Log**.

1. Check if each field type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see [Data Types](https://opensearch.org/docs/latest/search-plugins/sql/datatypes/).
**Note**  
You must specify the datetime of the log using the key "time". If not specified, system time will be added.

1. Specify the **Time format**. The format syntax follows [strptime](https://linux.die.net/man/3/strptime). Check [this](https://docs.fluentbit.io/manual/pipeline/parsers/configuring-parser#time-resolution-and-fractional-seconds) for details.

1. (Optional) In the **Filter** section, you can add conditions to filter logs at the log agent side. The solution ingests only the logs that match ALL the specified conditions.

1. Select **Create**.
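The multi-line merge works because only record-opening lines carry the %d{yyyy-MM-dd HH:mm:ss.SSS} timestamp; continuation lines (stack frames) do not. A minimal sketch of that first-line test, using shortened lines from the sample above:

```python
import re

# A record-opening line starts with a yyyy-MM-dd HH:mm:ss.SSS timestamp.
first_line = re.compile(r'^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}')

lines = [
    '2022-02-18 10:32:26.400 ERROR [http-nio-8080-exec-1] ... threw exception',
    'java.lang.ArithmeticException: / by zero',
    '   at com.springexamples.demo.web.LoggerController.logs(LoggerController.java:22)',
]
starts = [bool(first_line.match(line)) for line in lines]
print(starts)  # [True, False, False]
```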

**Custom**

1. For other kinds of logs, you can specify a regular expression that matches the first line of each record. For example:

   ```
   (?<time>\d{4}-\d{2}-\d{2}\s*\d{2}:\d{2}:\d{2}.\d{3})\s*(?<message>goroutine\s*\d\s*\[.+\]:)
   ```

1. Paste a sample multi-line log. For example:

   ```
   2023-07-12 10:32:26.400 goroutine 1 [chan receive]:
   runtime.gopark(0x4739b8, 0xc420024178, 0x46fcd7, 0xc, 0xc420028e17, 0x3)
     /usr/local/go/src/runtime/proc.go:280 +0x12c fp=0xc420053e30 sp=0xc420053e00 pc=0x42503c
   runtime.goparkunlock(0xc420024178, 0x46fcd7, 0xc, 0x1000f010040c217, 0x3)
     /usr/local/go/src/runtime/proc.go:286 +0x5e fp=0xc420053e70 sp=0xc420053e30 pc=0x42512e
   runtime.chanrecv(0xc420024120, 0x0, 0xc420053f01, 0x4512d8)
     /usr/local/go/src/runtime/chan.go:506 +0x304 fp=0xc420053f20 sp=0xc420053e70 pc=0x4046b4
   runtime.chanrecv1(0xc420024120, 0x0)
     /usr/local/go/src/runtime/chan.go:388 +0x2b fp=0xc420053f50 sp=0xc420053f20 pc=0x40439b
   main.main()
     foo.go:9 +0x6f fp=0xc420053f80 sp=0xc420053f50 pc=0x4512ef
   runtime.main()
     /usr/local/go/src/runtime/proc.go:185 +0x20d fp=0xc420053fe0 sp=0xc420053f80 pc=0x424bad
   runtime.goexit()
     /usr/local/go/src/runtime/asm_amd64.s:2337 +0x1 fp=0xc420053fe8 sp=0xc420053fe0 pc=0x44b4d1
   ```

1. Choose **Parse Log**.

1. Check if each field type mapping is correct. You can change the type by selecting the dropdown menu in the second column. For all supported types, see [Data Types](https://opensearch.org/docs/latest/search-plugins/sql/datatypes/).
**Note**  
You must specify the datetime of the log using the key "time". If not specified, system time will be added.

1. (Optional) In the **Filter** section, you can add conditions to filter logs at the log agent side. The solution ingests only the logs that match ALL the specified conditions.

1. Select **Create**.
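Conceptually, the first-line pattern splits the stream into records: every matching line opens a new record, and subsequent non-matching lines are appended to it. A hedged sketch of that grouping, with the pattern above rewritten in Python's (?P&lt;name&gt;...) named-group syntax:

```python
import re

# First-line pattern from above, converted to Python named-group syntax.
first_line = re.compile(
    r'(?P<time>\d{4}-\d{2}-\d{2}\s*\d{2}:\d{2}:\d{2}.\d{3})\s*'
    r'(?P<message>goroutine\s*\d\s*\[.+\]:)')

def group_records(lines):
    """Merge continuation lines into the record opened by the last match."""
    records, current = [], []
    for line in lines:
        if first_line.match(line) and current:
            records.append('\n'.join(current))
            current = []
        current.append(line)
    if current:
        records.append('\n'.join(current))
    return records

sample = [
    '2023-07-12 10:32:26.400 goroutine 1 [chan receive]:',
    'runtime.gopark(0x4739b8, 0xc420024178, 0x46fcd7, 0xc, 0xc420028e17, 0x3)',
    '  /usr/local/go/src/runtime/proc.go:280',
    '2023-07-12 10:32:27.100 goroutine 2 [running]:',
    'runtime.goexit()',
]
records = group_records(sample)
print(len(records))  # 2
```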