Logs
OpenSearch Ingestion can transform unstructured log data into a structured format during ingestion. OpenSearch Ingestion provides processors that normalize and enrich your data before it is indexed. Examples of helpful processors are:
- grok – Parses and structures unstructured text data, such as web server access logs, into distinct fields.
- date – Parses a date from a log field and sets it as the event's timestamp.
- parse_json – Parses a string field that contains a JSON object.
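As a sketch, these processors can be chained in a single pipeline definition. The pipeline name, source path, field names, endpoint, index name, and the `COMMONAPACHELOG` pattern below are illustrative assumptions, not a complete working configuration:

```yaml
# Hypothetical pipeline sketch; all names and values are illustrative.
log-pipeline:
  source:
    http:
      path: "/logs/ingest"
  processor:
    # Parse raw Apache-style access log lines into structured fields.
    - grok:
        match:
          log: ["%{COMMONAPACHELOG}"]
    # Parse the extracted timestamp and set it as the event timestamp.
    - date:
        match:
          - key: "timestamp"
            patterns: ["dd/MMM/yyyy:HH:mm:ss Z"]
        destination: "@timestamp"
    # Parse a field that carries an embedded JSON object.
    - parse_json:
        source: "message"
  sink:
    - opensearch:
        hosts: ["https://search-my-domain.us-east-1.es.amazonaws.com"]
        index: "apache-logs"
```

In practice you would typically use only the processors your log format needs; see the pipeline documentation for the full set of options each processor accepts.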
Note – To make getting started easier, we’ve created a new Get Started
OpenSearch UI and observability workspace
After your log data is ingested into Amazon OpenSearch Service, you use the tools provided by the Amazon OpenSearch Service observability workspace in OpenSearch UI to analyze it. The observability workspace provides specialized tools designed to extract meaningful insights in Discover and Dashboards.
The observability workspace includes a new Discover experience that uses Piped Processing Language (PPL).
Querying your logs using PPL
You have several options for querying your logs to gather insights into the operation of your application or service.
Piped processing language (PPL) is a query language with pipe-based (|) syntax for chaining commands. You can use it to build powerful expressions to analyze your logs.
Note: To unlock newer PPL commands and functions in OpenSearch 2.19, you need to enable a feature flag in OpenSearch Developer Tools using the following request (not required for OpenSearch 3.3):
PUT /_plugins/_query/settings
{
  "transient": {
    "plugins.calcite.enabled": true
  }
}
Find the hosts with the most errors
This example analyzes your logs to determine the service hosts with the most total errors.
source = my-index
| where level = "ERROR"
| stats count() as error_count by host
| sort -error_count
| head 5
Calculate average request time
This example analyzes your logs to calculate the average request time for each status code in the log.
source = my-index
| stats avg(request_time) by status_code
For more information about PPL, see the PPL reference manual.
Querying your logs using AI
In the new Discover experience, you can type a question in natural language and have it translated into a query for you. This example analyzes your logs to show the errors logged in the last 5 minutes.
Show me all of the error logs from the last 5 minutes
Querying your logs using SQL
SQL provides a familiar way to query log data.
This example analyzes your logs to show errors by timestamp.
SELECT timestamp, severity_text, body, service_name
FROM opentelemetry_logs
WHERE severity_text = 'ERROR' AND service_name = 'my-service'
ORDER BY timestamp DESC;
For more information about SQL, see the SQL reference manual.
Querying your logs using DQL
Dashboards Query Language (DQL) is a simple text-based syntax that is well suited to quick searching and filtering.
This example returns log entries that contain either the term error or the term exception.
error OR exception
For more information about DQL, see the DQL reference manual.
Dashboards and alerts for logs
In the new Discover experience with PPL, you can create visualizations from the visualizations tab within Discover. Choose from 12 visualization types and edit them on the fly before adding them to a dashboard. In the old Discover experience, go to Visualize in the left navigation to create a new visualization, then to Dashboards to add your visualizations to a dashboard.
You can define alert monitors using PPL or the OpenSearch Service query DSL to run scheduled queries. A trigger condition, such as a specific number of error logs, fires an alert. You can send notifications through channels such as Amazon Simple Notification Service (Amazon SNS) or webhooks.
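As a sketch, a query DSL monitor created through the alerting REST API might look like the following. The index name, field names, schedule, and the error-count threshold are assumptions for illustration:

```json
POST _plugins/_alerting/monitors
{
  "type": "monitor",
  "name": "error-log-monitor",
  "enabled": true,
  "schedule": { "period": { "interval": 5, "unit": "MINUTES" } },
  "inputs": [{
    "search": {
      "indices": ["my-index"],
      "query": {
        "size": 0,
        "query": {
          "bool": {
            "filter": [
              { "term": { "level": "ERROR" } },
              { "range": { "timestamp": { "gte": "now-5m" } } }
            ]
          }
        }
      }
    }
  }],
  "triggers": [{
    "name": "too-many-errors",
    "severity": "1",
    "condition": {
      "script": {
        "source": "ctx.results[0].hits.total.value > 10",
        "lang": "painless"
      }
    },
    "actions": []
  }]
}
```

The trigger condition here fires when more than 10 error logs arrive in a 5-minute window; attach notification actions to the trigger to route alerts to your chosen channel.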
For more information about alerting, see the alerting documentation.