Class: Aws::CloudWatchLogs::Client
- Inherits: Seahorse::Client::Base
  - Object
  - Seahorse::Client::Base
  - Aws::CloudWatchLogs::Client
- Includes: Aws::ClientStubs
- Defined in: gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb
Overview
An API client for CloudWatchLogs. To construct a client, you need to configure a :region and :credentials.
client = Aws::CloudWatchLogs::Client.new(
region: region_name,
credentials: credentials,
# ...
)
For details on configuring region and credentials see the developer guide.
See #initialize for a full list of supported configuration options.
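For example, a client can be constructed with explicitly supplied static credentials. This is only an illustrative sketch; the region and credential values below are placeholders, and in practice credentials are often resolved from the environment or an IAM role instead.
require 'aws-sdk-cloudwatchlogs'

# Placeholder region and credentials; replace with your own configuration.
client = Aws::CloudWatchLogs::Client.new(
  region: 'us-east-1',
  credentials: Aws::Credentials.new('EXAMPLE_ACCESS_KEY_ID', 'EXAMPLE_SECRET_ACCESS_KEY')
)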
Instance Attribute Summary
Attributes inherited from Seahorse::Client::Base
API Operations
-
#associate_kms_key(params = {}) ⇒ Struct
Associates the specified KMS key with either one log group in the account, or with all stored CloudWatch Logs query insights results in the account.
-
#associate_source_to_s3_table_integration(params = {}) ⇒ Types::AssociateSourceToS3TableIntegrationResponse
Associates a data source with an S3 Table Integration for query access in the 'logs' namespace.
-
#cancel_export_task(params = {}) ⇒ Struct
Cancels the specified export task.
-
#create_delivery(params = {}) ⇒ Types::CreateDeliveryResponse
Creates a delivery.
-
#create_export_task(params = {}) ⇒ Types::CreateExportTaskResponse
Creates an export task so that you can efficiently export data from a log group to an Amazon S3 bucket.
-
#create_log_anomaly_detector(params = {}) ⇒ Types::CreateLogAnomalyDetectorResponse
Creates an anomaly detector that regularly scans one or more log groups and looks for patterns and anomalies in the logs.
-
#create_log_group(params = {}) ⇒ Struct
Creates a log group with the specified name.
-
#create_log_stream(params = {}) ⇒ Struct
Creates a log stream for the specified log group.
-
#create_scheduled_query(params = {}) ⇒ Types::CreateScheduledQueryResponse
Creates a scheduled query that runs CloudWatch Logs Insights queries at regular intervals.
-
#delete_account_policy(params = {}) ⇒ Struct
Deletes a CloudWatch Logs account policy.
-
#delete_data_protection_policy(params = {}) ⇒ Struct
Deletes the data protection policy from the specified log group.
-
#delete_delivery(params = {}) ⇒ Struct
Deletes a delivery.
-
#delete_delivery_destination(params = {}) ⇒ Struct
Deletes a delivery destination.
-
#delete_delivery_destination_policy(params = {}) ⇒ Struct
Deletes a delivery destination policy.
-
#delete_delivery_source(params = {}) ⇒ Struct
Deletes a delivery source.
-
#delete_destination(params = {}) ⇒ Struct
Deletes the specified destination, and eventually disables all the subscription filters that publish to it.
-
#delete_index_policy(params = {}) ⇒ Struct
Deletes a log-group level field index policy that was applied to a single log group.
-
#delete_integration(params = {}) ⇒ Struct
Deletes the integration between CloudWatch Logs and OpenSearch Service.
-
#delete_log_anomaly_detector(params = {}) ⇒ Struct
Deletes the specified CloudWatch Logs anomaly detector.
-
#delete_log_group(params = {}) ⇒ Struct
Deletes the specified log group and permanently deletes all the archived log events associated with the log group.
-
#delete_log_stream(params = {}) ⇒ Struct
Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream.
-
#delete_metric_filter(params = {}) ⇒ Struct
Deletes the specified metric filter.
-
#delete_query_definition(params = {}) ⇒ Types::DeleteQueryDefinitionResponse
Deletes a saved CloudWatch Logs Insights query definition.
-
#delete_resource_policy(params = {}) ⇒ Struct
Deletes a resource policy from this account.
-
#delete_retention_policy(params = {}) ⇒ Struct
Deletes the specified retention policy.
-
#delete_scheduled_query(params = {}) ⇒ Struct
Deletes a scheduled query and stops all future executions.
-
#delete_subscription_filter(params = {}) ⇒ Struct
Deletes the specified subscription filter.
-
#delete_transformer(params = {}) ⇒ Struct
Deletes the log transformer for the specified log group.
-
#describe_account_policies(params = {}) ⇒ Types::DescribeAccountPoliciesResponse
Returns a list of all CloudWatch Logs account policies in the account.
-
#describe_configuration_templates(params = {}) ⇒ Types::DescribeConfigurationTemplatesResponse
Use this operation to return the valid and default values that are used when creating delivery sources, delivery destinations, and deliveries.
-
#describe_deliveries(params = {}) ⇒ Types::DescribeDeliveriesResponse
Retrieves a list of the deliveries that have been created in the account.
-
#describe_delivery_destinations(params = {}) ⇒ Types::DescribeDeliveryDestinationsResponse
Retrieves a list of the delivery destinations that have been created in the account.
-
#describe_delivery_sources(params = {}) ⇒ Types::DescribeDeliverySourcesResponse
Retrieves a list of the delivery sources that have been created in the account.
-
#describe_destinations(params = {}) ⇒ Types::DescribeDestinationsResponse
Lists all your destinations.
-
#describe_export_tasks(params = {}) ⇒ Types::DescribeExportTasksResponse
Lists the specified export tasks.
-
#describe_field_indexes(params = {}) ⇒ Types::DescribeFieldIndexesResponse
Returns a list of custom and default field indexes which are discovered in log data.
-
#describe_index_policies(params = {}) ⇒ Types::DescribeIndexPoliciesResponse
Returns the field index policies of the specified log group.
-
#describe_log_groups(params = {}) ⇒ Types::DescribeLogGroupsResponse
Returns information about log groups, including data sources that ingest into each log group.
-
#describe_log_streams(params = {}) ⇒ Types::DescribeLogStreamsResponse
Lists the log streams for the specified log group.
-
#describe_metric_filters(params = {}) ⇒ Types::DescribeMetricFiltersResponse
Lists the specified metric filters.
-
#describe_queries(params = {}) ⇒ Types::DescribeQueriesResponse
Returns a list of CloudWatch Logs Insights queries that are scheduled, running, or have been run recently in this account.
-
#describe_query_definitions(params = {}) ⇒ Types::DescribeQueryDefinitionsResponse
This operation returns a paginated list of your saved CloudWatch Logs Insights query definitions.
-
#describe_resource_policies(params = {}) ⇒ Types::DescribeResourcePoliciesResponse
Lists the resource policies in this account.
-
#describe_subscription_filters(params = {}) ⇒ Types::DescribeSubscriptionFiltersResponse
Lists the subscription filters for the specified log group.
-
#disassociate_kms_key(params = {}) ⇒ Struct
Disassociates the specified KMS key from the specified log group or from all CloudWatch Logs Insights query results in the account.
-
#disassociate_source_from_s3_table_integration(params = {}) ⇒ Types::DisassociateSourceFromS3TableIntegrationResponse
Disassociates a data source from an S3 Table Integration, removing query access and deleting all associated data from the integration.
-
#filter_log_events(params = {}) ⇒ Types::FilterLogEventsResponse
Lists log events from the specified log group.
-
#get_data_protection_policy(params = {}) ⇒ Types::GetDataProtectionPolicyResponse
Returns information about a log group data protection policy.
-
#get_delivery(params = {}) ⇒ Types::GetDeliveryResponse
Returns complete information about one logical delivery.
-
#get_delivery_destination(params = {}) ⇒ Types::GetDeliveryDestinationResponse
Retrieves complete information about one delivery destination.
-
#get_delivery_destination_policy(params = {}) ⇒ Types::GetDeliveryDestinationPolicyResponse
Retrieves the delivery destination policy assigned to the delivery destination that you specify.
-
#get_delivery_source(params = {}) ⇒ Types::GetDeliverySourceResponse
Retrieves complete information about one delivery source.
-
#get_integration(params = {}) ⇒ Types::GetIntegrationResponse
Returns information about one integration between CloudWatch Logs and OpenSearch Service.
-
#get_log_anomaly_detector(params = {}) ⇒ Types::GetLogAnomalyDetectorResponse
Retrieves information about the log anomaly detector that you specify.
-
#get_log_events(params = {}) ⇒ Types::GetLogEventsResponse
Lists log events from the specified log stream.
-
#get_log_fields(params = {}) ⇒ Types::GetLogFieldsResponse
Discovers available fields for a specific data source and type.
-
#get_log_group_fields(params = {}) ⇒ Types::GetLogGroupFieldsResponse
Returns a list of the fields that are included in log events in the specified log group.
-
#get_log_object(params = {}) ⇒ Types::GetLogObjectResponse
Retrieves a large logging object (LLO) and streams it back.
-
#get_log_record(params = {}) ⇒ Types::GetLogRecordResponse
Retrieves all of the fields and values of a single log event.
-
#get_query_results(params = {}) ⇒ Types::GetQueryResultsResponse
Returns the results from the specified query.
-
#get_scheduled_query(params = {}) ⇒ Types::GetScheduledQueryResponse
Retrieves details about a specific scheduled query, including its configuration, execution status, and metadata.
-
#get_scheduled_query_history(params = {}) ⇒ Types::GetScheduledQueryHistoryResponse
Retrieves the execution history of a scheduled query within a specified time range, including query results and destination processing status.
-
#get_transformer(params = {}) ⇒ Types::GetTransformerResponse
Returns the information about the log transformer associated with this log group.
-
#list_aggregate_log_group_summaries(params = {}) ⇒ Types::ListAggregateLogGroupSummariesResponse
Returns an aggregate summary of all log groups in the Region grouped by specified data source characteristics.
-
#list_anomalies(params = {}) ⇒ Types::ListAnomaliesResponse
Returns a list of anomalies that log anomaly detectors have found.
-
#list_integrations(params = {}) ⇒ Types::ListIntegrationsResponse
Returns a list of integrations between CloudWatch Logs and other services in this account.
-
#list_log_anomaly_detectors(params = {}) ⇒ Types::ListLogAnomalyDetectorsResponse
Retrieves a list of the log anomaly detectors in the account.
-
#list_log_groups(params = {}) ⇒ Types::ListLogGroupsResponse
Returns a list of log groups in the Region in your account.
-
#list_log_groups_for_query(params = {}) ⇒ Types::ListLogGroupsForQueryResponse
Returns a list of the log groups that were analyzed during a single CloudWatch Logs Insights query.
-
#list_scheduled_queries(params = {}) ⇒ Types::ListScheduledQueriesResponse
Lists all scheduled queries in your account and region.
-
#list_sources_for_s3_table_integration(params = {}) ⇒ Types::ListSourcesForS3TableIntegrationResponse
Returns a list of data source associations for a specified S3 Table Integration, showing which data sources are currently associated for query access.
-
#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse
Displays the tags associated with a CloudWatch Logs resource.
-
#list_tags_log_group(params = {}) ⇒ Types::ListTagsLogGroupResponse
The ListTagsLogGroup operation is on the path to deprecation.
-
#put_account_policy(params = {}) ⇒ Types::PutAccountPolicyResponse
Creates an account-level data protection policy, subscription filter policy, field index policy, transformer policy, or metric extraction policy that applies to all log groups or a subset of log groups in the account.
-
#put_data_protection_policy(params = {}) ⇒ Types::PutDataProtectionPolicyResponse
Creates a data protection policy for the specified log group.
-
#put_delivery_destination(params = {}) ⇒ Types::PutDeliveryDestinationResponse
Creates or updates a logical delivery destination.
-
#put_delivery_destination_policy(params = {}) ⇒ Types::PutDeliveryDestinationPolicyResponse
Creates and assigns an IAM policy that grants permissions to CloudWatch Logs to deliver logs cross-account to a specified destination in this account.
-
#put_delivery_source(params = {}) ⇒ Types::PutDeliverySourceResponse
Creates or updates a logical delivery source.
-
#put_destination(params = {}) ⇒ Types::PutDestinationResponse
Creates or updates a destination.
-
#put_destination_policy(params = {}) ⇒ Struct
Creates or updates an access policy associated with an existing destination.
-
#put_index_policy(params = {}) ⇒ Types::PutIndexPolicyResponse
Creates or updates a field index policy for the specified log group.
-
#put_integration(params = {}) ⇒ Types::PutIntegrationResponse
Creates an integration between CloudWatch Logs and another service in this account.
-
#put_log_events(params = {}) ⇒ Types::PutLogEventsResponse
Uploads a batch of log events to the specified log stream.
-
#put_log_group_deletion_protection(params = {}) ⇒ Struct
Enables or disables deletion protection for the specified log group.
-
#put_metric_filter(params = {}) ⇒ Struct
Creates or updates a metric filter and associates it with the specified log group.
-
#put_query_definition(params = {}) ⇒ Types::PutQueryDefinitionResponse
Creates or updates a query definition for CloudWatch Logs Insights.
-
#put_resource_policy(params = {}) ⇒ Types::PutResourcePolicyResponse
Creates or updates a resource policy allowing other Amazon Web Services services to put log events to this account, such as Amazon Route 53.
-
#put_retention_policy(params = {}) ⇒ Struct
Sets the retention of the specified log group.
-
#put_subscription_filter(params = {}) ⇒ Struct
Creates or updates a subscription filter and associates it with the specified log group.
-
#put_transformer(params = {}) ⇒ Struct
Creates or updates a log transformer for a single log group.
-
#start_live_tail(params = {}) ⇒ Types::StartLiveTailResponse
Starts a Live Tail streaming session for one or more log groups.
-
#start_query(params = {}) ⇒ Types::StartQueryResponse
Starts a query of one or more log groups or data sources using CloudWatch Logs Insights.
-
#stop_query(params = {}) ⇒ Types::StopQueryResponse
Stops a CloudWatch Logs Insights query that is in progress.
-
#tag_log_group(params = {}) ⇒ Struct
The TagLogGroup operation is on the path to deprecation.
-
#tag_resource(params = {}) ⇒ Struct
Assigns one or more tags (key-value pairs) to the specified CloudWatch Logs resource.
-
#test_metric_filter(params = {}) ⇒ Types::TestMetricFilterResponse
Tests the filter pattern of a metric filter against a sample of log event messages.
-
#test_transformer(params = {}) ⇒ Types::TestTransformerResponse
Use this operation to test a log transformer.
-
#untag_log_group(params = {}) ⇒ Struct
The UntagLogGroup operation is on the path to deprecation.
-
#untag_resource(params = {}) ⇒ Struct
Removes one or more tags from the specified resource.
-
#update_anomaly(params = {}) ⇒ Struct
Use this operation to suppress anomaly detection for a specified anomaly or pattern.
-
#update_delivery_configuration(params = {}) ⇒ Struct
Use this operation to update the configuration of a delivery to change either the S3 path pattern or the format of the delivered logs.
-
#update_log_anomaly_detector(params = {}) ⇒ Struct
Updates an existing log anomaly detector.
-
#update_scheduled_query(params = {}) ⇒ Types::UpdateScheduledQueryResponse
Updates an existing scheduled query with new configuration.
Instance Method Summary
-
#initialize(options) ⇒ Client
constructor
A new instance of Client.
Methods included from Aws::ClientStubs
#api_requests, #stub_data, #stub_responses
Methods inherited from Seahorse::Client::Base
add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins
Methods included from Seahorse::Client::HandlerBuilder
#handle, #handle_request, #handle_response
Constructor Details
#initialize(options) ⇒ Client
Returns a new instance of Client.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 491

def initialize(*args)
  super
end
Instance Method Details
#associate_kms_key(params = {}) ⇒ Struct
Associates the specified KMS key with either one log group in the account, or with all stored CloudWatch Logs query insights results in the account.
When you use AssociateKmsKey, you specify either the logGroupName
parameter or the resourceIdentifier parameter. You can't specify
both of those parameters in the same operation.
Specify the logGroupName parameter to cause log events ingested into that log group to be encrypted with that key. Only the log events ingested after the key is associated are encrypted with that key.
Associating a KMS key with a log group overrides any existing associations between the log group and a KMS key. After a KMS key is associated with a log group, all newly ingested data for the log group is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
Associating a key with a log group does not cause the results of queries of that log group to be encrypted with that key. To have query results encrypted with a KMS key, you must use an AssociateKmsKey operation with the resourceIdentifier parameter that specifies a query-result resource.
Specify the resourceIdentifier parameter with a query-result resource to use that key to encrypt the stored results of all future StartQuery operations in the account. The response from a GetQueryResults operation will still return the query results in plain text.
Even if you have not associated a key with your query results, the query results are encrypted when stored, using the default CloudWatch Logs method.
If you run a query from a monitoring account that queries logs in a source account, the query results key from the monitoring account, if any, is used.
If you delete the key that is used to encrypt log events or log group query results, then all the associated stored log events or query results that were encrypted with that key will be unencryptable and unusable.
It can take up to 5 minutes for this operation to take effect.
If you attempt to associate a KMS key with a log group but the KMS key
does not exist or the KMS key is disabled, you receive an
InvalidParameterException error.
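For example, a minimal call to associate a key with a single log group might look like the following sketch. The log group name and KMS key ARN are placeholder values.
client.associate_kms_key(
  log_group_name: "my-log-group",  # placeholder log group name
  kms_key_id: "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder KMS key ARN
)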
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 619

def associate_kms_key(params = {}, options = {})
  req = build_request(:associate_kms_key, params)
  req.send_request(options)
end
#associate_source_to_s3_table_integration(params = {}) ⇒ Types::AssociateSourceToS3TableIntegrationResponse
Associates a data source with an S3 Table Integration for query access in the 'logs' namespace. This enables querying log data using analytics engines that support Iceberg such as Amazon Athena, Amazon Redshift, and Apache Spark.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 659

def associate_source_to_s3_table_integration(params = {}, options = {})
  req = build_request(:associate_source_to_s3_table_integration, params)
  req.send_request(options)
end
#cancel_export_task(params = {}) ⇒ Struct
Cancels the specified export task.
The task must be in the PENDING or RUNNING state.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 683

def cancel_export_task(params = {}, options = {})
  req = build_request(:cancel_export_task, params)
  req.send_request(options)
end
#create_delivery(params = {}) ⇒ Types::CreateDeliveryResponse
Creates a delivery. A delivery is a connection between a logical delivery source and a logical delivery destination that you have already created.
Only some Amazon Web Services services support being configured as a delivery source using this operation. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.
A delivery destination can represent a log group in CloudWatch Logs, an Amazon S3 bucket, a delivery stream in Firehose, or X-Ray.
To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:
Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource.
Create a delivery destination, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination.
If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
Use CreateDelivery to create a delivery by pairing exactly one delivery source and one delivery destination.
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
To update an existing delivery configuration, use UpdateDeliveryConfiguration.
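As a sketch, pairing an existing delivery source with an existing delivery destination might look like this; the source name and destination ARN are placeholders for resources created earlier with PutDeliverySource and PutDeliveryDestination.
resp = client.create_delivery(
  delivery_source_name: "my-delivery-source",  # placeholder name from PutDeliverySource
  delivery_destination_arn: "arn:aws:logs:us-east-1:111122223333:delivery-destination:my-destination"  # placeholder ARN
)
resp.delivery.id  # ID of the newly created delivery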
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 803

def create_delivery(params = {}, options = {})
  req = build_request(:create_delivery, params)
  req.send_request(options)
end
#create_export_task(params = {}) ⇒ Types::CreateExportTaskResponse
Creates an export task so that you can efficiently export data from a
log group to an Amazon S3 bucket. When you perform a
CreateExportTask operation, you must use credentials that have
permission to write to the S3 bucket that you specify as the
destination.
Exporting log data to S3 buckets that are encrypted by KMS is supported. Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a retention period is also supported.
Exporting to S3 buckets that are encrypted with AES-256 is supported.
This is an asynchronous call. If all the required information is
provided, this operation initiates an export task and responds with
the ID of the task. After the task has started, you can use
DescribeExportTasks to get the status of the export task. Each
account can only have one active (RUNNING or PENDING) export task
at a time. To cancel an export task, use CancelExportTask.
You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate log data for each export task, specify a prefix to be used as the Amazon S3 key prefix for all exported objects.
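A minimal sketch of exporting the last 24 hours of a log group to an S3 bucket follows; the log group, bucket name, and prefix are placeholders, and timestamps are given in milliseconds since the epoch.
now_ms = (Time.now.to_f * 1000).to_i
resp = client.create_export_task(
  task_name: "daily-export",            # placeholder task name
  log_group_name: "my-log-group",       # placeholder log group
  from: now_ms - 24 * 60 * 60 * 1000,   # start of the export window (ms)
  to: now_ms,                           # end of the export window (ms)
  destination: "amzn-s3-demo-bucket",   # placeholder S3 bucket
  destination_prefix: "exported-logs"   # S3 key prefix for exported objects
)
resp.task_id  # poll with describe_export_tasks until the task completes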
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 911

def create_export_task(params = {}, options = {})
  req = build_request(:create_export_task, params)
  req.send_request(options)
end
#create_log_anomaly_detector(params = {}) ⇒ Types::CreateLogAnomalyDetectorResponse
Creates an anomaly detector that regularly scans one or more log groups and looks for patterns and anomalies in the logs.
An anomaly detector can help surface issues by automatically discovering anomalies in your log event traffic. An anomaly detector uses machine learning algorithms to scan log events and find patterns. A pattern is a shared text structure that recurs among your log fields. Patterns provide a useful tool for analyzing large sets of logs because a large number of log events can often be compressed into a few patterns.
The anomaly detector uses pattern recognition to find anomalies,
which are unusual log events. It uses the evaluationFrequency to
compare current log events and patterns with trained baselines.
Fields within a pattern are called tokens. Fields that vary within a
pattern, such as a request ID or timestamp, are referred to as
dynamic tokens and represented by <*>.
The following is an example of a pattern:
[INFO] Request time: <*> ms
This pattern represents log events like [INFO] Request time: 327 ms
and other similar log events that differ only by the number, in this
case 327. When the pattern is displayed, the different numbers are
replaced by <*>.
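A minimal creation sketch follows; the log group ARN and detector name are placeholders, and the evaluation frequency is one of the enumerated values accepted by the API.
resp = client.create_log_anomaly_detector(
  log_group_arn_list: ["arn:aws:logs:us-east-1:111122223333:log-group:my-log-group"],  # placeholder ARN
  detector_name: "my-anomaly-detector",  # placeholder name
  evaluation_frequency: "FIFTEEN_MIN"
)
resp.anomaly_detector_arn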
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1038

def create_log_anomaly_detector(params = {}, options = {})
  req = build_request(:create_log_anomaly_detector, params)
  req.send_request(options)
end
#create_log_group(params = {}) ⇒ Struct
Creates a log group with the specified name. You can create up to 1,000,000 log groups per Region per account.
You must use the following guidelines when naming a log group:
Log group names must be unique within a Region for an Amazon Web Services account.
Log group names can be between 1 and 512 characters long.
Log group names consist of the following characters: a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), '/' (forward slash), '.' (period), and '#' (number sign)
Log group names can't start with the string aws/
When you create a log group, by default the log events in the log group do not expire. To set a retention policy so that events expire and are deleted after a specified time, use PutRetentionPolicy.
If you associate an KMS key with the log group, ingested data is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.
If you attempt to associate a KMS key with the log group but the KMS
key does not exist or the KMS key is disabled, you receive an
InvalidParameterException error.
CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group. For more information, see Using Symmetric and Asymmetric Keys.
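For example, creating a log group and then giving it a 30-day retention policy might look like the following sketch; the name and tag values are placeholders.
client.create_log_group(
  log_group_name: "my-log-group",  # placeholder name
  tags: { "Team" => "platform" }   # optional tags
)
client.put_retention_policy(
  log_group_name: "my-log-group",
  retention_in_days: 30
)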
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1160

def create_log_group(params = {}, options = {})
  req = build_request(:create_log_group, params)
  req.send_request(options)
end
#create_log_stream(params = {}) ⇒ Struct
Creates a log stream for the specified log group. A log stream is a sequence of log events that originate from a single source, such as an application instance or a resource that is being monitored.
There is no limit on the number of log streams that you can create for
a log group. There is a limit of 50 TPS on CreateLogStream
operations, after which transactions are throttled.
You must use the following guidelines when naming a log stream:
Log stream names must be unique within the log group.
Log stream names can be between 1 and 512 characters long.
Don't use ':' (colon) or '*' (asterisk) characters.
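A minimal sketch, using placeholder names that follow the guidelines above:
client.create_log_stream(
  log_group_name: "my-log-group",      # placeholder log group
  log_stream_name: "app-instance-001"  # placeholder stream name, unique within the log group
)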
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1200

def create_log_stream(params = {}, options = {})
  req = build_request(:create_log_stream, params)
  req.send_request(options)
end
#create_scheduled_query(params = {}) ⇒ Types::CreateScheduledQueryResponse
Creates a scheduled query that runs CloudWatch Logs Insights queries at regular intervals. Scheduled queries enable proactive monitoring by automatically executing queries to detect patterns and anomalies in your log data. Query results can be delivered to Amazon S3 for analysis or further processing.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1313

def create_scheduled_query(params = {}, options = {})
  req = build_request(:create_scheduled_query, params)
  req.send_request(options)
end
#delete_account_policy(params = {}) ⇒ Struct
Deletes a CloudWatch Logs account policy. This stops the account-wide policy from applying to log groups or data sources in the account. If you delete a data protection policy or subscription filter policy, any log-group level policies of those types remain in effect. This operation supports deletion of data source-based field index policies, including facet configurations, in addition to log group-based policies.
To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are deleting.
To delete a data protection policy, you must have the logs:DeleteDataProtectionPolicy and logs:DeleteAccountPolicy permissions.
To delete a subscription filter policy, you must have the logs:DeleteSubscriptionFilter and logs:DeleteAccountPolicy permissions.
To delete a transformer policy, you must have the logs:DeleteTransformer and logs:DeleteAccountPolicy permissions.
To delete a field index policy, you must have the logs:DeleteIndexPolicy and logs:DeleteAccountPolicy permissions.
If you delete a field index policy that included facet configurations, those facets will no longer be available for interactive exploration in the CloudWatch Logs Insights console. However, facet data is retained for up to 30 days.
If you delete a field index policy, the indexing of the log events that happened before you deleted the policy will still be used for up to 30 days to improve CloudWatch Logs Insights queries.
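For example, deleting an account-level data protection policy might look like the following sketch; the policy name is a placeholder.
client.delete_account_policy(
  policy_name: "account-data-protection",  # placeholder policy name
  policy_type: "DATA_PROTECTION_POLICY"
)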
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1371

def delete_account_policy(params = {}, options = {})
  req = build_request(:delete_account_policy, params)
  req.send_request(options)
end
#delete_data_protection_policy(params = {}) ⇒ Struct
Deletes the data protection policy from the specified log group.
For more information about data protection policies, see PutDataProtectionPolicy.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1401

def delete_data_protection_policy(params = {}, options = {})
  req = build_request(:delete_data_protection_policy, params)
  req.send_request(options)
end
#delete_delivery(params = {}) ⇒ Struct
Deletes a delivery. A delivery is a connection between a logical delivery source and a logical delivery destination. Deleting a delivery only deletes the connection between the delivery source and delivery destination. It does not delete the delivery destination or the delivery source.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1432

def delete_delivery(params = {}, options = {})
  req = build_request(:delete_delivery, params)
  req.send_request(options)
end
#delete_delivery_destination(params = {}) ⇒ Struct
Deletes a delivery destination. A delivery is a connection between a logical delivery source and a logical delivery destination.
You can't delete a delivery destination if any current deliveries are
associated with it. To find whether any deliveries are associated with
this delivery destination, use the DescribeDeliveries operation
and check the deliveryDestinationArn field in the results.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1470

def delete_delivery_destination(params = {}, options = {})
  req = build_request(:delete_delivery_destination, params)
  req.send_request(options)
end
#delete_delivery_destination_policy(params = {}) ⇒ Struct
Deletes a delivery destination policy. For more information about these policies, see PutDeliveryDestinationPolicy.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1498

def delete_delivery_destination_policy(params = {}, options = {})
  req = build_request(:delete_delivery_destination_policy, params)
  req.send_request(options)
end
#delete_delivery_source(params = {}) ⇒ Struct
Deletes a delivery source. A delivery is a connection between a logical delivery source and a logical delivery destination.
You can't delete a delivery source if any current deliveries are
associated with it. To find whether any deliveries are associated with
this delivery source, use the DescribeDeliveries operation and
check the deliverySourceName field in the results.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1530

def delete_delivery_source(params = {}, options = {})
  req = build_request(:delete_delivery_source, params)
  req.send_request(options)
end
#delete_destination(params = {}) ⇒ Struct
Deletes the specified destination, and eventually disables all the subscription filters that publish to it. This operation does not delete the physical resource encapsulated by the destination.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1554

def delete_destination(params = {}, options = {})
  req = build_request(:delete_destination, params)
  req.send_request(options)
end
#delete_index_policy(params = {}) ⇒ Struct
Deletes a log-group level field index policy that was applied to a single log group. The indexing of the log events that happened before you delete the policy will still be used for as many as 30 days to improve CloudWatch Logs Insights queries.
If the deleted policy included facet configurations, those facets will no longer be available for interactive exploration in the CloudWatch Logs Insights console for this log group. However, facet data is retained for up to 30 days.
You can't use this operation to delete an account-level index policy. Instead, use DeleteAccountPolicy.
If you delete a log-group level field index policy and there is an account-level field index policy, in a few minutes the log group begins using that account-wide policy to index new incoming log events. This operation only affects log group-level policies, including any facet configurations, and preserves any data source-based account policies that may apply to the log group.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1599

def delete_index_policy(params = {}, options = {})
  req = build_request(:delete_index_policy, params)
  req.send_request(options)
end
#delete_integration(params = {}) ⇒ Struct
Deletes the integration between CloudWatch Logs and OpenSearch
Service. If your integration has active vended logs dashboards, you
must specify true for the force parameter, otherwise the operation
will fail. If you delete the integration by setting force to true,
all your vended logs dashboards powered by OpenSearch Service will be
deleted and the data that was on them will no longer be accessible.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1638

def delete_integration(params = {}, options = {})
  req = build_request(:delete_integration, params)
  req.send_request(options)
end
#delete_log_anomaly_detector(params = {}) ⇒ Struct
Deletes the specified CloudWatch Logs anomaly detector.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1666

def delete_log_anomaly_detector(params = {}, options = {})
  req = build_request(:delete_log_anomaly_detector, params)
  req.send_request(options)
end
#delete_log_group(params = {}) ⇒ Struct
Deletes the specified log group and permanently deletes all the archived log events associated with the log group.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1689

def delete_log_group(params = {}, options = {})
  req = build_request(:delete_log_group, params)
  req.send_request(options)
end
#delete_log_stream(params = {}) ⇒ Struct
Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1716

def delete_log_stream(params = {}, options = {})
  req = build_request(:delete_log_stream, params)
  req.send_request(options)
end
#delete_metric_filter(params = {}) ⇒ Struct
Deletes the specified metric filter.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1742

def delete_metric_filter(params = {}, options = {})
  req = build_request(:delete_metric_filter, params)
  req.send_request(options)
end
#delete_query_definition(params = {}) ⇒ Types::DeleteQueryDefinitionResponse
Deletes a saved CloudWatch Logs Insights query definition. A query definition contains details about a saved CloudWatch Logs Insights query.
Each DeleteQueryDefinition operation can delete one query
definition.
You must have the logs:DeleteQueryDefinition permission to be able
to perform this operation.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1784

def delete_query_definition(params = {}, options = {})
  req = build_request(:delete_query_definition, params)
  req.send_request(options)
end
#delete_resource_policy(params = {}) ⇒ Struct
Deletes a resource policy from this account. This revokes the access of the identities in that policy to put log events to this account.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1817

def delete_resource_policy(params = {}, options = {})
  req = build_request(:delete_resource_policy, params)
  req.send_request(options)
end
#delete_retention_policy(params = {}) ⇒ Struct
Deletes the specified retention policy.
Log events do not expire if they belong to log groups without a retention policy.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1842

def delete_retention_policy(params = {}, options = {})
  req = build_request(:delete_retention_policy, params)
  req.send_request(options)
end
#delete_scheduled_query(params = {}) ⇒ Struct
Deletes a scheduled query and stops all future executions. This operation also removes any configured actions and associated resources.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1866

def delete_scheduled_query(params = {}, options = {})
  req = build_request(:delete_scheduled_query, params)
  req.send_request(options)
end
#delete_subscription_filter(params = {}) ⇒ Struct
Deletes the specified subscription filter.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1892

def delete_subscription_filter(params = {}, options = {})
  req = build_request(:delete_subscription_filter, params)
  req.send_request(options)
end
#delete_transformer(params = {}) ⇒ Struct
Deletes the log transformer for the specified log group. As soon as you do this, the transformation of incoming log events according to that transformer stops. If this account has an account-level transformer that applies to this log group, the log group begins using that account-level transformer when this log-group level transformer is deleted.
After you delete a transformer, be sure to edit any metric filters or subscription filters that relied on the transformed versions of the log events.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 1925

def delete_transformer(params = {}, options = {})
  req = build_request(:delete_transformer, params)
  req.send_request(options)
end
#describe_account_policies(params = {}) ⇒ Types::DescribeAccountPoliciesResponse
Returns a list of all CloudWatch Logs account policies in the account.
To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are retrieving information for.
To see data protection policies, you must have the logs:GetDataProtectionPolicy and logs:DescribeAccountPolicies permissions.
To see subscription filter policies, you must have the logs:DescribeSubscriptionFilters and logs:DescribeAccountPolicies permissions.
To see transformer policies, you must have the logs:GetTransformer and logs:DescribeAccountPolicies permissions.
To see field index policies, you must have the logs:DescribeIndexPolicies and logs:DescribeAccountPolicies permissions.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2003

def describe_account_policies(params = {}, options = {})
  req = build_request(:describe_account_policies, params)
  req.send_request(options)
end
#describe_configuration_templates(params = {}) ⇒ Types::DescribeConfigurationTemplatesResponse
Use this operation to return the valid and default values that are used when creating delivery sources, delivery destinations, and deliveries. For more information about deliveries, see CreateDelivery.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2091

def describe_configuration_templates(params = {}, options = {})
  req = build_request(:describe_configuration_templates, params)
  req.send_request(options)
end
#describe_deliveries(params = {}) ⇒ Types::DescribeDeliveriesResponse
Retrieves a list of the deliveries that have been created in the account.
A delivery is a connection between a delivery source and a delivery destination.
A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, Firehose, or X-Ray. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2158

def describe_deliveries(params = {}, options = {})
  req = build_request(:describe_deliveries, params)
  req.send_request(options)
end
#describe_delivery_destinations(params = {}) ⇒ Types::DescribeDeliveryDestinationsResponse
Retrieves a list of the delivery destinations that have been created in the account.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2204

def describe_delivery_destinations(params = {}, options = {})
  req = build_request(:describe_delivery_destinations, params)
  req.send_request(options)
end
#describe_delivery_sources(params = {}) ⇒ Types::DescribeDeliverySourcesResponse
Retrieves a list of the delivery sources that have been created in the account.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2251

def describe_delivery_sources(params = {}, options = {})
  req = build_request(:describe_delivery_sources, params)
  req.send_request(options)
end
#describe_destinations(params = {}) ⇒ Types::DescribeDestinationsResponse
Lists all your destinations. The results are ASCII-sorted by destination name.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2301

def describe_destinations(params = {}, options = {})
  req = build_request(:describe_destinations, params)
  req.send_request(options)
end
#describe_export_tasks(params = {}) ⇒ Types::DescribeExportTasksResponse
Lists the specified export tasks. You can list all your export tasks or filter the results based on task ID or task status.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2359

def describe_export_tasks(params = {}, options = {})
  req = build_request(:describe_export_tasks, params)
  req.send_request(options)
end
#describe_field_indexes(params = {}) ⇒ Types::DescribeFieldIndexesResponse
Returns a list of custom and default field indexes which are discovered in log data. For more information about field index policies, see PutIndexPolicy.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2407

def describe_field_indexes(params = {}, options = {})
  req = build_request(:describe_field_indexes, params)
  req.send_request(options)
end
#describe_index_policies(params = {}) ⇒ Types::DescribeIndexPoliciesResponse
Returns the field index policies of the specified log group. For more information about field index policies, see PutIndexPolicy.
If a specified log group has a log-group level index policy, that policy is returned by this operation.
If a specified log group doesn't have a log-group level index policy, but an account-wide index policy applies to it, that account-wide policy is returned by this operation.
To find information about only account-level policies, use DescribeAccountPolicies instead.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2464

def describe_index_policies(params = {}, options = {})
  req = build_request(:describe_index_policies, params)
  req.send_request(options)
end
#describe_log_groups(params = {}) ⇒ Types::DescribeLogGroupsResponse
Returns information about log groups, including data sources that ingest into each log group. You can return all your log groups or filter the results by prefix. The results are ASCII-sorted by log group name.
CloudWatch Logs doesn't support IAM policies that control access to
the DescribeLogGroups action by using the aws:ResourceTag/key-name
condition key. Other CloudWatch Logs actions do support the use of
the aws:ResourceTag/key-name condition key to control access. For
more information about using tags to control access, see Controlling
access to Amazon Web Services resources using tags.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
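Because the response is pageable, the client can enumerate every page transparently. The prefix below is a placeholder.
client.describe_log_groups(log_group_name_prefix: "/aws/lambda/").each do |page|
  page.log_groups.each do |group|
    puts "#{group.log_group_name}: #{group.stored_bytes} bytes stored"
  end
end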
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2619

def describe_log_groups(params = {}, options = {})
  req = build_request(:describe_log_groups, params)
  req.send_request(options)
end
#describe_log_streams(params = {}) ⇒ Types::DescribeLogStreamsResponse
Lists the log streams for the specified log group. You can list all the log streams or filter the results by prefix. You can also control how the results are ordered.
You can specify the log group to search by using either
logGroupIdentifier or logGroupName. You must include one of these
two parameters, but you can't include both.
This operation has a limit of 25 transactions per second, after which transactions are throttled.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
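For example, listing the five most recently active streams in a log group might look like this sketch; the log group name is a placeholder.
resp = client.describe_log_streams(
  log_group_name: "my-log-group",  # placeholder; log_group_identifier can be used instead
  order_by: "LastEventTime",
  descending: true,
  limit: 5
)
resp.log_streams.each { |stream| puts stream.log_stream_name }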
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2731

def describe_log_streams(params = {}, options = {})
  req = build_request(:describe_log_streams, params)
  req.send_request(options)
end
#describe_metric_filters(params = {}) ⇒ Types::DescribeMetricFiltersResponse
Lists the specified metric filters. You can list all of the metric filters or filter the results by log name, prefix, metric name, or metric namespace. The results are ASCII-sorted by filter name.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2808

def describe_metric_filters(params = {}, options = {})
  req = build_request(:describe_metric_filters, params)
  req.send_request(options)
end
#describe_queries(params = {}) ⇒ Types::DescribeQueriesResponse
Returns a list of CloudWatch Logs Insights queries that are scheduled, running, or have been run recently in this account. You can request all queries or limit it to queries of a specific log group or queries with a certain status.
This operation includes both interactive queries started directly by users and automated queries executed by scheduled query configurations. Scheduled query executions appear in the results alongside manually initiated queries, providing visibility into all query activity in your account.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2873

def describe_queries(params = {}, options = {})
  req = build_request(:describe_queries, params)
  req.send_request(options)
end
#describe_query_definitions(params = {}) ⇒ Types::DescribeQueryDefinitionsResponse
This operation returns a paginated list of your saved CloudWatch Logs Insights query definitions. You can retrieve query definitions from the current account or from a source account that is linked to the current account.
You can use the queryDefinitionNamePrefix parameter to limit the
results to only the query definitions that have names that start with
a certain string.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2938

def describe_query_definitions(params = {}, options = {})
  req = build_request(:describe_query_definitions, params)
  req.send_request(options)
end
#describe_resource_policies(params = {}) ⇒ Types::DescribeResourcePoliciesResponse
Lists the resource policies in this account.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 2990

def describe_resource_policies(params = {}, options = {})
  req = build_request(:describe_resource_policies, params)
  req.send_request(options)
end
#describe_subscription_filters(params = {}) ⇒ Types::DescribeSubscriptionFiltersResponse
Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3050

def describe_subscription_filters(params = {}, options = {})
  req = build_request(:describe_subscription_filters, params)
  req.send_request(options)
end
#disassociate_kms_key(params = {}) ⇒ Struct
Disassociates the specified KMS key from the specified log group or from all CloudWatch Logs Insights query results in the account.
When you use DisassociateKmsKey, you specify either the
logGroupName parameter or the resourceIdentifier parameter. You
can't specify both of those parameters in the same operation.
Specify the logGroupName parameter to stop using the KMS key to encrypt future log events ingested and stored in the log group. Instead, they will be encrypted with the default CloudWatch Logs method. The log events that were ingested while the key was associated with the log group are still encrypted with that key. Therefore, CloudWatch Logs will need permissions for the key whenever that data is accessed.
Specify the resourceIdentifier parameter with the query-result resource to stop using the KMS key to encrypt the results of all future StartQuery operations in the account. They will instead be encrypted with the default CloudWatch Logs method. The results from queries that ran while the key was associated with the account are still encrypted with that key. Therefore, CloudWatch Logs will need permissions for the key whenever that data is accessed.
It can take up to 5 minutes for this operation to take effect.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3131

def disassociate_kms_key(params = {}, options = {})
  req = build_request(:disassociate_kms_key, params)
  req.send_request(options)
end
#disassociate_source_from_s3_table_integration(params = {}) ⇒ Types::DisassociateSourceFromS3TableIntegrationResponse
Disassociates a data source from an S3 Table Integration, removing query access and deleting all associated data from the integration.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3161

def disassociate_source_from_s3_table_integration(params = {}, options = {})
  req = build_request(:disassociate_source_from_s3_table_integration, params)
  req.send_request(options)
end
#filter_log_events(params = {}) ⇒ Types::FilterLogEventsResponse
Lists log events from the specified log group. You can list all the log events or filter the results using one or more of the following:
A filter pattern
A time range
The log stream name, or a log stream name prefix that matches multiple log streams
You must have the logs:FilterLogEvents permission to perform this
operation.
You can specify the log group to search by using either
logGroupIdentifier or logGroupName. You must include one of these
two parameters, but you can't include both.
FilterLogEvents is a paginated operation. Each page returned can
contain up to 1 MB of log events or up to 10,000 log events. A
returned page might only be partially full, or even empty. For
example, if the result of a query would return 15,000 log events, the
first page isn't guaranteed to have 10,000 log events even if they
all fit into 1 MB.
Partially full or empty pages don't necessarily mean that pagination
is finished. If the results include a nextToken, there might be more
log events available. You can return these additional log events by
providing the nextToken in a subsequent FilterLogEvents operation.
If the results don't include a nextToken, then pagination is
finished.
Specifying the limit parameter only guarantees that a single page
doesn't return more log events than the specified limit, but it might
return fewer events than the limit. This is the expected API behavior.
The returned log events are sorted by event timestamp, the timestamp
when the event was ingested by CloudWatch Logs, and the ID of the
PutLogEvents request.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.
The FilterLogEvents operation returns only the original versions of log events, before they were transformed. To view the transformed versions, you must use a CloudWatch Logs query.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
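As a sketch, searching the last hour of a log group for events containing the term ERROR and iterating over all returned pages might look like this; the log group name is a placeholder and timestamps are in milliseconds.
start_ms = ((Time.now.to_f - 3600) * 1000).to_i
client.filter_log_events(
  log_group_name: "my-log-group",  # placeholder log group
  filter_pattern: "ERROR",         # simple term filter
  start_time: start_ms
).each do |page|
  page.events.each do |event|
    puts "#{Time.at(event.timestamp / 1000.0)} #{event.message}"
  end
end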
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3342

def filter_log_events(params = {}, options = {})
  req = build_request(:filter_log_events, params)
  req.send_request(options)
end
#get_data_protection_policy(params = {}) ⇒ Types::GetDataProtectionPolicyResponse
Returns information about a log group data protection policy.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3375

def get_data_protection_policy(params = {}, options = {})
  req = build_request(:get_data_protection_policy, params)
  req.send_request(options)
end
#get_delivery(params = {}) ⇒ Types::GetDeliveryResponse
Returns complete information about one logical delivery. A delivery is a connection between a delivery source and a delivery destination.
A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services.
You need to specify the delivery id in this operation. You can find
the IDs of the deliveries in your account with the
DescribeDeliveries operation.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3433

def get_delivery(params = {}, options = {})
  req = build_request(:get_delivery, params)
  req.send_request(options)
end
#get_delivery_destination(params = {}) ⇒ Types::GetDeliveryDestinationResponse
Retrieves complete information about one delivery destination.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3467

def get_delivery_destination(params = {}, options = {})
  req = build_request(:get_delivery_destination, params)
  req.send_request(options)
end
#get_delivery_destination_policy(params = {}) ⇒ Types::GetDeliveryDestinationPolicyResponse
Retrieves the delivery destination policy assigned to the delivery destination that you specify. For more information about delivery destinations and their policies, see PutDeliveryDestinationPolicy.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3503

def get_delivery_destination_policy(params = {}, options = {})
  req = build_request(:get_delivery_destination_policy, params)
  req.send_request(options)
end
#get_delivery_source(params = {}) ⇒ Types::GetDeliverySourceResponse
Retrieves complete information about one delivery source.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3538

def get_delivery_source(params = {}, options = {})
  req = build_request(:get_delivery_source, params)
  req.send_request(options)
end
#get_integration(params = {}) ⇒ Types::GetIntegrationResponse
Returns information about one integration between CloudWatch Logs and OpenSearch Service.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3604

def get_integration(params = {}, options = {})
  req = build_request(:get_integration, params)
  req.send_request(options)
end
#get_log_anomaly_detector(params = {}) ⇒ Types::GetLogAnomalyDetectorResponse
Retrieves information about the log anomaly detector that you specify.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3656

def get_log_anomaly_detector(params = {}, options = {})
  req = build_request(:get_log_anomaly_detector, params)
  req.send_request(options)
end
#get_log_events(params = {}) ⇒ Types::GetLogEventsResponse
Lists log events from the specified log stream. You can list all of the log events or filter using a time range.
GetLogEvents is a paginated operation. Each page returned can
contain up to 1 MB of log events or up to 10,000 log events. A
returned page might only be partially full, or even empty. For
example, if the result of a query would return 15,000 log events, the
first page isn't guaranteed to have 10,000 log events even if they
all fit into 1 MB.
Partially full or empty pages don't necessarily mean that pagination
is finished. As long as the nextBackwardToken or nextForwardToken
returned is NOT equal to the nextToken that you passed into the API
call, there might be more log events available. The token that you use
depends on the direction you want to move in along the log stream. The
returned tokens are never null.
If you set startFromHead to true and you don't include endTime
in your request, you can end up in a situation where the pagination
doesn't terminate. This can happen when new log events are being
added to the target log streams faster than they are being read. This
situation is a good use case for the CloudWatch Logs Live Tail
feature.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.
You can specify the log group to search by using either
logGroupIdentifier or logGroupName. You must include one of these
two parameters, but you can't include both.
If you are using log transformation, the GetLogEvents operation
returns only the original versions of log events, before they were
transformed. To view the transformed versions, you must use a
CloudWatch Logs query.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
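For example, a minimal sketch of reading a stream from the head and following nextForwardToken until it stops changing; the log group and stream names are placeholders:
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

token = nil
loop do
  request = {
    log_group_name: "/my/app",        # hypothetical log group
    log_stream_name: "instance-1",    # hypothetical log stream
    start_from_head: true
  }
  request[:next_token] = token if token

  resp = logs.get_log_events(request)
  resp.events.each { |e| puts "#{Time.at(e.timestamp / 1000.0)} #{e.message}" }

  # The forward token is never nil; stop when it stops changing.
  break if resp.next_forward_token == token
  token = resp.next_forward_token
end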
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3802

def get_log_events(params = {}, options = {})
  req = build_request(:get_log_events, params)
  req.send_request(options)
end
#get_log_fields(params = {}) ⇒ Types::GetLogFieldsResponse
Discovers available fields for a specific data source and type. The response includes any field modifications introduced through pipelines, such as new fields or changed field types.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3840

def get_log_fields(params = {}, options = {})
  req = build_request(:get_log_fields, params)
  req.send_request(options)
end
#get_log_group_fields(params = {}) ⇒ Types::GetLogGroupFieldsResponse
Returns a list of the fields that are included in log events in the specified log group. Includes the percentage of log events that contain each field. The search is limited to a time period that you specify.
This operation is used for discovering fields within log group events. For discovering fields across data sources, use the GetLogFields operation.
You can specify the log group to search by using either
logGroupIdentifier or logGroupName. You must specify one of these
parameters, but you can't specify both.
In the results, fields that start with @ are fields generated by
CloudWatch Logs. For example, @timestamp is the timestamp of each
log event. For more information about the fields that are generated by
CloudWatch Logs, see Supported Logs and Discovered Fields.
The response results are sorted by the frequency percentage, starting with the highest percentage.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.
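For example, a minimal sketch that lists the discovered fields and their frequency for a hypothetical log group, centered on the current time:
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

resp = logs.get_log_group_fields(
  log_group_name: "/my/app",   # hypothetical log group
  time: Time.now.to_i          # epoch seconds; the search window is centered here
)

resp.log_group_fields.each do |field|
  puts format("%-40s %d%%", field.name, field.percent)
end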
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 3925

def get_log_group_fields(params = {}, options = {})
  req = build_request(:get_log_group_fields, params)
  req.send_request(options)
end
#get_log_object(params = {}) ⇒ Types::GetLogObjectResponse
Retrieves a large logging object (LLO) and streams it back. This API
is used to fetch the content of large portions of log events that have
been ingested through the PutOpenTelemetryLogs API. When log events
contain fields that would cause the total event size to exceed 1MB,
CloudWatch Logs automatically processes up to 10 fields, starting with
the largest fields. Each field is truncated as needed to keep the
total event size as close to 1MB as possible. The excess portions are
stored as Large Log Objects (LLOs) and these fields are processed
separately and LLO reference system fields (in the format
@ptr.$[path.to.field]) are added. The path in the reference field
reflects the original JSON structure where the large field was
located. For example, this could be @ptr.$['input']['message'],
@ptr.$['AAA']['BBB']['CCC']['DDD'], @ptr.$['AAA'], or any other
path matching your log structure.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4081

def get_log_object(params = {}, options = {})
  params = params.dup
  event_stream_handler = case handler = params.delete(:event_stream_handler)
    when EventStreams::GetLogObjectResponseStream then handler
    when Proc then EventStreams::GetLogObjectResponseStream.new.tap(&handler)
    when nil then EventStreams::GetLogObjectResponseStream.new
    else
      msg = "expected :event_stream_handler to be a block or "\
            "instance of Aws::CloudWatchLogs::EventStreams::GetLogObjectResponseStream"\
            ", got `#{handler.inspect}` instead"
      raise ArgumentError, msg
    end

  yield(event_stream_handler) if block_given?

  req = build_request(:get_log_object, params)

  req.context[:event_stream_handler] = event_stream_handler
  req.handlers.add(Aws::Binary::DecodeHandler, priority: 95)

  req.send_request(options)
end
#get_log_record(params = {}) ⇒ Types::GetLogRecordResponse
Retrieves all of the fields and values of a single log event. All
fields are retrieved, even if the original query that produced the
logRecordPointer retrieved only a subset of fields. Fields are
returned as field name/field value pairs.
The full unparsed log event is returned within @message.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4145

def get_log_record(params = {}, options = {})
  req = build_request(:get_log_record, params)
  req.send_request(options)
end
#get_query_results(params = {}) ⇒ Types::GetQueryResultsResponse
Returns the results from the specified query.
Only the fields requested in the query are returned, along with a
@ptr field, which is the identifier for the log record. You can use
the value of @ptr in a GetLogRecord operation to get the full
log record.
GetQueryResults does not start running a query. To run a query, use
StartQuery. For more information about how long results of
previous queries are available, see CloudWatch Logs quotas.
If the value of the Status field in the output is Running, this
operation returns only partial results. If you see a value of
Scheduled or Running for the status, you can retry the operation
later to see the final results.
This operation is used both for retrieving results from interactive
queries and from automated scheduled query executions. Scheduled
queries use GetQueryResults internally to retrieve query results for
processing and delivery to configured destinations.
If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start queries in linked source accounts. For more information, see CloudWatch cross-account observability.
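For example, a minimal sketch that starts a query, polls until it leaves the Scheduled and Running states, and then resolves each row's @ptr with GetLogRecord; the log group name and query string are placeholders:
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

query_id = logs.start_query(
  log_group_names: ["/my/app"],                         # hypothetical log group
  start_time: (Time.now - 3600).to_i,                   # epoch seconds
  end_time: Time.now.to_i,
  query_string: "fields @timestamp, @message | limit 5"
).query_id

# Poll until the query leaves the Scheduled and Running states.
resp = logs.get_query_results(query_id: query_id)
while %w[Scheduled Running].include?(resp.status)
  sleep 1
  resp = logs.get_query_results(query_id: query_id)
end

# Each row is a list of field/value pairs; @ptr identifies the full log record.
resp.results.each do |row|
  ptr = row.find { |field| field.field == "@ptr" }&.value
  next unless ptr
  record = logs.get_log_record(log_record_pointer: ptr).log_record
  puts record["@message"]
end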
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4220

def get_query_results(params = {}, options = {})
  req = build_request(:get_query_results, params)
  req.send_request(options)
end
#get_scheduled_query(params = {}) ⇒ Types::GetScheduledQueryResponse
Retrieves details about a specific scheduled query, including its configuration, execution status, and metadata.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4285

def get_scheduled_query(params = {}, options = {})
  req = build_request(:get_scheduled_query, params)
  req.send_request(options)
end
#get_scheduled_query_history(params = {}) ⇒ Types::GetScheduledQueryHistoryResponse
Retrieves the execution history of a scheduled query within a specified time range, including query results and destination processing status.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4356

def get_scheduled_query_history(params = {}, options = {})
  req = build_request(:get_scheduled_query_history, params)
  req.send_request(options)
end
#get_transformer(params = {}) ⇒ Types::GetTransformerResponse
Returns the information about the log transformer associated with this log group.
This operation returns data only for transformers created at the log group level. To get information for an account-level transformer, use DescribeAccountPolicies.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4474

def get_transformer(params = {}, options = {})
  req = build_request(:get_transformer, params)
  req.send_request(options)
end
#list_aggregate_log_group_summaries(params = {}) ⇒ Types::ListAggregateLogGroupSummariesResponse
Returns an aggregate summary of all log groups in the Region grouped by specified data source characteristics. Supports optional filtering by log group class, name patterns, and data sources. If you perform this action in a monitoring account, you can also return aggregated summaries of log groups from source accounts that are linked to the monitoring account. For more information about using cross-account observability to set up monitoring accounts and source accounts, see CloudWatch cross-account observability.
The operation aggregates log groups by data source name and type and optionally format, providing counts of log groups that share these characteristics. The operation paginates results. By default, it returns up to 50 results and includes a token to retrieve more results.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4590

def list_aggregate_log_group_summaries(params = {}, options = {})
  req = build_request(:list_aggregate_log_group_summaries, params)
  req.send_request(options)
end
#list_anomalies(params = {}) ⇒ Types::ListAnomaliesResponse
Returns a list of anomalies that log anomaly detectors have found. For details about the structure format of each anomaly object that is returned, see the example in this section.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4669

def list_anomalies(params = {}, options = {})
  req = build_request(:list_anomalies, params)
  req.send_request(options)
end
#list_integrations(params = {}) ⇒ Types::ListIntegrationsResponse
Returns a list of integrations between CloudWatch Logs and other services in this account. Currently, only one integration can be created in an account, and this integration must be with OpenSearch Service.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4714

def list_integrations(params = {}, options = {})
  req = build_request(:list_integrations, params)
  req.send_request(options)
end
#list_log_anomaly_detectors(params = {}) ⇒ Types::ListLogAnomalyDetectorsResponse
Retrieves a list of the log anomaly detectors in the account.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4768

def list_log_anomaly_detectors(params = {}, options = {})
  req = build_request(:list_log_anomaly_detectors, params)
  req.send_request(options)
end
#list_log_groups(params = {}) ⇒ Types::ListLogGroupsResponse
Returns a list of log groups in the Region in your account. If you are performing this action in a monitoring account, you can choose to also return log groups from source accounts that are linked to the monitoring account. For more information about using cross-account observability to set up monitoring accounts and source accounts, see CloudWatch cross-account observability.
You can optionally filter the list by log group class, by using regular expressions in your request to match strings in the log group names, by using the fieldIndexes parameter to filter log groups based on which field indexes are configured, by using the dataSources parameter to filter log groups by data source types, and by using the fieldIndexNames parameter to filter by specific field index names.
This operation is paginated. By default, your first use of this operation returns 50 results, and includes a token to use in a subsequent operation to return more results.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4893

def list_log_groups(params = {}, options = {})
  req = build_request(:list_log_groups, params)
  req.send_request(options)
end
#list_log_groups_for_query(params = {}) ⇒ Types::ListLogGroupsForQueryResponse
Returns a list of the log groups that were analyzed during a single
CloudWatch Logs Insights query. This can be useful for queries that
use log group name prefixes or the filterIndex command, because the
log groups are dynamically selected in these cases.
For more information about field indexes, see Create field indexes to improve query performance and reduce costs.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 4950

def list_log_groups_for_query(params = {}, options = {})
  req = build_request(:list_log_groups_for_query, params)
  req.send_request(options)
end
#list_scheduled_queries(params = {}) ⇒ Types::ListScheduledQueriesResponse
Lists all scheduled queries in your account and region. You can filter results by state to show only enabled or disabled queries.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5005

def list_scheduled_queries(params = {}, options = {})
  req = build_request(:list_scheduled_queries, params)
  req.send_request(options)
end
#list_sources_for_s3_table_integration(params = {}) ⇒ Types::ListSourcesForS3TableIntegrationResponse
Returns a list of data source associations for a specified S3 Table Integration, showing which data sources are currently associated for query access.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5056

def list_sources_for_s3_table_integration(params = {}, options = {})
  req = build_request(:list_sources_for_s3_table_integration, params)
  req.send_request(options)
end
#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse
Displays the tags associated with a CloudWatch Logs resource. Currently, log groups and destinations support tagging.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5099

def list_tags_for_resource(params = {}, options = {})
  req = build_request(:list_tags_for_resource, params)
  req.send_request(options)
end
#list_tags_log_group(params = {}) ⇒ Types::ListTagsLogGroupResponse
The ListTagsLogGroup operation is on the path to deprecation. We recommend that you use ListTagsForResource instead.
Lists the tags for the specified log group.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5135

def list_tags_log_group(params = {}, options = {})
  req = build_request(:list_tags_log_group, params)
  req.send_request(options)
end
#put_account_policy(params = {}) ⇒ Types::PutAccountPolicyResponse
Creates an account-level data protection policy, subscription filter policy, field index policy, transformer policy, or metric extraction policy that applies to all log groups or a subset of log groups in the account.
For field index policies, you can configure indexed fields as facets to enable interactive exploration of your logs. Facets provide value distributions and counts for indexed fields in the CloudWatch Logs Insights console without requiring query execution. For more information, see Use facets to group and explore logs.
To use this operation, you must be signed on with the correct permissions depending on the type of policy that you are creating.
To create a data protection policy, you must have the logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.
To create a subscription filter policy, you must have the logs:PutSubscriptionFilter and logs:PutAccountPolicy permissions.
To create a transformer policy, you must have the logs:PutTransformer and logs:PutAccountPolicy permissions.
To create a field index policy, you must have the logs:PutIndexPolicy and logs:PutAccountPolicy permissions.
To configure facets for field index policies, you must have the logs:PutIndexPolicy and logs:PutAccountPolicy permissions.
To create a metric extraction policy, you must have the logs:PutMetricExtractionPolicy and logs:PutAccountPolicy permissions.
Data protection policy
A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy.
Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked.
If you use PutAccountPolicy to create a data protection policy for
your whole account, it applies to both existing log groups and all log
groups that are created later in this account. The account-level
policy is applied to existing log groups with eventual consistency. It
might take up to 5 minutes before sensitive data in existing log
groups begins to be masked.
By default, when a user views a log event that includes masked data,
the sensitive data is replaced by asterisks. A user who has the
logs:Unmask permission can use a GetLogEvents or
FilterLogEvents operation with the unmask parameter set to
true to view the unmasked log events. Users with the logs:Unmask
permission can also view unmasked data in the CloudWatch Logs console
by running a CloudWatch Logs Insights query with the unmask query command.
For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.
To use the PutAccountPolicy operation for a data protection policy,
you must be signed on with the logs:PutDataProtectionPolicy and
logs:PutAccountPolicy permissions.
The PutAccountPolicy operation applies to all log groups in the
account. You can use PutDataProtectionPolicy to create a data
protection policy that applies to just one log group. If a log group
has its own data protection policy and the account also has an
account-level data protection policy, then the two policies are
cumulative. Any sensitive term specified in either policy is masked.
Subscription filter policy
A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.
The following destinations are supported for subscription filters:
A Kinesis Data Streams data stream in the same account as the subscription policy, for same-account delivery.
A Firehose data stream in the same account as the subscription policy, for same-account delivery.
A Lambda function in the same account as the subscription policy, for same-account delivery.
A logical destination in a different account created with PutDestination, for cross-account delivery. Kinesis Data Streams and Firehose are supported as logical destinations.
Each account can have one account-level subscription filter policy per
Region. If you are updating an existing filter, you must specify the
correct name in PolicyName. To perform a PutAccountPolicy
subscription filter operation for any destination except a Lambda
function, you must also have the iam:PassRole permission.
Transformer policy
Creates or updates a log transformer policy for your account. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information. After you have created a transformer, CloudWatch Logs performs this transformation at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.
You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region.
A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. For more information about the available processors to use in a transformer, see Processors that you can use.
Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies.
You can create transformers only for the log groups in the Standard log class.
You can have one account-level transformer policy that applies to all
log groups in the account. Or you can create as many as 20
account-level transformer policies that are each scoped to a subset of
log groups with the selectionCriteria parameter. If you have
multiple account-level transformer policies with selection criteria,
no two of them can use the same or overlapping log group name
prefixes. For example, if you have one policy filtered to log groups
that start with my-log, you can't have another transformer policy
filtered to my-logpprod or my-logging.
CloudWatch Logs provides default field indexes for all log groups in the Standard log class. Default field indexes are automatically available for the following fields:
@logStream, @aws.region, @aws.account, @source.log, and traceId
Default field indexes are in addition to any custom field indexes you define within your policy. Default field indexes are not counted towards your field index quota.
You can also set up a transformer at the log-group level. For more
information, see PutTransformer. If there is both a log-group
level transformer created with PutTransformer and an account-level
transformer that could apply to the same log group, the log group uses
only the log-group level transformer. It ignores the account-level
transformer.
Field index policy
You can use field index policies to create indexes on fields found in log events in the log group. Creating field indexes can help lower the scan volume for CloudWatch Logs Insights queries that reference those fields, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, user IDs, or instance IDs. For more information, see Create field indexes to improve query performance and reduce costs
To find the fields that are in your log group events, use the GetLogGroupFields operation.
For example, suppose you have created a field index for requestId.
Then, any CloudWatch Logs Insights query on that log group that
includes requestId = value or requestId in [value, value, ...]
will attempt to process only the log events where the indexed field
matches the specified value.
Matches of log events to the names of indexed fields are
case-sensitive. For example, an indexed field of RequestId won't
match a log event containing requestId.
You can have one account-level field index policy that applies to all
log groups in the account. Or you can create as many as 40
account-level field index policies (20 for log group prefix selection,
20 for data source selection) that are each scoped to a subset of log
groups or data sources with the selectionCriteria parameter. Field
index policies can now be created for specific data source name and
type combinations using DataSourceName and DataSourceType selection
criteria. If you have multiple account-level index policies with
selection criteria, no two of them can use the same or overlapping log
group name prefixes. For example, if you have one policy filtered to
log groups that start with my-log, you can't have another field
index policy filtered to my-logpprod or my-logging.
If you create an account-level field index policy in a monitoring account in cross-account observability, the policy is applied only to the monitoring account and not to any source accounts.
If you want to create a field index policy for a single log group, you
can use PutIndexPolicy instead of PutAccountPolicy. If you do
so, that log group will use only that log-group level policy, and will
ignore the account-level policy that you create with
PutAccountPolicy.
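For example, a minimal sketch of an account-level field index policy; the policy document uses the documented {"Fields": [...]} shape, and the field names shown are placeholders:
require "json"
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

# Index two placeholder fields across all log groups in the account.
logs.put_account_policy(
  policy_name: "account-field-index-policy",
  policy_type: "FIELD_INDEX_POLICY",
  scope: "ALL",
  policy_document: { "Fields" => %w[requestId transactionId] }.to_json
)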
Metric extraction policy
A metric extraction policy controls whether CloudWatch Metrics can be created through the Embedded Metrics Format (EMF) for log groups in your account. By default, EMF metric creation is enabled for all log groups. You can use metric extraction policies to disable EMF metric creation for your entire account or specific log groups.
When a policy disables EMF metric creation for a log group, log events in the EMF format are still ingested, but no CloudWatch Metrics are created from them.
Creating a policy disables metrics for AWS features that use EMF to
create metrics, such as CloudWatch Container Insights and CloudWatch
Application Signals. To prevent turning off those features by
accident, we recommend that you exclude the underlying log groups
through selection criteria such as LogGroupNamePrefix NOT IN
["/aws/containerinsights", "/aws/ecs/containerinsights",
"/aws/application-signals/data"].
Each account can have either one account-level metric extraction
policy that applies to all log groups, or up to 5 policies that are
each scoped to a subset of log groups with the selectionCriteria
parameter. The selection criteria supports filtering by LogGroupName
and LogGroupNamePrefix using the operators IN and NOT IN. You
can specify up to 50 values in each IN or NOT IN list.
The selection criteria can be specified in these formats:
LogGroupName IN ["log-group-1", "log-group-2"]
LogGroupNamePrefix NOT IN ["/aws/prefix1", "/aws/prefix2"]
If you have multiple account-level metric extraction policies with
selection criteria, no two of them can have overlapping criteria. For
example, if you have one policy with selection criteria
LogGroupNamePrefix IN ["my-log"], you can't have another metric
extraction policy with selection criteria LogGroupNamePrefix IN
["/my-log-prod"] or LogGroupNamePrefix IN ["/my-logging"], as the
set of log groups matching these prefixes would be a subset of the log
groups matching the first policy's prefix, creating an overlap.
When using NOT IN, only one policy with this operator is allowed per
account.
When combining policies with IN and NOT IN operators, the overlap
check ensures that policies don't have conflicting effects. Two
policies with IN and NOT IN operators do not overlap if and only
if every value in the IN policy is completely contained within some
value in the NOT IN policy. For example:
If you have a NOT IN policy for prefix "/aws/lambda", you can create an IN policy for the exact log group name "/aws/lambda/function1" because the set of log groups matching "/aws/lambda/function1" is a subset of the log groups matching "/aws/lambda".
If you have a NOT IN policy for prefix "/aws/lambda", you cannot create an IN policy for prefix "/aws" because the set of log groups matching "/aws" is not a subset of the log groups matching "/aws/lambda".
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5610

def put_account_policy(params = {}, options = {})
  req = build_request(:put_account_policy, params)
  req.send_request(options)
end
#put_data_protection_policy(params = {}) ⇒ Types::PutDataProtectionPolicyResponse
Creates a data protection policy for the specified log group. A data protection policy can help safeguard sensitive data that's ingested by the log group by auditing and masking the sensitive log data.
Sensitive data is detected and masked when it is ingested into the log group. When you set a data protection policy, log events ingested into the log group before that time are not masked.
By default, when a user views a log event that includes masked data,
the sensitive data is replaced by asterisks. A user who has the
logs:Unmask permission can use a GetLogEvents or
FilterLogEvents operation with the unmask parameter set to
true to view the unmasked log events. Users with the logs:Unmask
permission can also view unmasked data in the CloudWatch Logs console
by running a CloudWatch Logs Insights query with the unmask query command.
For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.
The PutDataProtectionPolicy operation applies to only the specified
log group. You can also use PutAccountPolicy to create an
account-level data protection policy that applies to all log groups in
the account, including both existing log groups and log groups that
are created later. If a log group has its own data protection policy
and the account also has an account-level data protection policy, then
the two policies are cumulative. Any sensitive term specified in
either policy is masked.
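For example, a minimal sketch that masks email addresses in one hypothetical log group; the policy document follows the documented audit-and-deidentify structure, and the data identifier shown is only an illustration:
require "json"
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

policy = {
  "Name" => "data-protection-policy",
  "Version" => "2021-06-01",
  "Statement" => [
    { "Sid" => "audit-policy",
      "DataIdentifier" => ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
      "Operation" => { "Audit" => { "FindingsDestination" => {} } } },
    { "Sid" => "redact-policy",
      "DataIdentifier" => ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
      "Operation" => { "Deidentify" => { "MaskConfig" => {} } } }
  ]
}

logs.put_data_protection_policy(
  log_group_identifier: "/my/app",   # hypothetical log group name or ARN
  policy_document: policy.to_json
)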
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5719

def put_data_protection_policy(params = {}, options = {})
  req = build_request(:put_data_protection_policy, params)
  req.send_request(options)
end
#put_delivery_destination(params = {}) ⇒ Types::PutDeliveryDestinationResponse
Creates or updates a logical delivery destination. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and Firehose are supported as logs delivery destinations and X-Ray as the trace delivery destination.
To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:
Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource.
Use PutDeliveryDestination to create a delivery destination in the same account as the actual delivery destination. The delivery destination that you create is a logical object that represents the actual delivery destination.
If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
Use CreateDelivery to create a delivery by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery.
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.
If you use this operation to update an existing delivery destination, all the current delivery destination parameters are overwritten with the new parameter values that you specify.
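For example, a minimal sketch of the three steps above for a delivery to Amazon S3; the resource ARNs and the log type are illustrative, since valid log types depend on the source service:
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

# 1. Register the resource that emits the logs (the ARN and log type are illustrative).
logs.put_delivery_source(
  name: "my-delivery-source",
  resource_arn: "arn:aws:bedrock:us-east-1:111122223333:knowledge-base/EXAMPLE",
  log_type: "APPLICATION_LOGS"
)

# 2. Register where the logs should go (an S3 bucket in this sketch).
destination = logs.put_delivery_destination(
  name: "my-s3-destination",
  delivery_destination_configuration: {
    destination_resource_arn: "arn:aws:s3:::amzn-s3-demo-bucket"
  }
).delivery_destination

# 3. Pair the source with the destination.
logs.create_delivery(
  delivery_source_name: "my-delivery-source",
  delivery_destination_arn: destination.arn
)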
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5849

def put_delivery_destination(params = {}, options = {})
  req = build_request(:put_delivery_destination, params)
  req.send_request(options)
end
#put_delivery_destination_policy(params = {}) ⇒ Types::PutDeliveryDestinationPolicyResponse
Creates and assigns an IAM policy that grants permissions to CloudWatch Logs to deliver logs cross-account to a specified destination in this account. To configure the delivery of logs from an Amazon Web Services service in another account to a logs delivery destination in the current account, you must do the following:
Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see PutDeliverySource.
Create a delivery destination, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination.
Use this operation in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
Create a delivery by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery.
Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.
The contents of the policy must include two statements. One statement enables general logs delivery, and the other allows delivery to the chosen destination. See the examples for the needed policies.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 5916

def put_delivery_destination_policy(params = {}, options = {})
  req = build_request(:put_delivery_destination_policy, params)
  req.send_request(options)
end
#put_delivery_source(params = {}) ⇒ Types::PutDeliverySourceResponse
Creates or updates a logical delivery source. A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, Firehose, or X-Ray for sending traces.
To configure logs delivery between a delivery destination and an Amazon Web Services service that is supported as a delivery source, you must do the following:
Use PutDeliverySource to create a delivery source, which is a logical object that represents the resource that is actually sending the logs.
Use PutDeliveryDestination to create a delivery destination, which is a logical object that represents the actual delivery destination. For more information, see PutDeliveryDestination.
If you are delivering logs cross-account, you must use PutDeliveryDestinationPolicy in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
Use CreateDelivery to create a delivery by pairing exactly one delivery source and one delivery destination. For more information, see CreateDelivery.
You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.
If you use this operation to update an existing delivery source, all the current delivery source parameters are overwritten with the new parameter values that you specify.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6073

def put_delivery_source(params = {}, options = {})
  req = build_request(:put_delivery_source, params)
  req.send_request(options)
end
#put_destination(params = {}) ⇒ Types::PutDestinationResponse
Creates or updates a destination. This operation is used only to create destinations for cross-account subscriptions.
A destination encapsulates a physical resource (such as an Amazon Kinesis stream). With a destination, you can subscribe to a real-time stream of log events for a different account, ingested using PutLogEvents.
Through an access policy, a destination controls what is written to
it. By default, PutDestination does not set any access policy with
the destination, which means a cross-account user cannot call
PutSubscriptionFilter against this destination. To enable this,
the destination owner must call PutDestinationPolicy after
PutDestination.
To perform a PutDestination operation, you must also have the
iam:PassRole permission.
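For example, a minimal sketch that creates a destination backed by a Kinesis data stream and then grants a hypothetical sender account permission to subscribe to it; all ARNs and account IDs are placeholders:
require "json"
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

# Create the destination that fronts a Kinesis data stream.
destination = logs.put_destination(
  destination_name: "my-destination",
  target_arn: "arn:aws:kinesis:us-east-1:111122223333:stream/my-stream",
  role_arn: "arn:aws:iam::111122223333:role/CWLtoKinesisRole"
).destination

# Allow a sender account to subscribe its log groups to this destination.
access_policy = {
  "Version" => "2012-10-17",
  "Statement" => [{
    "Effect" => "Allow",
    "Principal" => { "AWS" => "444455556666" },
    "Action" => "logs:PutSubscriptionFilter",
    "Resource" => destination.arn
  }]
}

logs.put_destination_policy(
  destination_name: "my-destination",
  access_policy: access_policy.to_json
)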
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6151

def put_destination(params = {}, options = {})
  req = build_request(:put_destination, params)
  req.send_request(options)
end
#put_destination_policy(params = {}) ⇒ Struct
Creates or updates an access policy associated with an existing destination. An access policy is an IAM policy document that is used to authorize claims to register a subscription filter against a given destination.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6204

def put_destination_policy(params = {}, options = {})
  req = build_request(:put_destination_policy, params)
  req.send_request(options)
end
#put_index_policy(params = {}) ⇒ Types::PutIndexPolicyResponse
Creates or updates a field index policy for the specified log group. Only log groups in the Standard log class support field index policies. For more information about log classes, see Log classes.
You can use field index policies to create field indexes on fields found in log events in the log group. Creating field indexes speeds up and lowers the costs for CloudWatch Logs Insights queries that reference those field indexes, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields or values that match only a small fraction of the total log events. Common examples of indexes include request ID, session ID, userID, and instance IDs. For more information, see Create field indexes to improve query performance and reduce costs.
You can configure indexed fields as facets to enable interactive exploration and filtering of your logs in the CloudWatch Logs Insights console. Facets allow you to view value distributions and counts for indexed fields without running queries. When you create a field index, you can optionally set it as a facet to enable this interactive analysis capability. For more information, see Use facets to group and explore logs.
To find the fields that are in your log group events, use the GetLogGroupFields operation.
For example, suppose you have created a field index for requestId.
Then, any CloudWatch Logs Insights query on that log group that
includes requestId = value or requestId IN [value, value, ...]
will process fewer log events to reduce costs, and have improved
performance.
CloudWatch Logs provides default field indexes for all log groups in the Standard log class. Default field indexes are automatically available for the following fields:
@logStream, @aws.region, @aws.account, @source.log, and traceId
Default field indexes are in addition to any custom field indexes you define within your policy. Default field indexes are not counted towards your field index quota.
Each index policy has the following quotas and restrictions:
As many as 20 fields can be included in the policy.
Each field name can include as many as 100 characters.
Matches of log events to the names of indexed fields are
case-sensitive. For example, a field index of RequestId won't match
a log event containing requestId.
Log group-level field index policies created with PutIndexPolicy
override account-level field index policies created with
PutAccountPolicy that apply to log groups. If you use
PutIndexPolicy to create a field index policy for a log group, that
log group uses only that policy for log group-level indexing,
including any facet configurations. The log group ignores any
account-wide field index policy that applies to log groups, but data
source-based account policies may still apply.
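For example, a minimal sketch that indexes requestId for one hypothetical log group; the policy document uses the documented {"Fields": [...]} shape:
require "json"
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

# This log-group-level policy overrides any account-level field index policy
# for the hypothetical log group named here.
logs.put_index_policy(
  log_group_identifier: "/my/app",
  policy_document: { "Fields" => ["requestId"] }.to_json
)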
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6331

def put_index_policy(params = {}, options = {})
  req = build_request(:put_index_policy, params)
  req.send_request(options)
end
#put_integration(params = {}) ⇒ Types::PutIntegrationResponse
Creates an integration between CloudWatch Logs and another service in this account. Currently, only integrations with OpenSearch Service are supported, and currently you can have only one integration in your account.
Integrating with OpenSearch Service makes it possible for you to create curated vended logs dashboards, powered by OpenSearch Service analytics. For more information, see Vended log dashboards powered by Amazon OpenSearch Service.
You can use this operation only to create a new integration. You can't modify an existing integration.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6394

def put_integration(params = {}, options = {})
  req = build_request(:put_integration, params)
  req.send_request(options)
end
#put_log_events(params = {}) ⇒ Types::PutLogEventsResponse
Uploads a batch of log events to the specified log stream.
The sequence token is now ignored in PutLogEvents actions.
PutLogEvents actions are always accepted and never return
InvalidSequenceTokenException or DataAlreadyAcceptedException even
if the sequence token is not valid. You can use parallel
PutLogEvents actions on the same log stream.
The batch of events must satisfy the following constraints:
The maximum batch size is 1,048,576 bytes. This size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.
Events more than 2 hours in the future are rejected while processing remaining valid events.
Events older than 14 days or preceding the log group's retention period are rejected while processing remaining valid events.
The log events in the batch must be in chronological order by their timestamp. The timestamp is the time that the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. (In Amazon Web Services Tools for PowerShell and the Amazon Web Services SDK for .NET, the timestamp is specified in .NET format: yyyy-mm-ddThh:mm:ss. For example, 2017-09-15T13:45:30.) A batch of log events in a single request must be in chronological order. Otherwise, the operation fails.
Each log event can be no larger than 1 MB.
The maximum number of log events in a batch is 10,000.
For valid events (within 14 days in the past to 2 hours in future), the time span in a single batch cannot exceed 24 hours. Otherwise, the operation fails.
The quota of five requests per second per log stream has been removed.
Instead, PutLogEvents actions are throttled based on a per-second
per-account quota. You can request an increase to the per-second
throttling quota by using the Service Quotas service.
If a call to PutLogEvents returns "UnrecognizedClientException",
the most likely cause is a non-valid Amazon Web Services access key ID
or secret key.
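For example, a minimal sketch that uploads two events with millisecond timestamps to a hypothetical log stream:
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

now_ms = (Time.now.to_f * 1000).to_i   # timestamps are epoch milliseconds

resp = logs.put_log_events(
  log_group_name: "/my/app",           # hypothetical log group
  log_stream_name: "instance-1",       # hypothetical log stream
  log_events: [
    { timestamp: now_ms, message: "application started" },
    { timestamp: now_ms + 1, message: "listening on port 8080" }
  ]
)

# Events outside the accepted time window are reported here instead of failing the call.
p resp.rejected_log_events_info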
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6507

def put_log_events(params = {}, options = {})
  req = build_request(:put_log_events, params)
  req.send_request(options)
end
#put_log_group_deletion_protection(params = {}) ⇒ Struct
Enables or disables deletion protection for the specified log group. When enabled on a log group, deletion protection blocks all deletion operations until it is explicitly disabled.
For information about the parameters that are common to all actions, see Common Parameters.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6554

def put_log_group_deletion_protection(params = {}, options = {})
  req = build_request(:put_log_group_deletion_protection, params)
  req.send_request(options)
end
#put_metric_filter(params = {}) ⇒ Struct
Creates or updates a metric filter and associates it with the specified log group. With metric filters, you can configure rules to extract metric data from log events ingested through PutLogEvents.
The maximum number of metric filters that can be associated with a log group is 100.
Using regular expressions in filter patterns is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in filter patterns, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.
When you create a metric filter, you can also optionally assign a unit and dimensions to the metric that is created.
Metrics extracted from log events are charged as custom metrics. To
prevent unexpected high charges, do not specify high-cardinality
fields such as IPAddress or requestID as dimensions. Each
different value found for a dimension is treated as a separate metric
and accrues charges as a separate custom metric.
CloudWatch Logs might disable a metric filter if it generates 1,000 different name/value pairs for your specified dimensions within one hour.
You can also set up a billing alarm to alert you if your charges are higher than expected. For more information, see Creating a Billing Alarm to Monitor Your Estimated Amazon Web Services Charges.
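For example, a minimal sketch that counts log events containing the word ERROR as a custom metric; the log group name and metric namespace are placeholders:
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

# Publish a count of log events that contain the word ERROR as a custom metric.
logs.put_metric_filter(
  log_group_name: "/my/app",      # hypothetical log group
  filter_name: "ErrorCount",
  filter_pattern: "ERROR",
  metric_transformations: [{
    metric_name: "ErrorCount",
    metric_namespace: "MyApp",    # hypothetical namespace
    metric_value: "1",
    default_value: 0,
    unit: "Count"
  }]
)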
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6668

def put_metric_filter(params = {}, options = {})
  req = build_request(:put_metric_filter, params)
  req.send_request(options)
end
#put_query_definition(params = {}) ⇒ Types::PutQueryDefinitionResponse
Creates or updates a query definition for CloudWatch Logs Insights. For more information, see Analyzing Log Data with CloudWatch Logs Insights.
To update a query definition, specify its queryDefinitionId in your
request. The values of name, queryString, and logGroupNames are
changed to the values that you specify in your update operation. No
current values are retained from the current query definition. For
example, imagine updating a current query definition that includes log
groups. If you don't specify the logGroupNames parameter in your
update operation, the query definition changes to contain no log
groups.
You must have the logs:PutQueryDefinition permission to be able to
perform this operation.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6776

def put_query_definition(params = {}, options = {})
  req = build_request(:put_query_definition, params)
  req.send_request(options)
end
#put_resource_policy(params = {}) ⇒ Types::PutResourcePolicyResponse
Creates or updates a resource policy allowing other Amazon Web Services services to put log events to this account, such as Amazon Route 53. This API has the following restrictions:
Supported actions - Policy only supports the logs:PutLogEvents and logs:CreateLogStream actions.
Supported principals - Policy only applies when operations are invoked by Amazon Web Services service principals (not IAM users, roles, or cross-account principals).
Policy limits - An account can have a maximum of 10 policies without resourceARN and one per LogGroup resourceARN.
Resource policies with actions invoked by non-Amazon Web Services service principals (such as IAM users, roles, or other Amazon Web Services accounts) will not be enforced. For access control involving these principals, use the IAM policies.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6872

def put_resource_policy(params = {}, options = {})
  req = build_request(:put_resource_policy, params)
  req.send_request(options)
end
#put_retention_policy(params = {}) ⇒ Struct
Sets the retention of the specified log group. With a retention policy, you can configure the number of days for which to retain log events in the specified log group.
To illustrate, imagine that you change a log group to have a longer retention setting when it contains log events that are past the expiration date, but haven't been deleted. Those log events will take up to 72 hours to be deleted after the new retention date is reached. To make sure that log data is deleted permanently, keep a log group at its lower retention setting until 72 hours after the previous retention period ends. Alternatively, wait to change the retention setting until you confirm that the earlier log events are deleted.
When log events reach their retention setting they are marked for
deletion. After they are marked for deletion, they do not add to your
archival storage costs anymore, even if they are not actually deleted
until later. These log events marked for deletion are also not
included when you use an API to retrieve the storedBytes value to
see how many bytes a log group is storing.
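For example, a minimal sketch that retains events in a hypothetical log group for 30 days (one of the allowed retention values):
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

logs.put_retention_policy(
  log_group_name: "/my/app",   # hypothetical log group
  retention_in_days: 30
)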
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 6932

def put_retention_policy(params = {}, options = {})
  req = build_request(:put_retention_policy, params)
  req.send_request(options)
end
#put_subscription_filter(params = {}) ⇒ Struct
Creates or updates a subscription filter and associates it with the specified log group. With subscription filters, you can subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.
The following destinations are supported for subscription filters:
An Amazon Kinesis data stream belonging to the same account as the subscription filter, for same-account delivery.
A logical destination created with PutDestination that belongs to a different account, for cross-account delivery. We currently support Kinesis Data Streams and Firehose as logical destinations.
An Amazon Kinesis Data Firehose delivery stream that belongs to the same account as the subscription filter, for same-account delivery.
A Lambda function that belongs to the same account as the subscription filter, for same-account delivery.
Each log group can have up to two subscription filters associated with
it. If you are updating an existing filter, you must specify the
correct name in filterName.
Using regular expressions in filter patterns is supported. For these filters, there is a quota of two regular expression patterns within a single filter pattern. There is also a quota of five regular expression patterns per log group. For more information about using regular expressions in filter patterns, see Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail.
To perform a PutSubscriptionFilter operation for any destination
except a Lambda function, you must also have the iam:PassRole
permission.
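For example, a minimal sketch that streams matching events from a hypothetical log group to a Kinesis data stream in the same account; the ARNs are placeholders:
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

# Stream matching events to a Kinesis data stream in the same account; role_arn
# (and iam:PassRole) is required for destinations other than Lambda functions.
logs.put_subscription_filter(
  log_group_name: "/my/app",     # hypothetical log group
  filter_name: "errors-to-kinesis",
  filter_pattern: "ERROR",
  destination_arn: "arn:aws:kinesis:us-east-1:111122223333:stream/my-stream",
  role_arn: "arn:aws:iam::111122223333:role/CWLtoKinesisRole"
)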
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 7082

def put_subscription_filter(params = {}, options = {})
  req = build_request(:put_subscription_filter, params)
  req.send_request(options)
end
#put_transformer(params = {}) ⇒ Struct
Creates or updates a log transformer for a single log group. You use log transformers to transform log events into a different format, making them easier for you to process and analyze. You can also transform logs from different sources into standardized formats that contain relevant, source-specific information.
After you have created a transformer, CloudWatch Logs performs the transformations at the time of log ingestion. You can then refer to the transformed versions of the logs during operations such as querying with CloudWatch Logs Insights or creating metric filters or subscription filters.
You can also use a transformer to copy metadata from metadata keys into the log events themselves. This metadata can include log group name, log stream name, account ID and Region.
A transformer for a log group is a series of processors, where each processor applies one type of transformation to the log events ingested into this log group. The processors work one after another, in the order that you list them, like a pipeline. For more information about the available processors to use in a transformer, see Processors that you can use.
Having log events in standardized format enables visibility across your applications for your log analysis, reporting, and alarming needs. CloudWatch Logs provides transformation for common log types with out-of-the-box transformation templates for major Amazon Web Services log sources such as VPC flow logs, Lambda, and Amazon RDS. You can use pre-built transformation templates or create custom transformation policies.
You can create transformers only for the log groups in the Standard log class.
You can also set up a transformer at the account level. For more
information, see PutAccountPolicy. If there is both a log-group
level transformer created with PutTransformer and an account-level
transformer that could apply to the same log group, the log group uses
only the log-group level transformer. It ignores the account-level
transformer.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 7293

def put_transformer(params = {}, options = {})
  req = build_request(:put_transformer, params)
  req.send_request(options)
end
#start_live_tail(params = {}) ⇒ Types::StartLiveTailResponse
Starts a Live Tail streaming session for one or more log groups. A Live Tail session returns a stream of log events that have been recently ingested in the log groups. For more information, see Use Live Tail to view logs in near real time.
The response to this operation is a response stream, over which the server sends live log events and the client receives them.
The following objects are sent over the stream:
A single LiveTailSessionStart object is sent at the start of the session.
Every second, a LiveTailSessionUpdate object is sent. Each of these objects contains an array of the actual log events.
If no new log events were ingested in the past second, the LiveTailSessionUpdate object will contain an empty array.
The array of log events contained in a LiveTailSessionUpdate can include as many as 500 log events. If the number of log events matching the request exceeds 500 per second, the log events are sampled down to 500 log events to be included in each LiveTailSessionUpdate object.
If your client consumes the log events slower than the server produces them, CloudWatch Logs buffers up to 10 LiveTailSessionUpdate events or 5000 log events, after which it starts dropping the oldest events.
A SessionStreamingException object is returned if an unknown error occurs on the server side.
A SessionTimeoutException object is returned when the session times out, after it has been kept open for three hours.
The StartLiveTail API routes requests to streaming-logs.Region.amazonaws.com using SDK host prefix injection.
VPC endpoint support is not available for this API.
You can end a session before it times out by closing the session stream or by closing the client that is receiving the stream. The session also ends if the established connection between the client and the server breaks.
For examples of using an SDK to start a Live Tail session, see Start a Live Tail session using an Amazon Web Services SDK.
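For example, a minimal sketch that registers handlers for the session start and session update events and then opens a tail on one hypothetical log group ARN; the handler method names follow the generated event stream classes, and the call blocks while the stream is open:
require "aws-sdk-cloudwatchlogs"

logs = Aws::CloudWatchLogs::Client.new(region: "us-east-1")

handler = Aws::CloudWatchLogs::EventStreams::StartLiveTailResponseStream.new
handler.on_session_start_event do |event|
  puts "session started: #{event.session_id}"
end
handler.on_session_update_event do |event|
  event.session_results.each { |result| puts result.message }
end

# The call blocks while the stream is open; close the stream or the client to end it.
logs.start_live_tail(
  log_group_identifiers: [
    "arn:aws:logs:us-east-1:111122223333:log-group:/my/app"   # hypothetical ARN
  ],
  event_stream_handler: handler
)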
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 7574

def start_live_tail(params = {}, options = {})
  params = params.dup
  event_stream_handler = case handler = params.delete(:event_stream_handler)
    when EventStreams::StartLiveTailResponseStream then handler
    when Proc then EventStreams::StartLiveTailResponseStream.new.tap(&handler)
    when nil then EventStreams::StartLiveTailResponseStream.new
    else
      msg = "expected :event_stream_handler to be a block or "\
            "instance of Aws::CloudWatchLogs::EventStreams::StartLiveTailResponseStream"\
            ", got `#{handler.inspect}` instead"
      raise ArgumentError, msg
    end

  yield(event_stream_handler) if block_given?

  req = build_request(:start_live_tail, params)

  req.context[:event_stream_handler] = event_stream_handler
  req.handlers.add(Aws::Binary::DecodeHandler, priority: 95)

  req.send_request(options)
end
#start_query(params = {}) ⇒ Types::StartQueryResponse
Starts a query of one or more log groups or data sources using CloudWatch Logs Insights. You specify the log groups or data sources and time range to query and the query string to use. You can query up to 10 data sources in a single query.
For more information, see CloudWatch Logs Insights Query Syntax.
After you run a query using StartQuery, the query results are stored
by CloudWatch Logs. You can use GetQueryResults to retrieve the
results of a query, using the queryId that StartQuery returns.
Interactive queries started with StartQuery share concurrency limits
with automated scheduled query executions. Both types of queries count
toward the same regional concurrent query quota, so high scheduled
query activity may affect the availability of concurrent slots for
interactive queries.
A StartQuery operation must include one of the following:

- Either exactly one of the following parameters: logGroupName, logGroupNames, or logGroupIdentifiers
- Or the queryString must include a SOURCE command to select log groups for the query. The SOURCE command can select log groups based on log group name prefix, account ID, and log class, or select data sources using dataSource syntax in LogsQL, PPL, and SQL.

For more information about the SOURCE command, see SOURCE.
If you have associated a KMS key with the query results in this account, then StartQuery uses that key to encrypt the results when it stores them. If no key is associated with query results, the query results are encrypted with the default CloudWatch Logs encryption method.
Queries time out after 60 minutes of runtime. If your queries are timing out, reduce the time range being searched or partition your query into a number of queries.
If you are using CloudWatch cross-account observability, you can use
this operation in a monitoring account to start a query in a linked
source account. For more information, see CloudWatch cross-account
observability. For a cross-account StartQuery operation, the
query definition must be defined in the monitoring account.
You can have up to 30 concurrent CloudWatch Logs Insights queries, including queries that have been added to dashboards.
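A minimal interactive query might look like the sketch below. The log group name and query string are placeholders, and the polling loop is just one straightforward way to wait for GetQueryResults to leave the Scheduled/Running states.

# Hypothetical example: run a Logs Insights query over the last hour
# and poll for results. Log group name and query are placeholders.
resp = client.start_query(
  log_group_names: ["/aws/lambda/my-function"],
  start_time: (Time.now - 3600).to_i,
  end_time: Time.now.to_i,
  query_string: "fields @timestamp, @message | sort @timestamp desc | limit 20"
)

results = nil
loop do
  results = client.get_query_results(query_id: resp.query_id)
  break unless %w[Scheduled Running].include?(results.status)
  sleep 1
end

# Each result row is an array of field/value pairs.
results.results.each { |row| puts row.map(&:value).join(" | ") }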
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 7755

def start_query(params = {}, options = {})
  req = build_request(:start_query, params)
  req.send_request(options)
end
#stop_query(params = {}) ⇒ Types::StopQueryResponse
Stops a CloudWatch Logs Insights query that is in progress. If the query has already ended, the operation returns an error indicating that the specified query is not running.
This operation can be used to cancel both interactive queries and
individual scheduled query executions. When used with scheduled
queries, StopQuery cancels only the specific execution identified by
the query ID, not the scheduled query configuration itself.
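Cancelling a running query is a single call; the query ID below is a placeholder for the value returned by StartQuery (or the execution ID of a scheduled query run).

# Hypothetical example: cancel an in-progress query by its ID.
client.stop_query(query_id: "12ab3456-12ab-123a-789e-1234567890ab")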
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 7791

def stop_query(params = {}, options = {})
  req = build_request(:stop_query, params)
  req.send_request(options)
end
#tag_log_group(params = {}) ⇒ Struct
The TagLogGroup operation is on the path to deprecation. We recommend that you use TagResource instead.
Adds or updates the specified tags for the specified log group.
To list the tags for a log group, use ListTagsForResource. To remove tags, use UntagResource.
For more information about tags, see Tag Log Groups in Amazon CloudWatch Logs in the Amazon CloudWatch Logs User Guide.
CloudWatch Logs doesn't support IAM policies that prevent users from
assigning specified tags to log groups using the
aws:Resource/key-name or aws:TagKeys condition keys. For more
information about using tags to control access, see Controlling
access to Amazon Web Services resources using tags.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 7842

def tag_log_group(params = {}, options = {})
  req = build_request(:tag_log_group, params)
  req.send_request(options)
end
#tag_resource(params = {}) ⇒ Struct
Assigns one or more tags (key-value pairs) to the specified CloudWatch Logs resource. Currently, the only CloudWatch Logs resources that can be tagged are log groups and destinations.
Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values.
Tags don't have any semantic meaning to Amazon Web Services and are interpreted strictly as strings of characters.
You can use the TagResource action with a resource that already has
tags. If you specify a new tag key for the resource, this tag is
appended to the list of tags associated with the resource. If you
specify a tag key that is already associated with the resource, the new
tag value that you specify replaces the previous value for that tag.
You can associate as many as 50 tags with a CloudWatch Logs resource.
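A sketch of tagging a log group by ARN; the ARN and the tag keys and values are placeholders.

# Hypothetical example: add or update two tags on a log group.
client.tag_resource(
  resource_arn: "arn:aws:logs:us-east-1:111122223333:log-group:my-log-group",
  tags: {
    "Team"        => "Platform",
    "Environment" => "Production"
  }
)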
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 7900

def tag_resource(params = {}, options = {})
  req = build_request(:tag_resource, params)
  req.send_request(options)
end
#test_metric_filter(params = {}) ⇒ Types::TestMetricFilterResponse
Tests the filter pattern of a metric filter against a sample of log event messages. You can use this operation to validate the correctness of a metric filter pattern.
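For instance, a pattern can be checked against a few sample messages before it is attached to a log group with PutMetricFilter. The pattern and messages below are placeholders, and the response fields used (matches, event_number, event_message) assume the documented TestMetricFilter response shape.

# Hypothetical example: see which sample messages a filter pattern matches.
resp = client.test_metric_filter(
  filter_pattern: "ERROR",
  log_event_messages: [
    "2026-01-01T00:00:00Z ERROR something failed",
    "2026-01-01T00:00:01Z INFO all good"
  ]
)
resp.matches.each { |m| puts "matched event ##{m.event_number}: #{m.event_message}" }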
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 7941

def test_metric_filter(params = {}, options = {})
  req = build_request(:test_metric_filter, params)
  req.send_request(options)
end
#test_transformer(params = {}) ⇒ Types::TestTransformerResponse
Use this operation to test a log transformer. You enter the transformer configuration and a set of log events to test with. The operation responds with an array that includes the original log events and the transformed versions.
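A sketch along the same lines as the metric filter test. The processor configuration and sample message are placeholders, and the response field names used here (transformed_logs, transformed_event_message) are assumptions about the TestTransformer response shape, so confirm them against Types::TestTransformerResponse.

# Hypothetical example: preview how a transformer would rewrite a sample event.
resp = client.test_transformer(
  transformer_config: [
    { parse_json: { source: "@message" } } # placeholder processor
  ],
  log_event_messages: ['{"level":"ERROR","msg":"something failed"}']
)
resp.transformed_logs.each { |log| puts log.transformed_event_message }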
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 8120

def test_transformer(params = {}, options = {})
  req = build_request(:test_transformer, params)
  req.send_request(options)
end
#untag_log_group(params = {}) ⇒ Struct
The UntagLogGroup operation is on the path to deprecation. We recommend that you use UntagResource instead.
Removes the specified tags from the specified log group.
To list the tags for a log group, use ListTagsForResource. To add tags, use TagResource.
When using IAM policies to control tag management for CloudWatch Logs
log groups, the condition keys aws:Resource/key-name and
aws:TagKeys cannot be used to restrict which tags users can assign.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 8162

def untag_log_group(params = {}, options = {})
  req = build_request(:untag_log_group, params)
  req.send_request(options)
end
#untag_resource(params = {}) ⇒ Struct
Removes one or more tags from the specified resource.
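A sketch of removing tags by key; the ARN and the tag keys are placeholders.

# Hypothetical example: remove two tags from a log group.
client.untag_resource(
  resource_arn: "arn:aws:logs:us-east-1:111122223333:log-group:my-log-group",
  tag_keys: ["Team", "Environment"]
)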
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 8202

def untag_resource(params = {}, options = {})
  req = build_request(:untag_resource, params)
  req.send_request(options)
end
#update_anomaly(params = {}) ⇒ Struct
Use this operation to suppress anomaly detection for a specified anomaly or pattern. If you suppress an anomaly, CloudWatch Logs won't report new occurrences of that anomaly and won't update that anomaly with new data. If you suppress a pattern, CloudWatch Logs won't report any anomalies related to that pattern.
You must specify either anomalyId or patternId, but you can't
specify both parameters in the same operation.
If you have previously used this operation to suppress detection of a
pattern or anomaly, you can use it again to cause CloudWatch Logs to
end the suppression. To do this, use this operation and specify the
anomaly or pattern to stop suppressing, and omit the suppressionType
and suppressionPeriod parameters.
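As a sketch, suppressing a single anomaly for twelve hours might look like the following. The ARN and anomaly ID are placeholders, and the suppression_period member names are assumed to match Types::SuppressionPeriod, so confirm them there.

# Hypothetical example: suppress one anomaly for 12 hours.
client.update_anomaly(
  anomaly_detector_arn: "arn:aws:logs:us-east-1:111122223333:anomaly-detector:example",
  anomaly_id: "example-anomaly-id",
  suppression_type: "LIMITED",                                  # as opposed to "INFINITE"
  suppression_period: { value: 12, suppression_unit: "HOURS" }
)
# To end the suppression later, call the operation again with the same
# anomaly_id and omit suppression_type and suppression_period.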
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 8282

def update_anomaly(params = {}, options = {})
  req = build_request(:update_anomaly, params)
  req.send_request(options)
end
#update_delivery_configuration(params = {}) ⇒ Struct
Use this operation to update the configuration of a delivery to change either the S3 path pattern or the format of the delivered logs. You can't use this operation to change the source or destination of the delivery.
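A sketch of changing only the S3 path pattern of an existing delivery. The delivery ID and the s3_delivery_configuration member names shown are assumptions/placeholders; check Types::S3DeliveryConfiguration for the exact fields.

# Hypothetical example: update the S3 suffix path of an existing delivery.
client.update_delivery_configuration(
  id: "example-delivery-id",
  s3_delivery_configuration: {
    suffix_path: "AWSLogs/{account-id}/{region}/" # placeholder path pattern
  }
)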
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 8330

def update_delivery_configuration(params = {}, options = {})
  req = build_request(:update_delivery_configuration, params)
  req.send_request(options)
end
#update_log_anomaly_detector(params = {}) ⇒ Struct
Updates an existing log anomaly detector.
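A sketch of keeping an existing detector enabled while changing how often it evaluates; the ARN is a placeholder and the evaluation_frequency value assumes the documented enum (for example ONE_HOUR).

# Hypothetical example: keep the detector enabled and evaluate hourly.
client.update_log_anomaly_detector(
  anomaly_detector_arn: "arn:aws:logs:us-east-1:111122223333:anomaly-detector:example",
  evaluation_frequency: "ONE_HOUR",
  enabled: true
)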
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 8380

def update_log_anomaly_detector(params = {}, options = {})
  req = build_request(:update_log_anomaly_detector, params)
  req.send_request(options)
end
#update_scheduled_query(params = {}) ⇒ Types::UpdateScheduledQueryResponse
Updates an existing scheduled query with new configuration. This operation uses PUT semantics, allowing modification of query parameters, schedule, and destinations.
# File 'gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client.rb', line 8502

def update_scheduled_query(params = {}, options = {})
  req = build_request(:update_scheduled_query, params)
  req.send_request(options)
end