Package software.amazon.awscdk.services.logs
Amazon CloudWatch Logs Construct Library
This library supplies constructs for working with CloudWatch Logs.
Log Groups/Streams
The basic unit of CloudWatch is a Log Group. Every log group typically has the same kind of data logged to it, in the same format. If there are multiple applications or services logging into the Log Group, each of them creates a new Log Stream.
Every log operation creates a "log event", which can consist of a simple string or a single-line JSON object. JSON objects have the advantage that they afford more filtering abilities (see below).
The only configurable attribute for log streams is the retention period, which determines how long the events in the log stream are kept before they expire and are deleted.
The default retention period if not supplied is 2 years, but it can be set to one of the values in the RetentionDays enum to configure a different retention period (including infinite retention).
// Configure log group for short retention
LogGroup logGroup = LogGroup.Builder.create(stack, "LogGroup")
        .retention(RetentionDays.ONE_WEEK)
        .build();

// Configure log group for infinite retention
LogGroup logGroupInfinite = LogGroup.Builder.create(stack, "LogGroupInfinite")
        .retention(RetentionDays.INFINITE)
        .build();
LogRetention
The LogRetention construct is a way to control the retention period of log groups that are created outside of the CDK. The construct is usually used on log groups that are auto-created by AWS services, such as AWS Lambda.
This is implemented using a CloudFormation custom resource which pre-creates the log group if it doesn't exist, and sets the specified log retention period (never expire, by default).
By default, the log group will be created in the same region as the stack. The logGroupRegion property can be used to configure log groups in other regions. This is typically useful when controlling retention for log groups auto-created by global services that publish their log group to a specific region, such as AWS Chatbot creating a log group in us-east-1.
By default, the log group created by LogRetention will be retained after the stack is deleted. If the RemovalPolicy is set to DESTROY, then the log group will be deleted when the stack is deleted.
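A minimal sketch of these options, assuming a Lambda-style auto-created log group (the name, region, and retention values below are illustrative, not prescribed by the library):
import software.amazon.awscdk.RemovalPolicy;

// Manage the retention of a log group created outside the CDK.
// "/aws/lambda/my-function" is a hypothetical auto-created log group name.
LogRetention.Builder.create(this, "LogRetention")
        .logGroupName("/aws/lambda/my-function")
        .retention(RetentionDays.ONE_MONTH)
        .logGroupRegion("us-east-1") // optional: manage a log group in another region
        .removalPolicy(RemovalPolicy.DESTROY) // delete the log group when the stack is deleted
        .build();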
Log Group Class
CloudWatch Logs offers two classes of log groups:
- The CloudWatch Logs Standard log class is a full-featured option for logs that require real-time monitoring or logs that you access frequently.
- The CloudWatch Logs Infrequent Access log class is a new log class that you can use to cost-effectively consolidate your logs. This log class offers a subset of CloudWatch Logs capabilities including managed ingestion, storage, cross-account log analytics, and encryption with a lower ingestion price per GB. The Infrequent Access log class is ideal for ad-hoc querying and after-the-fact forensic analysis on infrequently accessed logs.
For more details, see the log group class documentation.
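As a minimal sketch, the log group class can be selected with the logGroupClass property (LogGroupClass.STANDARD being the default):
// Create a log group in the Infrequent Access log class.
LogGroup.Builder.create(this, "InfrequentAccessLogGroup")
        .logGroupClass(LogGroupClass.INFREQUENT_ACCESS)
        .build();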
Resource Policy
CloudWatch Resource Policies allow other AWS services or IAM Principals to put log events into the log groups.
A resource policy is automatically created when addToResourcePolicy is called on the LogGroup for the first time:
LogGroup logGroup = new LogGroup(this, "LogGroup");
logGroup.addToResourcePolicy(PolicyStatement.Builder.create()
        .actions(List.of("logs:CreateLogStream", "logs:PutLogEvents"))
        .principals(List.of(new ServicePrincipal("es.amazonaws.com")))
        .resources(List.of(logGroup.getLogGroupArn()))
        .build());
Or, more conveniently, write permissions can be granted to the log group as follows, which gives the same result as the example above.
LogGroup logGroup = new LogGroup(this, "LogGroup");
logGroup.grantWrite(new ServicePrincipal("es.amazonaws.com"));
Similarly, read permissions can be granted to the log group as follows.
LogGroup logGroup = new LogGroup(this, "LogGroup");
logGroup.grantRead(new ServicePrincipal("es.amazonaws.com"));
Be aware that any ARNs or tokenized values passed to the resource policy will be converted into AWS Account IDs. This is because CloudWatch Logs Resource Policies do not accept ARNs as principals, but they do accept Account ID strings. Non-ARN principals, like Service principals or Any principals, are accepted by CloudWatch.
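As an illustrative sketch of this behavior (the role ARN below is hypothetical), the following statement ends up with the role's account ID, 123456789012, as the principal in the rendered resource policy rather than the ARN itself:
import software.amazon.awscdk.services.iam.*;

LogGroup logGroup = new LogGroup(this, "LogGroup");
// The ArnPrincipal is converted to its account ID in the resource policy.
logGroup.addToResourcePolicy(PolicyStatement.Builder.create()
        .actions(List.of("logs:PutLogEvents"))
        .principals(List.of(new ArnPrincipal("arn:aws:iam::123456789012:role/LogWriter")))
        .resources(List.of(logGroup.getLogGroupArn()))
        .build());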
Encrypting Log Groups
By default, log group data is always encrypted in CloudWatch Logs. You have the option to encrypt log group data using an AWS KMS customer master key (CMK) should you not wish to use the default AWS encryption. Keep in mind that if you decide to encrypt a log group, any service or IAM identity that needs to read the encrypted log streams in the future will require the same CMK to decrypt the data.
Here's a simple example of creating an encrypted Log Group using a KMS CMK.
import software.amazon.awscdk.services.kms.*;

LogGroup.Builder.create(this, "LogGroup")
        .encryptionKey(new Key(this, "Key"))
        .build();
See the AWS documentation for more detailed information about encrypting CloudWatch Logs.
Subscriptions and Destinations
Log events matching a particular filter can be sent to a Lambda function, a Kinesis stream, or a Firehose delivery stream.
If the Kinesis stream lives in a different account, a CrossAccountDestination object needs to be added in the destination account which will act as a proxy for the remote Kinesis stream. This object is automatically created for you if you use the CDK Kinesis library.
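Should you need to create the proxy yourself, a hedged sketch follows; the target stream ARN and account below are hypothetical:
import software.amazon.awscdk.services.iam.*;

// Role that CloudWatch Logs assumes to write to the target stream.
Role role = Role.Builder.create(this, "CWLtoKinesisRole")
        .assumedBy(new ServicePrincipal("logs.amazonaws.com"))
        .build();

// Proxy object in the destination account pointing at the remote stream.
CrossAccountDestination destination = CrossAccountDestination.Builder.create(this, "Destination")
        .role(role)
        .targetArn("arn:aws:kinesis:us-east-1:111111111111:stream/RecipientStream")
        .build();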
Create a SubscriptionFilter, initialize it with an appropriate Pattern (see below) and supply the intended destination:
import software.amazon.awscdk.services.logs.destinations.*;

Function fn;
LogGroup logGroup;

SubscriptionFilter.Builder.create(this, "Subscription")
        .logGroup(logGroup)
        .destination(new LambdaDestination(fn))
        .filterPattern(FilterPattern.allTerms("ERROR", "MainThread"))
        .filterName("ErrorInMainThread")
        .build();
When you use KinesisDestination, you can choose the method used to distribute log data to the destination by setting the distribution property.
import software.amazon.awscdk.services.logs.destinations.*;
import software.amazon.awscdk.services.kinesis.*;

Stream stream;
LogGroup logGroup;

SubscriptionFilter.Builder.create(this, "Subscription")
        .logGroup(logGroup)
        .destination(new KinesisDestination(stream))
        .filterPattern(FilterPattern.allTerms("ERROR", "MainThread"))
        .filterName("ErrorInMainThread")
        .distribution(Distribution.RANDOM)
        .build();
When you use FirehoseDestination, you can choose the method used to distribute log data to the destination by setting the distribution property.
import software.amazon.awscdk.services.logs.destinations.*;
import software.amazon.awscdk.services.kinesisfirehose.*;

IDeliveryStream deliveryStream;
LogGroup logGroup;

SubscriptionFilter.Builder.create(this, "Subscription")
        .logGroup(logGroup)
        .destination(new FirehoseDestination(deliveryStream))
        .filterPattern(FilterPattern.allEvents())
        .build();
Metric Filters
CloudWatch Logs can extract and emit metrics based on a textual log stream. Depending on your needs, this may be a more convenient way of generating metrics for your application than making calls to CloudWatch Metrics yourself.
A MetricFilter either emits a fixed number every time it sees a log event matching a particular pattern (see below), or extracts a number from the log event and uses that as the metric value.
Example:
MetricFilter.Builder.create(this, "MetricFilter")
        .logGroup(logGroup)
        .metricNamespace("MyApp")
        .metricName("Latency")
        .filterPattern(FilterPattern.all(
                FilterPattern.exists("$.latency"),
                FilterPattern.regexValue("$.message", "=", "bind: address already in use")))
        .metricValue("$.latency")
        .build();
Remember that if you want to use a value from the log event as the metric value, you must mention it in your pattern somewhere.
A very simple MetricFilter can be created by using the logGroup.extractMetric() helper function:
LogGroup logGroup;
logGroup.extractMetric("$.jsonField", "Namespace", "MetricName");
This will extract the value of jsonField wherever it occurs in JSON-structured log records in the LogGroup, and emit it to CloudWatch Metrics under the name Namespace/MetricName.
Exposing Metric on a Metric Filter
You can expose a metric on a metric filter by calling the MetricFilter.metric() API. This has a default of statistic = 'avg' if the statistic is not set in the props.
Additionally, if the metric filter was created with a dimension map, those dimensions will be included in the metric.
LogGroup logGroup;

MetricFilter mf = MetricFilter.Builder.create(this, "MetricFilter")
        .logGroup(logGroup)
        .metricNamespace("MyApp")
        .metricName("Latency")
        .filterPattern(FilterPattern.exists("$.latency"))
        .metricValue("$.latency")
        .dimensions(Map.of("ErrorCode", "$.errorCode"))
        .unit(Unit.MILLISECONDS)
        .build();

// expose a metric from the metric filter
Metric metric = mf.metric();

// you can use the metric to create a new alarm
Alarm.Builder.create(this, "alarm from metric filter")
        .metric(metric)
        .threshold(100)
        .evaluationPeriods(2)
        .build();
Metrics for IncomingLogs and IncomingBytes
Metric methods have been defined for IncomingLogs and IncomingBytes within LogGroups. These metrics allow for the creation of alarms on log ingestion, ensuring that the log ingestion process is functioning correctly.
To define an alarm based on these metrics, you can use the following template:
LogGroup logGroup = new LogGroup(this, "MyLogGroup");
Metric incomingEventsMetric = logGroup.metricIncomingLogEvents();

Alarm.Builder.create(this, "HighLogVolumeAlarm")
        .metric(incomingEventsMetric)
        .threshold(1000)
        .evaluationPeriods(1)
        .build();
LogGroup logGroup = new LogGroup(this, "MyLogGroup");
Metric incomingBytesMetric = logGroup.metricIncomingBytes();

Alarm.Builder.create(this, "HighDataVolumeAlarm")
        .metric(incomingBytesMetric)
        .threshold(5000000) // 5 MB
        .evaluationPeriods(1)
        .build();
Patterns
Patterns describe which log events match a subscription or metric filter. There are three types of patterns:
- Text patterns
- JSON patterns
- Space-delimited table patterns
All patterns are constructed by using static functions on the FilterPattern class.
In addition to the patterns above, the following special patterns exist:
- FilterPattern.allEvents(): matches all log events.
- FilterPattern.literal(string): if you already know what pattern expression to use, this function takes a string and will use that as the log pattern. For more information, see the Filter and Pattern Syntax.
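For instance, a small sketch using literal() with a raw pattern expression; this particular expression matches events that contain "ERROR" but not "Exception":
IFilterPattern pattern = FilterPattern.literal("\"ERROR\" - \"Exception\"");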
Text Patterns
Text patterns match if the literal strings appear in the text form of the log line.
- FilterPattern.allTerms(term, term, ...): matches if all of the given terms (substrings) appear in the log event.
- FilterPattern.anyTerm(term, term, ...): matches if any of the given terms (substrings) appear in the log event.
- FilterPattern.anyTermGroup([term, term, ...], [term, term, ...], ...): matches if all of the terms in any of the groups (specified as arrays) match. This is an OR match.
Examples:
// Search for lines that contain both "ERROR" and "MainThread"
IFilterPattern pattern1 = FilterPattern.allTerms("ERROR", "MainThread");

// Search for lines that either contain both "ERROR" and "MainThread", or
// both "WARN" and "Deadlock".
IFilterPattern pattern2 = FilterPattern.anyTermGroup(
        List.of("ERROR", "MainThread"),
        List.of("WARN", "Deadlock"));
JSON Patterns
JSON patterns apply if the log event is the JSON representation of an object (without any other characters, so it cannot include a prefix such as timestamp or log level). JSON patterns can make comparisons on the values inside the fields.
- Strings: the comparison operators allowed for strings are = and !=. String values can start or end with a * wildcard.
- Numbers: the comparison operators allowed for numbers are =, !=, <, <=, > and >=.
Fields in the JSON structure are identified by identifying the complete object as $ and then descending into it, such as $.field or $.list[0].field.
- FilterPattern.stringValue(field, comparison, string): matches if the given field compares as indicated with the given string value.
- FilterPattern.regexValue(field, comparison, string): matches if the given field compares as indicated with the given regex pattern.
- FilterPattern.numberValue(field, comparison, number): matches if the given field compares as indicated with the given numerical value.
- FilterPattern.isNull(field): matches if the given field exists and has the value null.
- FilterPattern.notExists(field): matches if the given field is not in the JSON structure.
- FilterPattern.exists(field): matches if the given field is in the JSON structure.
- FilterPattern.booleanValue(field, boolean): matches if the given field is exactly the given boolean value.
- FilterPattern.all(jsonPattern, jsonPattern, ...): matches if all of the given JSON patterns match. This makes an AND combination of the given patterns.
- FilterPattern.any(jsonPattern, jsonPattern, ...): matches if any of the given JSON patterns match. This makes an OR combination of the given patterns.
Example:
// Search for all events where the component field is equal to
// "HttpServer" and either error is true or the latency is higher
// than 1000.
JsonPattern pattern = FilterPattern.all(
        FilterPattern.stringValue("$.component", "=", "HttpServer"),
        FilterPattern.any(
                FilterPattern.booleanValue("$.error", true),
                FilterPattern.numberValue("$.latency", ">", 1000)),
        FilterPattern.regexValue("$.message", "=", "bind address already in use"));
Space-delimited table patterns
If the log events are rows of a space-delimited table, this pattern can be used to identify the columns in that structure and add conditions on any of them. The canonical example where you would apply this type of pattern is Apache server logs.
Text that is surrounded by "..." quotes or [...] square brackets will be treated as one column.
- FilterPattern.spaceDelimited(column, column, ...): construct a SpaceDelimitedTextPattern object with the indicated columns. The columns map one-by-one to the columns found in the log event. The string "..." may be used to specify an arbitrary number of unnamed columns anywhere in the name list (but may only be specified once).
After constructing a SpaceDelimitedTextPattern, you can use the following two members to add restrictions:
- pattern.whereString(field, comparison, string): add a string condition. The rules are the same as for JSON patterns.
- pattern.whereNumber(field, comparison, number): add a numerical condition. The rules are the same as for JSON patterns.
Multiple restrictions can be added on the same column; they must all apply.
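As a short sketch of that rule (the column names here are illustrative), both conditions below restrict the same latency column, so a row must satisfy both to match:
// Matches rows whose latency is at least 100 and below 1000.
SpaceDelimitedTextPattern pattern = FilterPattern
        .spaceDelimited("time", "component", "latency")
        .whereNumber("latency", ">=", 100)
        .whereNumber("latency", "<", 1000);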
Example:
// Search for all events where the component is "HttpServer" and the
// result code is not equal to 200.
SpaceDelimitedTextPattern pattern = FilterPattern
        .spaceDelimited("time", "component", "...", "result_code", "latency")
        .whereString("component", "=", "HttpServer")
        .whereNumber("result_code", "!=", 200);
Logs Insights Query Definition
Creates a query definition for CloudWatch Logs Insights.
Example:
QueryDefinition.Builder.create(this, "QueryDefinition")
        .queryDefinitionName("MyQuery")
        .queryString(QueryString.Builder.create()
                .fields(List.of("@timestamp", "@message"))
                .parseStatements(List.of(
                        "@message \"[*] *\" as loggingType, loggingMessage",
                        "@message \"<*>: *\" as differentLoggingType, differentLoggingMessage"))
                .filterStatements(List.of(
                        "loggingType = \"ERROR\"",
                        "loggingMessage = \"A very strange error occurred!\""))
                .sort("@timestamp desc")
                .limit(20)
                .build())
        .build();
Data Protection Policy
Creates a data protection policy and assigns it to the log group. A data protection policy can help safeguard sensitive data that's ingested by the log group by auditing and masking the sensitive log data. When a user who does not have permission to view masked data views a log event that includes masked data, the sensitive data is replaced by asterisks.
For more information, see Protect sensitive log data with masking.
For a list of types of managed identifiers that can be audited and masked, see Types of data that you can protect.
If a new identifier is supported but not yet in the DataIdentifier class, the name of the identifier can be supplied as name in the constructor instead.
To add a custom data identifier, supply a custom name and regex to the CustomDataIdentifier constructor.
For more information on custom data identifiers, see Custom data identifiers.
Each policy may consist of a log group, S3 bucket, and/or Firehose delivery stream audit destination.
Example:
import software.amazon.awscdk.services.kinesisfirehose.*;

LogGroup logGroupDestination = LogGroup.Builder.create(this, "LogGroupLambdaAudit")
        .logGroupName("auditDestinationForCDK")
        .build();

Bucket bucket = new Bucket(this, "audit-bucket");
S3Bucket s3Destination = new S3Bucket(bucket);

DeliveryStream deliveryStream = DeliveryStream.Builder.create(this, "Delivery Stream")
        .destination(s3Destination)
        .build();

DataProtectionPolicy dataProtectionPolicy = DataProtectionPolicy.Builder.create()
        .name("data protection policy")
        .description("policy description")
        .identifiers(List.of(
                DataIdentifier.DRIVERSLICENSE_US, // managed data identifier
                new DataIdentifier("EmailAddress"), // forward compatibility for new managed data identifiers
                new CustomDataIdentifier("EmployeeId", "EmployeeId-\\d{9}"))) // custom data identifier
        .logGroupAuditDestination(logGroupDestination)
        .s3BucketAuditDestination(bucket)
        .deliveryStreamNameAuditDestination(deliveryStream.getDeliveryStreamName())
        .build();

LogGroup.Builder.create(this, "LogGroupLambda")
        .logGroupName("cdkIntegLogGroup")
        .dataProtectionPolicy(dataProtectionPolicy)
        .build();
Field Index Policies
Creates or updates a field index policy for the specified log group. You can use field index policies to create field indexes on fields found in log events in the log group. Creating field indexes lowers the costs for CloudWatch Logs Insights queries that reference those field indexes, because these queries attempt to skip the processing of log events that are known to not match the indexed field. Good fields to index are fields that you often need to query for and fields that have high cardinality of values.
For more information, see Create field indexes to improve query performance and reduce costs.
Only log groups in the Standard log class support field index policies. Currently, this array supports only one field index policy object.
Example:
FieldIndexPolicy fieldIndexPolicy = FieldIndexPolicy.Builder.create()
        .fields(List.of("Operation", "RequestId"))
        .build();

LogGroup.Builder.create(this, "LogGroup")
        .logGroupName("cdkIntegLogGroup")
        .fieldIndexPolicies(List.of(fieldIndexPolicy))
        .build();
Transformer
A log transformer enables transforming log events into a different format, making them easier to process and analyze. You can transform logs from different sources into standardized formats that contain relevant, source-specific information. Transformations are performed at the time of log ingestion. Transformers support several types of processors which can be chained into a processing pipeline (subject to some restrictions, see Usage Limits).
Processor Types
- Parser Processors: Parse string log events into structured log events. These are configurable parsers created using ParserProcessor, and support conversion to a format like JSON, extracting fields from CSV input, converting vended sources to OCSF format, regex parsing using Grok patterns, or key-value parsing. Refer to configurable parsers for more examples.
- Vended Log Parsers: Parse log events from vended sources into structured log events. These are created using VendedLogParser, and support conversion from sources such as AWS WAF, Postgres, Route 53, CloudFront and VPC. These parsers are not configurable, meaning they can be added to the pipeline but do not accept any properties or configuration. Refer to vended log parsers for more examples.
- String Mutators: Perform operations on string values in a field of a log event and are created using StringMutatorProcessor. These can be used to format string values in the log event, such as changing case, removing trailing whitespace, or extracting values from a string field by splitting the string or using regex backreferences. Refer to string mutators for more examples.
- JSON Mutators: Perform operations on JSON log events and are created using JsonMutatorProcessor. These processors can be used to enrich log events by adding new fields; deleting, moving or renaming fields; copying values to other fields; or converting a list of key-value pairs to a map. Refer to JSON mutators for more examples.
- Data Converters: Convert the data into different formats and are created using DataConverterProcessor. These can be used to convert values in a field to datatypes such as integer, string, double and boolean, or to convert dates and times to different formats. Refer to datatype processors for more examples.
Usage Limits
- A transformer can have a maximum of 20 processors
- At least one parser-type processor is required
- Maximum of 5 parser-type processors allowed
- AWS vended log parser (if used) must be the first processor
- Only one parseToOcsf processor, one grok processor, one addKeys processor, and one copyValue processor allowed per transformer
- Transformers can only be used with log groups in the Standard log class
Example:
// Create a log group
LogGroup logGroup = new LogGroup(this, "MyLogGroup");

// Create a JSON parser processor
ParserProcessor jsonParser = ParserProcessor.Builder.create()
        .type(ParserProcessorType.JSON)
        .build();

// Create a processor to add keys
JsonMutatorProcessor addKeysProcessor = JsonMutatorProcessor.Builder.create()
        .type(JsonMutatorType.ADD_KEYS)
        .addKeysOptions(AddKeysProperty.builder()
                .entries(List.of(AddKeyEntryProperty.builder()
                        .key("metadata.transformed_in")
                        .value("CloudWatchLogs")
                        .build()))
                .build())
        .build();

// Create a transformer with these processors
Transformer.Builder.create(this, "Transformer")
        .transformerName("MyTransformer")
        .logGroup(logGroup)
        .transformerConfig(List.of(jsonParser, addKeysProcessor))
        .build();
For more details on CloudWatch Logs transformation processors, refer to the AWS documentation.
Notes
Be aware that Log Group ARNs will always have the string :* appended to them, to match the behavior of the CloudFormation AWS::Logs::LogGroup resource.