

# Cross-account cross-Region log data sharing using Firehose
<a name="CrossAccountSubscriptions-Firehose"></a>

To share log data across accounts, you need to establish a log data sender and receiver:
+ **Log data sender**—gets the destination information from the recipient and lets CloudWatch Logs know that it is ready to send its log events to the specified destination. In the procedures in the rest of this section, the log data sender is shown with a fictional AWS account number of 111111111111.
+ **Log data recipient**—sets up a destination that encapsulates an Amazon Data Firehose delivery stream and lets CloudWatch Logs know that the recipient wants to receive log data. The recipient then shares the information about this destination with the sender. In the procedures in the rest of this section, the log data recipient is shown with a fictional AWS account number of 222222222222.

The example in this section uses a Firehose delivery stream with Amazon S3 storage. You can also set up Firehose delivery streams with different settings. For more information, see [Creating a Firehose Delivery Stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html).

**Note**  
The log group and the destination must be in the same AWS Region. However, the AWS resource that the destination points to can be located in a different Region.

**Note**  
A Firehose subscription filter that targets a ***same-account***, ***cross-Region*** delivery stream is supported.

**Topics**
+ [Step 1: Create a Firehose delivery stream](CreateFirehoseStream.md)
+ [Step 2: Create a destination](CreateFirehoseStreamDestination.md)
+ [Step 3: Add/validate IAM permissions for the cross-account destination](Subscription-Filter-CrossAccount-Permissions-Firehose.md)
+ [Step 4: Create a subscription filter](CreateSubscriptionFilterFirehose.md)
+ [Validating the flow of log events](ValidateLogEventFlowFirehose.md)
+ [Modifying destination membership at runtime](ModifyDestinationMembershipFirehose.md)

# Step 1: Create a Firehose delivery stream
<a name="CreateFirehoseStream"></a>

**Important**  
Before you complete the following steps, you must create an access policy so that Firehose can access your Amazon S3 bucket. For more information, see [Controlling Access](https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3) in the *Amazon Data Firehose Developer Guide*.  
All of the steps in this section (Step 1) must be done in the log data recipient account.  
US East (N. Virginia) is used in the following sample commands. Replace this Region with the correct Region for your deployment.

**To create a Firehose delivery stream to be used as the destination**

1. Create an Amazon S3 bucket. Because this example uses US East (N. Virginia), the `--create-bucket-configuration` parameter is omitted; for any other Region, include it with `LocationConstraint` set to that Region:

   ```
   aws s3api create-bucket --bucket amzn-s3-demo-bucket
   ```

1. Create the IAM role that grants Firehose permission to put data into the bucket.

   1. First, use a text editor to create a trust policy in a file `~/TrustPolicyForFirehose.json`.

      ```
      {
          "Statement": {
              "Effect": "Allow",
              "Principal": { "Service": "firehose.amazonaws.com" },
              "Action": "sts:AssumeRole",
              "Condition": { "StringEquals": { "sts:ExternalId": "222222222222" } }
          }
      }
      ```

   1. Create the IAM role, specifying the trust policy file that you just made.

      ```
      aws iam create-role \
          --role-name FirehosetoS3Role \
          --assume-role-policy-document file://~/TrustPolicyForFirehose.json
      ```

   1. The output of this command will look similar to the following. Make a note of the role name and the role ARN.

      ```
      {
          "Role": {
              "Path": "/",
              "RoleName": "FirehosetoS3Role",
              "RoleId": "AROAR3BXASEKW7K635M53",
              "Arn": "arn:aws:iam::222222222222:role/FirehosetoS3Role",
              "CreateDate": "2021-02-02T07:53:10+00:00",
              "AssumeRolePolicyDocument": {
                  "Statement": {
                      "Effect": "Allow",
                      "Principal": {
                          "Service": "firehose.amazonaws.com"
                      },
                      "Action": "sts:AssumeRole",
                      "Condition": {
                          "StringEquals": {
                              "sts:ExternalId": "222222222222"
                          }
                      }
                  }
              }
          }
      }
      ```

1. Create a permissions policy to define the actions that Firehose can perform in your account.

   1. First, use a text editor to create the following permissions policy in a file named `~/PermissionsForFirehose.json`. Depending on your use case, you might need to add more permissions to this file.

      ```
      {
          "Statement": [{
              "Effect": "Allow",
              "Action": [
                  "s3:PutObject",
                  "s3:PutObjectAcl",
                  "s3:ListBucket"
              ],
              "Resource": [
                  "arn:aws:s3:::amzn-s3-demo-bucket",
                  "arn:aws:s3:::amzn-s3-demo-bucket/*"
              ]
          }]
      }
      ```

   1. Enter the following command to associate the permissions policy that you just created with the IAM role.

      ```
      aws iam put-role-policy --role-name FirehosetoS3Role --policy-name Permissions-Policy-For-Firehose-To-S3 --policy-document file://~/PermissionsForFirehose.json
      ```

1. Enter the following command to create the Firehose delivery stream. Replace the `RoleARN` and `BucketARN` values with the role ARN and bucket ARN from the previous steps.

   ```
   aws firehose create-delivery-stream \
      --delivery-stream-name 'my-delivery-stream' \
      --s3-destination-configuration \
     '{"RoleARN": "arn:aws:iam::222222222222:role/FirehosetoS3Role", "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket"}'
   ```

   The output should look similar to the following:

   ```
   {
       "DeliveryStreamARN": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream"
   }
   ```
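
Step 2 requires the delivery stream to be in the `ACTIVE` state before you proceed. You can poll for that state from the command line. The following is a minimal bash sketch; `wait_for_stream_active` is a hypothetical helper name, not an AWS CLI command, and it assumes the AWS CLI is configured for the recipient account.

```shell
# Poll until the named delivery stream reports ACTIVE (hypothetical helper).
wait_for_stream_active() {
  local name="$1" status
  for attempt in $(seq 1 30); do
    # --query extracts just the status string from the describe output.
    status=$(aws firehose describe-delivery-stream \
        --delivery-stream-name "$name" \
        --query 'DeliveryStreamDescription.DeliveryStreamStatus' \
        --output text)
    if [ "$status" = "ACTIVE" ]; then
      echo "Delivery stream $name is active."
      return 0
    fi
    sleep 10   # a new stream usually becomes active within a minute or two
  done
  echo "Timed out waiting for $name to become active." >&2
  return 1
}
```

For example, `wait_for_stream_active "my-delivery-stream"` returns as soon as the stream created above is ready.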

# Step 2: Create a destination
<a name="CreateFirehoseStreamDestination"></a>

**Important**  
All steps in this procedure are to be done in the log data recipient account.

When the destination is created, CloudWatch Logs sends a test message to the destination on the recipient account’s behalf. When the subscription filter is active later, CloudWatch Logs sends log events to the destination on the source account’s behalf.

**To create a destination**

1. Wait until the Firehose stream that you created in [Step 1: Create a Firehose delivery stream](CreateFirehoseStream.md) becomes active. You can use the following command to check the **DeliveryStreamDescription.DeliveryStreamStatus** property.

   ```
   aws firehose describe-delivery-stream --delivery-stream-name "my-delivery-stream"
   ```

   In addition, take note of the **DeliveryStreamDescription.DeliveryStreamARN** value, because you will need to use it in a later step. Sample output of this command:

   ```
   {
       "DeliveryStreamDescription": {
           "DeliveryStreamName": "my-delivery-stream",
           "DeliveryStreamARN": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream",
           "DeliveryStreamStatus": "ACTIVE",
           "DeliveryStreamEncryptionConfiguration": {
               "Status": "DISABLED"
           },
           "DeliveryStreamType": "DirectPut",
           "VersionId": "1",
           "CreateTimestamp": "2021-02-01T23:59:15.567000-08:00",
           "Destinations": [
               {
                   "DestinationId": "destinationId-000000000001",
                   "S3DestinationDescription": {
                       "RoleARN": "arn:aws:iam::222222222222:role/FirehosetoS3Role",
                       "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket",
                       "BufferingHints": {
                           "SizeInMBs": 5,
                           "IntervalInSeconds": 300
                       },
                       "CompressionFormat": "UNCOMPRESSED",
                       "EncryptionConfiguration": {
                           "NoEncryptionConfig": "NoEncryption"
                       },
                       "CloudWatchLoggingOptions": {
                           "Enabled": false
                       }
                   },
                   "ExtendedS3DestinationDescription": {
                       "RoleARN": "arn:aws:iam::222222222222:role/FirehosetoS3Role",
                       "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket",
                       "BufferingHints": {
                           "SizeInMBs": 5,
                           "IntervalInSeconds": 300
                       },
                       "CompressionFormat": "UNCOMPRESSED",
                       "EncryptionConfiguration": {
                           "NoEncryptionConfig": "NoEncryption"
                       },
                       "CloudWatchLoggingOptions": {
                           "Enabled": false
                       },
                       "S3BackupMode": "Disabled"
                   }
               }
           ],
           "HasMoreDestinations": false
       }
   }
   ```

   It might take a minute or two for your delivery stream to show up in the active state.

1. When the delivery stream is active, create the IAM role that will grant CloudWatch Logs the permission to put data into your Firehose stream. First, use a text editor to create a trust policy in a file `~/TrustPolicyForCWL.json`. For more information about CloudWatch Logs endpoints, see [Amazon CloudWatch Logs endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/cwl_region.html).

   This policy includes an `aws:SourceArn` global condition context key that specifies the `sourceAccountId`, to help prevent the confused deputy security problem. If you don't yet know the source account ID in the first call, we recommend that you put the destination ARN in the source ARN field. In subsequent calls, set the source ARN to the actual source ARN that you gathered from the first call. For more information, see [Confused deputy prevention](Subscriptions-confused-deputy.md).

   ```
   {
       "Statement": {
           "Effect": "Allow",
           "Principal": {
               "Service": "logs.region.amazonaws.com"
           },
           "Action": "sts:AssumeRole",
           "Condition": {
               "StringLike": {
                   "aws:SourceArn": [
                       "arn:aws:logs:region:sourceAccountId:*",
                       "arn:aws:logs:region:recipientAccountId:*"
                   ]
               }
           }
       }
   }
   ```

1. Use the **aws iam create-role** command to create the IAM role, specifying the trust policy file that you just created. 

   ```
   aws iam create-role \
         --role-name CWLtoKinesisFirehoseRole \
         --assume-role-policy-document file://~/TrustPolicyForCWL.json
   ```

   The following is a sample output. Take note of the returned `Role.Arn` value, because you will need to use it in a later step.

   ```
   {
       "Role": {
           "Path": "/",
           "RoleName": "CWLtoKinesisFirehoseRole",
           "RoleId": "AROAR3BXASEKYJYWF243H",
           "Arn": "arn:aws:iam::222222222222:role/CWLtoKinesisFirehoseRole",
           "CreateDate": "2021-02-02T08:10:43+00:00",
           "AssumeRolePolicyDocument": {
               "Statement": {
                   "Effect": "Allow",
                   "Principal": {
                       "Service": "logs.region.amazonaws.com"
                   },
                   "Action": "sts:AssumeRole",
                   "Condition": {
                       "StringLike": {
                           "aws:SourceArn": [
                               "arn:aws:logs:region:sourceAccountId:*",
                               "arn:aws:logs:region:recipientAccountId:*"
                           ]
                       }
                   }
               }
           }
       }
   }
   ```

1. Create a permissions policy to define which actions CloudWatch Logs can perform in your account. First, use a text editor to create a permissions policy in a file `~/PermissionsForCWL.json`:

   ```
   {
       "Statement":[
         {
           "Effect":"Allow",
           "Action":["firehose:*"],
           "Resource":["arn:aws:firehose:region:222222222222:*"]
         }
       ]
   }
   ```

1. Associate the permissions policy with the role by entering the following command:

   ```
   aws iam put-role-policy --role-name CWLtoKinesisFirehoseRole --policy-name Permissions-Policy-For-CWL --policy-document file://~/PermissionsForCWL.json
   ```

1. After the Firehose delivery stream is in the active state and you have created the IAM role, you can create the CloudWatch Logs destination.

   1. Create the destination. This command doesn't yet associate an access policy with the destination; that association is the second of the two steps that complete destination creation. Make a note of the destination ARN that is returned in the output, because you will use it as the `destination.arn` value in a later step.

      ```
      aws logs put-destination \
          --destination-name "testFirehoseDestination" \
          --target-arn "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream" \
          --role-arn "arn:aws:iam::222222222222:role/CWLtoKinesisFirehoseRole"
      
      {
          "destination": {
              "destinationName": "testFirehoseDestination",
              "targetArn": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream",
              "roleArn": "arn:aws:iam::222222222222:role/CWLtoKinesisFirehoseRole",
              "arn": "arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination"
          }
      }
      ```

   1. After the previous step is complete, in the log data recipient account (222222222222), associate an access policy with the destination.

      This policy enables the log data sender account (111111111111) to access the destination in the log data recipient account (222222222222). You can use a text editor to put this policy in the `~/AccessPolicy.json` file:


      ```
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "",
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "111111111111"
                  },
                  "Action": "logs:PutSubscriptionFilter",
                  "Resource": "arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination"
              }
          ]
      }
      ```


   1. Enter the following command to associate the access policy with the destination. The policy defines who has write access to the destination; it must specify the **logs:PutSubscriptionFilter** action, which cross-account users call to send log events to the destination:

      ```
      aws logs put-destination-policy \
          --destination-name "testFirehoseDestination" \
          --access-policy file://~/AccessPolicy.json
      ```
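
The trust policy earlier in this step uses `region`, `sourceAccountId`, and `recipientAccountId` placeholders. One way to render the file with concrete values is a bash heredoc; the following sketch uses this section's example values (us-east-1, sender 111111111111, recipient 222222222222) and the regional CloudWatch Logs service principal:

```shell
# Render ~/TrustPolicyForCWL.json with concrete values. The Region and
# account IDs below are this section's example values; replace them with
# your own before running.
REGION="us-east-1"
SOURCE_ACCOUNT="111111111111"       # log data sender
RECIPIENT_ACCOUNT="222222222222"    # log data recipient

cat > ~/TrustPolicyForCWL.json <<EOF
{
    "Statement": {
        "Effect": "Allow",
        "Principal": { "Service": "logs.${REGION}.amazonaws.com" },
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringLike": {
                "aws:SourceArn": [
                    "arn:aws:logs:${REGION}:${SOURCE_ACCOUNT}:*",
                    "arn:aws:logs:${REGION}:${RECIPIENT_ACCOUNT}:*"
                ]
            }
        }
    }
}
EOF
```

Generating the file this way keeps the two `aws:SourceArn` entries consistent with the Region you actually deploy in.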

# Step 3: Add/validate IAM permissions for the cross-account destination
<a name="Subscription-Filter-CrossAccount-Permissions-Firehose"></a>

According to AWS cross-account policy evaluation logic, to access any cross-account resource (such as a Kinesis or Firehose stream used as the destination for a subscription filter), you must have an identity-based policy in the sending account that provides explicit access to the cross-account destination resource. For more information about policy evaluation logic, see [Cross-account policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic-cross-account.html).

You can attach the identity-based policy to the IAM role or IAM user that you are using to create the subscription filter. This policy must be present in the sending account. If you are using the Administrator role to create the subscription filter, you can skip this step and move on to [Step 4: Create a subscription filter](CreateSubscriptionFilterFirehose.md).

**To add or validate the IAM permissions needed for cross-account**

1. Enter the following command to check which IAM role or IAM user is being used to run the `aws logs` commands.

   ```
   aws sts get-caller-identity
   ```

   The command returns output similar to the following:

   ```
   {
       "UserId": "User ID",
       "Account": "sending account id",
       "Arn": "arn:aws:iam::sending account id:role/RoleName"
   }
   ```

   Make note of the value represented by *RoleName* or *UserName*.

1. Sign in to the AWS Management Console in the sending account and review the policies attached to the IAM role or IAM user returned in the output of the command you entered in step 1.

1. Verify that the policies attached to this role or user provide explicit permissions to call `logs:PutSubscriptionFilter` on the cross-account destination resource.

   The following policy provides permissions to create a subscription filter on any destination resource in a single AWS account, account `123456789012`:


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowSubscriptionFiltersOnAnyResourceInOneSpecificAccount",
               "Effect": "Allow",
               "Action": "logs:PutSubscriptionFilter",
               "Resource": [
                   "arn:aws:logs:*:*:log-group:*",
                   "arn:aws:logs:*:123456789012:destination:*"
               ]
           }
       ]
   }
   ```


   The following policy provides permissions to create a subscription filter only on a specific destination resource named `sampleDestination` in a single AWS account, account `123456789012`:


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowSubscriptionFiltersOnSpecificResource",
               "Effect": "Allow",
               "Action": "logs:PutSubscriptionFilter",
               "Resource": [
                   "arn:aws:logs:*:*:log-group:*",
                   "arn:aws:logs:*:123456789012:destination:sampleDestination"
               ]
           }
       ]
   }
   ```

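
Instead of the console, you can list the candidate policies from the CLI. The following is a sketch in bash; `list_role_policies` is a hypothetical helper name, and the role name comes from the `get-caller-identity` output in step 1:

```shell
# List both managed (attached) and inline policies for a role, so you can
# then inspect each one for the logs:PutSubscriptionFilter permission.
# For an IAM user, use the equivalent list-attached-user-policies and
# list-user-policies commands instead.
list_role_policies() {
  local role_name="$1"
  aws iam list-attached-role-policies --role-name "$role_name"
  aws iam list-role-policies --role-name "$role_name"
}
```

For example, `list_role_policies "RoleName"` returns the managed policy ARNs and inline policy names to review.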

# Step 4: Create a subscription filter
<a name="CreateSubscriptionFilterFirehose"></a>

Switch to the sending account, which is 111111111111 in this example. You will now create the subscription filter in the sending account. In this example, the filter is associated with a log group containing AWS CloudTrail events, so that every logged activity made with assumed IAM role credentials is delivered to the destination that you previously created. For more information about how to send AWS CloudTrail events to CloudWatch Logs, see [Sending CloudTrail Events to CloudWatch Logs](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html) in the *AWS CloudTrail User Guide*.

When you enter the following command, be sure you are signed in as the IAM user or using the IAM role that you added the policy for, in [Step 3: Add/validate IAM permissions for the cross-account destination](Subscription-Filter-CrossAccount-Permissions-Firehose.md).

```
aws logs put-subscription-filter \
    --log-group-name "aws-cloudtrail-logs-111111111111-300a971e" \
    --filter-name "firehose_test" \
    --filter-pattern "{$.userIdentity.type = AssumedRole}" \
    --destination-arn "arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination"
```

The log group and the destination must be in the same AWS Region. However, the destination can point to an AWS resource such as a Firehose stream that is located in a different Region.
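
To confirm that the filter was created, you can describe the subscription filters on the log group. The following is a bash sketch; the `--query` expression trims the output to the fields of interest:

```shell
# Show the name, pattern, and destination of each subscription filter on
# a log group (run in the sending account).
describe_filters() {
  aws logs describe-subscription-filters \
      --log-group-name "$1" \
      --query 'subscriptionFilters[].[filterName, filterPattern, destinationArn]' \
      --output text
}
```

For example, `describe_filters "aws-cloudtrail-logs-111111111111-300a971e"` should list `firehose_test` with the destination ARN you specified.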

# Validating the flow of log events
<a name="ValidateLogEventFlowFirehose"></a>

After you create the subscription filter, CloudWatch Logs forwards all incoming log events that match the filter pattern to the Firehose delivery stream. The data starts appearing in your Amazon S3 bucket based on the buffering interval that is set on the Firehose delivery stream. After enough time has passed, you can verify your data by checking the Amazon S3 bucket. To check the bucket, enter the following command:

```
aws s3api list-objects --bucket 'amzn-s3-demo-bucket' 
```

The output of that command will be similar to the following:

```
{
    "Contents": [
        {
            "Key": "2021/02/02/08/my-delivery-stream-1-2021-02-02-08-55-24-5e6dc317-071b-45ba-a9d3-4805ba39c2ba",
            "LastModified": "2021-02-02T09:00:26+00:00",
            "ETag": "\"EXAMPLEa817fb88fc770b81c8f990d\"",
            "Size": 198,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "firehose+2test",
                "ID": "EXAMPLE27fd05889c665d2636218451970ef79400e3d2aecca3adb1930042e0"
            }
        }
    ]
}
```

You can then retrieve a specific object from the bucket by entering the following command. Replace the value of `key` with the value you found in the previous command.

```
aws s3api get-object --bucket 'amzn-s3-demo-bucket' --key '2021/02/02/08/my-delivery-stream-1-2021-02-02-08-55-24-5e6dc317-071b-45ba-a9d3-4805ba39c2ba' testfile.gz
```

The data in the Amazon S3 object is compressed with the gzip format. You can examine the raw data from the command line using one of the following commands:

Linux:

```
zcat testfile.gz
```

macOS:

```
zcat <testfile.gz
```
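
The retrieval and decompression steps above can be wrapped in a small portable helper. The sketch below assumes the retrieved object contains a single JSON document (as the CloudWatch Logs test message does) and that `python3` is available for pretty-printing; `show_log_object` is a hypothetical helper name:

```shell
# Decompress a retrieved S3 object and pretty-print its JSON payload.
# gzip -dc reads from a named file on both Linux and macOS, so this
# avoids the platform differences of zcat.
show_log_object() {
  gzip -dc "$1" | python3 -m json.tool
}
```

For example, `show_log_object testfile.gz` prints the decompressed JSON from the object retrieved above.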

# Modifying destination membership at runtime
<a name="ModifyDestinationMembershipFirehose"></a>

You might encounter situations where you have to add or remove log senders from a destination that you own. You can use the **PutDestinationPolicy** action on your destination with a new access policy. In the following example, the previously added account **111111111111** is stopped from sending any more log data, and account **333333333333** is enabled.

1. Fetch the policy that is currently associated with the destination **testFirehoseDestination** and make a note of the **accessPolicy**:

   ```
   aws logs describe-destinations \
       --destination-name-prefix "testFirehoseDestination"
   
   {
       "destinations": [
           {
               "destinationName": "testFirehoseDestination",
               "targetArn": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream",
               "roleArn": "arn:aws:iam::222222222222:role/CWLtoKinesisFirehoseRole",
               "accessPolicy": "{\n  \"Version\" : \"2012-10-17\",\n  \"Statement\" : [\n    {\n      \"Sid\" : \"\",\n      \"Effect\" : \"Allow\",\n      \"Principal\" : {\n        \"AWS\" : \"111111111111\"\n      },\n      \"Action\" : \"logs:PutSubscriptionFilter\",\n      \"Resource\" : \"arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination\"\n    }\n  ]\n}",
               "arn": "arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination",
               "creationTime": 1612256124430
           }
       ]
   }
   ```

1. Update the policy to reflect that account **111111111111** is stopped, and that account **333333333333** is enabled. Put this policy in the `~/NewAccessPolicy.json` file:


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "333333333333"
               },
               "Action": "logs:PutSubscriptionFilter",
               "Resource": "arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination"
           }
       ]
   }
   ```


1. Use the following command to associate the policy defined in the **NewAccessPolicy.json** file with the destination:

   ```
   aws logs put-destination-policy \
       --destination-name "testFirehoseDestination" \
       --access-policy file://~/NewAccessPolicy.json
   ```

   This eventually stops the flow of log events from account **111111111111**. Log events from account **333333333333** start flowing to the destination as soon as the owner of account **333333333333** creates a subscription filter.
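
In the `describe-destinations` output shown in step 1, the `accessPolicy` field is returned as a JSON-encoded string, which is hard to read in its escaped form. The following sketch decodes it, assuming bash and `python3`; `extract_access_policy` is a hypothetical helper name:

```shell
# Decode and pretty-print the accessPolicy of the first destination in
# describe-destinations output, read from stdin.
extract_access_policy() {
  python3 -c '
import json, sys
doc = json.load(sys.stdin)
# accessPolicy is itself a JSON document serialized into a string.
policy = json.loads(doc["destinations"][0]["accessPolicy"])
print(json.dumps(policy, indent=2))
'
}
# Usage:
# aws logs describe-destinations --destination-name-prefix "testFirehoseDestination" \
#     | extract_access_policy
```

This makes it easy to diff the current policy against the new one before you call **put-destination-policy**.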