

# Data Collection

## Introduction


Amazon Web Services offers a broad set of global cloud-based products including compute, storage, databases, analytics, networking, mobile, developer tools, management tools, security and enterprise applications. These services help organizations move faster, lower IT costs and scale. This workshop provides a set of modules to automate the collection of AWS resource utilization data across multiple Management and Linked accounts. It is designed to centralize this data and make it easy to query and visualize **to help you identify and track optimization opportunities**.

Modules can be installed in any combination using the provided CloudFormation stack. They can be added or removed after initial installation by simply updating the stack. You can learn more about each module [on GitHub](https://github.com/awslabs/cid-framework/tree/main/data-collection#modules).

## Architecture


Resources for this workshop are deployed with AWS CloudFormation in several accounts:

1.  **Data Collection Stack**. This stack deploys common and module-specific resources in the **Data Collection Account**. Each data collection module is optional. We recommend using a dedicated **Data Collection Account** rather than using the Management Account for this stack.

1.  **Read Permissions Stack**. This stack is deployed in one or several Management Accounts. It deploys several entities:
   +  **Management Role Stack** - This stack deploys an AWS IAM Role granting read-only access to AWS Organizations, as well as other roles required by the modules that you elect to install.
   +  **Linked Accounts StackSet** - Some information can be collected only at the level of each individual Linked Account. This StackSet deploys a Stack to each of those accounts with an AWS IAM Role granting the permissions required by your selected modules.
   +  **Linked Accounts Role Stack for Management Account** - (optional) CloudFormation StackSets deploy resources only into Linked Accounts, not into the Management Account. This stack is needed only if you deploy modules that collect data directly from Linked Accounts and the Management Account also contains relevant resources that you want to include.

![Data Collection architecture diagram](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/architecture.png)


1. An [Amazon EventBridge](https://aws.amazon.com/eventbridge/) rule invokes the Step Function of each deployed data collection module on a configurable schedule.

1. The Step Function launches an **Account Collector** Lambda function that assumes a read role in the Management Account to retrieve the list of Linked Accounts via the AWS Organizations API.

1. The Step Function launches another Data Collection Lambda for each Linked Account (or for each Management Account, if the data is available at the Organization level).

1. This Data Collection Lambda assumes a role in the Management Account or in each Linked Account (depending on the module) and retrieves the respective data via the AWS SDK for Python. The retrieved data is then stored in an Amazon S3 bucket.

1. Once data is stored in the S3 bucket, the Step Function triggers an [AWS Glue](https://aws.amazon.com/glue/) crawler, which creates or updates the table in the Glue Data Catalog.

1. Collected data is then available to be analyzed with [Amazon Athena](https://aws.amazon.com/athena) and visualized with [Amazon QuickSight](https://aws.amazon.com/quicksight/) using the Cloud Intelligence Dashboards.

## Costs

+ Estimated costs should be under \$15 per month for a small organization.

## Authors

+ Eric Christensen, Senior Technical Account Manager, AWS
+ Julio Cesar Chaves Fernandez, Technical Account Manager, AWS
+ Stephanie Gooch, Senior Commercial Architect, AWS
+ Iakov Gan, Ex-Amazonian
+ Yuriy Prykhodko, Principal Technical Account Manager, AWS

## Contributors

+ Andy Brown, OPTICS Manager Commercial Architects IBUs, AWS
+ Xianshu Zeng, OPTICS Commercial Architect, AWS
+ Rem Baumann, OPTICS Commercial Architect, AWS
+ Yash Bindlish, Enterprise Support Manager, AWS

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.


# Deployment

Deployment of the stack consists of two steps: the first in the Management Account and the second in the Data Collection Account. If you do not have access to the Management Account, please follow this [guide](data-collection-without-org.md).

![Data Collection deployment steps diagram](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/deployment-steps.png)


## Prerequisites for deployment

+ Access to the **Management AWS Account** of the AWS Organization to deploy CloudFormation. You need permissions in the Management Account to create an IAM role and policy and deploy CloudFormation Stacks and StackSets. **Note:** If you do not have access to the Management Account, you can perform an [alternate deployment](data-collection-without-org.md) of certain modules with a manually created list of Linked Accounts.
+ Access to a Linked Account, referred to as the **Data Collection Account**.
+ Deployment can only be done in the following **regions**: eu-west-1, us-east-2, us-east-1, us-west-1, us-west-2, ap-southeast-1, eu-central-1, eu-west-2, eu-north-1, ap-southeast-2, ap-south-1, ap-northeast-3, ap-northeast-2, ap-northeast-1, ca-central-1, eu-west-3, sa-east-1. Please make sure you choose one of these regions to install the Data Collection stack.
+ Lambda [concurrent executions limit](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html) of at least 500 (1000 is recommended) in your Data Collection Account. Most accounts have the regular default of 1000, but depending on how your account was provisioned (for example, through Control Tower), it may have a default limit of only 10, which is insufficient for effective operation. You can check and increase your limit via the [Service Quotas console](https://console.aws.amazon.com/servicequotas/home/services/lambda/quotas).
+ The Trusted Advisor and Support Cases modules of Data Collection require a Business, Enterprise On-Ramp, or Enterprise Support plan. Please see more information about the prerequisites of individual modules [on GitHub](https://github.com/awslabs/cid-framework/tree/main/data-collection#modules).

## Step 1. [In Management Accounts] Deploy the Read Permissions stack


Prerequisites: Make sure [trusted access with AWS Organizations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-activate-trusted-access.html) is activated. The Management Account stack makes use of [stack sets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) configured with [service-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-concepts.html#stacksets-concepts-stackset-permission-models) to deploy stack instances to Linked Accounts in the AWS Organization. In existing Organizations this is typically already the case. For a new Organization, you can activate it by going to the [StackSets page of CloudFormation](https://console.aws.amazon.com/cloudformation/home?#/stacksets); if this access is not activated, you will see a banner with an action button to do so. **Note:** If you do not have access to the Management Account, you can perform an [alternate deployment](data-collection-without-org.md) of certain modules with a manually created list of Linked Accounts.

Log in to the Management Account and click Launch Stack to deploy the [Permission Stack](https://github.com/awslabs/cid-framework/tree/main/data-collection/deploy/deploy-data-read-permissions.yaml):

 [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/data-collection/deploy-data-read-permissions.yaml&stackName=CidDataCollectionReadPermissionsStack&param_DataCollectionAccountID=REPLACE%20WITH%20DATA%20COLLECTION%20ACCOUNT%20ID&param_AllowModuleReadInMgmt=yes&param_OrganizationalUnitID=REPLACE%20WITH%20ORGANIZATIONAL%20UNIT%20ID&param_IncludeBackupModule=no&param_IncludeBudgetsModule=no&param_IncludeComputeOptimizerModule=yes&param_IncludeCostAnomalyModule=yes&param_IncludeECSChargebackModule=no&param_IncludeInventoryCollectorModule=yes&param_IncludeRDSUtilizationModule=no&param_IncludeRightsizingModule=no&param_IncludeTAModule=yes&param_IncludeTransitGatewayModule=no&param_IncludeHealthEventsModule=yes&param_IncludeCostOptimizationHubModule=no&param_IncludeLicenseManagerModule=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/data-collection/deploy-data-read-permissions.yaml&stackName=CidDataCollectionReadPermissionsStack&param_DataCollectionAccountID=REPLACE%20WITH%20DATA%20COLLECTION%20ACCOUNT%20ID&param_AllowModuleReadInMgmt=yes&param_OrganizationalUnitID=REPLACE%20WITH%20ORGANIZATIONAL%20UNIT%20ID&param_IncludeBackupModule=no&param_IncludeBudgetsModule=no&param_IncludeComputeOptimizerModule=yes&param_IncludeCostAnomalyModule=yes&param_IncludeECSChargebackModule=no&param_IncludeInventoryCollectorModule=yes&param_IncludeRDSUtilizationModule=no&param_IncludeRightsizingModule=no&param_IncludeTAModule=yes&param_IncludeTransitGatewayModule=no&param_IncludeHealthEventsModule=yes&param_IncludeCostOptimizationHubModule=no&param_IncludeLicenseManagerModule=yes) 

### More info


1. To ensure full visibility of data across your organization accounts, in the parameters section, we recommend passing the Organization Root ID as the organizational unit parameter (OrganizationalUnitID). You can find it here: https://console.aws.amazon.com/organizations/v2/home/accounts

![Organization Root ID](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/2a-find-organisation-root-id.png)


![Data Read Role CloudFormation stack - parameters](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/2b-data-read-permissions-stack-create-parameters.png)


1. Make sure to select all modules that you want to allow to access your organization accounts' data. You can check the list of modules [on GitHub](https://github.com/awslabs/cid-framework/tree/main/data-collection#modules).

![Data Read Role CloudFormation - modules selection](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/2c-data-read-permissions-stack-create-modules.png)


1. Please make sure you specify the **Data Collection Account Id** correctly. It is not the Management Account ID; it is the ID of the dedicated Data Collection Account.

1. Click **Next** at the bottom of the **Specify stack details** stage, and then click **Next** again at the bottom of the **Configure stack options** stage to move to the **Review** stage. Click **Submit** at the end of the **Review** stage to initiate the deployment. This process takes a few minutes to complete.

## Step 2. [In Data Collection Account] Deploy the Data Collection Stack


Log in to the Data Collection Account and click Launch Stack to deploy the [Data Collection Stack](https://github.com/awslabs/cid-framework/tree/main/data-collection/deploy/deploy-data-collection.yaml).

 [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/data-collection/deploy-data-collection.yaml&stackName=CidDataCollectionStack&param_ManagementAccountID=REPLACE%20WITH%20MANAGEMENT%20ACCOUNT%20ID&param_IncludeTAModule=yes&param_IncludeRightsizingModule=no&param_IncludeCostAnomalyModule=yes&param_IncludeInventoryCollectorModule=yes&param_IncludeComputeOptimizerModule=yes&param_IncludeECSChargebackModule=no&param_IncludeRDSUtilizationModule=no&param_IncludeOrgDataModule=yes&param_IncludeBudgetsModule=yes&param_IncludeTransitGatewayModule=no&param_IncludeHealthEventsModule=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/data-collection/deploy-data-collection.yaml&stackName=CidDataCollectionStack&param_ManagementAccountID=REPLACE%20WITH%20MANAGEMENT%20ACCOUNT%20ID&param_IncludeTAModule=yes&param_IncludeRightsizingModule=no&param_IncludeCostAnomalyModule=yes&param_IncludeInventoryCollectorModule=yes&param_IncludeComputeOptimizerModule=yes&param_IncludeECSChargebackModule=no&param_IncludeRDSUtilizationModule=no&param_IncludeOrgDataModule=yes&param_IncludeBudgetsModule=yes&param_IncludeTransitGatewayModule=no&param_IncludeHealthEventsModule=yes) 

### More Info


1. Please make sure you specify the same Prefix and Role Name parameters as in Step 1, and the account ID of the Management Account (this can be a comma-separated list).

1. In the same parameters section, specify the regions from which data about resources will be collected.

![Optimization Data Collection Stack - regions parameter](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-collectors/1f-data-collection-update-compopt-regions.png)


1. Click **Next** at the bottom of the **Specify stack details** stage, and then click **Next** again at the bottom of the **Configure stack options** stage to move to the **Review** stage. Click **Submit** at the end of the **Review** stage to initiate the deployment. This process takes a few minutes to complete.

After deployment you can [check the execution state](data-collection-utilize-data.md#data-collection-utilize-data-check-execution) and then install [Advanced Dashboards](dashboard-advanced.md) for collected data.

## Step 3. (Optional) [In Data Collection Account] Apply granular control over accounts, regions, and modules


In most cases, the Data Collection framework can be run as deployed, collecting data from all accounts in your Organization for all enabled modules and the regions defined during installation. However, in some scenarios you may need to limit certain modules to a subset of accounts, OUs, and/or regions, for example excluding sandbox accounts from specific modules or restricting regional data collection for compliance reasons.

For detailed instructions on configuring inclusion lists, exclusion lists, and per-module allow/deny rules, see [Granular Account and Region Control over Data Collection](granular-data-collection-control.md).

# Granular Account and Region Control over Data Collection

## Last Updated


March 2026

## Author

+ Eric Christensen, Sr. Technical Account Manager, AWS

## Controlling Data Collection Scope


This guide explains how to customize which AWS accounts and regions the Cloud Intelligence Dashboards Data Collection framework processes. You can limit data collection to specific accounts, exclude certain accounts, or apply granular rules to accounts and OUs per module and region.

## Overview


The Data Collection framework provides three methods to control how accounts are processed:


| Method | Use Case | Scope | 
| --- | --- | --- | 
|   [Inclusion List](#inclusion-list)   |  Collect data from only specific accounts  |  All modules  | 
|   [Exclusion List](#exclusion-list)   |  Exclude specific accounts from collection  |  All modules  | 
|   [Granular Control](#granular-execution-control)   |  Fine-grained control by module, account, OU, and region  |  Per module  | 

All configuration files should be placed in your CID data S3 bucket. The `CID-DC-account-collector` Lambda defines default locations for these files in its environment variables, which you may customize as needed. By default, the prefix is `account-list/`. These files are not created or deployed automatically; you must create them manually, per the instructions below.

## Inclusion and Exclusion Lists


These simple list-based methods apply globally to all data collection modules.

### Inclusion List


Use an inclusion list when you want to collect data from only a specific set of accounts, rather than using AWS Organizations to retrieve the list of all accounts in the Organization. This is useful if you have restrictions on accessing AWS Organizations. See also [Data Collection without AWS Organizations](data-collection-without-org.md).

The required data fields are: `account_id`, `account_name`, and `payer_id`. Note that the inclusion list can be either a CSV or a JSON structure, but you must ensure the file suffix matches the format.

 **File location (default):** `s3://<your-cid-bucket>/account-list/account-list.csv` or `account-list.json` 

 **CSV format:** 

```
account_id,account_name,payer_id
123456789012,Production Account,111122223333
234567890123,Development Account,111122223333
345678901234,Staging Account,111122223333
```

 **JSON format:** 

```
[
  {"account_id": "123456789012", "account_name": "Production Account", "payer_id": "111122223333"},
  {"account_id": "234567890123", "account_name": "Development Account", "payer_id": "111122223333"}
]
```

When an inclusion list is present, the framework collects data only from the accounts in this file and ignores your AWS Organization structure.
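
As an illustration, either format can be loaded into the same structure, choosing the parser by file suffix as required above. This is a hedged sketch with hypothetical helper names, not the framework's actual code:

```python
import csv
import io
import json

def load_inclusion_list(filename, text):
    """Parse an inclusion list (CSV or JSON, chosen by file suffix) into a
    list of dicts with account_id, account_name, and payer_id keys."""
    if filename.endswith(".json"):
        records = json.loads(text)
    elif filename.endswith(".csv"):
        records = list(csv.DictReader(io.StringIO(text)))
    else:
        raise ValueError("inclusion list must end in .csv or .json")
    # Account IDs stay strings so leading zeros survive.
    return [
        {"account_id": r["account_id"].strip(),
         "account_name": r["account_name"].strip(),
         "payer_id": r["payer_id"].strip()}
        for r in records
    ]

csv_text = """account_id,account_name,payer_id
123456789012,Production Account,111122223333
234567890123,Development Account,111122223333
"""
accounts = load_inclusion_list("account-list.csv", csv_text)
```

Whichever format you pick, keep the three field names exactly as documented; the collector reads them by name, not by position.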

### Exclusion List


Use an exclusion list when you have access to AWS Organizations but want to universally exclude specific accounts. For example, you may want to exclude one or more accounts that are used for security purposes. Note that this functionality does not process OUs. If you wish to exclude an entire OU, either specify each of its accounts in the file or consider the **Granular Execution Control** option described later.

 **File location (default):** `s3://<your-cid-bucket>/account-list/excluded-linked-account-list.csv` 

 **Format:** 

The format is a simple comma-separated list of 12-digit AWS account IDs (**Note:** ensure any leading 0s are included):

```
123456789012,234567890123,345678901234
```

List account IDs separated by commas on a single line. The framework will skip these accounts when iterating through your Organization.
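
Before uploading, a quick local check (hypothetical helper, not the framework's code) can confirm the line parses cleanly and every ID is 12 digits with leading zeros intact:

```python
def parse_exclusion_list(text):
    """Split the single-line, comma-separated list into account IDs,
    keeping them as strings so leading zeros are preserved."""
    ids = [a.strip() for a in text.strip().split(",") if a.strip()]
    bad = [a for a in ids if not (a.isdigit() and len(a) == 12)]
    if bad:
        raise ValueError(f"not 12-digit account IDs: {bad}")
    return ids

print(parse_exclusion_list("012345678901,234567890123"))
# → ['012345678901', '234567890123']
```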

### Choosing Between Inclusion and Exclusion

+ Use an **inclusion list** when you have a small number of accounts you want to monitor and/or you do not have access to AWS Organizations.
+ Use an **exclusion list** when you want to monitor most accounts but exclude a few (such as sandbox, security, or test accounts).

**Note**  
If both files exist, the inclusion list takes precedence and the exclusion list is ignored.
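
The documented precedence can be sketched as follows (illustrative names only; the actual account-collector implementation may differ):

```python
def effective_accounts(org_accounts, inclusion=None, exclusion=None):
    """Mirror the documented precedence: an inclusion list wins outright;
    otherwise the Organization account list is filtered by the exclusion
    list; with neither file, all Organization accounts are processed."""
    if inclusion:  # inclusion list present: exclusion list is ignored
        return list(inclusion)
    excluded = set(exclusion or [])
    return [a for a in org_accounts if a not in excluded]

org = ["111111111111", "222222222222", "333333333333"]
# Exclusion only: one account is skipped while iterating the Organization.
print(effective_accounts(org, exclusion=["222222222222"]))
# Both files present: the inclusion list is used as-is.
print(effective_accounts(org, inclusion=["999999999999"],
                         exclusion=["999999999999"]))
```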

## Granular Execution Control


For advanced scenarios, granular execution control allows you to define allow and deny account-based rules at the module level, with optional region filtering. This approach follows IAM-style policy evaluation, where deny rules always take precedence over allow rules, regardless of the order in which they are defined. For example, if your first rule denies a specific module, account, and region combination and your tenth rule allows that combination (or vice versa), the combination is still denied unless you remove the deny rule.

 **File location (default):** `s3://<your-cid-bucket>/account-list/granular-execution-config.csv` 

### File Format


```
module,policy_type,principal,regions,payload
```


| Field | Description | Required | 
| --- | --- | --- | 
|   `module`   |  The module name (e.g., `budgets`, `inventory`, `health-events`)  |  Yes  | 
|   `policy_type`   |   `A` for Allow or `D` for Deny  |  Yes  | 
|   `principal`   |  An AWS account ID or Organization Unit ID (e.g., `123456789098`, `ou-xxxx-xxxxxxxx` or `r-xxxx`)  |  Yes  | 
|   `regions`   |  Colon-separated list of regions (e.g., `us-east-1:ap-southeast-2`). Leave empty for all regions.  |  No  | 
|   `payload`   |  Reserved for future use. Leave empty.  |  No  | 

### Examples


 **Deny a specific account for the budgets module (all regions):** 

```
budgets,D,123456789012,,
```

 **Deny a specific account for inventory in us-east-1 only:** 

```
inventory,D,234567890123,us-east-1,
```

**Note**  
For inventory, rules currently apply to all inventory sub-modules together.

 **Allow an entire OU for the backup module:** 

```
backup,A,ou-abc1-12345678,,
```

 **Allow an OU but deny a specific account within it:** 

```
backup,A,ou-abc1-12345678,,
backup,D,345678901234,,
```

 **Deny multiple regions for a specific account:** 

```
compute-optimizer,D,456789012345,us-east-1:eu-west-1,
```

### Policy Evaluation


The granular configuration follows IAM-style policy evaluation:

1.  **Deny always wins** — If any deny rule matches an account, that account is excluded regardless of any allow rules.

1.  **OU expansion** — When you specify an OU ID, the rule applies to all accounts within that OU and its child OUs.

1.  **Region filtering** — When regions are specified, the rule applies only to those regions. An empty regions field means all regions.

This allows you to create broad allow rules (such as allowing an entire OU) and then carve out specific exceptions with deny rules.
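
To make the evaluation concrete, here is a minimal Python sketch of the rules above (illustrative only; function and variable names are hypothetical, not the framework's actual code):

```python
import csv
import io

def is_allowed(rules, module, account_id, ou_chain, region):
    """Evaluate granular rules for one module/account/region combination.
    ou_chain lists the OU IDs (and root ID) above the account, so a rule
    on a parent OU matches every account beneath it. Deny always wins."""
    principals = {account_id, *ou_chain}
    allowed = False
    for rule in rules:
        if rule["module"] != module or rule["principal"] not in principals:
            continue
        regions = rule["regions"].split(":") if rule["regions"] else []
        if regions and region not in regions:
            continue
        if rule["policy_type"] == "D":
            return False  # a matching deny excludes the account outright
        if rule["policy_type"] == "A":
            allowed = True
    return allowed

config = """\
backup,A,ou-abc1-12345678,,
backup,D,345678901234,,
inventory,A,ou-abc1-12345678,us-east-1:eu-west-1,
"""
fields = ["module", "policy_type", "principal", "regions", "payload"]
rules = list(csv.DictReader(io.StringIO(config), fieldnames=fields))

# The OU-wide allow applies, except to the explicitly denied account.
print(is_allowed(rules, "backup", "111122223333",
                 ["ou-abc1-12345678", "r-abcd"], "us-east-1"))  # True
print(is_allowed(rules, "backup", "345678901234",
                 ["ou-abc1-12345678", "r-abcd"], "us-east-1"))  # False
```

Note that the deny check returns immediately: once any deny rule matches, no later allow rule can rescue the combination, which matches the "deny always wins" behavior described above.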

### Precedence


When a granular execution config file exists and contains rules for a module:

1. The granular config is used for that module.

1. Any inclusion and exclusion lists (described above) are ignored for that module.

If the granular config file exists but has no rules for a specific module, that module falls back to the standard inclusion/exclusion list behavior.

## Configuration Summary



| File | Path | Purpose | 
| --- | --- | --- | 
|  Inclusion list  |   `account-list/account-list.csv` or `.json`   |  Collect only these accounts  | 
|  Exclusion list  |   `account-list/excluded-linked-account-list.csv`   |  Exclude these accounts from Organization  | 
|  Granular config  |   `account-list/granular-execution-config.csv`   |  Per-module allow/deny rules  | 

### Quick Start


1. Navigate to your CID data S3 bucket (typically `cid-data-<account-id>`).

1. Create the `account-list/` folder if it doesn’t exist.

1. Upload the appropriate configuration file based on your requirements.

1. The next scheduled data collection run will use your configuration.

No changes to the deployed CloudFormation stack are required. The account collector Lambda automatically detects and applies these configuration files.

## Troubleshooting

+  **Configuration not applied:** Verify the file is in the correct S3 path and the filename matches exactly.
+  **Accounts still being collected:** Check for typos in module names and account IDs. Account IDs must be 12 digits.
+  **OU rules not working:** Ensure you are using the full OU ID format (e.g., `ou-xxxx-xxxxxxxx`) or root ID (`r-xxxx`).
+  **Unexpected behavior:** Review CloudWatch Logs for the account-collector Lambda to see which configuration files were detected and how rules were applied.
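
You can apply the checks above to a granular config file locally before uploading it. This is a hedged sketch: the module set and region pattern shown are assumptions, so extend them to match the modules you actually deploy:

```python
import re

# Assumed module names for illustration; extend to your deployed modules.
MODULES = {"budgets", "inventory", "health-events", "backup",
           "compute-optimizer"}

ACCOUNT = re.compile(r"^\d{12}$")
OU = re.compile(r"^(ou-[a-z0-9]+-[a-z0-9]+|r-[a-z0-9]+)$")
REGION = re.compile(r"^[a-z]{2}(-[a-z]+)+-\d$")

def lint_granular_config(text):
    """Return a list of (line_number, message) problems found."""
    problems = []
    for n, line in enumerate(text.strip().splitlines(), start=1):
        parts = line.split(",")
        if len(parts) != 5:
            problems.append((n, "expected 5 comma-separated fields"))
            continue
        module, policy, principal, regions, _payload = parts
        if module not in MODULES:
            problems.append((n, f"unknown module {module!r}"))
        if policy not in ("A", "D"):
            problems.append((n, "policy_type must be A or D"))
        if not (ACCOUNT.match(principal) or OU.match(principal)):
            problems.append((n, "principal must be a 12-digit account ID or OU/root ID"))
        for region in filter(None, regions.split(":")):
            if not REGION.match(region):
                problems.append((n, f"suspect region {region!r}"))
    return problems

print(lint_granular_config("budgets,D,123456789012,,\nbadmodule,X,12345,,"))
```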

# Utilize Data

## Check execution state


The Data Collection stack uses Step Functions to pull the data. You can log in to the Data Collection Account and check the [Step Functions console](https://console.aws.amazon.com/states/home?#/statemachines). You can filter by the prefix (default: `CID-DC-`) and make sure they all run successfully. You may need to scroll the "State machines" table to the right to see the "Succeeded" and "Failed" columns.

These Step Functions are scheduled to run for the first time 30 minutes after deployment and then every 14 days by default. You can trigger a new execution or check the function logs if needed.

### More


![Step Functions executions status](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/4a-step-functions-executions-check.png)


Now you can list the tables created in the Athena database and use a simple SELECT query to inspect the results.

![Athena tables - data check](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/5a-athena-data-check-tables-query.png)


For example:

```
SELECT * FROM "cost_anomaly_data" LIMIT 10;
```

![Athena data check query results](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/5b-athena-data-check-query-results.png)


## Utilizing Your Data


Now that you have pulled together optimization data, there are different ways in which you can analyze and visualize it to make infrastructure optimization decisions.

### Visualization of Trusted Advisor data with Amazon QuickSight


You can visualize Trusted Advisor data with the [Trusted Advisor Organizational (TAO) Dashboard](trusted-advisor-dashboard.md). To deploy the TAO Dashboard, please follow the [TAO Dashboard deployment steps](trusted-advisor-dashboard.md#trusted-advisor-dashboard-prerequisites) and specify the organizational data collection bucket created in this lab as a source.

### Visualization of Compute Optimizer data with Amazon QuickSight


You can visualize Compute Optimizer data with the [Compute Optimizer Dashboard](compute-optimizer-dashboard.md). To deploy it, please follow the [Compute Optimizer deployment steps](compute-optimizer-dashboard.md), which also deliver Athena tables and views.

### Visualization of AWS Budgets data with Amazon QuickSight


You can visualize AWS Budgets data with the [AWS Budgets Dashboard](budgets-dashboard.md). To deploy it, please follow the [AWS Budgets Dashboard deployment steps](budgets-dashboard.md), which also deliver Athena tables and views.

### AWS Organization Data and The Cost Intelligence Dashboard


You can integrate the organizational structure, OUs, and tags specified in AWS Organizations into the dashboards. Learn more about how to add organizational taxonomy to the Cloud Intelligence Dashboards by following the [Add Organizational Taxonomy](add-org-taxonomy.md) guide.

### RDS Graviton Eligibility and Savings Estimation with Amazon QuickSight


You can get insights into Graviton migration savings opportunities with the [Graviton Savings Dashboards](graviton-savings-dashboard.md). To deploy them, please follow the [Graviton Savings Dashboards deployment steps](graviton-savings-dashboard.md), which also deliver Athena tables and views.

### Snapshots and AMIs


When an AMI is created, it takes a snapshot of the volume, which must then be kept in the account while the AMI is in use. Once the AMI is released, the snapshot can no longer be used, but it still incurs costs. Using this query we can identify snapshots whose AMI is still available ("AMI Available"), those whose AMI was removed ("AMI Removed"), and those that fall outside this scope ("Not AMI"). Data must be collected and the crawler must have finished running before this query can be run.

#### Optimization Data Snapshots and AMIs Query


```
  SELECT *,
  CASE
  WHEN snap_ami_id = imageid THEN
  'AMI Available'
  WHEN snap_ami_id LIKE 'ami%' THEN
  'AMI Removed'
  ELSE 'Not AMI'
  END AS status
    FROM (
  (SELECT snapshotid AS snap_id,
      volumeid as volume,
      volumesize,
      starttime,
      Description AS snapdescription,
      year,
      month,
      ownerid,

      CASE
      WHEN substr(Description, 1, 22) = 'Created by CreateImage' THEN
      split_part(Description,' ', 5)
      WHEN substr(Description, 2, 11) = 'Copied snap' THEN
      split_part(Description,' ', 9)
      WHEN substr(Description, 1, 22) = 'Copied for Destination' THEN
      split_part(Description,' ', 4)
      ELSE ''
      END AS "snap_ami_id"
  FROM "optimization_data"."snapshot_data"
  ) AS snapshots
  LEFT JOIN
      (SELECT imageid,
      name,
      description,
      state,
      rootdevicetype,
      virtualizationtype
      FROM "optimization_data"."ami_data") AS ami
          ON snapshots.snap_ami_id = ami.imageid )
```

There is an option to add pricing data to this query. This assumes you have deployed the Pricing module.

#### Optimization Data Snapshots and AMIs with OD pricing data


 **Athena** 

1. Go to AWS Athena

1. Go to *Saved queries* at the top of the screen

1. Run the *pricing_ec2_create_table* query to create a pricing table

1. In *Saved queries*, run the *pricing_region_names* query to create a normalized region name table

1. In *Saved queries*, run *inventory_snapshot_connected_to_ami_with_pricing* to create a view

1. Run the below to see your data

   ```
       SELECT * FROM "optimization_data"."snapshot_ami_quicksight_view" limit 10;
   ```

#### Optimization Data Snapshots and AMIs with CUR data


You must have access to your Cost and Usage data in the same account and region so you can join it through Athena.

 **Athena** 

1. Go to AWS Athena

1. Go to *Saved queries* at the top of the screen

1. In *Saved queries*, run *inventory_snapshot_connected_to_ami_with_cur* to create a view

1. Change the table name placeholder to your Cost and Usage Report database and table name, and the date filter placeholder to look at a specific month/year

1. You will see the price of all snapshots and how much they cost based on their connection to AMIs

Please note that if you delete a snapshot that is part of a lineage, you may only make a small saving.

### EBS Volumes and Trusted Advisor Recommendations


Trusted Advisor identifies idle and underutilized volumes. This query joins the data together so you can see what portion of your volumes are flagged. Data must be collected and the crawler must have finished running before this query can be run.

This section requires you to have the **Inventory Module** and the **Trusted Advisor Module** deployed.

#### Optimization Data EBS Volumes and Trusted Advisors Query


```
    SELECT * FROM
        "optimization_data"."ebs_data"
    LEFT JOIN
    (select "volume id","volume name", "volume type","volume size", "monthly storage cost" ,accountid, category, region, year,month
    from
    "optimization_data".ta_data ) ta
    ON "ebs_data"."volumeid" = "ta"."volume id" and "ebs_data"."year" = "ta"."year" and "ebs_data"."month" = "ta"."month"
```

There is an option to add pricing data to this query.

#### Optimization Data EBS Volumes and Trusted Advisor with pricing data


 **Athena** 

1. Go to AWS Athena and run the below

1. Go to **Saved queries** at the top of the screen

1. Run the **ec2-view** Query to create a view of ebs and ta data

1. Run the **ec2_pricing** Query to create a pricing table

1. In **Saved queries** run the **region_names** Query to create a normalized region name table

1. In **Saved queries** run **ebs-ta-query-pricing** to create a view

1. Run the below to see your data

   ```
       SELECT * FROM "optimization_data"."ebs_quicksight_view" limit 10;
   ```

The section below brings in opportunities to move EBS volumes to gp3.

#### EBS Volumes and Trusted Advisor moving to gp3


1. Go to AWS Athena and run the below

1. Go to **Saved queries** at the top of the screen

1. Run the **ec2-view** Query to create a view of ebs and ta data

1. Run the **ec2_pricing** Query to create a pricing table

1. In **Saved queries** run the **region_names** Query to create a normalized region name table

1. In **Saved queries** run **gp3-opportunity** to create a view

### AWS EBS Volumes and Snapshots


If you wish to see, holistically, which snapshots are associated with which volumes, this query combines the two data sources. It can help you identify snapshots you could archive using [Amazon EBS Snapshots Archive](https://aws.amazon.com/ebs/snapshots/faqs/#Snapshots_Archive).

#### Optimization Data Snapshots with EBS


```
WITH data as (
        Select volumeid,
          snapshotid,
          ownerid "account_id",
          cast(  replace(split(split(starttime, '+') [ 1 ], '.') [ 1 ], 'T', ' ') as timestamp) as start_time,
          CAST("concat"("year", '-', "month", '-01') AS date) "data_date",
          sum(volumesize) "volume_size"
        from "optimization_data"."snapshot_data"
        group by 1,2,3,4,5
      ),
      latest AS(
        Select max(data_date) "latest_date" from data
      ),
      ratio AS(
        Select distinct volumeid, data_date, latest_date,
          count(distinct snapshotid) AS "snapshot_count_per_volume"
        from data
        LEFT JOIN latest ON latest.latest_date = data_date
          WHERE volumeid like 'vol%' and data_date = latest_date
        group by 1,2,3
      )
      select data.volumeid,
        data.snapshotid,
        account_id,
        data.data_date,
        start_time,
        volume_size,
        snapshot_count_per_volume,
          CASE WHEN data.volumeid NOT LIKE 'vol%' THEN 1 ELSE dense_rank() OVER (partition by data.volumeid ORDER by start_time) END AS "snapshot_lineage"
        from data
        Left JOIN ratio ON ratio.volumeid = data.volumeid
        ORDER by volumeid, snapshot_lineage
```
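The `snapshot_lineage` column above ranks each volume's snapshots by start time, so the oldest snapshot in a lineage is rank 1. The dense-rank logic can be illustrated with a minimal Python sketch over hypothetical snapshot records (not real collected data):

```python
from collections import defaultdict

def snapshot_lineage(snapshots):
    """Rank each volume's snapshots by start time (dense rank, oldest = 1).

    `snapshots` is a list of (volume_id, snapshot_id, start_time) tuples.
    Snapshots whose volume id does not start with 'vol' get lineage 1,
    mirroring the CASE expression in the Athena query above.
    """
    by_volume = defaultdict(list)
    for vol, snap, start in snapshots:
        by_volume[vol].append((start, snap))

    lineage = {}
    for vol, items in by_volume.items():
        if not vol.startswith("vol"):
            for _, snap in items:
                lineage[snap] = 1
            continue
        # dense_rank(): equal start times share a rank, ranks have no gaps
        ranks = {t: i + 1 for i, t in enumerate(sorted({t for t, _ in items}))}
        for start, snap in items:
            lineage[snap] = ranks[start]
    return lineage

snaps = [
    ("vol-1", "snap-a", "2023-01-01 00:00:00"),
    ("vol-1", "snap-b", "2023-02-01 00:00:00"),
    ("vol-1", "snap-c", "2023-03-01 00:00:00"),
    ("deleted", "snap-d", "2023-01-15 00:00:00"),
]
print(snapshot_lineage(snaps))
# {'snap-a': 1, 'snap-b': 2, 'snap-c': 3, 'snap-d': 1}
```

A snapshot with a high lineage rank sits at the end of a long chain, which is where deleting a single snapshot tends to yield only a small saving.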

If you wish to connect to your Cost and Usage Report for snapshot costs, use the query below:

#### Optimization Data Snapshots with EBS and CUR


```
      WITH cur_mapping AS (
        SELECT DISTINCT
        split_part(line_item_resource_id,'/',2) AS "snapshot_id",
        line_item_usage_account_id AS "linked_account_id",
        CAST("concat"("year", '-', "month", '-01') AS date) "billing_period", sum(line_item_usage_amount) "snapshot_size",
        sum(line_item_unblended_cost) "snapshot_cost"
        FROM "athenacurcfn_mybillingreport"."mybillingreport"
        WHERE (CAST("concat"("year", '-', "month", '-01') AS date) = ("date_trunc"('month', current_date) - INTERVAL  '1' MONTH)) AND (line_item_resource_id <> '') AND (line_item_line_item_type LIKE '%Usage%') AND (line_item_product_code = 'AmazonEC2') AND (line_item_usage_type LIKE '%EBS:Snapshot%')
        group by 1,2,3
      ),
      snapshot_data AS (
        Select volumeid,
          snapshotid,
          ownerid "account_id",
          cast(
            replace(split(split(starttime, '+') [ 1 ], '.') [ 1 ], 'T', ' ') as timestamp
          ) as start_time,
          CAST("concat"("year", '-', "month", '-01') AS date) "data_date",
          sum(volumesize) "volume_size"
        from "optimization_data"."snapshot_data"
        group by 1,2,3,4,5
      ),
      data AS (
        SELECT DISTINCT volumeid,
          snapshotid,
          account_id,
          billing_period,
          data_date,
          start_time,
          sum(snapshot_size) AS snapshot_size,
          sum(snapshot_cost) AS snapshot_cost,
          sum(volume_size) AS "volume_size"
        FROM snapshot_data
        LEFT JOIN cur_mapping ON cur_mapping.snapshot_id = snapshotid AND cur_mapping.linked_account_id = account_id
        group by 1,2,3,4,5,6
      ),
      latest AS(
        Select max(data_date) "latest_date"
            from data
      ),
      ratio AS(
        Select distinct volumeid, data_date, latest_date,
          count(distinct snapshotid) AS "snapshot_count_per_volume",
          sum(snapshot_cost) AS "all_snapshot_cost_per_volume",
          sum(snapshot_size) AS "all_snapshot_size_per_volume"
        from data
        LEFT JOIN latest ON latest.latest_date = data_date
        WHERE volumeid like 'vol%' and data_date = latest_date
        group by 1,2,3
      )
      select data.volumeid,
        data.snapshotid,
        account_id,
        data.data_date,
        start_time,
        billing_period,
        snapshot_size,
        volume_size,
        all_snapshot_cost_per_volume,
        all_snapshot_size_per_volume,
        snapshot_count_per_volume,
        CASE WHEN data.volumeid NOT LIKE 'vol%' THEN 1 ELSE dense_rank() OVER (partition by data.volumeid ORDER by start_time) END AS "snapshot_lineage"
        from data
        LEFT JOIN ratio ON ratio.volumeid = data.volumeid
```
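The `ratio` CTE above rolls snapshot rows up to per-volume totals. A small Python sketch of that aggregation, using hypothetical rows rather than real CUR data:

```python
from collections import defaultdict

def per_volume_totals(rows):
    """Aggregate snapshot rows per volume, like the `ratio` CTE above.

    `rows` is a list of dicts with volumeid, snapshotid, snapshot_cost,
    snapshot_size (hypothetical values). Volumes whose id does not start
    with 'vol' (deleted source volumes) are skipped, matching the
    "volumeid like 'vol%'" filter.
    """
    totals = defaultdict(lambda: {"snapshots": set(), "cost": 0.0, "size": 0.0})
    for r in rows:
        if not r["volumeid"].startswith("vol"):
            continue
        t = totals[r["volumeid"]]
        t["snapshots"].add(r["snapshotid"])
        t["cost"] += r["snapshot_cost"]
        t["size"] += r["snapshot_size"]
    return {
        vol: {
            "snapshot_count_per_volume": len(t["snapshots"]),
            "all_snapshot_cost_per_volume": round(t["cost"], 2),
            "all_snapshot_size_per_volume": round(t["size"], 2),
        }
        for vol, t in totals.items()
    }

rows = [
    {"volumeid": "vol-1", "snapshotid": "snap-a", "snapshot_cost": 1.5, "snapshot_size": 30},
    {"volumeid": "vol-1", "snapshotid": "snap-b", "snapshot_cost": 0.5, "snapshot_size": 10},
    {"volumeid": "deleted", "snapshotid": "snap-c", "snapshot_cost": 2.0, "snapshot_size": 40},
]
print(per_volume_totals(rows))
```

Volumes with a high total snapshot cost and a long snapshot chain are good candidates for the archive tier.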

### ECS Chargeback


This report shows the costs associated with ECS tasks running on EC2 instances within a cluster.

#### Athena Configuration


1. Navigate to the Athena service

1. Select the "optimization data" database

1. In **Saved Queries** find ** "cluster\$1metadata\$1view" ** Change "BU" to the tag you wish to do chargeback for

1. Click the **Run** button

1. In **Saved Queries** find ** "ec2\$1cluster\$1costs\$1view" ** - Replace \$1\$1CUR\$1 in the "FROM" clause with your CUR table name - For example, "curdb"."ecs\$1services\$1clusters\$1data"

1. Click the **Run** button

1. In **Saved Queries** find ** "bu\$1usage\$1view" ** - Replace \$1\$1CUR\$1 in the "FROM" clause with your CUR table name - For example, "curdb"."ecs\$1services\$1clusters\$1data"

1. Click the **Run** button

Now your views are created you can run your report

 **Manually execute billing report** 
+ In **Saved Queries** find ** "ecs\$1chargeback\$1report" ** - Replace "bu\$1usage\$1view.month" value with the appropriate month desired for the report - For example, a value of "2" returns the charges for February
+ Click the **Run** button

 **Example Output** 

![\[Example output of query results of ECS chargeback query\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/Example_output.png)


Breakdown:
+ task_usage: total memory resources reserved (in GB) by all tasks over the billing period (that is, monthly)
+ percent: task_usage / total_usage
+ ec2_cost: monthly cost for the EC2 instance in $
+ services: name of the service
+ servicearn: ARN of the service
+ value: value of the specified tag for the ECS service (could be App, TeamID, etc.)
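The arithmetic behind the report is straightforward proportional allocation: each service's share of the memory reserved by its tasks determines its share of the cluster's EC2 cost. A sketch with hypothetical numbers (service names and values are illustrative only):

```python
def ecs_chargeback(ec2_cost, task_usage_by_service):
    """Allocate a cluster's monthly EC2 cost proportionally to the
    memory (GB) reserved by each service's tasks.

    percent = task_usage / total_usage
    charge  = percent * ec2_cost
    """
    total_usage = sum(task_usage_by_service.values())
    return {
        service: {
            "task_usage": usage,
            "percent": round(usage / total_usage, 4),
            "charge": round(ec2_cost * usage / total_usage, 2),
        }
        for service, usage in task_usage_by_service.items()
    }

# Hypothetical cluster: $1,000 of monthly EC2 cost, three tagged services
report = ecs_chargeback(1000.0, {"web": 600.0, "api": 300.0, "batch": 100.0})
print(report["web"]["charge"])   # 600.0
```

The real report performs this calculation in Athena using the views created above; the sketch only shows the allocation logic.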

### AWS Transit Gateway Chargeback


AWS Transit Gateway data transfer costs are billed to the central networking account and then allocated proportionally to the end-usage accounts. The proportion for each account is calculated from Amazon CloudWatch bytes-in/bytes-out metrics at each Transit Gateway attachment. The total central data transfer cost is calculated for the central networking account from the Cost and Usage Report, and each account's chargeback amount is its proportional share of that total.
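The proportional allocation described above can be sketched in a few lines. The account IDs and byte counts below are hypothetical; the real report derives them from the collected CloudWatch attachment metrics and the CUR:

```python
def tgw_chargeback(total_dt_cost, bytes_by_account):
    """Split the central Transit Gateway data transfer cost across
    accounts in proportion to bytes in + bytes out at their attachments.

    `bytes_by_account` maps account id -> (bytes_in, bytes_out).
    """
    totals = {acct: b_in + b_out for acct, (b_in, b_out) in bytes_by_account.items()}
    grand_total = sum(totals.values())
    return {
        acct: round(total_dt_cost * t / grand_total, 2)
        for acct, t in totals.items()
    }

usage = {
    "111111111111": (6_000_000, 2_000_000),   # 8 MB total traffic
    "222222222222": (1_000_000, 1_000_000),   # 2 MB total traffic
}
print(tgw_chargeback(500.0, usage))
# {'111111111111': 400.0, '222222222222': 100.0}
```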

#### Athena Configuration


1. Navigate to the Athena service and open **Saved Queries**.

1. Select your database where you have your Cost and Usage Report

1. In **Saved Queries**, find **tgw_chargeback_cur**

1. Replace `CURDatabase` with your database name in tgw_chargeback_cur. For example:

```
"cur"."cost_and_usage_report"
```

The CloudWatch data collection is automated for all regions. However, if you want to charge back only a subset of regions, specify them in the `"product_location LIKE '%US%'"` line.

1. Click the **Run** button

1. In **Saved Queries**, find **tgw_chargeback_cw**

1. Select the **optimization_data** database

1. Replace `CURDatabase` with your database name in tgw_chargeback_cw.

1. Click the **Run** button

Now your views are created and you can run your report.

# Update
Update

## Update Process


You can check the version of your existing Data Collection in the description of the CloudFormation stack. If it does not contain any version reference, then it is earlier than v3 and you need to [update from legacy versions](#data-collection-update-from-legacy-versions).

Please check the full [Change Log on GitHub](https://github.com/awslabs/cid-framework/blob/main/data-collection/CHANGELOG.md).

### Step 1. [In Management/Payer Account] Deploy permissions stack


1. The URL to the latest Permissions Stack CloudFormation Template is https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/data-collection/deploy-data-read-permissions.yaml. Copy this URL and keep it at hand, you will use it to update the current stack.

1. Login to the Management/Payer Account. Open respective CloudFormation Stack and update it by providing the URL that you copied above.

### Step 2. [In Data Collection Account] Update Data Collection stack


1. The URL to the latest Data Collection CloudFormation is https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/data-collection/deploy-data-collection.yaml. Copy this URL and keep it at hand, you will use it to update the current Data Collection stack.

1. Login to the Data Collection Account. Open respective CloudFormation Stack and update it by providing the URL that you copied above.

## Update from legacy versions


The Data Collection is updated over time to increase performance and add new data collection modules. If you installed this lab via the Well-Architected Labs site, you have version 1 or 2. This procedure will help you update either version to the latest v3. You can check the description of the Data Collection stack: if it does not contain a version reference (for example, 3.0.0), you are on a legacy version.

Watch [demo of update process from legacy version to v3](https://www.youtube.com/watch?v=vpRNkwmuOEM) 

### Step 1. [In Management/Payer Account] Deploy permissions stack


#### More


1. Login to your Management/Payer Account and get the Organization Root ID from the [AWS Organizations console](https://console.aws.amazon.com/organizations/v2/home/accounts).

![\[Organization Root ID\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/2a-find-organisation-root-id.png)


1. Install the Permission Stack in your Management/Payer Account by clicking Launch Stack below

 [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/data-collection/deploy-data-read-permissions.yaml&stackName=CidDataCollectionDataReadPermissionsStack&param_DataCollectionAccountID=REPLACE%20WITH%20DATA%20COLLECTION%20ACCOUNT%20ID&param_AllowModuleReadInMgmt=yes&param_OrganizationalUnitID=REPLACE%20WITH%20ORGANIZATIONAL%20UNIT%20ID&param_IncludeBudgetsModule=no&param_IncludeComputeOptimizerModule=no&param_IncludeCostAnomalyModule=no&param_IncludeECSChargebackModule=no&param_IncludeInventoryCollectorModule=no&param_IncludeRDSUtilizationModule=no&param_IncludeRightsizingModule=no&param_IncludeTAModule=no&param_IncludeTransitGatewayModule=no](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/data-collection/deploy-data-read-permissions.yaml&stackName=CidDataCollectionDataReadPermissionsStack&param_DataCollectionAccountID=REPLACE%20WITH%20DATA%20COLLECTION%20ACCOUNT%20ID&param_AllowModuleReadInMgmt=yes&param_OrganizationalUnitID=REPLACE%20WITH%20ORGANIZATIONAL%20UNIT%20ID&param_IncludeBudgetsModule=no&param_IncludeComputeOptimizerModule=no&param_IncludeCostAnomalyModule=no&param_IncludeECSChargebackModule=no&param_IncludeInventoryCollectorModule=no&param_IncludeRDSUtilizationModule=no&param_IncludeRightsizingModule=no&param_IncludeTAModule=no&param_IncludeTransitGatewayModule=no) 

**Note**  
To ensure full visibility of data across your organization's accounts, we recommend passing the Organization Root ID as the organizational unit parameter (OrganizationalUnitID) in the parameters section, so that the data read role stack is deployed to all accounts in your organization and the data collectors can access data from all of them.

![\[Data Read Role CloudFormation stack - parameters\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/2b-data-read-permissions-stack-create-parameters.png)


1. Make sure to select all modules that you want to allow access to your organization accounts data.

![\[Data Read Role CloudFormation - modules selection\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/2c-data-read-permissions-stack-create-modules.png)


### Step 2. [Data Collection Account] Update Data Collection stack


#### More


1. The URL to the latest Data Collection CloudFormation is https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/data-collection/deploy-data-collection.yaml. Copy this URL and keep it at hand, you will use it to update the current Data Collection stack.

1. Make a note of the value set on your existing Data Collection stack regions parameter. Previous versions of the Data Collection stack would have the regions in the parameter "ComputeOptimizerRegions":  
![\[Compute Optimizer Regions parameter\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-collectors/0-data-collection-stack-current-region-parameter.png)

1. Find the existing data collection stack. The default name of the data collection stack is **OptimizationDataCollectionStack**.  
![\[Optimization Data Collection Stack\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-collectors/1a-data-collection-stack.png)

1. Start the Data Collection stack update process by clicking on the "Update" button:  
![\[Optimization Data Collection Stack detailed view update button\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-collectors/1b-data-collection-stack-detailed-view.png)

1. Choose the option to "Replace current template", using the "Amazon S3 URL" option, and paste the URL of the latest Data Collection CloudFormation template you copied before.  
![\[Optimization Data Collection Stack replace template entering S3 URL to new template\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-collectors/1d-data-collection-update-replace-template-S3-url.png)

1. In the **Specify stack details** stage parameters section, you will find the parameter "Role Prefix" with the value "CID-DC-". We recommend using this new prefix to avoid conflicts with any existing resources when updating to the latest version of the Data Collection stack.  
![\[Optimization Data Collection Stack update - role prefix\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-collectors/1e-data-collection-update-role-prefix.png)

1. In the same parameters section, update the regions from which data about resources will be collected. Specify at least the same regions your existing Data Collection stack uses.  
![\[Optimization Data Collection Stack update - regions parameter\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-collectors/1f-data-collection-update-compopt-regions.png)

1. Click **Next** at the bottom of the **Specify stack details** stage, and then, click **Next** again at the bottom of the **Configure stack options** stage to move to the **Review** stage. Click **Submit** at the end of the **Review** stage to initiate the update. This process will take a few minutes until completion.  
![\[Optimization Data Collection Stack update - submit action\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-collectors/1g-data-collection-update-submit.png)

Once updated, the new version of the Data Collection stack will be visible in the stack description.

![\[Optimization Data Collection Stack update complete\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-collectors/1h-data-collection-update-complete.png)


### Step 3. [In Data Collection Account] Run data migration script


#### More


The workshop was updated to support multiple management accounts, and minor adjustments have been made to align the folder structure across the data collection modules. To migrate the historical data in your S3 bucket, you can run a migration [script](https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/data-collection/source/s3_files_migration.py). The script accepts the parameter `<your_bucket_name>`, which is the S3 bucket of the data collection stack; the default in legacy versions of this lab is `costoptimizationdata<account_id>`.

You can run the following commands from AWS CloudShell:

```
curl https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/data-collection/source/s3_files_migration.py -o s3_files_migration.py
python3 s3_files_migration.py <your_bucket_name>
```

### Step 4. [In Management/Payer Account] Clean up


#### More


 **Delete read role stacks from v2 (OptimizationDataRoleStack)** 

Before version 3.0, two stacks were deployed in the Management account:
+ Read role stack for Management account specific data.
+ (Optional) Read role stack for collector-specific data.

1. Find the current data read permissions stacks by navigating to the [CloudFormation console](https://us-east-1.console.aws.amazon.com/cloudformation/home)   
![\[Find current data read role stacks\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/1a-find-current-data-read-permissions-stacks.png)

1. Delete the Management data read role stack. The default name of the stack is **OptimizationManagementDataRoleStack**.  
![\[Management account data read role stack\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/1b-mgmt-acc-mgmt-read-role-stack-delete.png)

1. Confirm you want to delete the stack.  
![\[Confirm deletion of Management account data read permissions stack\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/1c-mgmt-acc-mgmt-read-role-stack-delete-confirm.png)

1. Delete data read role stack, if installed. The default name of the stack is **OptimizationDataRoleStack**.  
![\[Collectors data read role stack\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/1d-mgmt-acc-data-read-role-stack-delete.png)

1. Confirm you want to delete the stack.

![\[Confirm deletion of the collectors data read permissions stack\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/1e-mgmt-acc-data-read-role-stack-delete-confirm.png)


 **Delete data read role StackSet from v2 (OptimizationDataRoleStack)** 

Before version 3.0, data read permissions were deployed as a StackSet in the Management account with the default name **OptimizationDataRoleStack**.

1. Find the Organizational Unit the stackset is targeting by looking at the stackset details:  
![\[Optimization Data Role Stack Organizational Unit IDs\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/1f-data-read-permissions-stackset-info-ouid.png)

1. Find the data read role permission stackset in the CloudFormation StackSet console.  
![\[CloudFormation StackSets console - search data read role stackset\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/1g-find-data-read-permissions-stackset.png)

1. Delete the stacks deployed by the stackset. You can select the stackset and select the "Delete stacks from StackSet" menu option.  
![\[Delete stacks from stackset menu option\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/1h-data-read-permissions-stackset-delete-stacks.png)

1. Enter the Organizational Unit ID you found in step 1 and select all the regions the stackset is targeting. Usually, the stackset will deploy to a single region, for example, us-east-1. Click **Next** to move to the **Review** stage, and then click **Submit** to start deleting the stacks from the stackset. **NOTE** The deletion process can take a few minutes to complete.  
![\[Delete stacks from stackset - parameters\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/1i-data-read-permissions-stackset-delete-stacks-parameters.png)

1. After the stackset’s stacks are deleted, return to the StackSets page, select the data read roles stackset, and use the menu option **Delete StackSet** to delete it.  
![\[Delete StackSet menu option\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/1j-data-read-permissions-stackset-delete.png)

1. Confirm you want to delete the stacks in the set.  
![\[Confirmation to delete stackset\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data-collection/update-process/data-read-permissions/1k-data-read-permissions-stackset-delete-confirm.png)

### Step 5. [In Data Collection Account] Update Dashboards


#### More


After updating from previous versions, you might need to update the dashboards of the Advanced group to take the change of S3 path into account:

```
cid-cmd -vvv update --force --recursive --dashboard-id compute-optimizer-dashboard
cid-cmd -vvv update --force --recursive --dashboard-id ta-organizational-view
```

When prompted, carefully verify the default S3 paths (mainly S3 bucket names) and adjust them as needed.

## Post Update


After deployment you can [check the execution state](data-collection-utilize-data.md#data-collection-utilize-data-check-execution) and refresh the data by triggering the execution of Step Functions.

# Teardown
Teardown

**Note**  
Please make sure you empty the S3 buckets before deleting the CidDataCollectionStack.

1. In the Data Collection Account, go to S3 and search for bucket names that contain "costoptimization" or "cid-data", select the radio button next to the bucket name, and then click **Empty**.

1. Navigate to CloudFormation console and search for the Stack named **CidDataCollectionStack**, select the radio button next to the Stack and click **Delete**.

1. In the Management Account, go to the CloudFormation console and search for the Stack named **CidDataCollectionReadPermissionsStack**, select the radio button next to the Stack, and click **Delete**. This deletes the IAM Role created in the Management Account that is assumed by Lambda for reading data.