

# Dashboards

## Foundational


The most popular dashboards, based on the AWS Cost and Usage Report. We recommend starting with these.


| Dashboard | Preview | Links | Target Audience |
| --- | --- | --- | --- |
|   [**CUDOS Dashboard**](cudos-cid-kpi.md#foundational-cudos-dashboard) provides high-level details and operational insights, with the ability to drill down to resource-level granularity. In the CUDOS dashboard you can find auto-generated cost optimization recommendations and actionable insights that your FinOps practitioners, product owners, and engineering teams can use out of the box. It allows you to quickly identify spikes and uncover anomalies in your AWS usage by highlighting particular resources that can be optimized.  |   ![\[CUDOS Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/CUDOS.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=cudos) [Details](cudos-cid-kpi.md#foundational-cudos-dashboard) [Deploy](deployment-in-global-regions.md) [Feedback](feedback-support.md)   |  Product owners, Finance, FinOps, DevOps, Engineering teams  | 
|   [**Cost Intelligence Dashboard**](cudos-cid-kpi.md#foundational-cid-dashboard) is a customizable and accessible dashboard to help create the foundation of your own cost management and optimization (FinOps) tool. Executives, directors, and other individuals within the CFO’s line of business, or who manage cloud financials for an organization, will find the Cost Intelligence Dashboard easy to use and relevant to their use cases. Little to no technical knowledge or understanding of AWS services is required.  |   ![\[Cost Intelligence Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/CID.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=cid) [Details](cudos-cid-kpi.md#foundational-cid-dashboard) [Deploy](deployment-in-global-regions.md) [Feedback](feedback-support.md)   |  Executives, Finance/Procurement  | 
|   [**The KPI and Modernization Dashboard**](cudos-cid-kpi.md#foundational-kpi-dashboard) helps your organization combine DevOps and IT infrastructure with Finance and the C-Suite to grow more efficiently and effectively on AWS. This dashboard lets you set and track modernization and optimization goals such as percent On-Demand, Spot adoption, and Graviton usage. By enabling every line of business to create and track usage goals, and your cloud center of excellence to make recommendations organization-wide, you can grow more efficiently and innovate more quickly on AWS.  |   ![\[KPI Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/KPI.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=kpi) [Details](cudos-cid-kpi.md#foundational-kpi-dashboard) [Deploy](deployment-in-global-regions.md) [Feedback](feedback-support.md)   |  Product owners, Finance, FinOps, DevOps, Engineering teams  | 

## Advanced


These dashboards require the advanced [Data Collection Stack](data-collection.md).


| Dashboard | Preview | Links | Target Audience |
| --- | --- | --- | --- |
|   [**Trusted Advisor Organizational (TAO) Dashboard**](trusted-advisor-dashboard.md) provides visibility into all cost optimization opportunities and automatically identified idle resources, together with risks and flagged resources highlighted by AWS Trusted Advisor across the Security, Reliability, and Performance pillars. TAO provides historical trends, allowing you to track the results of your optimizations.  |   ![\[TAO Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/TAO.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=tao) [Details](trusted-advisor-dashboard.md) [Deploy](trusted-advisor-dashboard.md) [Blog](https://aws.amazon.com/blogs/mt/a-detailed-overview-of-trusted-advisor-organizational-dashboard/) [Feedback](feedback-support.md)   |  Product owners, FinOps, DevOps, Engineering, SRE, Security teams  | 
|   [**Compute Optimizer Dashboard**](compute-optimizer-dashboard.md) helps your organization visualize and trace right-sizing recommendations from AWS Compute Optimizer. These recommendations help you identify cost savings opportunities for over-provisioned resources and see the operational risk from under-provisioned ones.  |   ![\[COD Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/COD.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=compute-optimizer-dashboard) [Details](compute-optimizer-dashboard.md) [Deploy](compute-optimizer-dashboard.md) [Feedback](feedback-support.md)   |  Product owners, FinOps, DevOps, Engineering teams  | 
|   [**Cost Anomaly Dashboard**](cost-anomaly-dashboard.md) helps you to track and visualize findings from AWS Cost Anomaly Detection.  |   ![\[CAD Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cad.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=aws-cost-anomalies) [Details](cost-anomaly-dashboard.md) [Deploy](cost-anomaly-dashboard.md) [Feedback](feedback-support.md)   |  Product owners, FinOps, DevOps, Engineering teams  | 
|   [**Extended Support Cost Projection**](extended-support.md) helps you visualize the cost projection for RDS and EKS Extended Support charges based on your current resource usage.  |   ![\[Extended Support Cost Projection Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/rdsxtsuppcp.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=extended-support-cost-projection) [Details](extended-support.md) [Deploy](extended-support.md) [Feedback](feedback-support.md)   |  Product owners, FinOps, DevOps, Engineering teams  | 
|   [**Graviton Savings Dashboard**](graviton-savings-dashboard.md) helps you to quantify your Graviton opportunities and existing usage for EC2, RDS, OpenSearch, and ElastiCache Services.  |   ![\[Graviton Opportunities Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/graviton_ec2_existingusage.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=graviton-savings&sheet=default) [Details](graviton-savings-dashboard.md) [Deploy](graviton-savings-dashboard.md) [Blog](https://aws.amazon.com/blogs/compute/accelerate-your-aws-graviton-adoption-with-the-aws-graviton-savings-dashboard/) [Feedback](graviton-savings-dashboard.md#graviton-savings-dashboard-feedback-support)   |  Product owners, FinOps, DevOps, Engineering teams  | 
|   [**Health Events Dashboard**](health-events-dashboard.md) helps you to track past, current and upcoming events that are published to your Personal Health Dashboard.  |   ![\[Health Events Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/he_dashboard.png)   |   [Demo](https://cid.workshops.aws.dev/demo/?dashboard=health-events-dashboard) [Details](health-events-dashboard.md) [Deploy](health-events-dashboard.md) [Feedback](health-events-dashboard.md#health-events-dashboard-feedback-support)   |  Product owners, DevOps, Engineering, SRE, Security teams  | 
|   [**AWS News Feeds**](news-feeds.md) displays several AWS feeds, including What’s New, Blog Posts, Videos, and Security Bulletins.  |   ![\[AWS News Feeds\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/aws-feeds.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=aws-feeds&sheet=default) [Details](news-feeds.md) [Deploy](news-feeds.md) [Feedback](feedback-support.md)   |  Product owners, FinOps, DevOps, Engineering teams  | 
|   [**AWS Budgets Dashboard**](budgets-dashboard.md) helps you plan and track your cloud spending across the organization.  |   ![\[AWS Budgets Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/budgets_view.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=aws-budgets&sheet=default) [Details](budgets-dashboard.md) [Deploy](budgets-dashboard.md) [Feedback](feedback-support.md)   |  Product owners, FinOps, DevOps, Engineering teams  | 
|   [**AWS Support Cases Radar Dashboard**](support-cases-radar.md) allows you to consolidate, track and analyze AWS Support cases across all linked accounts and multiple AWS organizations in a single place. With the optional Summarization plugin, powered by Amazon Bedrock, the dashboard can also provide executive summaries of case communications, presenting the issue, actions and outcomes in a clear and concise manner.  |   ![\[Support Cases Radar Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/support_cases_radar_dashboard.png)   |   [Demo](https://cid.workshops.aws.dev/demo/?dashboard=support-cases-radar) [Details](support-cases-radar.md) [Deploy](support-cases-radar.md) [Feedback](feedback-support.md)   |  Product owners, FinOps, DevOps, Engineering, CCOE, Security teams  | 
|   [**ResilienceVue Dashboard**](resiliencevue-dashboard.md) provides visibility into the resiliency of your current AWS workloads, as assessed by AWS Resilience Hub. ResilienceVue helps you view the resilience posture of your applications across AWS accounts and Regions within your AWS Organization.  |   ![\[ResilienceVue Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/rv_demo.png)   |   [Demo](https://cid.workshops.aws.dev/demo/?dashboard=resiliencevue) [Details](resiliencevue-dashboard.md) [Deploy](resiliencevue-dashboard.md) [Feedback](feedback-support.md)   |  Product owners, DevOps, Engineering, SRE, Security teams  | 
|   [**AWS End User Computing (EUC) Dashboard**](euc-dashboard.md) provides insights into your Amazon WorkSpaces usage, costs, and performance metrics. It helps you optimize your WorkSpaces deployment and understand user behavior patterns.  |   ![\[EUC Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/euc/executive_summary.png)   |   [Demo](https://cid.workshops.aws.dev/demo/?dashboard=euc-dashboard) [Details](euc-dashboard.md) [Deploy](euc-dashboard.md) [Feedback](feedback-support.md)   |  IT Administrators, FinOps, Product Owners  | 
|   [**Data Collection Monitor Dashboard**](data-collection-monitor.md) provides instrumentation log data to help you monitor the executions of the various modules of the Data Collection Framework. Starting with version 3.11, most Data Collection modules emit basic log data to track module execution and potential errors encountered. This dashboard reads that instrumentation data to present multiple views to track historical executions as well as troubleshoot any issues.  |   ![\[Data Collection Monitor\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data_collection_monitor_01.png)   |   [Details](data-collection-monitor.md) [Deploy](data-collection-monitor.md) [Feedback](feedback-support.md)   |  IT Administrators, FinOps  | 
|   The [**Media Services Insights Hub**](media-services-insights.md) dashboard provides comprehensive visibility into AWS Elemental Media Services usage, costs, and performance metrics. It leverages AWS Cost and Usage Report (CUR) data to deliver actionable insights for optimizing media workflows and managing costs across your media infrastructure.  |   ![\[Media Services Insights Hub\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/media_services_insights_01.png)   |   [Details](media-services-insights.md) [Deploy](media-services-insights.md#media-services-insights-deployment) [Feedback](feedback-support.md)   |  Product owners, DevOps, Engineering, SRE  | 

## Additional


Other dashboards that require additional data sources, or that address niche use cases based on AWS Cost and Usage Report data.


| Dashboard | Preview | Links | Target Audience |
| --- | --- | --- | --- |
|   [**CORA Dashboard**](cora-dashboard.md) (Cost Optimization Recommended Actions Dashboard) is based on data from [AWS Cost Optimization Hub](https://docs.aws.amazon.com/cost-management/latest/userguide/cost-optimization-hub.html), providing rightsizing, migration to Graviton, idle resource, Savings Plans, and Reserved Instances recommendations. This dashboard can help you trace your cost optimization opportunities over time and group recommendations by workload owners.  |   ![\[CORA\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cora.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=cora) [Details](cora-dashboard.md) [Deploy](cora-dashboard.md#cora-dashboard-deploy) [Feedback](feedback-support.md)   |  Executives, Finance/Procurement, FinOps, Product Owners  | 
|   [Cloud Intelligence Dashboard for Azure](https://catalog.workshops.aws/cidforazure/en-US), a solution that allows you to create Azure cost visualizations and reports in Amazon Quick Sight.  |   ![\[CID for Azure\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cidazure.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=cid-for-azure) [Details](https://aws.amazon.com/blogs/modernizing-with-aws/cloud-intelligence-dashboard-for-azure) [Deploy](https://catalog.workshops.aws/cidforazure/en-US/03-setup) [Feedback](https://catalog.workshops.aws/cidforazure/en-US#feedback)   |  Executives, Finance/Procurement, FinOps, Product Owners  | 
|   [Cloud Intelligence Dashboard for GCP](https://catalog.workshops.aws/cid-gcp-cost-dashboard), a solution that enables exporting billing data from Google Cloud Platform (GCP) and building visual representations and reports using Amazon Quick Sight.  |   ![\[CID for GCP\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cidgcp.png)   |   [Demo](https://cid.workshops.aws.dev/demo/?dashboard=gcp-cost-dashboard) [Details](https://catalog.workshops.aws/cid-gcp-cost-dashboard) [Deploy](https://catalog.workshops.aws/cid-gcp-cost-dashboard) [Feedback](mailto:cloud-intelligence-dashboards-gcp@amazon.com)   |  Executives, Finance/Procurement, FinOps, Product Owners  | 
|   [**FOCUS Dashboard**](focus-dashboard.md) is an open source and customizable dashboard which provides pre-defined visuals to get actionable insights from FOCUS data in Amazon Quick Sight. It allows you to quickly get started with using FOCUS specification in your organization  |   ![\[FOCUS\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/focus_dashboard.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=focus-dashboard&sheet=default) [Details](focus-dashboard.md) [Deploy](focus-dashboard.md#focus-dashboard-deployment) [Feedback](feedback-support.md)   |  Executives, Finance/Procurement, FinOps, Product Owners  | 
|   [**AWS Marketplace Single Pane of Glass (SPG) Dashboard**](marketplace-dashboard.md) is the one-stop dashboard for AWS Marketplace buyers in procurement, FinOps, and legal, to visualize AWS Marketplace spend and usage. It enables AWS Marketplace buyers to get insights into Marketplace subscriptions without navigating to multiple AWS consoles, requiring specific IAM privileges, or having deep technical proficiency on AWS services. It covers all Marketplace subscriptions including self-service public offers, private offers and all types of Marketplace offerings (software, data and services).  |   ![\[Marketplace Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/aws-marketplace-spg.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=aws-marketplace) [Details](https://catalog.workshops.aws/aws-marketplace-buyer/en-US/costmanagement/analysis/quicksight) [Deploy](marketplace-dashboard.md) [Feedback](marketplace-dashboard.md#marketplace-dashboard-feedback-support)   |  AWS Marketplace Buyers, Procurement, Sourcing, Finance, FinOps, Legal, GRC, IT, BizApps  | 
|   [**Kubecost Containers Cost Allocation Dashboard**](kubecost-containers-dashboard.md) is a solution that enables DevOps teams, FinOps teams, and other stakeholders to get insights into Kubernetes in-cluster cost and usage based on data collected from Kubecost. It provides teams with the ability to allocate cost to Kubernetes workloads and to apply showback and chargeback for multi-tenant Kubernetes clusters. It also allows teams to understand cluster efficiency, with the goal of right-sizing container requests.  |   ![\[Kubecost - Containers Cost Allocation Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/kubecost_containers_cost_allocation.png)   |   [Demo](https://cid.workshops.aws.dev/demo/?dashboard=containers-cost-allocation) [Details](kubecost-containers-dashboard.md) [Deploy](https://github.com/awslabs/containers-cost-allocation-dashboard) [Feedback](kubecost-containers-dashboard.md#kubecost-containers-dashboard-feedback-support)   |  DevOps, FinOps, Cloud Engineering, Product Management  | 
|   [**SCAD Containers Cost Allocation Dashboard**](scad-containers-dashboard.md) is a solution that enables DevOps teams, FinOps teams, and other stakeholders to get insights into EKS and ECS in-cluster cost based on data from CUR’s Split Cost Allocation Data (SCAD) feature. It provides teams with the ability to allocate cost to EKS and ECS workloads and to apply showback and chargeback for multi-tenant EKS and ECS clusters.  |   ![\[SCAD - Containers Cost Allocation Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_containers_cost_allocation.png)   |   [Demo](https://cid.workshops.aws.dev/demo/?dashboard=scad-containers-cost-allocation) [Details](scad-containers-dashboard.md) [Deploy](scad-containers-dashboard-deployment.md) [Feedback](scad-containers-dashboard.md#scad-containers-dashboard-feedback-support)   |  DevOps, FinOps, Cloud Engineering, Product Management  | 
|   The [**Sustainability Proxy Metrics and Carbon Emissions Dashboard**](sustainability-proxy-metrics-dashboard.md) helps you look for opportunities to reduce your sustainability impact by making changes to your AWS infrastructure. This dashboard shows resource use in key areas defined in the Sustainability Pillar of the Well-Architected Framework, helping you implement an impact-aware architecture. It also acts as a customizable view of carbon emissions data.  |   ![\[Sustainability Proxy Metrics Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/SPMD.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=sustainability-proxy-metrics) [Details](sustainability-proxy-metrics-dashboard.md) [Deploy](sustainability-proxy-metrics-dashboard.md) [Feedback](sustainability-proxy-metrics-dashboard.md#sustainability-proxy-metrics-dashboard-feedback-support)   |  Product owners, FinOps, DevOps, Engineering teams  | 
|   [**Trends Dashboard**](trends-dashboard.md) provides financial and technology organizational leaders access to proactive trends, signals, insights, and anomalies to understand and analyze their AWS cloud usage.  |   ![\[Trends Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/Trends.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=trends-dashboard) [Details](trends-dashboard.md) [Deploy](trends-dashboard.md) [Feedback](feedback-support.md)   |  Executives, Finance/Procurement  | 
|   [**Data Transfer Dashboard**](datatransfer-dashboard.md) helps customers gain insights into their data transfer. It analyzes any data transfer that incurs a cost, such as outbound/internet, inter-Region, and inter-AZ data transfer, across all services.  |   ![\[Data Transfer Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data_transfer_dashboard.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=datatransfer-cost-analysis-dashboard) [Details](datatransfer-dashboard.md) [Deploy](datatransfer-dashboard.md) [Feedback](feedback-support.md)   |  Network Team  | 
|   [**Amazon Connect Cost Insights Dashboard**](connect-cost-insight.md) provides an overview of [Amazon Connect service](https://aws.amazon.com/pm/connect) spend and usage. Key features include usage trend overviews, visualizations for voice and telecom services, granular data insights, call cost breakdowns, and quick search capabilities.  |   ![\[Amazon Connect Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/Amazon_Connect_dash.png)   |   [Demo](https://cid.workshops.aws.dev/demo?dashboard=amazon-connect-cost-insight-dashboard) [Details](connect-cost-insight.md) [Deploy](connect-cost-insight.md) [Feedback](connect-cost-insight.md#connect-cost-insights-feedback-support)   |  FinOps, Telecom Engineering, Product Management  | 
|   [**Config Resource Compliance Dashboard**](config-resource-compliance-dashboard.md) shows the inventory of your AWS resources, along with their compliance status, across multiple AWS accounts and regions by leveraging your AWS Config data  |   ![\[Config Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/dashboards/crcd-compliance-1.png)   |   [Demo](https://cid.workshops.aws.dev/demo/?dashboard=cid-crcd) [Details](config-resource-compliance-dashboard.md) [Deploy](config-resource-compliance-dashboard.md) [Feedback](feedback-support.md)   |  Security teams, SecOps, DevOps, Product Owners, Engineering teams  | 

# Foundational

Foundational Dashboards require CUR [Data Exports](data-exports.md) or Legacy CUR.

This section covers the following dashboards:

Contents
+  [CUDOS Dashboard](cudos-cid-kpi.md#foundational-cudos-dashboard) 
+  [Cost Intelligence Dashboard (CID)](cudos-cid-kpi.md#foundational-cid-dashboard) 
+  [KPI Dashboard](cudos-cid-kpi.md#foundational-kpi-dashboard) 

# CUDOS, CID, KPI

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.

## Introduction


In this section we provide a description of the [CUDOS Dashboard](#foundational-cudos-dashboard), [Cost Intelligence Dashboard (CID)](#foundational-cid-dashboard), and [KPI Dashboard](#foundational-kpi-dashboard), which use data exclusively from the [AWS Cost and Usage Report](https://aws.amazon.com/aws-cost-management/aws-cost-and-usage-reporting/).

All these dashboards are based on the AWS Cost & Usage Report (CUR) that contains the most comprehensive set of AWS cost and usage data available, including additional metadata about AWS services, pricing, Reserved Instances, and Savings Plans. The CUR itemizes usage at the account or Organization level by product code, usage type and operation. These costs can be further organized by enabling Cost Allocation tags and Cost Categories.

![\[Recommended Deployment Architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/basic_deployment_arch.png)


1.  [AWS Data Exports](https://aws.amazon.com/aws-cost-management/aws-data-exports/) delivers the Cost & Usage Report 2.0 (CUR 2.0) daily to an [Amazon S3 Bucket](https://aws.amazon.com/s3/) in the Management Account.

1.  An [Amazon S3](https://aws.amazon.com/s3/) replication rule automatically copies the export data to a dedicated S3 bucket in the Data Collection Account.

1.  [Amazon Athena](https://aws.amazon.com/athena/) allows querying data directly from the S3 bucket using an [AWS Glue](https://aws.amazon.com/glue/) table schema definition.

1.  [Amazon Quick Sight](https://aws.amazon.com/quicksight/) creates datasets from [Amazon Athena](https://aws.amazon.com/athena/), refreshes them daily, and caches them in [SPICE](https://docs.aws.amazon.com/quicksight/latest/user/spice.html) (Super-fast, Parallel, In-memory Calculation Engine).

1. User teams (Executives, FinOps, Engineers) can access Cloud Intelligence Dashboards in [Amazon Quick Sight](https://aws.amazon.com/quicksight/). Access is secured through [AWS IAM](https://aws.amazon.com/iam/), [AWS IAM Identity Center](https://aws.amazon.com/iam/identity-center/) (formerly AWS SSO), and optional [Row Level Security](https://catalog.workshops.aws/awscid/en-US/customizations/row-level-security).

If you do not have access to the Management account, you can also deploy CID for a subset of Linked Accounts.
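To make step 3 of the data flow concrete, the kind of SQL that Amazon Athena runs against the CUR 2.0 table can be sketched as follows. This is a minimal illustration, not part of the CID deployment: the database and table names (`cid_cur`, `cur2`) are assumptions, so substitute the names your deployment creates; verify column names against the CUR 2.0 table dictionary for your export.

```python
# Sketch only: database/table names below are illustrative assumptions,
# not necessarily the names created by your CID deployment.
def build_monthly_service_cost_query(database: str = "cid_cur", table: str = "cur2") -> str:
    """Build an Athena SQL statement that totals unblended cost
    per service per billing period, as the dashboards conceptually do."""
    return (
        f'SELECT billing_period, line_item_product_code, '
        f'SUM(line_item_unblended_cost) AS unblended_cost '
        f'FROM "{database}"."{table}" '
        f'GROUP BY billing_period, line_item_product_code '
        f'ORDER BY unblended_cost DESC'
    )

query = build_monthly_service_cost_query()
```

A statement like this could be submitted from the Athena console or via the Athena `StartQueryExecution` API; Amazon Quick Sight issues equivalent queries when it refreshes the SPICE datasets backing the dashboards.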

## CUDOS Dashboard


### Authors

+ Yuriy Prykhodko, AWS Principal Technical Account Manager
+ Timur Tulyaganov, Ex-Amazonian

### Contributors

+ Alee Whitman, Principal Solutions Architect
+ Iakov Gan, Ex-Amazonian
+ Judith Lehner, Senior Technical Account Manager
+ Udi Dahan, Senior Technical Account Manager
+ Mylen Rath, Senior Technical Account Manager
+ Christopher Morris, Senior Technical Account Manager
+ Xianshu Zeng, Senior FinOps Commercial Architect
+ Oleksandr Moskalenko, Ex-Amazonian
+ Natalia Cummings, Senior FinOps Commercial Architect
+ Adam Richter, Senior Optimization Solutions Architect
+ Sabith Venkitachalapathy, Senior Storage Specialist SA
+ Brenno Passanha, Senior Technical Account Manager

The CUDOS (Cost and Usage Dashboard Operations Solution) is an in-depth, granular, and recommendation-driven dashboard to help customers dive deep into cost and usage and to fine-tune efficiency. Executives, directors, and other individuals within the CIO or CTO line of business or who manage DevOps and IT organizations will find the CUDOS Dashboard highly detailed and tailored to solve their use cases. Out-of-the-box benefits of the CUDOS dashboard include (but are not limited to):
+ Use the built-in tag explorer to group and filter cost and usage by your tags.
+ View resource-level detail such as your hourly AWS Lambda or individual Amazon S3 bucket costs.
+ Get alerted to service-level areas of focus such as top 3 On-Demand database instances by cost.

### Demo Dashboard


Explore a [sample CUDOS Dashboard](https://cid.workshops.aws.dev/demo?dashboard=cudos) 

### Deploy


Follow [deployment guide](deployment-in-global-regions.md) 

### Learn more

+  [What’s New in CUDOS versions 5.1 to 5.3](https://www.youtube.com/watch?v=3LuKzbFxuz8) 
+  [What’s new in CUDOS Dashboard v 4.77](https://www.youtube.com/watch?v=5IWAoKujOqo) 
+  [CUDOS Insights Learning Series on YouTube](https://www.youtube.com/watch?v=2N24ERSwPE4&list=PLevHThZeBjf85JyGgZGep0ib9eE-53A2T) 

### Changelog

+  [Changelog](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/changes/CHANGELOG-cudos.md) 

## Cost Intelligence Dashboard (CID)


### Authors

+ Alee Whitman, Principal Solutions Architect

### Contributors

+ Aaron Edell, Head of Accelerators, AWS
+ Aidin Khosrowshahi, AWS Sr. Technical Account Manager
+ Yuriy Prykhodko, AWS Principal Technical Account Manager
+ Arun Santhosh, Principal Specialist SA (Amazon Quick Sight)
+ Kareem Syed-Mohammed, Senior Product Manager - Technical (Amazon Quick Sight)
+ Timur Tulyaganov, Ex-Amazonian

 [Watch a 10-minute video overview of the CID dashboard](https://d3h9zoi3eqyz7s.cloudfront.net/Cost/Videos/DashboardCostIntelligence.mp4) 

The Cost Intelligence Dashboard is a customizable and accessible dashboard to help create the foundation of your own cost management and optimization (FinOps) tool. Executives, directors, and other individuals within the CFO’s line of business, or who manage cloud financials for an organization, will find the Cost Intelligence Dashboard easy to use and relevant to their use cases. Little to no technical knowledge or understanding of AWS services is required. Out-of-the-box benefits of the CID include (but are not limited to):
+ Create chargeback or showback reports for internal business units, accounts, or cost centers.
+ Track how Savings Plans (SP), Reserved Instances (RI), and Spot Instance usage has impacted your unit metrics such as your average hourly cost of Amazon EC2.
+ Keep track of which accounts or internal business units receive savings and when RIs and SPs expire.

### Demo Dashboard


Explore a [sample Cost Intelligence Dashboard](https://cid.workshops.aws.dev/demo?dashboard=cid) 

### Deploy


Follow [deployment guide](deployment-in-global-regions.md) 

## KPI Dashboard


### Authors

+ Alee Whitman, Principal Solutions Architect

### Contributors

+ Aaron Edell, Head of Accelerators, AWS
+ Alex Head, Sr. Manager, AWS OPTICS
+ Georgios Rozakis, AWS Sr. Technical Account Manager
+ Oleksandr Moskalenko, Ex-Amazonian
+ Timur Tulyaganov, Ex-Amazonian
+ Yash Bindlish, AWS Enterprise Support Manager
+ Yuriy Prykhodko, AWS Principal Technical Account Manager
+ Anjali Dhanerwal, AWS Senior Technical Account Manager

The KPI and Modernization Dashboard helps your organization combine DevOps and IT infrastructure with Finance and the C-Suite to grow more efficiently and effectively on AWS. This dashboard lets you set and track modernization and optimization goals such as percent OnDemand, Spot adoption, and Graviton usage. By enabling every line of business to create and track usage goals, and your cloud center of excellence to make recommendations organization-wide, you can grow more efficiently and innovate more quickly on AWS. Out-of-the-box benefits of the KPI dashboard include (but are not limited to):
+ Track percent on-demand across all your teams.
+ See potential cost savings by meeting certain KPIs and goals for your organization.
+ Quickly locate cost-optimization opportunities such as infrequently used S3 buckets, old EBS snapshots, and Graviton eligible instance usage.

### Demo Dashboard


Explore a [sample KPI Dashboard](https://cid.workshops.aws.dev/demo?dashboard=kpi) 

### Deploy


Follow [deployment guide](deployment-in-global-regions.md) 

### Learn more

+  [What’s new in KPI Dashboard](https://www.youtube.com/watch?v=1yDuYqNbcr4) 

## Time to complete


If using the automation steps, setup should take approximately 15-30 minutes to complete. Note that the first Cost and Usage Report data delivery may take up to 24 hours to arrive.

## Steps

+  [Deployment in Global Regions](deployment-in-global-regions.md) 
+  [Column Definitions](column-definitions.md) 
+  [Add Account Names ( Optional )](add-account-names.md) 
+  [Migration to CUR 2.0](migration-to-cur.md) 
+  [Deployment In China](deployment-in-china.md) 

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) do not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Deployment in Global Regions

**Note**  
Since November 2024, Cloud Intelligence Dashboards use [AWS Cost And Usage Report 2.0](https://docs.aws.amazon.com/cur/latest/userguide/table-dictionary-cur2.html) (CUR 2.0) as the main source for Foundational Dashboards. If you are deploying in China Regions, please follow the [China deployment instructions](deployment-in-china.md). If you have Legacy CUR setup, you can check [migration process](migration-to-cur.md).

## Architecture


We recommend deploying the Dashboards in a dedicated Data Collection Account, rather than your Management (Payer) Account, in line with AWS best practices [[1](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#bp_mgmt-acct_avoid-deploying), [2](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/design-principles-for-your-multi-account-strategy.html#avoid-deploying-workloads-to-the-organizations-management-account)]. This guide provides a CloudFormation template to copy CUR 2.0 data from your Management Account to a dedicated one. You can use it to aggregate data from multiple Management (Payer) Accounts or multiple Linked Accounts.

If you do not have access to the Management/Payer Account, you can still collect the data across multiple Linked accounts using the [same approach](data-collection-without-org.md).

![\[Recommended Deployment Architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur2/cid-foundation-cur2-high-level-architecture.png)


The deployment process consists of three main steps:

1. Step 1: Deploy Amazon S3 Bucket and Athena Tables in the **Data Collection Account**.

1. Step 2: Deploy AWS Data Exports, Amazon S3 Bucket and a replication policy in **Source** Accounts (one or many).

1. Step 3: Deploy Cloud Intelligence Dashboards (CID) Stack in the **Data Collection Account**.

## Deployment


## Before you start


1. Choose the **region** for your deployment. Make sure to install all stacks in the same region to avoid cross-region data transfer charges.

1. Define your Data Collection Account. Create one or reuse an existing shared account. We do not recommend using the Management (Payer) Account for data collection.

1. Make sure you have the permissions for deploying CloudFormation Stacks.

### See Required Permissions

+ In the Management/Payer Account you will need permission to access AWS CloudFormation, AWS Cost & Usage Reports, AWS IAM, AWS Lambda and Amazon S3.
+ In the Data Collection Account you will need permission to access Amazon Athena, AWS CloudFormation, AWS Directory Service, Amazon EventBridge, AWS Glue, AWS IAM, AWS Lambda, Amazon Quick Sight, and Amazon S3 via both the console and the Command Line Tool.
+ For a CLI deployment, you will not require CloudFormation permissions.
+ You can use this [CloudFormation template](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/cfn-templates/cid-admin-policies.yaml) to provision an IAM role with minimal permissions required for dashboard deployment. It takes an IAM role name as a parameter and adds the required policies to the role.

1. If you use AWS Lake Formation in your Data Collection Account:

### See additional requirements for Lake Formation


Currently only the Foundational dashboards, CORA, the Sustainability Dashboard, and the FOCUS Dashboard support Lake Formation.
+ You will need to install an additional stack first: [cid-lakeformation-prerequisite.yaml](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/cfn-templates/cid-lakeformation-prerequisite.yaml).
+ You will also need to set the `LakeFormationEnabled` parameter to `yes` in Steps 1 and 3.
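If you prefer the command line, installing the prerequisite stack can be sketched with the AWS CLI. This is a sketch, not the documented procedure: the stack name is an assumption and the template is assumed to be downloaded locally; the command is composed and echoed for review rather than executed.

```
# Sketch: install the Lake Formation prerequisite stack via the AWS CLI.
# Assumptions: the stack name is your choice; the template file was
# downloaded from the GitHub link above.
LF_CMD="aws cloudformation create-stack \
  --stack-name cid-lakeformation-prerequisite \
  --template-body file://cid-lakeformation-prerequisite.yaml \
  --capabilities CAPABILITY_NAMED_IAM"
echo "$LF_CMD"
```

Running the echoed command requires the CloudFormation permissions listed above.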

## Step 1. [Data Collection Account] Create Destination For CUR Aggregation


1. Sign in to your Data Collection Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation console. This stack will create an S3 bucket open for replication and the Athena tables.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/data-exports-aggregation.yaml&stackName=CID-DataExports-Destination&param_ManageCUR2=yes&param_ManageCOH=no&param_DestinationAccountId=REPLACE%20WITH%20DATA%20COLLECTION%20ACCOUNT%20ID&param_SourceAccountIds=PUT%20HERE%20PAYER%20ACCOUNT%20IDS](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/data-exports-aggregation.yaml&stackName=CID-DataExports-Destination&param_ManageCUR2=yes&param_ManageCOH=no&param_DestinationAccountId=REPLACE%20WITH%20DATA%20COLLECTION%20ACCOUNT%20ID&param_SourceAccountIds=PUT%20HERE%20PAYER%20ACCOUNT%20IDS) 

### More info about stack parameters and the process

+ Set the `DestinationAccountId` parameter to your **Data Collection** Account ID (the current Account ID).
+ Make sure `Manage CUR 2.0` is set to `yes`. You can optionally select Cost Optimization Hub (if you have this service activated) and FOCUS exports. This will allow you to use the [CORA](cora-dashboard.md) and [FOCUS](focus-dashboard.md) dashboards.
+ Enter your Source Account ID(s), using commas to separate multiple Account IDs. These are the accounts that will send their Data Exports to the bucket in the current account. If you decided to deploy dashboards in the Management/Payer Account (not recommended), make sure that `SourceAccountIds` contains the current Account ID as the first element, and skip Step 2.
+ Review the configuration, click **I acknowledge that AWS CloudFormation might create IAM resources** and click **Create stack**.
+ You will see the stack start in **CREATE_IN_PROGRESS**. This step can take 5 - 15 mins. Once complete, the stack will show **CREATE_COMPLETE**.

You can have only one instance of this stack in your account. If you see errors indicating that one of the exports already exists, update the existing stack, setting the `CUR2` parameter to `yes`.

You can add or delete Source Accounts later by updating this stack and adding or deleting Account IDs in the comma-separated Source Accounts parameter.
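If you want to avoid hand-editing the placeholders in the quick-create link, the pre-populated URL can be assembled in a shell; a minimal sketch (the account IDs are examples; everything else is taken from the link above):

```
# Sketch: build the quick-create URL for the destination stack.
DATA_COLLECTION_ACCOUNT_ID="111111111111"        # example: your Data Collection Account
SOURCE_ACCOUNT_IDS="222222222222,333333333333"   # example: your Payer Account IDs

TEMPLATE_URL="https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/data-exports-aggregation.yaml"
LAUNCH_URL="https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=${TEMPLATE_URL}&stackName=CID-DataExports-Destination&param_ManageCUR2=yes&param_ManageCOH=no&param_DestinationAccountId=${DATA_COLLECTION_ACCOUNT_ID}&param_SourceAccountIds=${SOURCE_ACCOUNT_IDS}"

echo "$LAUNCH_URL"
```

Open the printed URL in a browser where you are signed in to the Data Collection Account.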

## Step 2. [In Management/Payer/Source Account] Create CUR 2.0 and Replication


1. Click the **Launch Stack button** below to open the **stack template** in your AWS CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/data-exports-aggregation.yaml&stackName=CID-DataExports-Source&param_ManageCUR2=yes&param_ManageCOH=no&param_DestinationAccountId=REPLACE%20WITH%20DATA%20COLLECTION%20ACCOUNT%20IDs&param_SourceAccountIds=](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/data-exports-aggregation.yaml&stackName=CID-DataExports-Source&param_ManageCUR2=yes&param_ManageCOH=no&param_DestinationAccountId=REPLACE%20WITH%20DATA%20COLLECTION%20ACCOUNT%20IDs&param_SourceAccountIds=) 

### Click here for the configuration steps


1. Enter a **Stack name** for your template such as **CID-DataExports-Source**.

1. Enter your **Destination Account ID** parameter (Your Data Collection Account, where you will deploy dashboards).

1. Choose the exports to manage. The choice must be consistent with the configuration in the Data Collection Account (as in Step 1).

1. Review the configuration, click **I acknowledge that AWS CloudFormation might create IAM resources**, and click **Create stack**.

1. You will see the stack start in **CREATE_IN_PROGRESS**. This step can take 5 - 15 mins. Once complete, the stack will show **CREATE_COMPLETE**.

1. Repeat for other Source Accounts.

It typically takes about 24 hours for the first delivery of AWS Data Exports replication to the Destination Account, but it can take up to 72 hours (3 days). You can continue with the dashboard deployment; however, data will appear on the dashboards the day after the first data delivery.
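To check whether the first delivery has landed, you can list the destination bucket created in Step 1. The bucket name and prefix below are assumptions; take the actual values from the Step 1 stack outputs. The command is echoed for review rather than executed.

```
# Sketch: check for delivered CUR 2.0 objects in the destination bucket.
# The bucket name is an assumption -- copy the real one from the
# CID-DataExports-Destination stack outputs.
BUCKET="cid-data-exports-destination-bucket"
LS_CMD="aws s3 ls s3://${BUCKET}/cur2/ --recursive"
echo "$LS_CMD"
```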

## Backfill Data Export


You can now [create a Support Case](https://support.console.aws.amazon.com/support/home#/case/create), requesting a [backfill](https://docs.aws.amazon.com/cur/latest/userguide/troubleshooting.html#backfill-data) of your reports (CUR or FOCUS) with up to 36 months of historical data. The case must be created from your Source Account (typically the Management/Payer Account). If you are using multiple Management/Payer Accounts, a support ticket must be created in each.

### Support ticket example



```
Service: Billing
Category: Other Billing Questions
Subject: Backfill Data

Hello Dear Billing Team,
Please can you backfill the data in DataExport named `cid-cur2` for last 12 months.
Thanks in advance,
```

You can also use the following command in AWS CloudShell to create this case via the command line (requires AWS Enterprise or On-Ramp Support):

```
aws support create-case \
    --subject "Backfill Data" \
    --service-code "billing" \
    --severity-code "normal" \
    --category-code "other-billing-questions" \
    --communication-body "
        Hello Dear Billing Team,
        Please can you backfill the data in DataExport named 'cid-cur2' for last 12 months.
        Thanks in advance"
```

Make sure you create the case from your Source Accounts (typically the Management/Payer Accounts).
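If your payer accounts are configured as named AWS CLI profiles, the case creation can be looped; a sketch (the profile names are assumptions; the commands are echoed for review rather than executed, and actually creating the cases requires Enterprise or On-Ramp Support in each account):

```
# Sketch: issue the backfill support case from each Management/Payer Account.
PAYER_PROFILES="payer-1 payer-2"   # assumption: your named CLI profiles
CASES_PREPARED=0
for profile in $PAYER_PROFILES; do
  # Echoed for review; remove 'echo' to create the case for real.
  echo aws support create-case \
      --profile "$profile" \
      --subject "Backfill Data" \
      --service-code "billing" \
      --severity-code "normal" \
      --category-code "other-billing-questions" \
      --communication-body "Please backfill the data in DataExport named 'cid-cur2' for last 12 months."
  CASES_PREPARED=$((CASES_PREPARED + 1))
done
```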

## Step 3. [Data Collection Account] Deploy Dashboards


### 3.1 - Prepare Amazon Quick Sight (Quick Suite)


Amazon Quick Sight is the AWS Business Intelligence tool and part of the Amazon Quick Suite service. You can install Dashboards into your Amazon Quick Sight account and customize them to your needs. If you are already a regular Amazon Quick Sight user, you can skip this and move on to the next step. If not, complete the steps below.

#### Click here to expand Amazon Quick Suite Sign Up Workflow


1. Log into your Destination Linked Account and search for **Quick Suite** in the list of Services

1. You will be asked to **Sign up** before you will be able to use it
   + Ensure you select the **Region** that is most appropriate based on where you plan to deploy the dashboards.
   + Enter a **name** for your Quick Suite account. This must be unique across all Quick Suite accounts.
   + Enter an **email address** for notifications to be sent to. This email will be linked to your Quick Suite user account so it can be your email.

1. You will then need to fill in a series of options in order to finish creating your account:
   + Please select the appropriate **Authentication** method
**Note**  
Select `Use AWS IAM Identity Center` if you want to use and share the CID dashboards in production with your wider organization using your existing Identity Provider, such as Azure AD, Okta, or others. Follow the steps [here](publishing-as-sso-application.md). You may select `Use IAM federated identities & Quick Sight-managed users` to get started quickly; however, **NOTE:** you will **NOT** be able to change the Quick Sight authentication method later.

1. Click **Create Account** and wait for the congratulations screen to display. Go to 'Manage Quick Suite'.
   + (optional, not recommended) Downgrade your user to avoid charges for Amazon Q in Quick Suite.
   + Make sure that Pixel Perfect and Amazon Q in Quick Suite are deactivated.
   + Click on the SPICE Capacity option and choose `auto purchase` or purchase enough SPICE capacity so that the total is roughly 40GB. If you get SPICE capacity errors later, you can come back here to purchase more. If you’ve purchased too much you can also release it after you’ve deployed the dashboards.

![\[Quick Sight Sign up Workflow Image\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/dashboards/qs-enterprise-activation.gif)


### 3.2 Deploy Dashboards


Make sure you use the same Region as in Step 1 to avoid cross-region Data Transfer costs. Your AWS account must also have the `quicksight:DescribeTemplate` permission for reading from the us-east-1 Region.

In this step we will use a CloudFormation stack to create an Athena Workgroup, S3 bucket, Glue Table, Glue Crawler, Quick Sight datasets, and finally the Dashboards. The template uses a custom resource (a Lambda with [this CLI tool](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md)) to create, delete, or update assets.

**Example**  

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-cfn.yml&stackName=Cloud-Intelligence-Dashboards&param_DeployCUDOSv5=yes&param_DeployKPIDashboard=yes&param_DeployCostIntelligenceDashboard=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-cfn.yml&stackName=Cloud-Intelligence-Dashboards&param_DeployCUDOSv5=yes&param_DeployKPIDashboard=yes&param_DeployCostIntelligenceDashboard=yes) 

1. Enter a **Stack name** for your template such as **Cloud-Intelligence-Dashboards** 

1. Review **Common Parameters** and confirm prerequisites before specifying the other parameters. You must answer `yes` to both prerequisites questions.

1. Copy and paste your **QuickSightUserName** into the parameter text box. To find your Quick Sight username:
   + Open a new tab or window and navigate to the **Quick Sight** console
   + Find your username from the person icon in the top right corner  
![\[Quick Sight page with username drop down in the top right highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cf_dash_qs_2.png)

1. Select the Dashboards you want to install. We recommend deploying all three: Cost Intelligence Dashboard, CUDOS, and the KPI Dashboard.

1. Review the configuration, click **I acknowledge that AWS CloudFormation might create IAM resources**, and click **Create stack**.

1. You will see the stack start in **CREATE_IN_PROGRESS**. This step can take 5 - 15 minutes. Once complete, the stack will show **CREATE_COMPLETE**.

1. You can check the stack output for the dashboard URLs. Please note that the dashboards will be empty at this point. We recommend initiating a backfill via a Support Case (see the [Backfill](#deployment-global-backfill-data-export) section).

    **Troubleshooting:** 

    **No export named cid-DataExports-ReadAccessPolicyARN found.** 

   If you see `No export named cid-DataExports-ReadAccessPolicyARN found.`, then you probably did not install CUR 2.0 with the CloudFormation stack as described in Step 1. Alternatively, you can use Legacy CUR, but in that case you need to explicitly specify the parameter `CurVersion=1.0`.
An alternative method to install the dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.

1. Log in to your **Data Collection** Account.

1. Open [AWS CloudShell](https://console.aws.amazon.com/cloudshell/home) 

1. Install the cid-cmd tool. Run the following command and press Enter:

   ```
    pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. Deploy CUDOS Dashboard:

   ```
    cid-cmd deploy --dashboard-id cudos-v5
   ```

   Please follow the instructions from the deployment wizard. More info about command line options are in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or `cid-cmd --help`.

1. Repeat deployment command for Cost Intelligence Dashboard and KPI:

   ```
    cid-cmd deploy
   ```

   Please note that the Advanced dashboards will require the advanced [Data Collection](data-collection.md).

**Note**  
After an update, Quick Sight datasets will be refreshed automatically. During the refresh process you may see a "Dataset changed too much" error, which should disappear once the datasets are fully refreshed.

## Update of the stack


**Note**  
We recommend updating both the cid-cmd tool and the CloudFormation stack to version 4.2.3 or later.

Please note that dashboards are not updated when the CloudFormation stack is updated. You need to use the [command line for updates](update-dashboards.md), as it preserves potential customizations.

You can find the latest CloudFormation stack [here](https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-cfn.yml) and the source code on [GitHub](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/cfn-templates/cid-cfn.yml). The version is noted in the template Description.

### Update of Cloud-Intelligence-Dashboards Stack


1. Open CloudFormation console and identify the stack (default name is `Cloud-Intelligence-Dashboards`).

1. Open the stack and press the **Update** button.

1. Choose to update the template and insert this link: https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-cfn.yml

1. Review the parameters. Please make sure to choose the right CUR version in the `CurVersion` parameter. Choose `1.0` to stay on CUR 1.0. Choose `2.0` to switch all new dashboards to CUR 2.0. To perform a full migration, please refer to the [CUR 2.0 migration guide](migration-to-cur.md).
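The same update can be sketched from the command line. Note that `update-stack` falls back to template defaults for any parameter you do not pass, so previous values should be pinned with `UsePreviousValue`. The parameter list below is illustrative, not the stack's full list; the command is echoed for review rather than executed.

```
# Sketch: update the dashboards stack to the latest template, switching to CUR 2.0.
# Add 'ParameterKey=<Name>,UsePreviousValue=true' for every other parameter
# your stack defines (parameter names vary by stack version).
UPDATE_CMD="aws cloudformation update-stack \
  --stack-name Cloud-Intelligence-Dashboards \
  --template-url https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-cfn.yml \
  --parameters ParameterKey=CurVersion,ParameterValue=2.0 \
  --capabilities CAPABILITY_NAMED_IAM"
echo "$UPDATE_CMD"
```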

## Troubleshooting


### No data in Dashboards after 24-48 hours


Please check the following:

1. In Quick Sight, go to Datasets and click on the Summary View. Check for errors (if you see a `Failed` status, you can click it to see more info).

1. Check whether CUR 2.0 data has arrived in the S3 bucket. If you just created the CUR, you will need to wait 24-48 hours before the first data arrives.

1. The Quick Sight datasets refresh once per day; if your first CUR was delivered after your latest refresh, you may need to trigger a manual refresh on each dataset to see data in the dashboard.
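A manual refresh can also be triggered per dataset through the Quick Sight API; a sketch (the account and dataset IDs are placeholders; list your datasets first to find the real IDs; the commands are echoed for review rather than executed):

```
# Sketch: trigger a manual refresh (ingestion) of a Quick Sight dataset.
ACCOUNT_ID="111111111111"     # example: your Data Collection Account ID
DATASET_ID="<dataset-id>"     # placeholder: take it from list-data-sets output
LIST_CMD="aws quicksight list-data-sets --aws-account-id ${ACCOUNT_ID}"
REFRESH_CMD="aws quicksight create-ingestion \
  --aws-account-id ${ACCOUNT_ID} \
  --data-set-id ${DATASET_ID} \
  --ingestion-id manual-refresh-1"
echo "$LIST_CMD"
echo "$REFRESH_CMD"
```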

 **Any issues? Visit our [FAQs](faq.md).** 

## Next steps

+ Deploy [CORA](cora-dashboard.md) 
+ Deploy [Compute Optimizer Dashboard](compute-optimizer-dashboard.md) and [Trusted Advisor Organizational (TAO) Dashboard](trusted-advisor-dashboard.md) 

# Column Definitions
Column Definitions

## summary\_view


 [summary\_view](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/cid/builtin/core/data/queries/cid/summary_view.sql) is a view in Amazon Athena, created on top of the Cost and Usage Report [CUR](https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html), that allows users to have a concise overview of their AWS spend. It provides aggregated insights into costs and usage across various services and accounts.


| Column | Data Type | Description | 
| --- | --- | --- | 
|   **year**   |  string  |  Year of the billing period that is covered by report.  | 
|   **month**   |  string  |  Month of the billing period that is covered by report.  | 
|   **billing\_period**   |  timestamp  |  The start date of the billing period that is covered by this report, in UTC. The format is `YYYY-MM-DDTHH:mm:ssZ`.  | 
|   **usage\_date**   |  timestamp  |  The start date for the line item in UTC, inclusive; if the start date is older than 3 months it is converted to the first date of the month, otherwise the actual date is kept. The format is `YYYY-MM-DDTHH:mm:ssZ`.  | 
|   **payer\_account\_id**   |  string  |  The account ID of the paying account. For an organization in AWS Organizations, this is the account ID of the management account.  | 
|   **linked\_account\_id**   |  string  |  The account ID of the account that used this line item. For organizations, this can be either the management account or a member account. You can use this field to track costs or usage by account.  | 
|   **invoice\_id**   |  string  |  The ID associated with a specific line item. Until the report is final, the InvoiceId is blank; the report is generally final after the 6th or 7th of the month (example: June data available after July 6 or 7).  | 
|   **charge\_type**   |  string  |  The type of charge covered by this line item. Some possible types are: Credit, Discount, Fee & Refund. For more charge types please refer to this [Link](https://docs.aws.amazon.com/cur/latest/userguide/Lineitem-columns.html#Lineitem-details-L-LineItemType).  | 
|   **charge\_category**   |  varchar(13)  |  Describes the charge category as "running\_usage" or "non\_usage". For the charge types "DiscountedUsage", "SavingsPlanCoveredUsage" & "Usage" it converts to "running\_usage", else "non\_usage".  | 
|   **purchase\_option**   |  varchar(11)  |  Describes the available purchasing models for an AWS service. For example: AWS provides several Amazon EC2 instance purchasing options, such as On-Demand, Reserved Instances & Spot Instances.  | 
|   **ri\_sp\_arn**   |  string  |  Provides the Savings Plan or RI ARN; if the resource is not covered by an SP or RI, it returns blank.  | 
|   **product\_code**   |  string  |  The code of the product measured. For example: Amazon EC2 is the product code for Amazon Elastic Compute Cloud.  | 
|   **product\_name**   |  string  |  Describes the full name of the AWS service. Use this column to filter AWS usage by AWS service. Sample values: AWS Backup, AWS Config, Amazon Registrar, Amazon Elastic File System & Amazon Elastic Compute Cloud.  | 
|   **service**   |  string  |  This identifies the specific AWS service to the customer as a unique short abbreviation including Marketplace. Sample values: Amazon EC2 , AWS KMS, AWS Budgets, AWS Backup & AWS Certificate Manager.  | 
|   **product\_family**   |  string  |  This describes the category for the type of product. Sample values: Alarm, AWS Budgets, Stopped Instance, Storage Snapshot & Compute.  | 
|   **usage\_type**   |  string  |  The usage details of the line item. For example: `USW2-BoxUsage:m2.2xlarge` describes an `m2` High Memory Double Extra Large instance in the US West (Oregon) Region.  | 
|   **operation**   |  string  |  The specific AWS operation covered by this line item. This describes the specific usage of the line item. For example: a value of RunInstances indicates the operation of an Amazon EC2 instance.  | 
|   **item\_description**   |  string  |  The description of the line item type. For example: the description of a usage line item summarizes what type of usage you incurred during a specific time period. For size-flexible RIs, the description corresponds to the RI the benefit was applied to. If a line item corresponds to a t2.micro and a t2.small RI was applied to the usage, the lineItem/LineItemDescription displays t2.small.  | 
|   **availability\_zone**   |  string  |  The Availability Zone that hosts this line item. For example: us-east-1a or us-east-1b.  | 
|   **region**   |  string  |  This describes geographical area that hosts your AWS services. Use this field to analyze spend across a particular Region. Sample values: eu-west-3, us-west-1, us-east-1, ap-northeast-2 & sa-east-1.  | 
|   **instance\_type\_family**   |  string  |  This describes the instance family that is associated with the given usage. Sample values: t2, m4 & m3.  | 
|   **instance\_type**   |  string  |  Describes the instance type, size, and family, which define the CPU, networking, and storage capacity of your instance. Sample values: t2.small, m4.xlarge, t2.micro, m4.large & t2.large.  | 
|   **platform**   |  string  |  Describes the operating system of your Amazon EC2 instance. Sample values: Amazon Linux, Ubuntu, Windows Server, Oracle Linux & FreeBSD.  | 
|   **tenancy**   |  string  |  Describes the type of tenancy allowed on the Amazon EC2 instance. Sample values: Dedicated, Reserved, Shared, NA & Host.  | 
|   **processor**   |  string  |  Describes the processor on your Amazon EC2 instance. Sample values: High Frequency Intel Xeon E7-8880 v3 (Haswell) & Intel Xeon E5-2670 & AMD EPYC 7571.  | 
|   **processor\_features**   |  string  |  Describes the processor features of your instances. Sample values: Intel AVX, Intel AVX2, Intel AVX512 & Intel Turbo.  | 
|   **database\_engine**   |  string  |  Describes which database engine is being used. Sample Values: Aurora MySQL, Aurora PostgreSQL, Oracle & MySQL.  | 
|   **product\_group**   |  string  |  A construct of several products that are similar by definition, or grouped together. For example: the Amazon EC2 team can categorize their products into shared instances, dedicated host, and dedicated usage.  | 
|   **product\_from\_location**   |  string  |  Describes the location where the usage originated from. Sample values: External, US East (N. Virginia) & Global.  | 
|   **product\_to\_location**   |  string  |  Describes the location usage destination. Sample values: External & US East (N. Virginia).  | 
|   **current\_generation**   |  string  |  Describes whether the instance's generation is current: the record shows "Yes" for a current-generation instance and "No" otherwise.  | 
|   **legal\_entity**   |  string  |  The Seller of Record of a specific product or service. In most cases, the invoicing entity and legal entity are the same. The values might differ for third-party AWS Marketplace transactions. Possible values include Amazon Web Services, Inc. (the entity that sells AWS services) and Amazon Web Services India Private Limited (the local Indian entity that acts as a reseller for AWS services in India).  | 
|   **billing\_entity**   |  string  |  Helps you identify whether your invoices or transactions are for AWS Marketplace or for purchases of other AWS services. Possible values include AWS (a transaction for AWS services other than in AWS Marketplace) and AWS Marketplace (a purchase in AWS Marketplace).  | 
|   **pricing\_unit**   |  string  |  The smallest billing unit for an AWS service. For example: 0.01c per API call.  | 
|   **resource\_id\_count**   |  bigint  |  Count of distinct ResourceIDs, where a ResourceID is the ID of an individual resource that you provisioned. For example: an Amazon S3 storage bucket, an Amazon EC2 compute instance, or an Amazon RDS database can each have a resource ID.  | 
|   **usage\_quantity**   |  double  |  Sum of the amount of usage that you incurred during the specified time period. It specifically covers usage covered by Savings Plans and on-demand usage.  | 
|   **unblended\_cost**   |  double  |  Sum of the unblended cost, where the UnblendedCost is the UnblendedRate multiplied by the UsageAmount.  | 
|   **amortized\_cost**   |  double  |  Sum of the amortized cost, where costs are amortized over the billing period. This means that the costs are broken out into the effective daily rate. AWS estimates your amortized costs by combining your unblended costs with the amortized portion of your upfront and recurring reservation fees.  | 
|   **ri\_sp\_trueup**   |  double  |  For No Upfront or Partial Upfront Savings Plans, it shows (as a negative amount) the upfront fee a Savings Plan subscription is costing you for the billing period. The initial upfront payment for an All Upfront or Partial Upfront Savings Plan is amortized over the current month.  | 
|   **ri\_sp\_upfront\_fees**   |  double  |  Describes the upfront payment of Savings Plans and Reserved Instances.  | 
|   **public\_cost**   |  double  |  Sum of the total cost for the line item based on public On-Demand Instance rates. If you have SKUs with multiple On-Demand public costs, the equivalent cost for the highest tier is displayed. For example: services offering free tiers or tiered pricing.  | 
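As a usage example, the view can be queried directly in Athena, for instance to get month-by-service amortized spend. The workgroup and database names below are assumptions; match them to your deployment. The command is echoed for review rather than executed.

```
# Sketch: month-by-service amortized cost from summary_view.
QUERY="SELECT billing_period, product_name, SUM(amortized_cost) AS total_amortized_cost
FROM summary_view
GROUP BY billing_period, product_name
ORDER BY billing_period, total_amortized_cost DESC"

# Workgroup and database names are assumptions.
ATHENA_CMD="aws athena start-query-execution \
  --work-group primary \
  --query-execution-context Database=cid_cur \
  --query-string \"${QUERY}\""
echo "$ATHENA_CMD"
```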

# Add Account Names (Optional)
Add Account Names (Optional)

## Account Map


The Cost & Usage Report data doesn’t currently contain account names or other business- or organization-specific mappings, so you can create a view that enhances your CUR data. There are a few options for creating your account\_map view, letting you leverage your existing mapping tables, organization information, or other business mappings for deeper insights. This view will be used to create the **Account Map** for your dashboards.

The steps below are necessary **ONLY** if you have deployed your dashboards using legacy CUR. Dashboards created using CUR 2.0 have account names integrated as part of the deployment process.

### Option 1: Leverage your existing AWS Organizations account mapping (Recommended)


This option allows you to bring in your AWS Organizations data, including OU groupings.

#### Click here to expand


You will need to go through an additional lab for this. It can collect multiple types of data across accounts and your AWS Organization, including Trusted Advisor and Compute Optimizer data. For account names you will need only one module, the **AWS Organization Module**, but we recommend exploring the other modules of this lab as well.

 [Click to navigate to Optimization Data Collection Lab](data-collection-deployment.md) 

After successful deployment, create or update your account\_map view by running the following query in the Athena Query Editor.

```
CREATE OR REPLACE VIEW account_map AS
SELECT DISTINCT
    "id" "account_id",
    "name" "account_name",
    ' ' "parent_account_id",
    ' ' "account_email_id"
FROM
    "optimization_data"."organization_data"
```

You must also update the role that Quick Sight uses to update datasets. This can be the standard Quick Sight role that you manage in the Quick Sight Admin space (Security and Permissions section), or a role named `CidQuickSightDataSourceRole`, which can be managed by the Cloud-Intelligence-Dashboards stack in CloudFormation. Please make sure you configure the same bucket name there as in the [Data Collection Lab](data-collection.md).
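Once the view exists, a quick sanity check from the CLI can confirm that account names resolve; a sketch (the workgroup and database names are assumptions; the command is echoed for review rather than executed):

```
# Sketch: sanity-check the account_map view.
CHECK_QUERY="SELECT account_id, account_name FROM account_map LIMIT 10"
CHECK_CMD="aws athena start-query-execution \
  --work-group primary \
  --query-execution-context Database=optimization_data \
  --query-string \"${CHECK_QUERY}\""
echo "$CHECK_CMD"
```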

### Option 2: Leverage AWS Cost Categories to add account names


This option allows you to bring in account names using AWS Cost Categories. If you have multiple payer accounts, please ensure you use the same name for your cost category in each of the payer accounts, so that the consolidated cost and usage report in the data collection account is consistent. Recommended cost category name: **accountname** 

#### Click here to expand


Navigate to cost categories by either searching for cost categories in the AWS console search bar

![\[Searching for cost categories in AWS console search highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/search_cc.png)


OR by going to the Billing console and choosing Cost Categories from the navigation menu

![\[Choosing cost categories in billing console highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/billing_console.png)


In the Cost Categories console, select **Create cost category** 

![\[Choosing create Cost Category in CC console\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/cc_create.png)


Name your cost category **accountname** or any other name you’d like. Be consistent with the name across multiple payer accounts if you are consolidating data from other payer accounts.

For the lookback period, select the second option, **Apply cost category rules starting any specified month from the previous 12 months**, and then choose a month that is at least 3 months prior to the current month. Select **Next** 

![\[Creating Cost Category name\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/cc_name_option.png)


In the category rules, under Rule Builder, choose the Rule Type **Inherited value** and the Dimension **Account** 

Specify a default value such as **unnamed**. You can use anything you’d like to define accounts that do not have an account name, but be consistent across multiple payer accounts. Select **Next** 

![\[Creating Cost Category rules\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/cc_rule_option.png)


Select **create cost category** 

![\[Finishing Cost Category creation\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/cc_create_final.png)


The CUR will now have a column called **CostCategory/accountname** with the account names populated. Please note, it might take **24-48 hours** for the CUR to be updated. In Athena, the column name in the CUR table will be something similar to **cost\_category\_accountname** 

Once the cost category is available in your CUR Athena table, update your account\_map view with the query below, making the following modifications:

On line 4 and line 9, replace **cost\_category\_accountname** with the name of the cost category you chose for the account name. If you chose just accountname as shown in the example above, no change is needed.

On line 8, replace **(database).(tablename)** with your CUR database and table name (e.g. cid\_cur.cur).

Run the query after the modification. Your account\_map view will now have account names from the cost category you created.

```
CREATE OR REPLACE VIEW "account_map" AS
SELECT DISTINCT
line_item_usage_account_id "account_id"
, max_by(cost_category_accountname, line_item_usage_start_date) "account_name"
, ' ' parent_account_id
, ' ' account_email_id
FROM
(database).(tablename)
WHERE ((cost_category_accountname <> '') AND (("bill_billing_period_start_date" >= ("date_trunc"('month', current_timestamp) - INTERVAL '2' MONTH)) AND (CAST("concat"("year", '-', "month", '-01') AS date) >= ("date_trunc"('month', current_date) - INTERVAL '2' MONTH))))
GROUP BY line_item_usage_account_id
```
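After the view is recreated, a quick spot check confirms the account names were picked up from the cost category; accounts without a name show the default value you configured (e.g. `unnamed`):

```
SELECT account_id, account_name
FROM account_map
ORDER BY account_id
LIMIT 10
```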

### Option 3: Account Map CSV file using your existing AWS Account mapping data


Many organizations already maintain their account mapping outside of AWS. You can leverage your existing mapping data by creating a CSV file with your account mapping data, including any additional organization attributes.

#### Step 1: Create your account map using your own account mapping CSV and Amazon S3


 **Create your account\_map CSV file** 

This example shows how to create one using a sample account\_map CSV file.

1. Create an account\_map CSV file locally. You can use the sample here and the requirements below as a starting point: [account\_map.csv](samples/account_map.csv.zip) 

1. Update your account\_map CSV with your account mapping data
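For reference, a minimal account\_map CSV might look like the following. The column set matches the Athena table created in the next section; the account values are placeholders:

```
account_id,account_name,business_unit,team,cost_center
111111111111,prod-account,BU1,platform,CC100
222222222222,dev-account,BU2,analytics,CC200
```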

 **Upload your account\_map CSV file to Amazon S3** 

1. Navigate to **Amazon S3** 

1. Select **Create Bucket** 

![\[Amazon S3 console with create bucket button highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/view0_create_bucket.png)


1. Name your bucket. We recommend a name starting with **cost-account-map-** to make it easy to locate

![\[Amazon S3 create bucket with bucket name field highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/view0_name_bucket.png)


1. Scroll to the bottom and select **Create Bucket** 

![\[Amazon S3 create bucket with create bucket button highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/view0_save_bucket.png)


1. Navigate to your newly created S3 bucket

![\[Amazon S3 bucket list with newly created bucket highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/view0_select_bucket.png)


1. Select **Create folder** 

![\[Amazon S3 bucket object page with create folder button highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/view0_create_folder.png)


1. Name your folder **account-map** and select **Create folder** 

![\[Create folder page with folder name field and create folder button highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/view0_name_folder.png)


1. Click on your newly created **account-map** folder

![\[Amazon S3 bucket screen in cost-account-map folder with account-map folder highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/view0_select_folder.png)


1. Select **Upload** 

![\[account-map folder page with upload button highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/view0_upload.png)


1. In your newly created folder, **drag and drop** your account\_map.csv file, then select **Upload** 

![\[Amazon S3 upload page with the drag and drop file upload section and upload button highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/view0_upload_csv.png)


1. Copy down the **S3 Destination** of the account-map.csv. You will need this to create your Athena table

![\[Amazon S3 upload status page with destination part highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur/view0_copy.png)


 **Create your account\_mapping Athena table** 

1. Navigate to **Amazon Athena** 

1. Modify the query below with your account\_map.csv information. Replace the **(S3.Destination)** value in row 15 with your account-map folder S3 destination from step 8 of the previous section (e.g. cost-account-map-123456789012/account-map)

**Note**  
Validate that rows **2-6** match your CSV columns. If you removed one of the fields from the CSV, remove it from the query. If you added any additional fields, add them to the query.

```
CREATE EXTERNAL TABLE `account_mapping`(
    `account_id` string,
    `account_name` string,
    `business_unit` string,
    `team` string,
    `cost_center` string
    )
ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
STORED AS INPUTFORMAT
    'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
    'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
    '(S3.Destination)'
TBLPROPERTIES (
    'has_encrypted_data'='false',
    'skip.header.line.count'='1')
```
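Once the table is created, you can verify that Athena parses the CSV as expected (the header row should be skipped thanks to `skip.header.line.count`):

```
SELECT * FROM account_mapping LIMIT 10
```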

#### Step 2: Create your account\_map view in Athena using the table you created in the step above


 **Create your account\_map Athena view** 

The account\_map Athena view ensures any new accounts are not missed in your dashboard by creating a view on top of your CUR table and your account\_mapping Athena table.

Modify the following query with your table names:

1. Replace the first **(database).(tablename)** with your CUR database and table name (e.g. cid\_cur.cur)

1. Replace the second **(database).(tablename)** with your account\_mapping database and table name (e.g. cid\_cur.account\_mapping)

```
CREATE OR REPLACE VIEW account_map AS
SELECT DISTINCT
a.line_item_usage_account_id "account_id"
, b.account_name
, b.business_unit
, b.team
, b.cost_center
FROM
    ((
    SELECT DISTINCT line_item_usage_account_id
    FROM (database).(tablename) ) a
LEFT JOIN (
    SELECT DISTINCT
        "lpad"("account_id", 12, '0') "account_id"
    , account_name
    , business_unit
    , team
    , cost_center
    FROM
    (database).(tablename) ) b ON (b.account_id = a.line_item_usage_account_id))
```
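After creating the view, it can be useful to spot accounts that exist in the CUR but are missing from your mapping file; these will have a NULL account name:

```
SELECT account_id
FROM account_map
WHERE account_name IS NULL
```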

You must update the role that Quick Sight uses to update datasets. This can be the standard Quick Sight role that you manage in the Quick Sight admin console (Security and Permissions section), or a role named CidQuickSightDataSourceRole, which is managed by the Cloud-Intelligence-Dashboards stack in CloudFormation. Make sure you configure the same bucket name there as in the [Data Collection Lab](data-collection.md).

Alternatively, you can do a one-time update of your account map view using the option below.

#### One-time update of the account map from CSV data with the cid-cmd tool


```
cid-cmd map --account-map-source csv --account-map-file FILE.CSV
```

### Final Steps


Once you update and test the account\_map view in Athena, make sure Quick Sight has access to the bucket containing the Optimization Data Collection data, then refresh the summary\_view dataset in Quick Sight.

# Migration to CUR 2.0
Migration to CUR 2.0

## Migration Overview


AWS provides a [Cost and Usage Report 2.0](https://docs.aws.amazon.com/cur/latest/userguide/dataexports-migrate.html) that will gradually replace the [Legacy Cost and Usage Report](https://docs.aws.amazon.com/cur/latest/userguide/cur-overview.html). This guide helps you migrate existing CID dashboards to the new CUR 2.0.

Use this guide if you already have CID dashboards installed via CloudFormation or CLI methods.

![\[Migration Phases\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cur2/migration-phases.png)


Migration can be done in 3 steps:

1. Deploy CUR 2.0 via AWS Data Exports

1. Update Dashboards

1. (Optional) Decommission Legacy CUR

### Step 1 / 3: Deploy Data Exports


If you already have the Data Exports stack deployed for other dashboards (CORA or FOCUS), just make sure the CUR2 option is activated.

If you do not have the Data Exports stack, please install it using [this guide](data-exports.md). Do not forget to request a backfill from source accounts for up to 36 months. If you need more than this, it may be possible to migrate your Legacy CUR data to a dataset close enough to the CUR2 schema.

By the end of this step (and once the backfill completes), you should have a CUR2 table available in Athena.

You can query the data to make sure the two datasets are identical.

**Example**  

```
SELECT
    billing_period,
    sum(line_item_unblended_cost)
FROM cid_data_export.cur2
GROUP BY 1
```

```
SELECT
    concat("year", '-', "month") AS billing_period,
    sum(line_item_unblended_cost)
FROM cid_cur.cur -- replace with your legacy CUR table
GROUP BY 1
```
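To compare the two sources side by side in a single query, you can join the per-period sums. This is a sketch assuming the table names used above; note that the join assumes both sources format billing\_period the same way (zero-padded month), so adjust if your legacy table stores the month without padding:

```
WITH cur2 AS (
    SELECT billing_period, sum(line_item_unblended_cost) AS cost
    FROM cid_data_export.cur2
    GROUP BY 1
), legacy AS (
    SELECT concat("year", '-', "month") AS billing_period,
        sum(line_item_unblended_cost) AS cost
    FROM cid_cur.cur -- replace with your legacy CUR table
    GROUP BY 1
)
SELECT cur2.billing_period, cur2.cost AS cur2_cost, legacy.cost AS legacy_cost
FROM cur2
JOIN legacy ON cur2.billing_period = legacy.billing_period
ORDER BY 1
```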

Proceed to the next step once the queries above return matching data for several billing periods.

### Step 2 / 3: Update Dashboards


Dashboards can be installed in two different ways: with a CloudFormation stack or with the command line tool. The update is done with the command line tool regardless of the deployment method, but if you used CloudFormation for the initial deployment, you also need to update the stack.

With the command line update, you are in control of all modifications and can backport your customizations if needed.

#### Step 2.1: CloudFormation Update (If needed)


You can skip this step if you used only the command line (cid-cmd) for dashboard deployment.

1. Download [CloudFormation Stack](https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-cfn.yml).

1. Open [CloudFormation Console](https://console.aws.amazon.com/cloudformation/home?#/stacks) in your Destination/Dashboard Account. Make sure to use the same region where you deployed the stack previously.

1. Update the stack (default name is Cloud-Intelligence-Dashboards) with the version you get from GitHub and set the parameter "CURVersion = 2". If you want to keep the Legacy CUR, set the "Keep Legacy CUR Table" (KeepLegacyCURTable) parameter to "Yes" and proceed with the CLI update (Step 2.2).

These steps update the role used by Quick Sight data sources to access the Amazon S3 bucket and the Athena database with CUR2. At this point your dashboard is not yet updated (if you chose KeepLegacyCURTable).

Once the CloudFormation stack is updated, proceed to the Command Line Update.

#### Step 2.2: Command line Update (Mandatory)


1. Open CloudShell in the same region and install the (cid-cmd) tool:

```
pip3 install -U cid-cmd
```

1. Run the tool to update your dashboard and dependencies:

```
cid-cmd update --force --recursive
```

Please select `cid_data_export.cur2` when asked to choose the CUR table.

The tool will show a diff between the current Athena views and the updated view SQL. You can choose "proceed and override", or you can adjust your Athena views manually using the diff information and choose "keep existing". Another option is to backport your changes after the migration.

### Step 3 / 3: Decommission (Optional)


Once your dashboards are updated to CUR2, you can delete the Legacy CUR using CloudFormation (CUR-Source/CUR-Destination) or manually, depending on how it was created.

## Troubleshooting and FAQ


### I do not see data on dashboards after migration


1. Run `cid-cmd status` to get more info about dataset status, or manually check dataset status in the Quick Sight UI

1. Double-check that Quick Sight has permissions to read from your CUR bucket. If you use the default Quick Sight role, manually add permissions to read from the `cid-ACCOUNTID-data-exports` bucket.

1. Check that data is present in the Athena table and view: `SELECT * FROM summary_view LIMIT 10` 
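If you manage the Quick Sight role yourself, the added permissions might look like the following policy statement. This is a sketch; replace ACCOUNTID with your account ID and verify the bucket name matches your deployment:

```
{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [
        "arn:aws:s3:::cid-ACCOUNTID-data-exports",
        "arn:aws:s3:::cid-ACCOUNTID-data-exports/*"
    ]
}
```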

### How can I roll back


If you need to revert to the previous version, install an earlier cid-cmd release:

```
pip3 install cid-cmd==0.3.10
```

```
cid-cmd update --force --recursive
```

Choose the table with the Legacy CUR when asked.

## Feedback


Please [contact the team](feedback-support.md) if you encounter any issues.

# Deployment In China
Deployment In China

**Note**  
For deployments in AWS China Regions, please note there are specific regional considerations and limitations. For all other AWS Regions, please follow the [standard deployment guide](deployment-in-global-regions.md) 

## Architecture


There are two options for analyzing your Cost and Usage: you can consolidate all your Cost and Usage data into Global Regions (for example, using [Data Transfer Hub](https://github.com/aws-solutions/data-transfer-hub)), or you can deploy Cloud Intelligence Dashboards in China Regions. Here we provide specific guidance for deployment in China Regions.

We recommend deploying the Dashboards in a dedicated Data Collection Account other than your Management (Payer) Account. This guidance provides a CloudFormation template to copy Cost and Usage Report (CUR) data from your Management Account to the dedicated account. You can use it to aggregate data from multiple Management Accounts or multiple Linked Accounts.

If you do not have access to the Management/Payer Account, you can still collect the data across multiple Linked accounts using the same approach.

![\[Foundational Architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/china/china-foundamental-architecture.png)


1.  [AWS Cost and Usage Report](https://aws.amazon.com/aws-cost-management/aws-data-exports/) delivers the Cost & Usage data daily to an [Amazon S3 Bucket](https://aws.amazon.com/s3/) in the Management Account.

1.  [Amazon S3](https://aws.amazon.com/s3/) replication rule copies CUR data to a dedicated Data Collection Account S3 bucket automatically.

1.  [Amazon Athena](https://aws.amazon.com/athena/) allows querying data directly from the S3 bucket using an [AWS Glue](https://aws.amazon.com/glue/) table schema definition.

1.  [Amazon Quick Sight](https://aws.amazon.com/quicksight/) creates datasets from [Amazon Athena](https://aws.amazon.com/athena/), refreshes them daily, and caches them in [SPICE](https://docs.aws.amazon.com/quicksight/latest/user/spice.html) (Super-fast, Parallel, In-memory Calculation Engine).

1. User Teams (Executives, FinOps, Engineers) can access Cloud Intelligence Dashboards in [Amazon Quick Sight](https://aws.amazon.com/quicksight/). Access is secured through [AWS IAM](https://aws.amazon.com/iam/), [AWS IAM Identity Center](https://aws.amazon.com/iam/identity-center/) (formerly AWS SSO), and optional [Row Level Security](https://catalog.workshops.aws/awscid/en-US/customizations/row-level-security).

## Deployment


![\[Deployment Steps\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/china/china-deploy-simple.png)


The deployment process consists of 3 main steps:

1. Deploy Amazon S3 Bucket and Athena Tables in the **Data Collection Account** 

1. Deploy an Amazon S3 Bucket and a replication policy in **Source** Accounts (one or many)

1. Deploy Cloud Intelligence Dashboards (CID) Stack in the **Data Collection Account** 

## Deployment Steps


### Before you start


1. Choose **Beijing Region (cn-north-1)** for your deployment as Quick Sight is only available in this region for AWS China.

1. Define your Data Collection Account: create one or reuse an existing shared account. We do not recommend using the Management (Payer) Account for data collection.

1. Make sure you have permissions for deploying CloudFormation Stacks.

#### See Required Permissions

+ In the Management/Payer Account you will need permission to access AWS CloudFormation, AWS Cost & Usage Reports, AWS IAM, AWS Lambda and Amazon S3.
+ In the Data Collection Account you will need permission to access Amazon Athena, AWS CloudFormation, AWS Directory Service, Amazon EventBridge, AWS Glue, AWS IAM, AWS Lambda, Amazon Quick Sight, and Amazon S3 via both the console and the Command Line Tool.
+ For a CLI deployment, you will not require CloudFormation permissions.
+ You can use this CloudFormation template to provision an IAM role with minimal permissions required for dashboard deployment. It takes an IAM role name as a parameter and adds the required policies to the role.

### Step 1. [Data Collection Account] Create Destination For CUR Aggregation


1. Sign in to your Data Collection Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation console. This stack creates a bucket open for replication and the Athena tables.

    [https://console.amazonaws.cn/cloudformation/home?region=cn-north-1#/stacks/quickcreate?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cur-aggregation.yaml&stackName=CID-CUR-Destination&param_CreateCUR=False&param_DestinationAccountId=REPLACE%20WITH%20THE%20CURRENT%20ACCOUNT%20ID&param_SourceAccountIds=PUT%20HERE%20PAYER%20ACCOUNT%20ID](https://console.amazonaws.cn/cloudformation/home?region=cn-north-1#/stacks/quickcreate?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cur-aggregation.yaml&stackName=CID-CUR-Destination&param_CreateCUR=False&param_DestinationAccountId=REPLACE%20WITH%20THE%20CURRENT%20ACCOUNT%20ID&param_SourceAccountIds=PUT%20HERE%20PAYER%20ACCOUNT%20ID) 

### Step 2. [Source/Management Account] Create CUR and Configure Replication


1. Sign in to your Source Account (Management/Payer Account).

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation console.

    [https://console.amazonaws.cn/cloudformation/home?region=cn-north-1#/stacks/quickcreate?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cur-aggregation.yaml&stackName=CID-CUR-Replication&param_CreateCUR=True&param_DestinationAccountId=REPLACE%20WITH%20DATA%20COLLECTION%20ACCOUNT%20ID&param_SourceAccountIds=](https://console.amazonaws.cn/cloudformation/home?region=cn-north-1#/stacks/quickcreate?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cur-aggregation.yaml&stackName=CID-CUR-Replication&param_CreateCUR=True&param_DestinationAccountId=REPLACE%20WITH%20DATA%20COLLECTION%20ACCOUNT%20ID&param_SourceAccountIds=) 

### Step 3. [Data Collection Account] Deploy Dashboards


#### 3.1 - Prepare Amazon Quick Sight (Quick Suite)


##### Amazon Quick Suite Sign Up Workflow for AWS China Beijing Region


**Note**  
Quick Suite is only available in the cn-north-1 (Beijing) region for AWS China.

1. Sign in to your Data Collection Account, navigate to the AWS Management Console, and search for **Quick Suite** in the services menu.

1. Select **Sign up for Quick Suite** if this is your first time accessing the service.

1. On the Quick Suite setup page, you’ll need to choose an authentication method:
   +  **IAM Identity Center** - Recommended for simplified user management and SSO capabilities
   +  **Active Directory** - Suitable for enterprises with existing AD infrastructure

     You cannot change the authentication method after the initial setup; you would need to re-create the Amazon Quick Suite account.

1. If selecting IAM Identity Center:
   + Configure user groups for Quick Suite access levels (Admin/Reader)
   + Follow the [IAM Identity Center user management guide](https://docs.aws.amazon.com/singlesignon/latest/userguide/addusers.html) to set up groups and permissions

Note: Choose your authentication method based on your organization’s requirements and existing identity management infrastructure.

1. At the bottom of the sign up page, there is an optional add-on for Pixel-Perfect Reports:

**Note**  
Make sure to uncheck Pixel-Perfect Reports option unless specifically needed, as it incurs additional charges. This feature can be enabled later if needed.

![\[Quick Sight configuration page - uncheck Pixel-Perfect Reports option\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/pixel-perfect-china.png)


1. Complete the account creation:
   + Select the appropriate Authentication method
   + Enter a unique name for your Quick Suite account
   + Enter an email address for notifications
   + (Optional) Click Select S3 buckets and choose all cid buckets (cid-*)
   + Click Finish and wait for the congratulations screen

#### 3.2 - Deploy Foundational Dashboards


**Note**  
To avoid cross-region data transfer costs, use the Beijing Region (cn-north-1) - the only region where Quick Suite is available in China.

1. Sign in to your Data Collection Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation console.

    [https://console.amazonaws.cn/cloudformation/home?region=cn-north-1#/stacks/quickcreate?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-cfn.yml&stackName=Cloud-Intelligence-Dashboards&param_DeployCUDOSv5=yes&param_DeployKPIDashboard=yes&param_DeployCostIntelligenceDashboard=yes&param_CreateLocalAssetsBucket=yes&param_CURVersion=1.0&param_KeepLegacyCURTable=yes&param_CurrencySymbol=JPY](https://console.amazonaws.cn/cloudformation/home?region=cn-north-1#/stacks/quickcreate?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-cfn.yml&stackName=Cloud-Intelligence-Dashboards&param_DeployCUDOSv5=yes&param_DeployKPIDashboard=yes&param_DeployCostIntelligenceDashboard=yes&param_CreateLocalAssetsBucket=yes&param_CURVersion=1.0&param_KeepLegacyCURTable=yes&param_CurrencySymbol=JPY) 

1. Configure stack parameters:


+ Enter a Stack name for your template such as Cloud-Intelligence-Dashboards
+ Review Common Parameters and confirm prerequisites before specifying the other parameters. You must answer "yes" to both prerequisites questions.
+ Copy and paste your **QuickSightUserName** into the parameter text box. To find your Quick Sight username:
  + Open a new tab or window and navigate to the **Quick Sight** console
  + Find your username from the person icon in the top right corner  
![\[Quick Sight page with username drop down in the top right highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cd_dash_qs_china.png)
+ Select the Dashboards you want to install. We recommend deploying all three: Cost Intelligence Dashboard, CUDOS, and the KPI Dashboard.
+ Make sure the parameter **CreateLocalAssetsBucket** is set to **yes** and **CURVersion** is set to **1.0** 
+ The **CurrencySymbol** parameter is defaulted to JPY (Japanese Yen - ¥). Please select the appropriate symbol from the dropdown option to match your CUR settings.
+ Review the configuration, select the checkbox **I acknowledge that Amazon CloudFormation might create IAM resources with custom names**, and click **Create stack**.
+ You will see the stack start in **CREATE\_IN\_PROGRESS**. This step can take around 20 minutes. Once complete, the stack will show **CREATE\_COMPLETE** 

**Note**  
Dashboards will be empty initially. We recommend initiating a backfill via a Support Case.

### Step 4 (optional). Request Data Backfill


You can create a Support Case requesting a backfill of your Cost and Usage Report with up to 36 months of historical data. A case must be created from each of your Source Accounts (typically Management/Payer Accounts).

## Post-Deployment Steps


After successful deployment:

1. Check stack outputs for dashboard URLs

1. Verify Quick Sight access

1. Wait for data to populate (typically 24-48 hours for first data delivery)

1. Consider requesting a backfill through AWS Support if you need historical data

## FAQ


### How can I see AWS Usage in China and other Partitions?

+ You can consolidate Cost and Usage reports from China and Global regions in one account (in any partition of your choice). We recommend using [Data Transfer Hub](https://github.com/aws-solutions/data-transfer-hub). Please consult with your legal team before moving data across AWS Partitions. If you aggregate data in different currencies, you might additionally need a [currency conversion](spend-in-local-currency.md).

#### See Sample Architecture


![\[Data Transfer Architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/china/china-cur-transfer.png)


1. Amazon S3 replicates AWS CUR data from a Management account in a Global region to a Data Collection Account.

1. Cloud Intelligence Dashboards leverage Amazon Athena and Amazon Quick Sight for visualization.

1.  [Data Transfer Hub](https://github.com/aws-solutions/data-transfer-hub) moves data from the China region to the Data Collection account in a Global Region.

1. An additional solution can be used to pull up-to-date exchange rate information from a third-party source.

### What dashboards are available in China?

+ At the moment only the Foundational Dashboards (CUDOS, CID, KPI) are available. We are working on other dashboards as well.

Other questions? Visit our [FAQs](faq.md).

# Advanced
Advanced

Advanced Dashboards require [CID Data Collection Stack](data-collection.md). This section covers the following dashboards:

## Contents

+  [Compute Optimizer Dashboard](compute-optimizer-dashboard.md) 
+  [Trusted Advisor Organizational (TAO) Dashboard](trusted-advisor-dashboard.md) 
+  [AWS Budgets Dashboard](budgets-dashboard.md) 
+  [AWS News Feeds](news-feeds.md) 
+  [Cost Anomaly Dashboard](cost-anomaly-dashboard.md) 
+  [Extended Support - Cost Projection](extended-support.md) 
+  [Graviton Savings Dashboard](graviton-savings-dashboard.md) 
+  [Health Events Dashboard](health-events-dashboard.md) 
+  [Support Cases Radar Dashboard](support-cases-radar.md) 
+  [AWS End User Computing (EUC) Dashboard](euc-dashboard.md) 
+  [ResilienceVue Dashboard](resiliencevue-dashboard.md) 
+  [Data Collection Monitor](data-collection-monitor.md) 
+  [Media Services Insights Hub](media-services-insights.md) 

# Compute Optimizer Dashboard
Compute Optimizer Dashboard

## Introduction


AWS Compute Optimizer recommends optimal AWS resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics. The Compute Optimizer Dashboard (COD) lets you view cost optimization and risk reduction opportunities for all accounts in your AWS Organizations across all AWS Regions. Out-of-the-box benefits of the COD include (but are not limited to):
+ Find over and underutilized resources (EC2, AutoScaling Groups, EBS, Lambda).
+ Get right-sizing recommendations.
+ Identify potential savings across all payer accounts and regions.
+ Track optimization progress over time by AWS account, team, or business unit.

## Architecture


![\[architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/architecture/compute-optimizer-architecture.png)


1. AWS Compute Optimizer collects data about running instances and uses machine learning to generate recommendations.

1. CID Data Collection has Lambda functions that, on a schedule, assume a role in the Management account(s) to trigger an export of AWS Compute Optimizer recommendations in each region configured in the data collection stack. By default this is scheduled every 14 days, but it can be changed in the parameters of the Data Collection Stack.

1. AWS Compute Optimizer exports data to regional buckets.

1. A replication mechanism consolidates all data from the regional buckets into one data bucket. All exports in the regional buckets are deleted after 1 day per the lifecycle policy.

1. The Quick Sight dataset refreshes daily to show the latest state on the dashboard.

## Learn more

+  [3 New Features on the Compute Optimizer Dashboard](https://www.youtube.com/watch?v=IP_k2OXHPoo) 

See also:
+ The basics of Compute Optimizer right sizing in [lab 200 Rightsizing with Compute Optimizer](https://wellarchitectedlabs.com/cost/200_labs/200_aws_resource_optimization/) 
+  [AWS Compute Optimizer FAQ](https://aws.amazon.com/compute-optimizer/faqs/) 

## Demo Dashboard


Get more familiar with the dashboard using the live, interactive demo dashboard at this [link](https://cid.workshops.aws.dev/demo?dashboard=compute-optimizer-dashboard) 

![\[Image of a compute optimizer dashboard in Quick Sight\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/co_demo.png)


## Prerequisites


1. To get right sizing recommendations you need to [Enroll all accounts to Compute Optimizer](https://docs.aws.amazon.com/compute-optimizer/latest/ug/getting-started.html#account-opt-in). You can use the free version, which provides recommendations based on a 14-day look-back period.

1. Deploy or update [Data Collection Lab](data-collection.md) and make sure AWS Compute Optimizer Data Collection Module is enabled.

1. Ensure you have [Compute Optimizer enabled at Organization level](https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-compute-optimizer.html).

## Deployment


**Example**  
If you already have CUDOS, the Cost Intelligence Dashboard or the KPI Dashboard installed via CloudFormation as described [here](deployment-in-global-regions.md), you can update the stack by setting DeployComputeOptimizerDashboard to "yes" and updating the path of the Data Collection S3 bucket (if different from the default).  
If you do not have the stack installed, you can install it using the instructions [here](deployment-in-global-regions.md) (you can ignore the Cost and Usage Report part, as it is not required for this dashboard).
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open up a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id compute-optimizer-dashboard
   ```

   Please follow the instructions from the deployment wizard. More info about command line options is in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or `cid-cmd --help`.

1. You can also provide additional tag names. This dashboard supports two tags: Primary and Secondary. Tags are the "key" part of the resource tag. Note that tags here are case sensitive and they are not AWS Cost Allocation Tags.

   Recommendation: You can use one Tag to define ownership of resource and another tag to define if this particular resource is eligible for RightSizing (default) or must be excluded for valid reason (DRP, Vendor Compliance, Test, or any other reason for resource to be over-provisioned).If your are not using tags, leave defaults.

   Tags that use a dash (`-`) as a separator are not supported by default. If you have such tags, you may need additional customization of the dataset `compute_optimizer_all_options`. For example, in the `primary_tag` field you can use `parseJson(replace(tags, '-', '_'),'$.My_Tag')` 

   You can also define these tags later and apply them to the dashboard using `cid-cmd update --force --recursive` 
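As an illustration of the dash-separator workaround above, the Athena expression rewrites every dash in the tags JSON to an underscore before extracting the key. The same transformation can be sketched in Python (the tag payload below is a made-up example, not real collector output):

```python
import json

def extract_tag(tags_json: str, tag_key: str):
    """Mimic parseJson(replace(tags, '-', '_'), '$.<tag_key>'):
    rewrite every dash to an underscore, then look up the key."""
    normalized = tags_json.replace("-", "_")
    return json.loads(normalized).get(tag_key)

# Hypothetical resource tags that use dashes as separators
tags = '{"cost-center": "cc-123", "team": "platform"}'
print(extract_tag(tags, "cost_center"))  # prints "cc_123" -- dashes in values are rewritten too
```

Note the side effect: the `replace` runs over the whole string, so dashes inside tag values are rewritten as well.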

## Update


Please note that dashboards are not updated when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id compute-optimizer-dashboard
```

## Optional Steps


 **Manage Business Units Map** 

To manage business units, modify the `business_units_map` view. You can update the view definition with your own values, or you can create a CSV file, upload it to S3, create a table over it, and set the `business_units_map` view to select from that table.

```
CREATE OR REPLACE VIEW business_units_map AS
SELECT *
FROM
    (
    VALUES
        ROW ('111111111', 'account1', 'Business Unit 1')
        , ROW ('222222222', 'account2', 'Business Unit 2')
    ) ignored_table_name (account_id, account_name, bu)
```
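For the CSV route mentioned above, a hedged sketch of the Athena DDL might look like the following (the bucket name, prefix, and header handling are placeholders to adapt):

```sql
-- Hypothetical external table over a CSV file uploaded to S3
CREATE EXTERNAL TABLE IF NOT EXISTS business_units_csv (
    account_id string,
    account_name string,
    bu string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
LOCATION 's3://your-bucket/business-units/'
TBLPROPERTIES ('skip.header.line.count' = '1');

-- Point the view at the table
CREATE OR REPLACE VIEW business_units_map AS
SELECT account_id, account_name, bu
FROM business_units_csv
```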

You can also use the `business_units_map` view as a proxy to other data sources.

If you do not need the Business Units functionality and you have the CUDOS dashboard installed with `account_map`, you can use this view to SELECT from `account_map`:

```
CREATE OR REPLACE VIEW business_units_map AS
SELECT
    account_id as account_id,
    account_name as account_name,
    'Undefined' as bu
FROM account_map
```

## Authors

+ Iakov Gan, Ex-Amazonian
+ Yuriy Prykhodko, Principal Technical Account Manager
+ Voicu Chirtes, Senior Technical Account Manager
+ Timur Tulyaganov, Ex-Amazonian

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Trusted Advisor Organizational (TAO) Dashboard
Trusted Advisor Organizational (TAO) Dashboard

## Introduction


AWS Trusted Advisor helps you optimize your AWS infrastructure, improve security and performance, reduce overall costs, and monitor service limits. Organizational view lets you see Trusted Advisor checks for all accounts in your AWS Organization. The only way to visualize the organizational view is to use the TAO dashboard. The TAO dashboard is a set of visualizations that provide comprehensive details and trends across your entire AWS Organization. Out-of-the-box benefits of the TAO dashboard include (but are not limited to):
+ Quickly locate accounts that haven’t rotated their AWS IAM keys.
+ Find and sort unutilized and underutilized resources by cost or account.
+ Find accounts that do not have CloudTrail logging enabled.
+ See a list of accounts that have reached 80% of individual service limits.
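Under the hood, the organizational view is built on the AWS Support API. As a hedged illustration only (not the dashboard's actual collection code), counting the available service-limit checks and flagging resources at or above 80% utilization might look like this:

```python
def is_near_limit(usage: float, limit: float, threshold: float = 0.8) -> bool:
    """Flag a resource whose usage has reached the threshold share of its limit."""
    return limit > 0 and usage / limit >= threshold

if __name__ == "__main__":
    import boto3  # requires a Business, On-Ramp, or Enterprise Support plan

    # The AWS Support API is only available in us-east-1
    support = boto3.client("support", region_name="us-east-1")
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    limit_checks = [c for c in checks if c["category"] == "service_limits"]
    print(f"{len(limit_checks)} service-limit checks available")
```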

**Note**  
All accounts must have a **Business**, **On-Ramp** or **Enterprise** Support Plan.

![\[Architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/architecture/ta.png)


## Demo Dashboard


Explore the live, interactive demo dashboard by following this [link](https://cid.workshops.aws.dev/demo?dashboard=tao) 

![\[Amazon Quick Sight Trusted Advisor demo dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/tao_demo.png)


## Prerequisites


1. Check Support Plan

   Make sure all concerned accounts have a **Business**, **On-Ramp** or **Enterprise** [Support Plan](https://console.aws.amazon.com/support/plans/home).

1. Deploy or update the [Data Collection Lab](data-collection.md) and make sure the Trusted Advisor Data Collection module is enabled. Version 3.14.1 or higher is required.

## Deployment


**Example**  
If you already have the CUDOS, Cost Intelligence, or KPI Dashboard installed via CloudFormation as described [here](deployment-in-global-regions.md), you can update the stack by setting **DeployTaoDashboard** to "yes" and updating the path of the Data Collection S3 bucket (if different from the default).  
If you do not have the stack installed, you can install it using the instructions [here](deployment-in-global-regions.md) (Step 3), setting the **DeployTaoDashboard** parameter to "yes" (you can skip the Cost and Usage Report part, Steps 1 and 2, as it is not required for this dashboard).
An alternative way to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id ta-organizational-view
   ```

   Please follow the instructions from the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Please note that dashboards are not updated when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id ta-organizational-view
```

## Authors

+ Yuriy Prykhodko, Principal Technical Account Manager, AWS
+ Timur Tulyaganov, Ex-Amazonian
+ Sumit Dhuwalia, Senior Technical Account Manager, AWS

## Contributors

+ Oleksandr Moskalenko, Ex-Amazonian
+ Georgios Rozakis, Technical Account Manager, AWS

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# AWS Budgets Dashboard
AWS Budgets Dashboard

## Introduction


 [AWS Budgets](https://aws.amazon.com/aws-cost-management/aws-budgets/) is a service that helps customers plan and track their cloud spending across AWS services. It allows customers to set custom budgets for their AWS costs and usage, enabling better financial management and control over their AWS resources.

This dashboard gives you a clear, hierarchical view of your organization’s budgets, from the top level down to individual departments and applications. You can easily track budgeted, forecasted, and actual spend all in one place. With customizable visualizations and real-time insights, you can make informed, data-driven decisions that drive strategic alignment and optimization. Identify areas for improvement, forecast future needs, and ensure your financial resources are being used efficiently.

![\[Architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/architecture/budgets.png)


## Demo Dashboard


Get more familiar with Dashboard using the live, interactive demo dashboard following this [link](https://cid.workshops.aws.dev/demo?dashboard=aws-budgets) 

![\[Budget View\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/budgets_view.png)


![\[Budget Levels\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/budgets_view_levels.png)


## Tagging and Hierarchy


Using tags with AWS Budgets adds a powerful layer of organization and flexibility to the budgeting process. Tags are key-value pairs that customers can attach to AWS resources as metadata. When applied to budgets, tags allow customers to:

1. Organize budgets hierarchically: Customers can create a tag structure that mirrors their organizational hierarchy, such as department/team/project.

1. Represent their financial structure: Tags can reflect a company’s cost centers, business units, or other financial divisions.

1. Create granular budgets: Set up budgets for specific projects, applications, or environments using relevant tags.

1. Improve cost allocation: Easily track and allocate costs to the appropriate business units or projects based on tags.

1. Enhance reporting: Generate more detailed and meaningful reports by filtering and grouping budget data using tags.

By leveraging tags with AWS Budgets, customers can create a more organized and insightful budgeting system that aligns with their organization’s structure and financial practices (refer to the diagram below). This approach provides greater visibility into AWS spend and helps make more informed decisions about resource allocation and cost optimization.

This dashboard shows budgets tagged with the specific tag key `cid:budget-level`.

![\[Structure Of Tags\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/budgets-financial-structure.png)


## Prerequisites


1. Deploy or update the [Data Collection Lab](data-collection.md) and make sure the Budgets and Organization Data Collection modules are enabled. Version 3.0.3 or higher is required.

1. Tagging your budgets enables the option of introducing hierarchy within the organizational budgets. To achieve this, we recommend setting the tags with a key-value pair as below:

   ```
   Key: cid:budget-level
   Value: Organization
   ```
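The same tag can also be attached programmatically. This is a hedged boto3 sketch rather than the documented setup path — the account ID and budget name below are placeholders:

```python
def budget_level_tag(level: str) -> list:
    """Build the ResourceTags payload for the cid:budget-level key."""
    return [{"Key": "cid:budget-level", "Value": level}]

if __name__ == "__main__":
    import boto3

    # Placeholder ARN: arn:aws:budgets::<account-id>:budget/<budget-name>
    arn = "arn:aws:budgets::111122223333:budget/org-budget"
    boto3.client("budgets").tag_resource(
        ResourceARN=arn,
        ResourceTags=budget_level_tag("Organization"),
    )
```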

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, installed as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=AWS-Budgets-Dashboard&param_DashboardId=aws-budgets&param_RequiresDataCollection=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=AWS-Budgets-Dashboard&param_DashboardId=aws-budgets&param_RequiresDataCollection=yes) 

1. You can change **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE_IN_PROGRESS** status. Once complete, it will show **CREATE_COMPLETE** 

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative way to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id aws-budgets
   ```

   Please follow the instructions from the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Please note that dashboards are not updated when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id aws-budgets
```

## Troubleshooting


 **Column "optimization_data.budgets_data.tags" cannot be resolved** 

If you see this issue during deployment, make sure you have updated the Data Collection stack to the version required in Prerequisites.

## Authors

+ Mohideen Hajamohideen, Sr. Cloud Infrastructure Architect
+ Marco De Bianchi, Sr. Cloud & FinOps Architect

## Contributors

+ Iakov Gan, Ex-Amazonian
+ Yuriy Prykhodko, Principal Technical Account Manager

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# AWS News Feeds
AWS News Feeds

## Introduction


This dashboard provides recent AWS feeds, including What’s New, blog posts, videos, and security bulletins. It allows you to filter news by AWS service, feed type, and category, and to quickly focus on the most important topics. You can deploy the dashboard in your AWS account and embed it in your internal web sites. We keep our demo dashboard up to date, and you are welcome to use it to stay updated with AWS-related news and watch YouTube videos directly from the demo dashboard.

## Demo Dashboard


Explore the live, interactive demo dashboard by following this [link](https://cid.workshops.aws.dev/demo?dashboard=aws-feeds) 

![\[AWS Feeds\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/aws-feeds.png)


## Prerequisites


Deploy the [Data Collection Lab](data-collection.md) and make sure the AWS Feeds module is selected. Version 3.0.8 or higher is required.

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, installed as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=AWS-News-Feeds-Dashboard&param_DashboardId=aws-feeds&param_RequiresDataCollection=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=AWS-News-Feeds-Dashboard&param_DashboardId=aws-feeds&param_RequiresDataCollection=yes) 

1. You can change **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE_IN_PROGRESS** status. Once complete, it will show **CREATE_COMPLETE** 

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative way to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id aws-feeds --data-collection-database-name optimization_data
   ```

   Please follow the instructions from the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Please note that dashboards are not updated when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id aws-feeds
```

## Authors

+ Francisco Tresgallo, Senior Technical Account Manager

## Contributors

+ Iakov Gan, Ex-Amazonian
+ Yuriy Prykhodko, Principal Technical Account Manager

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Cost Anomaly Dashboard
Cost Anomaly Dashboard

## Introduction


AWS Cost Anomaly Detection uses advanced machine learning to identify anomalous spend and its root causes, empowering customers to take action quickly. To make it easier to spot any sudden spike in spend, customers can visualize insights into anomalous spend across multiple accounts using Amazon QuickSight, which retrieves and refreshes the data periodically. Out-of-the-box benefits of the Cost Anomaly Dashboard include (but are not limited to):
+ Early detection - a centralized cost anomaly dashboard allows customers to quickly identify and investigate cost anomalies.
+ Trend analysis - identify trends and patterns associated with cost anomalies month over month, by account, by service, and more.
+ Governance - a centralized dashboard view across the organization (payer) for the FinOps team to track and monitor AWS cost anomalies at the organization level.
+ Early resolution - with the dashboard, the FinOps team can proactively work with different teams in the organization to prevent overruns.

See also:
+  [AWS Cost Anomaly FAQ](https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/faqs/) 
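The anomalies that feed this dashboard originate from the Cost Explorer `GetAnomalies` API. As a hedged illustration (not the data collection module's actual code), fetching anomalies for a trailing window could be sketched as:

```python
from datetime import date, timedelta

def trailing_window(days: int = 30) -> dict:
    """Build the DateInterval parameter for a GetAnomalies call."""
    end = date.today()
    start = end - timedelta(days=days)
    return {"StartDate": start.isoformat(), "EndDate": end.isoformat()}

if __name__ == "__main__":
    import boto3

    ce = boto3.client("ce")  # Cost Explorer
    resp = ce.get_anomalies(DateInterval=trailing_window(), MaxResults=10)
    for anomaly in resp["Anomalies"]:
        print(anomaly["AnomalyId"], anomaly["Impact"]["TotalImpact"])
```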

## Demo Dashboard


Explore the live, interactive demo dashboard by following this [link](https://cid.workshops.aws.dev/demo?dashboard=cost-anomaly-dashboard) 

![\[Image of a cost anomaly dashboard in Quick Sight\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/ca_demo.png)


## Prerequisites


1. To detect anomalies on abnormal or sudden spend increases in your AWS account, you need to [enable Cost Anomaly Detection in your account](https://docs.aws.amazon.com/cost-management/latest/userguide/settingup-ad.html). AWS Cost Anomaly Detection is a feature within Cost Explorer, so Cost Explorer must be enabled first. For instructions on how to enable Cost Explorer using the console, see [Enabling Cost Explorer](https://docs.aws.amazon.com/cost-management/latest/userguide/ce-enable.html).

1. Deploy or update the [Data Collection Lab](data-collection.md) and make sure the Cost Anomalies Data Collection module is enabled.

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, installed as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Cost-Anomaly-Dashboard&param_DashboardId=aws-cost-anomalies&param_RequiresDataCollection=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Cost-Anomaly-Dashboard&param_DashboardId=aws-cost-anomalies&param_RequiresDataCollection=yes) 

1. You can change **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE_IN_PROGRESS** status. Once complete, it will show **CREATE_COMPLETE** 

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative way to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id aws-cost-anomalies --athena-database optimization_data
   ```

   Please follow the instructions from the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Please note that dashboards are not updated when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id aws-cost-anomalies
```

## Authors

+ Yash Bindlish, Enterprise Support Manager
+ Iakov Gan, Ex-Amazonian
+ Yuriy Prykhodko, Principal Technical Account Manager

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Extended Support - Cost Projection
Extended Support - Cost Projection

## Introduction


This dashboard provides insights on resources reaching extended support and projects the cost of extended support based on resource usage over a given period of time.

Services with extended support covered by this dashboard:

 **ElastiCache Extended Support** 

With ElastiCache Extended Support, you can continue running your cache on a major engine version past its end of standard support date for an additional cost. If you don’t upgrade by the end of standard support date, you will be charged for Extended Support.

Extended Support provides the following updates and technical support:
+ Security updates for critical and high CVEs for your cache and cache engine
+ Bug fixes and patches for critical issues
+ The ability to open support cases and receive troubleshooting help within the standard ElastiCache service level agreement

This dashboard provides a clear view of ElastiCache clusters reaching extended support in the next 3, 6, or 12 months, and beyond.

It presents the estimated monthly cost of extended support, and allows you to drill down to cluster level, to review where your usage and estimated cost will be if, and when, your clusters enter the extended support period.

See also:
+  [Amazon ElastiCache Extended Support](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/extended-support.html) 

 **EKS Extended Support** 

With Amazon EKS Extended Support, you can continue running your EKS clusters on a version that has reached the end of its standard support, for an additional 12 months.

During extended support, Amazon EKS clusters will receive ongoing security patches for the Kubernetes control plane. Additionally, Amazon EKS will release patches for specific add-ons.

This dashboard provides a clear view of EKS clusters reaching extended support in the next 3, 6, or 12 months, and beyond.

It presents the estimated monthly cost of extended support, and allows you to drill down to cluster level, to review where your usage and estimated cost will be if, and when, your clusters enter the extended support period.

See also:
+  [Amazon EKS Extended Support FAQs](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#extended-support-faqs) 

 **RDS Extended Support** 

With Amazon RDS Extended Support, you can continue running your database on a major engine version past the RDS end of standard support date for an additional cost.

The dashboard provides a clear view of databases reaching extended support in the next 3, 6, or 12 months, and beyond.

It presents the estimated monthly cost of extended support, and allows you to drill down to database instance level, to review where your usage and estimated cost will be if, and when, your databases enter the extended support period.

See also:
+  [Using Amazon RDS Extended Support](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/extended-support.html) 
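The projection itself is straightforward arithmetic over collected usage. A minimal sketch, assuming a placeholder per-vCPU-hour rate (actual RDS Extended Support pricing varies by engine, region, and extended-support year, so treat the numbers as illustrative only):

```python
HOURS_PER_MONTH = 730  # common approximation of hours in a month

def projected_monthly_cost(vcpus: int, rate_per_vcpu_hour: float,
                           hours: float = HOURS_PER_MONTH) -> float:
    """Estimate the monthly extended-support charge for one database instance."""
    return vcpus * rate_per_vcpu_hour * hours

# Hypothetical 4-vCPU instance at a placeholder $0.10 per vCPU-hour
print(round(projected_monthly_cost(4, 0.10), 2))  # prints 292.0
```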

 **OpenSearch Extended Support** 

With Amazon OpenSearch Extended Support, you can continue running your legacy Elasticsearch and OpenSearch versions beyond the end of standard support for an incremental flat fee over regular instance pricing.

The dashboard provides a clear view of Elasticsearch and OpenSearch domains reaching extended support in the next 3, 6, or 12 months, and beyond.

It presents the estimated monthly cost of extended support, and allows you to drill down to domain level, to review where your usage and estimated cost will be if, and when, your domains enter the extended support period.

See also:
+  [Using Amazon OpenSearch Extended Support](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html#standard-support-extended-suppport) 

## Demo Dashboard


Explore the live, interactive demo dashboard by following this [link](https://cid.workshops.aws.dev/demo?dashboard=extended-support-cost-projection&sheet=default) 

![\[Extended Support Cost Projection Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/rdsxtsuppcp.png)


## Prerequisites


1. Deploy the [Foundational dashboards (CUDOS, CID, KPI)](deployment-in-global-regions.md) to ensure cost and usage information is available to produce the cost projection for RDS Extended Support based on actual usage for a given period of time.

1. Deploy or update the [Data Collection Lab](data-collection.md) and make sure the Inventory Data Collection module is enabled. Version 3.2.0 or higher is required.

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, installed as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Extended-Support-Cost-Projection&param_DashboardId=extended-support-cost-projection&param_RequiresDataCollection=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Extended-Support-Cost-Projection&param_DashboardId=extended-support-cost-projection&param_RequiresDataCollection=yes) 

1. You can change the **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE_IN_PROGRESS** status. Once complete, it will show **CREATE_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open up a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id extended-support-cost-projection
   ```

   Please follow the instructions from the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Please note that dashboards are not updated automatically when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id extended-support-cost-projection
```

## Authors

+ Julio Chaves, Technical Account Manager
+ Iakov Gan, Ex-Amazonian
+ Yuriy Prykhodko, Principal Technical Account Manager

## Feedback & Support


Follow [Feedback & Support](feedback-support.md) guide

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Graviton Savings Dashboard
Graviton Savings Dashboard

## Introduction


The Graviton Savings Dashboard (GSD) visualizes your current usage of [AWS Graviton Processors](https://aws.amazon.com/ec2/graviton/) and estimates potential cost savings when switching to Graviton. This tool helps you make informed decisions about optimizing your cloud infrastructure and track your progress.

 [AWS Graviton Processors](https://aws.amazon.com/ec2/graviton/) are processors designed by AWS using the Arm instruction set to deliver the best price performance for your cloud workloads running in Amazon Elastic Compute Cloud (Amazon EC2). AWS Graviton-based instances cost up to 20% less and use up to 60% less energy than comparable x86-based Amazon EC2 instances.

Main features of Graviton Savings Dashboard:
+  **Current Graviton Usage and Realized Savings** - View your current Graviton usage and realized savings for Amazon EC2, RDS, OpenSearch and Elasticache
+  **Potential Graviton Savings** - Detect current workloads that are eligible for Graviton and evaluate potential savings from the transition.
+  **Governance** - Centralized Dashboard view allows FinOps Team to track and monitor AWS Graviton savings and opportunities across one or multiple AWS Organizations (Payers).  
[![AWS Videos](http://img.youtube.com/vi/F_wskaHIfUk/0.jpg)](http://www.youtube.com/watch?v=F_wskaHIfUk)

## Demo Dashboard


Get more familiar with the dashboard using the live, interactive demo by following this [link](https://cid.workshops.aws.dev/demo?dashboard=graviton-savings-dashboard).

![\[EC2 - Existing Usage\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/EC2_Graviton_Opportunity_GSD.png)


See more screenshots in the [usage guide](#graviton-savings-usage-overview).

## Architecture Overview


The dashboard uses the AWS CUR from the [Foundational Dashboards Stack](deployment-in-global-regions.md), along with the AWS Pricing and Inventory modules from the [Data Collection Stack](data-collection.md). These stacks automatically collect data and store it in Amazon S3. Customers can then leverage Amazon Athena and the provided Amazon Quick Sight dashboard for visualization and analysis.

![\[Data Collection Overview\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/basic-data-collection.png)


## Prerequisites


1. If you do not have your Cost and Usage Report (CUR) set up, follow Steps 1 and 2 from the [CUDOS, CID, and KPI Dashboard](deployment-in-global-regions.md) deployment guide.

1. Deploy or update [Data Collection Lab](data-collection.md) and make sure the Inventory Data Collection module is enabled.

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need to install Foundational Dashboards CFN with version v4.0.0 or above as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Graviton-Savings-Dashboard&param_DashboardId=graviton-savings&param_RequiresDataCollection=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Graviton-Savings-Dashboard&param_DashboardId=graviton-savings&param_RequiresDataCollection=yes) 

1. You can change the **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE_IN_PROGRESS** status. Once complete, it will show **CREATE_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open up a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id graviton-savings
   ```

   Please follow the instructions from the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Please note that dashboards are not updated automatically when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id graviton-savings
```

## Usage Overview


### Click here for the detailed usage overview


The Graviton Savings Dashboard provides the ability for users to track current Graviton usage and realized savings, as well as identify potential migration opportunities. Each of the 4 services represented in the dashboard (EC2, RDS, ElastiCache, and OpenSearch) has dedicated tabs to review current usage and potential savings.

 **EC2** 

 **Current Usage and Savings** 

The Current Amazon EC2 Graviton Usage and Savings section provides a comprehensive overview of your current usage of EC2 Graviton-based instances and the potential cost savings you realized by migrating workloads to Graviton. These savings are calculated in comparison to the latest Intel-based instance generation of the same size. The section also allows you to explore Graviton coverage by month, usage/savings by account and instance family, and unit costs trends to see how your Graviton adoption has impacted your workloads. This detailed information can help you assess the benefits and cost optimization opportunities of adopting Graviton-based EC2 instances.

![\[EC2 - Existing Usage\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/EC2_UsageSavings_GSD.png)
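The comparison basis described above (a Graviton instance versus the latest Intel-based generation of the same size) can be sketched as a simple rate difference. The hourly rates below are hypothetical placeholders, not actual AWS prices:

```python
# Sketch of the realized-savings calculation: compare a Graviton instance's
# hourly rate against the latest same-size x86 (Intel) generation.
# Rates below are hypothetical placeholders, not real AWS prices.

def graviton_savings_pct(x86_hourly_rate: float, graviton_hourly_rate: float) -> float:
    """Percent saved by running on Graviton instead of the comparable x86 instance."""
    return round((x86_hourly_rate - graviton_hourly_rate) / x86_hourly_rate * 100, 1)

# Example: a hypothetical same-size instance pair.
x86_rate = 0.10       # latest Intel generation, same size (placeholder)
graviton_rate = 0.08  # comparable Graviton instance (placeholder)
print(graviton_savings_pct(x86_rate, graviton_rate))  # prints 20.0
```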


 **Graviton Opportunities** 

The Amazon EC2 Graviton Opportunity section provides insights into the potential cost savings you could realize by migrating eligible workloads to Graviton-based instances. This section allows you to analyze your Graviton coverage - both at the account level and by instance family. This can help you identify clusters of workloads that present the greatest opportunities to benefit from the cost advantages of Graviton.

![\[EC2 - Opportunities\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/EC2_Graviton_Opportunity_GSD.png)


 **RDS** 

 **Current Usage and Savings** 

RDS has Current Usage and Savings visuals similar to EC2's, providing details on your current usage, realized savings, cost, and savings percentage. It also provides details by RDS engine usage, as this is a large driver of eligibility for Graviton.

![\[RDS - Existing Usage\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/RDS_UsageSavings_GSD.png)


 **Graviton Opportunities** 

Eligibility for RDS Graviton is determined by RDS type, engine, and version number. The table below details eligibility for Graviton:


| Type | MySQL | PostgreSQL | MariaDB | 
| --- | --- | --- | --- | 
|  Amazon RDS  |  8.0.17 and higher  |  12.3, 13 and higher  |  10.4.13, 10.5 and higher  | 
|  Amazon Aurora  |  2.09.2 and higher  |  11.9 and higher, 12.4 and higher  |  n/a  | 

The RDS Graviton Opportunities section provides a breakdown of your Graviton eligibility based on several criteria. If your current database is using a compatible engine and meets the required engine version, it will be displayed as "Eligible". If it meets the engine type requirements but does not meet the version number requirements, it will be listed as "Requires Update". Otherwise, it will be listed as "Ineligible". The "Eligibility and Savings by RDS Resource ID" table can be used as an explorer to identify workloads and determine potential savings for particular usage. You can use the controls at the top of the page and the filters on the table to isolate particular usage, and export the report to send to application teams to showcase instance-level savings.

You can learn more about RDS Graviton Eligibility [here](https://aws.amazon.com/blogs/database/key-considerations-in-moving-to-graviton2-for-amazon-rds-and-amazon-aurora-databases/) 
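The three-way classification described above can be sketched in code. The minimum versions below come from the Amazon RDS row of the eligibility table; this is an illustrative simplification (real engine version strings and the Aurora row have additional edge cases):

```python
# Illustrative sketch of the Eligible / Requires Update / Ineligible
# classification, using the Amazon RDS row of the table above.
# Real RDS engine version strings have more edge cases; this is a simplification.

RDS_MIN_VERSIONS = {
    "mysql": (8, 0, 17),
    "postgres": (12, 3),
    "mariadb": (10, 4, 13),
}

def parse_version(v: str) -> tuple:
    """Turn '8.0.17' into (8, 0, 17) for tuple comparison."""
    return tuple(int(p) for p in v.split("."))

def classify(engine: str, version: str) -> str:
    minimum = RDS_MIN_VERSIONS.get(engine.lower())
    if minimum is None:
        return "Ineligible"        # engine not supported on Graviton
    if parse_version(version) >= minimum:
        return "Eligible"          # engine and version both qualify
    return "Requires Update"       # right engine, version too old

print(classify("mysql", "8.0.20"))    # prints Eligible
print(classify("mysql", "5.7.30"))    # prints Requires Update
print(classify("sqlserver-ee", "15.0"))  # prints Ineligible
```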

![\[RDS - Opportunities\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/RDS_Opportunity_GSD.png)


 **ElastiCache** 

 **Current Usage and Savings** 

Similar to EC2 and RDS, the Current Usage and Savings visuals for ElastiCache provide details on your current usage, realized savings, cost, and savings percentage. They also provide details by cache engine, as this is a large driver of eligibility for Graviton.

![\[ElastiCache\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/ElastiCache_UsageSavings_GSD.png)


 **Graviton Opportunities** 

The ElastiCache Graviton Opportunity section highlights the potential monthly savings from using Graviton processors for your ElastiCache clusters. This section identifies the eligible clusters that meet the criteria to migrate to Graviton-based instances. The eligibility for Graviton usage is based on the specific database engine and version running on the cluster. Your caches are eligible to move to Graviton if they are:
+ Redis - 5.0.6 and above
+ Memcached - 1.5.16 and above

For more information, see the following supported versions documentation for [Redis](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/supported-engine-versions.html) and [Memcached](https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/supported-engine-versions-mc.html) 
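The version thresholds above can be checked mechanically. The cluster records below are illustrative placeholders, not an actual API response shape:

```python
# Sketch: filter a cluster inventory down to Graviton-eligible clusters using
# the documented minimum engine versions (Redis >= 5.0.6, Memcached >= 1.5.16).
# Cluster records here are illustrative, not a real ElastiCache API shape.

ELASTICACHE_MINIMUMS = {"redis": (5, 0, 6), "memcached": (1, 5, 16)}

def eligible_clusters(clusters):
    out = []
    for c in clusters:
        minimum = ELASTICACHE_MINIMUMS.get(c["engine"])
        version = tuple(int(p) for p in c["engine_version"].split("."))
        if minimum is not None and version >= minimum:
            out.append(c["id"])
    return out

inventory = [
    {"id": "cache-a", "engine": "redis", "engine_version": "6.2.6"},
    {"id": "cache-b", "engine": "redis", "engine_version": "4.0.10"},
    {"id": "cache-c", "engine": "memcached", "engine_version": "1.6.17"},
]
print(eligible_clusters(inventory))  # prints ['cache-a', 'cache-c']
```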

![\[ElastiCache\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/ElastiCache_Opportunity_GSD.png)


 **OpenSearch** 

 **Current Usage and Savings** 

Similar to other services, the Current Amazon OpenSearch Graviton Usage and Savings section provides insights into your usage by engine to give you context into the eligibility of Graviton usage.

![\[OpenSearch - Existing Usage\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/OpenSearch_UsageSavings_GSD.png)


 **Graviton Opportunities** 

OpenSearch Graviton eligibility is more straightforward than for other AWS managed services. The table below describes eligibility for both Amazon Elasticsearch and Amazon OpenSearch:


| Type | Required Version | 
| --- | --- | 
|  Amazon ElasticSearch  |  7.9 or higher  | 
|  Amazon OpenSearch  |  All versions eligible  | 

For more information on supported OpenSearch instance types, read more in the official [Amazon OpenSearch Service Documentation](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/supported-instance-types.html) 

![\[OpenSearch - Opportunity Explorer\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/OpenSearch_Opportunity_GSD.png)


### Authors

+ Rosa Corley, Senior FinOps Commercial Architect
+ Rajani Guptan, Senior Technical Account Manager
+ Rem Baumann, Ex-Amazonian
+ Erik Petersen, Ex-Amazonian

### Contributors

+ Iakov Gan, Ex-Amazonian
+ Eric Christensen, Technical Account Manager
+ Yuriy Prykhodko, Principal Technical Account Manager
+ Travis James, Optimization Solutions Architect
+ John Masci, Principal Optimization Solutions Architect
+ Vinay Gaonkar, Principal Go To Market, EC2 Spot
+ Hahnara Hyun, Senior Specialist Solutions Architect, EC2 Graviton
+ Zi Shen Lim, Sustainability GTM, Graviton
+ Bhavik Gandhi, FinOps Commercial Architect
+ Shankar Gopalan, WWSO Specialist

## Feedback & Support


If you have feedback or questions on the dashboard, please send your inquiries to [aws-cid-graviton-savings-dashboard@amazon.com](mailto:aws-cid-graviton-savings-dashboard@amazon.com) 

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Health Events Dashboard
Health Events Dashboard

## Introduction


 [AWS Health](https://aws.amazon.com/premiumsupport/technology/aws-health/) is the authoritative data source for events and changes affecting your AWS cloud resources. Through a centralized view across your organization, AWS Health integrates with 200+ AWS services to aggregate important information in a timely manner. AWS Health notifies you about service events, planned changes, and other account matters to help you manage your resources and take actions where necessary.

The CID Health Events Dashboard uses data collected from the [AWS Health Organizational View API](https://docs.aws.amazon.com/health/latest/ug/aggregate-events.html) and creates a variety of visualizations for your past, current, and upcoming AWS Health events. The dashboard’s charts allow you to analyze individual or multiple events to raise awareness and facilitate your operational planning.

Some of the features of this dashboard include:
+ Drill down from summary views to granular details - See the most impactful events and drill down to lists of affected resources
+ Deprecating versions tracking - Analyze and plan for deprecating versions of different AWS services, such as RDS and Lambda
+ Upcoming event timeline - See the scope and dates of future events to facilitate operational planning
+ Consolidation - Centralized view of all accounts in an organization or across multiple payer accounts

The Data Collection Stack uses the AWS Organizations API to collect AWS Health data daily. See more in [prerequisites](#health-event-dashboard-prerequisites).

**Note**  
Please note that the data on this dashboard may have a lag of 48 hours or more. Please do not use this dashboard for monitoring or real-time operational events; it is exclusively for review and longer-term operational planning. Please use [AWS Health Notifications](https://docs.aws.amazon.com/health/latest/ug/manage-user-notifications.html) to get real-time information when needed.

**Note**  
AWS Health might not include records of events that occurred in your organization before you enabled the organizational view feature. This limitation also applies to scheduled change announcements.

**Note**  
You must have a Business, Enterprise On-Ramp, or Enterprise Support plan from AWS Support to use the AWS Health API.

## Architecture


We recommend installing the Health Events Dashboard in a separate Data Collection account (this can be the same account as your other CID dashboards).

![\[Architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/aws-health-archi.png)


1. The [Data Collection Stack](data-collection.md) provides an AWS Lambda function that assumes a role in one or more management accounts to retrieve AWS Health data daily and store it in Amazon S3. The Lambda function only pulls data updated since the last retrieval. The stack also provides AWS Glue tables to query collected data.

1. Cloud Intelligence Dashboards provide Amazon Athena views for querying data directly from the S3 bucket using AWS Glue tables, along with Amazon Quick Sight datasets and dashboards, allowing operations teams to access AWS Health data. Access can be secured through AWS IAM, IAM Identity Center (SSO), and optional row-level security.
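The incremental pull in step 1 can be sketched as a timestamp filter. The field names below are hypothetical simplifications, not the data collection module's actual schema:

```python
from datetime import datetime, timezone

# Sketch of the incremental collection: keep only events whose last-updated
# time is newer than the previous retrieval. Field names are hypothetical
# simplifications, not the data collection module's actual schema.

def events_to_collect(events, last_retrieval):
    return [e for e in events if e["lastUpdatedTime"] > last_retrieval]

last_run = datetime(2024, 5, 1, tzinfo=timezone.utc)
events = [
    {"eventArn": "event-A", "lastUpdatedTime": datetime(2024, 5, 2, tzinfo=timezone.utc)},
    {"eventArn": "event-B", "lastUpdatedTime": datetime(2024, 4, 20, tzinfo=timezone.utc)},
]
print([e["eventArn"] for e in events_to_collect(events, last_run)])  # prints ['event-A']
```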

## Demo Dashboard


Get more familiar with the dashboard using the live, interactive demo by following this [link](https://cid.workshops.aws.dev/demo/?dashboard=health-events-dashboard).

![\[Health Dashboard Screenshot\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/he_dashboard.png)


## Prerequisites


1. Enable AWS Health events across accounts with organizational view.

   For this dashboard you need to [enable the organizational view of Health events](https://docs.aws.amazon.com/health/latest/ug/enable-organizational-view-in-health-console.html#enable-organizational-view-console). By default, you can use AWS Health to view the AWS Health events of a single AWS account. If you use AWS Organizations, you can also view AWS Health events centrally across your organization. This feature provides access to the same information as single account operations and it is the mechanism used to render this dashboard. You must have a Business, Enterprise On-Ramp, or Enterprise Support plan from AWS Support to use the AWS Health API.

1. Deploy or update [Data Collection Lab](data-collection.md) and make sure the Health Events Data Collection module is enabled. Version 3.0.8 or higher is required.

1. Prepare Athena

   If this is your first time using Athena, you will need to complete a few setup steps before you can create the required views. If you are already a regular Athena user, you can skip these steps and move on to the Enable Amazon Quick Sight section below.

### Click here to get Athena warmed up


1. From the services list, choose **S3** 

1. Create a new S3 bucket for Athena query results to be logged to. Keep it in the same region as the S3 bucket created for your data via the Data Collection Lab.

1. From the services list, choose **Athena** 

1. Select **Get Started** to enable Athena and start the basic configuration  
![\[Athena getting started page from the AWS console\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/co_athena.png)

1. At the top of this screen select **Before you run your first query, you need to set up a query result location in Amazon S3**.  
![\[Athena Query editor in the AWS console\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/co_athena_v2.png)

1. Validate that your Athena primary workgroup has an output location:
   + Open a new tab or window and navigate to the **Athena** console
   + Select **Workgroup: primary**   
![\[Athena Query editor with primary workgroup highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/co_athena_v3.png)
   + Confirm your **Query result location** is configured with an S3 bucket path.
     + If not configured, continue to setting up by clicking **Edit workgroup**   
![\[Athena workgroup settings with the edit workgroup button highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/co_athena_v4.png)
   + Add the **S3 bucket path** you have selected for your Query result location and click save  
![\[Athena edit workgroup with the query results location input highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/co_athena_v5.png)

1. Enable Amazon Quick Sight

   Amazon Quick Sight is the AWS Business Intelligence tool that will allow you not only to view the standard AWS-provided insights into all of your accounts, but also to produce new versions of the dashboards we provide or create something entirely customized to you. If you are already a regular Amazon Quick Sight user, you can skip these steps.

### Click here to get started with Amazon Quick Sight


1. Log into your AWS Account and search for **Quick Sight** in the list of Services

1. You will be asked to **sign up** before you will be able to use it  
![\[Page with a button to sign up for Amazon Quick Sight\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/qs.png)

1. After pressing the **Sign up** button you will be presented with 2 options; please ensure you select the **Enterprise Edition** during this step

1. Select **continue** and you will need to fill in a series of options in order to finish creating your account.
   + Ensure you select the most appropriate region based on where the S3 bucket containing your report files is located.  
![\[Quick Sight configuration page with the Amazon S3 checkbox highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/co_qs_v2.png)
   + Enable the Amazon S3 option and select the bucket where your data collected via the Data Collection Lab is located  
![\[Quick Sight Amazon S3 bucket selection dialog\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/co_qs_v3.png)

1. Click **Finish** & wait for the congratulations screen to display

1. Click **Go to Amazon Quick Sight**   
![\[Amazon Quick Sight finished configuration page with button to go to Quick Sight\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/co_qs_v4.png)

1. Check you have **Amazon Quick Sight Enterprise Edition**   
![\[Quick Sight page with callouts to select Manage Quick Sight from the menu to confirm the Quick Sight edition\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/co_qs_v5.png)

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need to install Foundational Dashboards CFN with version v4.0.0 or above as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Health-Events-Dashboard&param_DashboardId=health-events-dashboard&param_RequiresDataCollection=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Health-Events-Dashboard&param_DashboardId=health-events-dashboard&param_RequiresDataCollection=yes) 

1. You can change the **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE_IN_PROGRESS** status. Once complete, it will show **CREATE_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open up a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id health-events-dashboard
   ```

   Please follow the instructions from the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Please note that dashboards are not updated automatically when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id health-events-dashboard
```

## Authors

+ Eric Christensen, Senior Technical Account Manager
+ Iakov Gan, Ex-Amazonian

## Contributors

+ Yuriy Prykhodko, Principal Technical Account Manager

## Feedback & Support


Follow [Feedback & Support](feedback-support.md) guide

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Support Cases Radar Dashboard
Support Cases Radar Dashboard

## Introduction


AWS Support Cases Radar Dashboard provides a centralized platform to consolidate, monitor, and analyze [AWS Support Cases](https://docs.aws.amazon.com/awssupport/latest/user/getting-started.html) across all linked accounts and multiple AWS organizations. With a unified view of all support cases, this dashboard empowers cloud governance teams to enhance operational efficiency and maximize the value delivered by AWS Support.

## Demo Dashboard


Get more familiar with the dashboard using the live, interactive demo by following this [link](https://cid.workshops.aws.dev/demo?dashboard=support-cases-radar).

![\[image of a support cases radar dashboard in Quick Sight\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/support_cases_radar_dashboard.png)


## Architecture Overview


![\[Data Collection Overview\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/support_cases_radar_arch.png)


The Data Collection Stack collects case information on a daily basis; only cases with changes are collected. An AWS Step Functions state machine saves case information in Amazon S3 and sends an event with a case reference to the default EventBridge event bus. The Quick Sight dashboard is refreshed every night to provide case summaries and insights.

## Prerequisites


1. Make sure all concerned accounts have a **Business**, **On-Ramp** or **Enterprise** [Support Plan](https://console.aws.amazon.com/support/plans/home).

1. Deploy or update [Data Collection Lab](data-collection.md) and make sure the Support Cases Data Collection module is enabled.

## Deployment


**Example**  
 **Prerequisite:** To install this dashboard using CloudFormation, you need to install Foundational Dashboards CFN with version v4.0.0 or above as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Support-Cases-Radar-Dashboard&param_DashboardId=support-cases-radar&param_RequiresDataCollection=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Support-Cases-Radar-Dashboard&param_DashboardId=support-cases-radar&param_RequiresDataCollection=yes) 

1. You can change the **Stack name** if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE_IN_PROGRESS**. Once complete, it will show **CREATE_COMPLETE**.

1. You can check the stack output for dashboard URLs.

    **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open up a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id support-cases-radar
   ```

   Please follow the instructions from the deployment wizard. More information about command-line options is in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or `cid-cmd --help`.

## Optional Plugins


Support Cases Radar has optional plugins that can be deployed to enable additional capabilities such as generative AI case summarization.

 [Optional Plugins](optional-plugin.md) 

## Update


Please note that dashboards are not updated automatically when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id support-cases-radar
```

## Authors

+ Raffy Armistead, Senior Technical Account Manager
+ Samuel Chniber, Senior Solution Architect
+ Iakov Gan, Ex-Amazonian
+ Yuriy Prykhodko, Principal Technical Account Manager

## Feedback & Support


Follow [Feedback & Support](feedback-support.md) guide

**Note**  
These dashboards and their content (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Optional Plugins
Optional Plugins

## Steps

+  [Summarization Plugin](summarization-plugin.md) 

# Summarization Plugin
Summarization Plugin

**Note**  
The AWS Support Cases Summarization powered by Amazon Bedrock may not capture all nuances of the original conversation with the AWS Support. It should be verified against the full transcript which remains the single source of truth for accuracy and completeness. See [AWS Responsible AI Policy](https://aws.amazon.com/ai/responsible-ai/policy/).

**Note**  
The AWS Support Cases Dashboard is not real-time and may have a 24-48 hour delay. For ongoing cases, please check the AWS Support Center in the respective account.

## Authors

+ Samuel Chniber, Senior Solution Architect
+ Iakov Gan, Ex-Amazonian
+ Yuriy Prykhodko, Principal Technical Account Manager

## Feedback & Support


Follow [Feedback & Support](feedback-support.md) guide

## Introduction


AWS Support Cases Summarization is a Plugin to the AWS Support Cases Radar Dashboard that leverages the power of Generative AI (GenAI) through the use of Amazon Bedrock.

This plugin summarizes the problem statement of [AWS support cases](https://docs.aws.amazon.com/awssupport/latest/user/getting-started.html) and the communications with AWS Support engineers, and recaps any actions to be carried out either by AWS or by the customer for resolution.

## Architecture Overview


![\[Data Collection Overview\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/support_cases_summarization_arch.png)


The Summarization stack deploys a rule on the default EventBridge event bus to capture events sent by the Data Collection stack. The EventBridge rule forwards the message to an Amazon SQS queue. The SQS queue triggers an AWS Lambda function that calls the Amazon Bedrock API for summarization and enriches the collected support case data by writing the summaries back to Amazon S3.
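As a sketch of this flow, a Lambda handler consuming the SQS messages might look like the following. The event shape (`detail` / `case_ref`) is an assumption for illustration, not the stack's actual message format, and the Bedrock invocation and S3 write are omitted.

```python
import json

def extract_case_refs(sqs_event):
    """Pull case references out of the SQS records that the EventBridge
    rule forwarded; the 'detail'/'case_ref' shape is an assumption."""
    refs = []
    for record in sqs_event.get("Records", []):
        detail = json.loads(record["body"]).get("detail", {})
        if "case_ref" in detail:
            refs.append(detail["case_ref"])
    return refs

def handler(event, context):
    # For each referenced case, an Amazon Bedrock InvokeModel call would
    # produce the summary, which is then written back to Amazon S3
    # (both omitted in this sketch).
    return extract_case_refs(event)
```

Because SQS sits between EventBridge and the Lambda function, failed summarizations can be retried without re-running the daily collection.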

## Deployment steps


### Step 1 of 4: Deploy the AWS Support Cases Radar Dashboard


This plugin has a dependency on the successful deployment of the AWS Support Cases Radar Dashboard.

 [Read More](support-cases-radar.md) 

### Step 2 of 4: Enable Foundation Model on Amazon Bedrock


To get AWS Support Cases Summarized you need to [add access to Amazon Bedrock foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html).

#### Instructions in the AWS Console


![\[Enable Foundation Model on Amazon Bedrock\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/support_cases_summarization_model_access.gif)


### Step 3 of 4: (Optional) Deploy an Amazon Bedrock Guardrail in the Data Collection Account in the Inference Region


 [Amazon Bedrock Guardrails](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html) is a crucial security feature for generative AI applications that helps implement safeguards based on specific use cases and responsible AI policies. It provides an additional layer of protection on top of the native safeguards offered by foundation models (FMs).

We provide an example Amazon Bedrock Guardrails stack, but if your company is already using Guardrails you can skip this step and continue to the installation of the Plugin stack (Step 4).

 [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/plugins/support-case-summarization/guardrail/guardrail.yaml&stackName=CidSupportCaseBedrockGuardrailStack](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards-us-east-1.s3.amazonaws.com/cfn/plugins/support-case-summarization/guardrail/guardrail.yaml&stackName=CidSupportCaseBedrockGuardrailStack) 

#### Plugin stack parameters


This plugin comes with the following reasonable defaults that can be overridden through the parameters exposed by the CloudFormation template:


| Parameter | Description | Default | 
| --- | --- | --- | 
|  BlockedInputMessage  |  Message to return when the Amazon Bedrock Guardrail blocks a prompt.  |  {"executive_summary":"Amazon Bedrock Guardrails has blocked the AWS Support Case Summarization.","proposed_solutions":"","actions":"","references":[],"tam_involved":"","feedback":""}  | 
|  BlockedOutputMessage  |  Message to return when the Amazon Bedrock Guardrail blocks a model response.  |  ''  | 
|  IncludeSexualContentFilter  |  Whether to include the Sexual Content Filter in the Guardrail.  |  'yes'  | 
|  SexualContentFilterInputStrength  |  The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.  |  'HIGH'  | 
|  SexualContentFilterOutputStrength  |  The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.  |  'HIGH'  | 
|  IncludeViolentContentFilter  |  Whether to include the Violent Content Filter in the Guardrail.  |  'yes'  | 
|  ViolentContentFilterInputStrength  |  The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.  |  'HIGH'  | 
|  ViolentContentFilterOutputStrength  |  The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.  |  'HIGH'  | 
|  IncludeHateContentFilter  |  Whether to include the Hate Content Filter in the Guardrail.  |  'yes'  | 
|  HateContentFilterInputStrength  |  The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.  |  'HIGH'  | 
|  HateContentFilterOutputStrength  |  The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.  |  'HIGH'  | 
|  IncludeInsultsContentFilter  |  Whether to include the Insults Content Filter in the Guardrail.  |  'yes'  | 
|  InsultsContentFilterInputStrength  |  The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.  |  'HIGH'  | 
|  InsultsContentFilterOutputStrength  |  The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.  |  'HIGH'  | 
|  IncludeMisconductContentFilter  |  Whether to include the Misconduct Content Filter in the Guardrail.  |  'yes'  | 
|  MisconductContentFilterInputStrength  |  The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.  |  'HIGH'  | 
|  MisconductContentFilterOutputStrength  |  The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.  |  'HIGH'  | 
|  IncludePromptAttackContentFilter  |  Whether to include the Prompt Attack Content Filter in the Guardrail.  |  'yes'  | 
|  PromptAttackContentFilterInputStrength  |  The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.  |  'HIGH'  | 

### Step 4 of 4: Deploy the AWS Support Case Summarization Stack in the Data Collection Account


In this step we will deploy the Summarization Plugin stack via CloudFormation.

 [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/case-summarization/case-summarization.yaml&stackName=CidSupportCaseSummarizationStack](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/case-summarization/case-summarization.yaml&stackName=CidSupportCaseSummarizationStack) 

#### Plugin stack parameters


This plugin comes with the following reasonable defaults that can be overridden through the parameters exposed by the CloudFormation template:


| Parameter | Description | Default | 
| --- | --- | --- | 
|  BedrockRegion  |  The AWS Region from which the summarization is performed  |  us-east-1  | 
|  Instructions  |  Additional instructions passed to the Large Language Model to customize the summarization  |  ''  | 
|  Provider  |  Large Language Model provider used for summarization  |  Anthropic  | 
|  FoundationModel  |  Foundation Model to be used for summarization  |  Claude 3.5 Sonnet  | 
|  InferenceType  |  Summarization inference type  |  "ON_DEMAND"  | 
|  Temperature  |  Summarization temperature  |  0  | 
|  MaxTokens  |  Summarization maximum tokens  |  8096  | 
|  MaxRetries  |  Summarization maximum retries  |  30  | 
|  Timeout  |  Summarization timeout in seconds  |  60  | 
|  BatchSize  |  Summarization batch size for parallel processing  |  1  | 
|  GuardrailId  |  Amazon Bedrock Guardrail ID to be used (use this parameter if you are using an externally managed Guardrail configuration; leave empty if not planning to use Amazon Bedrock Guardrails)  |  ''  | 
|  GuardrailVersion  |  Amazon Bedrock Guardrail version to be used (use this parameter if you are using an externally managed Guardrail configuration; leave empty if not planning to use Amazon Bedrock Guardrails)  |  ''  | 
|  GuardrailTrace  |  The trace behavior for the Guardrail  |  "ENABLED"  | 

## Post Deployment


### Where can I see summarizations?


You will be able to see summarizations in the dashboard. Short executive summaries appear in the cases table, and more detailed information, including the summary of the solution and current actions, appears in the table below once you select a case. The summaries only appear after 24-48 hours and only for updated cases.

### Can I force summarization for all cases?


Yes, you can trigger the refresh of all Support Cases and it will generate summaries as well.

1. Go to the Amazon S3 bucket of the data collection and delete the folder `support-cases/` to trigger collection for the last 12 months. (Or you can modify the `last_read` field in the JSON file `s3://cid-data-XXX/support-cases/support-cases-status/payer_id=XXX/XXX.json` to trigger a new collection and summary generation for a period shorter than 12 months.)

1. Go to AWS Step Functions and execute `CID-DC-support-cases-StateMachine`. This will collect all Support Cases.

1. You can check the activity of the Lambda function `CID-DC-support-case-summarization-Lambda`. Make sure this Lambda function is not triggering errors (a typical issue is that model access has not been enabled; see above).

1. Go to Quick Sight and refresh the dataset `support_cases_communications_view`. Once the refresh finishes, you can check the dashboard. The tables with cases and case details should contain executive and detailed summaries.
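If you only want to re-collect a window shorter than 12 months, the `last_read` adjustment in step 1 can be scripted. A minimal sketch follows; the exact schema and timestamp format of the status file are assumptions, so verify against your actual file before overwriting it.

```python
import json
from datetime import datetime, timedelta

def rewind_last_read(status_json, days):
    """Move the `last_read` marker back by `days`, so the next state
    machine run re-collects (and re-summarizes) that window of cases.
    Assumes an ISO-8601 timestamp, which may differ from the real file."""
    status = json.loads(status_json)
    last = datetime.fromisoformat(status["last_read"])
    status["last_read"] = (last - timedelta(days=days)).isoformat()
    return json.dumps(status)
```

Download the status file with `aws s3 cp`, rewind it locally, and upload it back before executing the state machine.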

### How often are summarizations updated?


The summarization plugin automatically generates summaries of support case histories based on past communications. Data collection operates on a daily basis by default. Once the plugin is deployed, the summarization stack enriches cases with summaries right after each daily collection of new data. Each day, data collection gathers all communications that have occurred since the previous collection. The Quick Sight dashboard also updates its data on a daily schedule. As a result, there may be up to a 24-hour delay between when a communication occurs and when it appears in the dashboard. We recommend using the AWS Support Center to get the latest information on the status of current cases.

# AWS End User Computing (EUC) Dashboard
AWS End User Computing (EUC) Dashboard

## Introduction


The End User Computing (EUC) Dashboard provides a unified view of your AWS EUC environment through an intuitive Quick Sight interface. Key capabilities include:
+ Operational visibility into Amazon WorkSpaces and Amazon AppStream 2.0 usage patterns
+ Cost optimization insights and spending analytics
+ Performance monitoring with CloudWatch metrics integration
+ WorkSpaces Logon statistics
+ Resource utilization tracking and trending
+ Recommendations for environment optimization

This solution helps teams make data-driven decisions to optimize costs, improve operational efficiency, and enhance the end-user experience across their EUC estate.

![\[EUC Dashboard Screenshot\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/euc/executive_summary.png)


The dashboard has six tabs:
+  **Summary**:
  + Breakdown of EUC service costs for the last 3 months.
  + Top spending accounts for each service.
  + High-level summary of your EUC estate.

![\[Insights\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/euc/workspace_insights.png)

+  **Amazon WorkSpaces Insights**:
  + In-depth breakdown of WorkSpaces costs for the entire environment, plus additional insights not available in the Cost and Usage Report, including:
    + Protocol
    + Operating Systems
  + Daily Cost breakdown.
  + WorkSpaces Monthly usage.
  + WorkSpaces Cost Breakdown.
  + Workspaces Software bundle information.

![\[Insights\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/euc/workspace_usage.png)


![\[Insights\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/euc/workspace_logon_details.png)

+  **Amazon WorkSpaces Usage**:
  + WorkSpaces User connections.
  + Top 10 daily usage.
  + Directory cost breakdown.
  + WorkSpaces daily usage and Hours used.
  + WorkSpaces Logon information
    + Last Logon
    + Low Usage
    + AlwaysOn WorkSpaces Logon information
    + Never Logged on
+  **Amazon WorkSpaces Metrics**:
  + This additional tab breaks down CloudWatch CPU and memory utilization of WorkSpaces.

![\[AppStream 2.0 Highlights\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/euc/as2_details.png)

+  **Amazon AppStream 2.0** 
  + Detailed overview of the AppStream 2.0 environment.
+  **EUC Cost Optimization** 
  + Cost saving opportunities in your EUC environment.

## Architecture


![\[Image of Amazon EUC Dashboard architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/euc/euc_dashboard_cid.png)


1. The EUC Dashboard depends on the AWS Data Exports service, which delivers the Cost and Usage Report (CUR2) daily to an Amazon S3 bucket in the Management Account.

1. The EUC Dashboard also requires the Data Collection Lab, whose AWS Lambda functions capture WorkSpaces data and CloudWatch metrics and automatically copy export data to a dedicated Data Collection Account. The EUC Dashboard can be configured during setup to use AWS Organizations (all linked accounts) or specific linked accounts to capture this data.

## Prerequisites


1. Deploy one or more of the foundational dashboards: [CUDOS, Cost Intelligence, or KPI Dashboard](cudos-cid-kpi.md). This will enable CUR and will enable required Quick Sight and Athena resources needed for this dashboard.

1.  [Deploy](data-collection-deployment.md) or [Update](data-collection-update.md) the Data Collection Lab and make sure the following modules are enabled. Version 3.2.0 or higher required.
   +  **Include Inventory Collector Module** (Mandatory) - This enables the collection of WorkSpaces environmental information using the WorkSpaces API.
   +  **Include WorkSpaces Utilization Data Collection Module** (Optional) - This enables the collection of CloudWatch metrics for WorkSpaces. Please see the **Visualizing WorkSpaces CloudWatch Metrics** section below to configure this.
   +  **EUC Module Settings** (Optional) - You can choose to scan all linked accounts in an organization, or provide a comma-separated list of account IDs to only scan specific accounts that have WorkSpaces deployed. Leaving the field blank will scan all accounts.

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need to install the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in the CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=EUC-Dashboard&param_DashboardId=euc-dashboard&param_RequiresDataCollection=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=EUC-Dashboard&param_DashboardId=euc-dashboard&param_RequiresDataCollection=yes) 

1. You can change the **Stack name** if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE_IN_PROGRESS**. Once complete, it will show **CREATE_COMPLETE**.

1. You can check the stack output for dashboard URLs.

    **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/?tab=readme-ov-file#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open up a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id euc-dashboard
   ```

   Please follow the instructions from the deployment wizard. More information about command-line options is in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or `cid-cmd --help`.

1. Select the EUC Dashboard and press Enter.

1. Follow any instructions in the command-line tool.

1. The EUC Dashboard will deploy and output a link.

## **Visualizing WorkSpaces CloudWatch Metrics** (Optional)


In the EUC Dashboard, to view the WorkSpaces CloudWatch metrics in the **WorkSpaces Metrics** tab, follow these steps:
+ During deployment, make sure you selected **yes** for the **Include WorkSpaces Utilization Data Collection Module** parameter.
+ Go to the [Amazon Athena](https://console.aws.amazon.com/athena/) Query Editor.
+ Select the database that has the views for CID. By default this is the CUR 1 database **cid_cur** or the CUR 2 database **cid_data_export**.
+ Run the following query to update the **euc_metrics_view** view in Amazon Athena, replacing the `<CUR2TABLE>` placeholder with the CUR table reference for the CUR version you are running, e.g. `"cid_data_export"."cur2"`.

### SQL Query


```
CREATE OR REPLACE VIEW "euc_metrics_view" AS
WITH
  workspace_metrics AS (
   SELECT
     m."WorkspaceId"
   , m."UserName"
   , CAST(parse_datetime(m.timestamp, 'yyyy-MM-dd HH:mm:ss') AS timestamp) cw_timestamp
   , m."State"
   , m."BundleId"
   , m."DirectoryId"
   , m."ComputerName"
   , m."RunningMode"
   , m."RootVolumeSizeGib"
   , m."UserVolumeSizeGib"
   , m."accountid"
   , m."region"
   , m."CPUUsage"
   , m."MemoryUsage"
   , m."InSessionLatency"
   , m."UserVolumeDiskUsage"
   , m."RootVolumeDiskUsage"
   , m."UpTime"
   , CAST(parse_datetime(w.lastconnected, 'MM/dd/yy HH:mm:ss') AS timestamp) lastconnected
   FROM
     ("optimization_data"."workspaces_metrics_data" m
   LEFT JOIN "optimization_data"."inventory_workspaces_data" w ON (m."WorkspaceId" = w."WorkspaceId"))
)
SELECT
  wi."WorkspaceId"
, wi."UserName"
, wi.cw_timestamp
, wi."State"
, wi."BundleId"
, wi."DirectoryId"
, wi."ComputerName"
, wi."RunningMode"
, wi."RootVolumeSizeGib"
, wi."UserVolumeSizeGib"
, wi."accountid"
, wi."region"
, wi."CPUUsage"
, wi."MemoryUsage"
, wi."InSessionLatency"
, wi."UserVolumeDiskUsage"
, wi."RootVolumeDiskUsage"
, wi."UpTime"
, wi.lastconnected
, split_part(billing_period, '-', 1) year
, split_part(billing_period, '-', 2) month
, bill_billing_period_start_date billing_period
, date_trunc('day', CAST(line_item_usage_start_date AS timestamp)) usage_date
, bill_payer_account_id payer_account_id
, line_item_usage_account_id linked_account_id
, line_item_line_item_type charge_type
, line_item_product_code
, line_item_usage_type
, line_item_operation
, line_item_line_item_description
, line_item_resource_id
, product['product_family'] product_product_family
, product_instance_type
, product_instance_family
, product['product_name'] product_product_name
, product['operating_system'] product_operating_system
, product['group'] product_group
, product['bundle_description'] product_bundle_description
, product['bundle_group'] product_bundle_group
, product['resource_type'] product_resource_type
, product['storage'] product_storage
, product['running_mode'] product_running_mode
, product['group_description'] product_group_description
, product['software_included'] product_software_included
, pricing_unit
, pricing_term
, split_part(line_item_resource_id, '/', 2) resource_id
, split_part(line_item_resource_id, ':', 6) resource_type
, split_part(line_item_resource_id, 'directory/', 2) resource_directory_id
, CAST(line_item_unblended_cost AS DECIMAL(18, 6)) line_item_unblended_cost
, CAST(line_item_usage_amount AS DECIMAL(18, 6)) line_item_usage_amount
, CAST(pricing_public_on_demand_cost AS DECIMAL(18, 6)) pricing_public_on_demand_cost
, sum((CASE WHEN ("line_item_line_item_type" = 'Usage') THEN "line_item_usage_amount" ELSE 0 END)) "usage_quantity"
, sum("line_item_unblended_cost") "unblended_cost"
FROM
  (<CUR2TABLE> cur2
LEFT JOIN workspace_metrics wi ON ((split_part(cur2.line_item_resource_id, '/', 2) = wi.workspaceid) AND (cur2.line_item_usage_account_id = wi.accountid)))
WHERE ((("bill_billing_period_start_date" >= ("date_trunc"('month', current_timestamp) - INTERVAL  '7' MONTH)) AND (CAST("concat"("billing_period", '-01') AS date) >= ("date_trunc"('month', current_date) - INTERVAL  '7' MONTH)) AND (line_item_product_code = 'AmazonWorkSpaces')) OR (line_item_product_code = 'AmazonAppStream') OR (line_item_product_code = 'AWSDirectoryService'))
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52
```
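If you script this view update rather than pasting it into the query editor, the `<CUR2TABLE>` placeholder can be substituted first. A minimal sketch (submitting the rendered SQL to Athena, e.g. via `aws athena start-query-execution`, is left out):

```python
def render_view_sql(template, cur_table):
    """Substitute the CUR table reference for the <CUR2TABLE> placeholder
    before running the view DDL in Athena."""
    return template.replace("<CUR2TABLE>", cur_table)

# e.g. render_view_sql(view_sql, '"cid_data_export"."cur2"')
```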

## Update


Please note that dashboards are not updated automatically when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id euc-dashboard
```

## Authors

+ Christian O’Donoghue, Senior Technical Account Manager

## Contributors

+ Daniel Matlock, Technical Account Manager
+ James Gaskell, Ex-Amazonian
+ Yuriy Prykhodko, AWS Principal Technical Account Manager
+ Iakov Gan, Ex-Amazonian
+ Brian Sheppard, AWS Principal Technical Account Manager
+ Natassa Eleftheriou, Senior Technical Account Manager

## Feedback & Support


Have a success story to share with the Team, suggest an improvement or report an error?
+ Please email: [euc-dashboard@amazon.com](mailto:euc-dashboard@amazon.com) 
+ Follow [Feedback & Support](feedback-support.md) guide

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# ResilienceVue Dashboard
ResilienceVue Dashboard

## Introduction


The ResilienceVue Dashboard provides you with valuable insights into your AWS workloads assessed by AWS Resilience Hub by consolidating assessments across the accounts and workloads in your AWS Organization, across all AWS Regions and payer accounts. Out-of-the-box benefits of ResilienceVue include (but are not limited to):
+ Monitor Application Resilience across Organization
+ Assess Application RTO/RPO targets
+ Quick visibility into Policy breaches across all accounts and regions
+ Analyze Infrastructure recommendation trends
+ Track Unimplemented operational recommendations

To learn more about AWS Resilience Hub, see:
+  [How to Manage Application Resilience using AWS Resilience hub](https://catalog.workshops.aws/aws-resilience-hub-lab/en-US) 
+  [AWS Resilience Hub - Getting Started](https://docs.aws.amazon.com/resilience-hub/latest/userguide/getting-started.html) 

## Architecture


![\[Architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/architecture/rh_architecture.png)


1. The "[Data Collection Lab](data-collection.md)" provides an AWS Lambda function that assumes a role in one or multiple linked accounts to retrieve the AWS Resilience Hub assessment data daily and store it in Amazon S3. The Lambda function only pulls data that has been updated since the last retrieval. The stack also provides AWS Glue tables to query the collected data.

1. Cloud Intelligence Dashboards provide Amazon Athena views for querying data directly from the S3 bucket using AWS Glue tables, plus Amazon Quick Sight datasets and dashboards, allowing enterprise teams access to AWS Resilience Hub data. Access can be secured through AWS IAM, IAM Identity Center (SSO), and optional row-level security.

## Demo Dashboard


Get familiar with the dashboard using the live, interactive demo dashboard by following this [link](https://cid.workshops.aws.dev/demo?dashboard=resiliencevue) 

![\[Image of ResilienceVue dashboard in Quick Sight\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/rv_demo.png)


## Prerequisites


1. To assess resilience for an application, you need to [Add an application to AWS ResilienceHub](https://docs.aws.amazon.com/resilience-hub/latest/userguide/describe-applicationlication.html). You can try AWS Resilience Hub free for 6 months, for your first 3 applications. For more information on pricing, follow [AWS Resilience Hub pricing](https://aws.amazon.com/resilience-hub/pricing/) 

1. Deploy or update [Data Collection Lab](data-collection.md) and make sure ResilienceVue Data Collection Module is enabled. Version 3.12.0 or higher required.

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need to install the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Resiliencevue-Dashboard&param_DashboardId=resiliencevue&param_RequiresDataCollection=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Resiliencevue-Dashboard&param_DashboardId=resiliencevue&param_RequiresDataCollection=yes) 

1. You can change the **Stack name** for your template if you wish.

1. Leave **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. You will see the stack start in **CREATE\_IN\_PROGRESS**. Once complete, the stack will show **CREATE\_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id resiliencevue
   ```

   Please follow the instructions from the deployment wizard. More info about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Please note that dashboards are not updated when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id resiliencevue
```

## Authors

+ Subha Kalia, Senior Technical Account Manager
+ Ravindra Kori, Senior Solutions Architect
+ Praney Mahajan, Senior Technical Account Manager
+ Beau Henry, Technical Account Manager

## Contributors

+ Iakov Gan, Ex-Amazonian
+ Yuriy Prykhodko, Principal Technical Account Manager

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Data Collection Monitor
Data Collection Monitor

## Introduction


The [Data Collection Lab](data-collection.md) tooling provides instrumentation log data to help you monitor the executions of the various modules of the Data Collection Framework. Starting with version 3.11, most Data Collection modules emit basic log data to track module execution and potential errors encountered. This dashboard reads that instrumentation data to present multiple views to track historical executions as well as troubleshoot any issues.

As of this version, monitoring is based on Step Functions execution. Failures in individual Lambda functions are caught by the module’s Step Function and, in some circumstances, the Lambda error is available through the dashboard. Errors for any given execution are listed on the dashboard with a link to the corresponding Step Function execution instance, and you can investigate the error by clicking that link. Errors are also reported to CloudWatch metrics under the namespace "CID-DataCollection", on which you can optionally set up an alarm for more active monitoring.

Future releases will convey more granular Lambda error details and links to those execution logs.
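The "CID-DataCollection" namespace mentioned above can feed an alarm for active monitoring. A minimal sketch of the alarm parameters is below; the namespace comes from this document, but the metric name `Errors` and the `Module` dimension are illustrative assumptions — check the actual metrics emitted in your account before creating an alarm.

```python
def build_error_alarm(module_name, threshold=1):
    """Build a parameter dict for CloudWatch put_metric_alarm that fires
    when a Data Collection module reports errors. The namespace is from
    the docs; metric name and dimension are hypothetical placeholders."""
    return {
        "AlarmName": f"cid-data-collection-{module_name}-errors",
        "Namespace": "CID-DataCollection",
        "MetricName": "Errors",                     # assumption: verify in CloudWatch
        "Dimensions": [{"Name": "Module", "Value": module_name}],
        "Statistic": "Sum",
        "Period": 3600,                             # evaluate hourly error counts
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    }

alarm = build_error_alarm("resiliencevue")
# To create the alarm: boto3.client("cloudwatch").put_metric_alarm(**alarm)
```

Keeping the alarm definition as plain data makes it easy to review and to create one alarm per module you have enabled.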

## Preview


![\[Data Collection Monitor Screenshot\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data_collection_monitor_01.png)


## Prerequisites


Deploy or update the [Data Collection Lab](data-collection.md) to version 3.11 or higher and include whatever modules you intend to use. Once deployed, any modules installed or updated are automatically enabled for logging and picked up by the dashboard, with the exception of the Feeds and Pricing modules, which will be enabled for instrumentation at a later time.

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need to install the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Data-Collection-Monitor&param_DashboardId=dc-monitor&param_RequiresDataCollection=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Data-Collection-Monitor&param_DashboardId=dc-monitor&param_RequiresDataCollection=yes) 

1. You can change the **Stack name** for your template if you wish.

1. Leave **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. You will see the stack start in **CREATE\_IN\_PROGRESS**. Once complete, the stack will show **CREATE\_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id dc-monitor
   ```

   Please follow the instructions from the deployment wizard. More info about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Please note that dashboards are not updated when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id dc-monitor
```

## Authors

+ Eric Christensen, Senior Technical Account Manager

## Contributors

+ Iakov Gan, Ex-Amazonian
+ Yuriy Prykhodko, Principal Technical Account Manager

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Media Services Insights Hub
Media Services Insights Hub

## Introduction


The Media Services Insights Hub (MSIH) dashboard provides comprehensive visibility into AWS Elemental Media Services usage, costs, and performance metrics. This dashboard leverages AWS Cost and Usage Report (CUR) data to deliver actionable insights for optimizing media workflows and managing costs across your media infrastructure.

The dashboard covers key AWS Elemental Media Services including:
+  [AWS Elemental MediaConnect](https://aws.amazon.com/mediaconnect/) - Secure, reliable live video transport
+  [AWS Elemental MediaConvert](https://aws.amazon.com/mediaconvert/) - File-based video transcoding
+  [AWS Elemental MediaLive](https://aws.amazon.com/medialive/) - Live video processing
+  [AWS Elemental MediaPackage](https://aws.amazon.com/mediapackage/) - Video origination and packaging
+  [AWS Elemental MediaTailor](https://aws.amazon.com/mediatailor/) - Video personalization and monetization

![\[Image of Media Services Insights Hub architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/media_services_insights_02.png)


The MSIH dashboard is organized into intuitive tabs:

1.  **Executive Summary** High-level overview of media services costs, usage trends, and key performance indicators across all services.

1.  **MediaLive Reservations & Savings** Deep-dive into current and potential savings achieved through AWS Elemental MediaLive reservations.

1.  **MediaConnect** Detailed analysis of live video transport costs, connection usage, and data transfer metrics.

1.  **MediaConvert** Comprehensive view of transcoding job costs, queue utilization, and processing time analysis.

1.  **MediaLive** In-depth monitoring of live streaming costs, channel utilization, and reservation optimization opportunities.

1.  **MediaTailor** Insights into ad insertion costs, session metrics, and revenue optimization opportunities.

1.  **MediaPackage** Analysis of video packaging and origination costs, endpoint usage, and content delivery metrics.

Each tab provides progressively detailed insights to help you optimize your media workflows and control costs effectively.

## Demo Dashboard


Get more familiar with the dashboard using the live, interactive demo by following this [link](https://cid.workshops.aws.dev/demo?dashboard=media-services-insights) 

![\[Image of Media Services Insights Hub in Quick Sight\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/media_services_insights_01.png)


## Prerequisites


Deploy the [CID Foundational Dashboards](dashboard-foundational.md) stack. This enables the CUR, Amazon Athena, and Amazon QuickSight resources required for this and other dashboards.

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need to install the [Data Exports Lab](data-exports.md) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Media-Services-Insights-Hub&param_DashboardId=media-services-insights&param_RequiresDataExports=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Media-Services-Insights-Hub&param_DashboardId=media-services-insights&param_RequiresDataExports=yes) 

1. You can change the **Stack name** for your template if you wish.

1. Leave **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. You will see the stack start in **CREATE\_IN\_PROGRESS**. Once complete, the stack will show **CREATE\_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id media-services-insights
   ```

   Please follow the instructions from the deployment wizard. More info about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Please note that dashboards can currently be deployed via CloudFormation but cannot be updated through CloudFormation stack updates. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id media-services-insights
```

## Dashboard Customization


1. Unleash your data creativity! Dive into custom analysis by creating your own visuals from this dashboard. Follow our quick [guide](create-analysis.md) to get started.

1. To integrate CID with AWS Organizations for enhanced cost visibility across multiple accounts and organizational units follow the [documentation to add taxonomy details](add-org-taxonomy.md) 

1. To set up cost allocation tags for better resource tracking and cost attribution across your media services, follow the [Cost Allocation Tags documentation](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html) 

## Usage Guide


There are multiple tabs available in this dashboard.

**Example**  
Start with the **Executive Summary** tab to get a high-level view of your media services spending and usage patterns. This tab provides:  
+ Total media services costs and month-over-month trends
+ Service-wise cost breakdown and utilization metrics
+ Top spending accounts and regions
+ Cost per service comparison and growth trends
+ Key performance indicators and cost optimization opportunities
+ Regional distribution of media services usage
+ Monthly cost forecasting and budget tracking

On the **MediaConnect** tab, monitor live video transport costs and connection performance:  
+ Connection usage patterns and data transfer volumes
+ Cost breakdown by connection type and region
+ Bandwidth utilization and peak usage analysis
+ Source and destination flow analysis
+ Data transfer cost optimization opportunities
+ Connection uptime and reliability metrics
+ Regional cost comparison for optimal placement

On the **MediaConvert** tab, track transcoding job costs and queue performance:  
+ Job processing costs by queue and priority
+ Queue utilization and processing time analysis
+ Input/output format cost comparison
+ Reserved capacity utilization and recommendations
+ Job failure rates and retry costs
+ Processing time trends and optimization opportunities
+ Cost per minute of content processed
+ Peak usage periods and capacity planning

On the **MediaLive** tab, analyze live streaming costs and channel utilization:  
+ Channel costs by type and configuration
+ Input/output bandwidth utilization
+ Reserved instance vs on-demand cost analysis
+ Channel uptime and availability metrics
+ Regional deployment cost comparison
+ Encoding profile cost optimization
+ Redundancy configuration cost impact
+ Peak concurrent channel usage

On the **MediaPackage** tab, review packaging and origination costs:  
+ Endpoint usage and request volume analysis
+ Content delivery cost breakdown
+ Origin request patterns and caching efficiency
+ Packaging format cost comparison
+ Regional endpoint performance and costs
+ Content protection and DRM costs
+ Harvest job costs and optimization
+ CDN integration cost analysis

On the **MediaTailor** tab, examine ad insertion costs and session analytics:  
+ Session volume and ad request patterns
+ Personalization costs and revenue impact
+ Configuration usage and optimization
+ Ad decision server integration costs
+ Content delivery network costs
+ Session duration and engagement metrics
+ Revenue per session analysis
+ Peak traffic handling and scaling costs
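A metric like "cost per minute of content processed" on the MediaConvert tab is, at bottom, a ratio of CUR cost to CUR usage. The sketch below uses hypothetical CUR-like rows and simplified column names (`product_code`, `unblended_cost`, `usage_amount`) for illustration; the dashboard computes these aggregations in Athena over the real CUR schema.

```python
def cost_per_unit(rows, service):
    """Sum unblended cost and usage amount for one service and return
    their ratio (e.g. cost per minute of content processed)."""
    cost = usage = 0.0
    for r in rows:
        if r["product_code"] == service:
            cost += r["unblended_cost"]
            usage += r["usage_amount"]
    return cost / usage if usage else 0.0

# Hypothetical line items: two MediaConvert rows and one MediaLive row.
rows = [
    {"product_code": "AWSElementalMediaConvert", "unblended_cost": 12.0, "usage_amount": 80.0},
    {"product_code": "AWSElementalMediaConvert", "unblended_cost": 6.0, "usage_amount": 40.0},
    {"product_code": "AWSElementalMediaLive", "unblended_cost": 50.0, "usage_amount": 10.0},
]
rate = cost_per_unit(rows, "AWSElementalMediaConvert")  # 18.0 / 120.0 = 0.15
```

The same pattern, grouped by month instead of filtered by service, yields the month-over-month trend visuals.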

## Cost Optimization Recommendations


There are several ways to optimize your AWS Elemental services costs. Below are some of them.

**Example**  
+ Purchase MediaLive reserved instances for predictable live streaming workloads (up to 75% savings)
+ Consider MediaConvert reserved capacity for consistent transcoding volumes
+ Analyze usage patterns to determine optimal reservation terms (1-year vs 3-year)
+ Monitor reservation utilization and adjust capacity as needed
+ Review MediaLive channel configurations and encoding profiles
+ Optimize MediaConvert job settings for cost-effective processing
+ Implement appropriate redundancy levels based on content criticality
+ Use efficient encoding presets to reduce processing time and costs
+ Leverage CloudFront for global content delivery and reduced origin costs
+ Place MediaConnect flows and MediaLive channels in optimal regions
+ Implement efficient caching strategies for MediaPackage endpoints
+ Use VPC endpoints to reduce data transfer costs between services
+ Implement automated scaling for MediaLive channels based on demand
+ Use MediaTailor’s server-side ad insertion to reduce CDN costs
+ Monitor and optimize MediaPackage harvest jobs scheduling
+ Implement proper content lifecycle management to reduce storage costs
+ Set up cost anomaly detection for unusual spending patterns
+ Create budget alerts for each media service
+ Monitor service-specific metrics to identify optimization opportunities
+ Regular review of unused or underutilized resources
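The reservation recommendations above come down to a break-even calculation: a reservation pays off once expected utilization exceeds the ratio of the reserved rate to the on-demand rate. The prices below are placeholders for illustration, not actual MediaLive rates; check current pricing before deciding.

```python
def breakeven_utilization(reserved_hourly, on_demand_hourly):
    """Fraction of hours a resource must run for a reservation to cost
    less than paying on-demand for the same usage."""
    return reserved_hourly / on_demand_hourly

def monthly_savings(reserved_hourly, on_demand_hourly, hours_used, hours_in_month=730):
    """Savings from a reservation vs on-demand at the given usage.
    The reserved rate is paid for every hour, used or not."""
    reserved_cost = reserved_hourly * hours_in_month
    on_demand_cost = on_demand_hourly * hours_used
    return on_demand_cost - reserved_cost

# Placeholder prices: reservation at 40% of the on-demand rate.
be = breakeven_utilization(0.4, 1.0)               # worthwhile above 40% utilization
save = monthly_savings(0.4, 1.0, hours_used=600)   # 600.0 - 292.0 = 308.0
```

A negative result from `monthly_savings` means the channel runs too few hours for the reservation to pay off, which is exactly what the MediaLive Reservations & Savings tab surfaces from your actual usage.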

## Authors & Contributors

+ Krutarth Doshi, Sr. Technical Account Manager
+ Eric Christensen, Sr. Technical Account Manager
+ Ala Muhtaseb, Sr. Solutions Architect
+ Imane Zeroual, Sr. Cloud Operations Architect
+ Guillaume Girault, Technical Account Manager

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide

Have a success story to share with the team, a suggestion for improvement, or an error to report?
+ Please email: [cloud-intelligence-dashboards-media-services@amazon.com](mailto:cloud-intelligence-dashboards-media-services@amazon.com) 

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Additional
Additional

This section covers the following dashboards:

**Topics**
+ [FOCUS Dashboard](focus-dashboard.md)
+ [CORA Dashboard](cora-dashboard.md)
+ [Trends Dashboard](trends-dashboard.md)
+ [DataTransfer Cost Analysis Dashboard](datatransfer-dashboard.md)
+ [AWS Marketplace Single Pane of Glass (SPG) Dashboard](marketplace-dashboard.md)
+ [Kubecost Containers Cost Allocation Dashboard](kubecost-containers-dashboard.md)
+ [SCAD Containers Cost Allocation Dashboard](scad-containers-dashboard.md)
+ [Amazon Connect Cost Insight Dashboard](connect-cost-insight.md)
+ [AWS Config Resource Compliance Dashboard](config-resource-compliance-dashboard.md)
+ [Sustainability Proxy Metrics and Carbon Emissions Dashboard](sustainability-proxy-metrics-dashboard.md)

# FOCUS Dashboard
FOCUS Dashboard

## Introduction


 [The FinOps Cost and Usage Specification](https://focus.finops.org/) (FOCUS) is an open-source specification that defines clear requirements for cloud vendors to produce consistent cost and usage datasets.

Supported by the FinOps Foundation, FOCUS aims to reduce complexity for FinOps Practitioners so they can drive data-driven decision-making and maximize the business value of cloud, while making their skills more transferable across clouds, tools, and organizations.

The CID FOCUS Dashboard is an open-source and customizable dashboard that provides pre-defined visuals to get actionable insights from FOCUS data in Amazon QuickSight. It allows you to quickly get started with using FOCUS in your organization. The FOCUS Dashboard provides the following features:
+  **Consolidated view** of FOCUS data from multiple dimensions across your entire organization
+ Support for consolidation of **multiple FOCUS specification versions** from different cloud providers
+ Month-over-month trends with the ability to drill down from high-level visibility into resource-level details in a few clicks
+ Support of organizational taxonomy from tags
+ Effective discount rate calculations
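The effective discount rate feature compares aggregate effective cost against list cost. `ListCost` and `EffectiveCost` are column names defined by the FOCUS specification; the row shape below is a simplified illustration of the ratio the dashboard computes in Athena.

```python
def effective_discount_rate(rows):
    """Return the overall discount implied by FOCUS EffectiveCost vs
    ListCost: 1 - sum(EffectiveCost) / sum(ListCost)."""
    list_cost = sum(r["ListCost"] for r in rows)
    effective = sum(r["EffectiveCost"] for r in rows)
    if list_cost == 0:
        return 0.0
    return 1.0 - effective / list_cost

# Hypothetical rows: one discounted line item, one at list price.
rows = [
    {"ListCost": 100.0, "EffectiveCost": 70.0},
    {"ListCost": 50.0, "EffectiveCost": 50.0},
]
rate = effective_discount_rate(rows)  # 1 - 120/150 = 0.20, i.e. a 20% blended discount
```

Because both columns are defined identically across providers that follow FOCUS, the same ratio is meaningful over the consolidated multi-cloud view.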

## High Level Architecture


![\[Architecture High Level\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/architecture/focus-high-level.png)


1. AWS Data Exports service provides FOCUS data (currently supporting FOCUS 1.2 for AWS). Use the CID [Data Exports](data-exports.md) stack to activate it in your Management (Payer) Account and automatically configure the replication to a Data Collection Account.

1. Install [CID FOCUS Dashboard](#focus-dashboard-deployment) that leverages FOCUS data and provides a dynamically generated consolidated view in Amazon Athena. This view can be extended once you add FOCUS data from other providers.

1. Install the FOCUS data collection stack(s) that collect data from other providers. Currently we provide integrations to collect FOCUS data from [Microsoft Azure](https://catalog.workshops.aws/cidforazure/en-US/03-setup), [Google Cloud Platform](https://catalog.workshops.aws/cid-gcp-cost-dashboard/en-US/02-solution-design), and [Oracle Cloud Infrastructure](https://github.com/awslabs/cid-oci-cost-dashboard/). [Learn more](#focus-dashboard-add-focus-data-from-other-cloud-providers) about integration of FOCUS data. Typically these stacks leverage scheduled AWS Lambda or AWS Glue Jobs and retrieve data via APIs using credentials stored in AWS Secrets Manager. The data is encrypted with custom KMS keys to protect sensitive billing information from unauthorized access.

1. Update the `focus_consolidation_view` in Athena to include tables with FOCUS data from other cloud providers.

1. You can also export cost data from on-premises datacenters or SaaS providers in the same format and integrate them in a similar way.
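Conceptually, the consolidation step is a union of per-provider FOCUS tables over a shared set of columns. The real consolidation is an Athena SQL view generated by cid-cmd; this Python sketch, with an illustrative three-column subset of the FOCUS schema, only shows the idea of projecting every provider's rows onto the common schema.

```python
# Illustrative subset of FOCUS columns; the real view carries many more.
COMMON_COLUMNS = ["BilledCost", "ProviderName", "ServiceName"]

def consolidate(*provider_tables):
    """Union rows from several provider tables, keeping only the shared
    columns and defaulting any missing column to None."""
    out = []
    for table in provider_tables:
        for row in table:
            out.append({c: row.get(c) for c in COMMON_COLUMNS})
    return out

aws_rows = [{"BilledCost": 10.0, "ProviderName": "AWS", "ServiceName": "AmazonEC2"}]
azure_rows = [{"BilledCost": 5.0, "ProviderName": "Microsoft", "ServiceName": "Virtual Machines"}]
combined = consolidate(aws_rows, azure_rows)
```

Defaulting missing columns to `None` (SQL `NULL`) is what allows tables exported under different FOCUS versions to coexist in one view.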

## Demo Dashboard


Get more familiar with the dashboard using the live, interactive demo dashboard following this [link](https://cid.workshops.aws.dev/demo?dashboard=focus-dashboard&sheet=default).

Explore key FOCUS Dashboard capabilities in the [interactive presentation](https://app.storylane.io/share/zutqlizt5v45).

![\[FOCUS Dashboard Screenshot\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/focus_dashboard.png)


## Prerequisites


Before installing the FOCUS Dashboard, you need to enable FOCUS Data Export and consolidate it from your Management (Payer) Accounts in the Data Collection Account.

![\[High Level Focus Export From AWS\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/architecture/focus-aws.png)


1. Create a FOCUS Data Export following the steps in the [Data Export](data-exports.md) page and return to this page once completed.

The Data Export prerequisites support FOCUS 1.2 for AWS. If you are migrating from FOCUS 1.0, see [Migration to AWS FOCUS 1.2](migration-to-focus-1-2.md).

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need to install the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard).

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=FOCUS-Dashboard&param_DashboardId=focus-dashboard&param_RequiresDataExports=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=FOCUS-Dashboard&param_DashboardId=focus-dashboard&param_RequiresDataExports=yes) 

1. You can change the **Stack name** if you wish.

1. Leave **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE\_IN\_PROGRESS**. Once complete, the stack will show **CREATE\_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface, run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), install Python 3.11 first:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. Run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id focus-dashboard
   ```

   Follow the instructions from the deployment wizard. More info about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

Data Exports can take 24-48 hours to deliver the first reports. If you just installed Data Exports, the dashboard will most likely be empty. Please allow up to 48 hours for data to arrive.

## Add FOCUS Data from Other Cloud Providers


The FOCUS Dashboard supports consolidation of FOCUS data from multiple FOCUS specification versions, allowing you to combine data from different cloud providers regardless of the FOCUS version they export.

After deploying each cloud provider integration, run the `cid-cmd update` command as described in [common update steps](#focus-dashboard-update-consolidation-view) to detect available FOCUS tables and select which ones to include in the consolidated view presented on the FOCUS Dashboard.

### Microsoft Azure


![\[High Level Focus Export From Microsoft Azure\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/architecture/focus-azure.png)


1. Deploy the [FOCUS Dashboard](#focus-dashboard-deployment).

1. Deploy the [Cloud Intelligence Dashboard for Azure workshop](https://catalog.workshops.aws/cidforazure/en-US/03-setup), choosing FOCUS in [Export Type](https://catalog.workshops.aws/cidforazure/en-US/03_Setup/05_Parameters/#export-type).

1. Follow the [common update steps](#focus-dashboard-update-consolidation-view) below.

### Google Cloud Platform (GCP)


![\[High Level Focus Export From GCP\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/architecture/focus-gcp.png)


1. Deploy the [FOCUS Dashboard](#focus-dashboard-deployment).

1. Deploy the [GCP](https://catalog.workshops.aws/cid-gcp-cost-dashboard/en-US/02-solution-design) workshop.

1. Follow the [common update steps](#focus-dashboard-update-consolidation-view) below.

### Oracle Cloud Infrastructure (OCI)


![\[High Level Focus Export From OCI\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/architecture/focus-oci.png)


1. Deploy the [FOCUS Dashboard](#focus-dashboard-deployment).

1. Deploy the [OCI](https://catalog.workshops.aws/cidforoci/en-US/) workshop.

1. Follow the [common update steps](#focus-dashboard-update-consolidation-view) below.

### Common Update Steps


After deploying any cloud provider integration above, update the FOCUS Dashboard and consolidation view:

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. Run the following commands:

   ```
   pip3 install --upgrade cid-cmd
   cid-cmd update --recursive --force --dashboard-id focus-dashboard
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), replace the first line with:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. Select the FOCUS tables you would like to include in the consolidated view when prompted.

![\[Selecting FOCUS tables for consolidation\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/focus_update.gif)


## Update


Dashboards are not updated with an update of the CloudFormation stack. There are two update options depending on your needs:

### Dashboard Update


When a new version of the dashboard template is released, update your dashboard by running:

```
pip3 install --upgrade cid-cmd
cid-cmd update --dashboard-id focus-dashboard
```

If using [CloudShell](https://console.aws.amazon.com/cloudshell), replace the first line with:

```
sudo yum install python3.11-pip -y
python3.11 -m pip install -U cid-cmd
```

This updates the dashboard visuals to the latest template version.

### Dashboard and Data Schema Update


To add new FOCUS tables from other cloud providers, include new tags, or update the consolidation view schema, run:

```
pip3 install --upgrade cid-cmd
cid-cmd update --recursive --force --dashboard-id focus-dashboard
```

If using [CloudShell](https://console.aws.amazon.com/cloudshell), replace the first line with:

```
sudo yum install python3.11-pip -y
python3.11 -m pip install -U cid-cmd
```

This updates the dashboard, datasets and all dependent Athena views. You will be prompted to select which FOCUS tables to include in the consolidated view.

## Authors

+ Yuriy Prykhodko, Principal Technical Account Manager
+ Iakov Gan, Ex-Amazonian
+ Zach Erdman, Senior Product Manager
+ Mo Mohoboob, Senior Specialist SA
+ Marco De Bianchi, Sr. Delivery Consultant
+ Soham Majumder, Technical Account Manager

## Contributors

+ Petro Kashlikov, Senior Solutions Architect

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.

## FAQ


### Can we replace CUR/CUDOS dashboard with FOCUS?


The FOCUS format does not currently include important information like [lineItem/Operation](https://docs.aws.amazon.com/cur/latest/userguide/Lineitem-columns.html#Lineitem-details-O) that is critical for FinOps use cases. Until the FOCUS specification is extended to support this, we cannot recommend FOCUS for Cost Optimization scenarios. Nevertheless, FOCUS can be useful for a wide range of high-level reporting use cases.

### How can we add another FOCUS provider?


Feel free to contribute data export mechanisms for other FOCUS providers. We will be happy to review and reference them.

### Does the dashboard support multiple FOCUS versions?


Yes. The FOCUS Dashboard supports consolidation of multiple FOCUS specification versions. You can combine data from providers exporting different FOCUS versions (for example, FOCUS 1.0 and FOCUS 1.2) in the same consolidated view.

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Migration to AWS FOCUS 1.2
Migration to AWS FOCUS 1.2

## Migration Overview


 [FOCUS 1.2](https://focus.finops.org/) introduces additional columns and improvements to the FinOps Cost and Usage Specification. If you already have [AWS Data Exports](data-exports.md) FOCUS 1.0 enabled with the Cloud Intelligence Dashboards CloudFormation stack `CID-DataExports`, you can migrate to FOCUS 1.2 by following the steps below.

**Note**  
AWS Data Exports CloudFormation does not support in-place updates of the FOCUS table version. The migration requires temporarily disabling the FOCUS 1.0 export before enabling FOCUS 1.2.

## Prerequisites

+ Existing FOCUS 1.0 Data Export deployed via [AWS Data Exports](data-exports.md) CloudFormation stacks
+ Access to both the Management (Payer) Account and the Data Collection (Destination) Account

## Migration Steps


AWS Data Exports does not support in-place table version updates within an existing report. To migrate to FOCUS 1.2, you must first delete the existing FOCUS 1.0 report and then create a new FOCUS 1.2 report using the latest version of the CloudFormation template. The steps below walk you through this process.

### Step 1: Disable FOCUS 1.0 Export


1. Log in to your **Management (Payer)** Account.

1. Open the [CloudFormation Console](https://console.aws.amazon.com/cloudformation/home) and locate the **CID-DataExports-Source** stack.

1. Select **Update stack** → **Make a direct update** → **Use existing template** and change the **FOCUS** parameter to **No**.

1. Complete the stack update. This will temporarily delete the FOCUS 1.0 Data Export.

#### Example


![\[Disable FOCUS 1.0 Export\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/focus-migration-disable-1-0.png)
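The console steps above can also be scripted. A hedged sketch using the AWS CLI — stack name, capabilities, and the extra parameter key shown are assumptions based on a typical deployment; with `--use-previous-template` every parameter you do not list falls back to its template default, so list each remaining one with `UsePreviousValue=true`:

```shell
# Sketch only: flip the FOCUS parameter to "no" via the CLI instead of the console.
aws cloudformation update-stack \
    --stack-name CID-DataExports-Source \
    --use-previous-template \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameters \
        ParameterKey=FOCUS,ParameterValue=no \
        ParameterKey=OtherParam,UsePreviousValue=true  # hypothetical; repeat for each remaining parameter

# Wait for the update to finish before re-enabling FOCUS 1.2 in Step 2.
aws cloudformation wait stack-update-complete --stack-name CID-DataExports-Source
```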


### Step 2: Enable FOCUS 1.2 Export in Source Account


1. Download the latest CloudFormation template from `https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/data-exports-aggregation.yaml`.

1. In the **Management (Payer)** Account, open the **CID-DataExports-Source** stack again.

1. Select **Update stack** → **Make a direct update** → **Replace existing template** and choose **Upload a template file**, then upload the template you downloaded.

1. Set the **FOCUS** parameter to **Yes** to enable the FOCUS 1.2 version of the report.

1. Complete the stack update.

#### Example


![\[Enable FOCUS 1.2 in Source Account\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/focus-migration-enable-1-2-source.png)


### Step 3: Update Destination Account Stack


1. Log in to your **Data Collection (Destination)** Account.

1. Open the [CloudFormation Console](https://console.aws.amazon.com/cloudformation/home) and locate the **CID-DataExports-Destination** stack.

1. Select **Update stack** → **Make a direct update** → **Replace existing template** and choose **Upload a template file**, then upload the same template downloaded in Step 2.

1. Make sure the **FOCUS** parameter is set to **Yes**.

1. Complete the stack update.

#### Example


![\[Update Destination Account Stack\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/focus-migration-enable-1-2-source.png)


### Step 4: Update FOCUS Dashboard and Views


To bring in the new FOCUS 1.2 columns and update the FOCUS Dashboard to version 1.2.0:

1. Log in to your **Data Collection (Destination)** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. Install or upgrade the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. Run the following command to update the dashboard and all dependent views:

   ```
   cid-cmd update --recursive --force --dashboard-id focus-dashboard
   ```

1. If you have integrations with other cloud providers installed, select the respective Athena tables with FOCUS data from those providers along with the AWS FOCUS table when prompted. `cid-cmd` will automatically discover available columns in each table and generate the `focus_consolidation_view`.

## Next Steps


After migrating to FOCUS 1.2, new FOCUS data will be delivered to the same S3 path where your FOCUS 1.0 data was stored. While the schemas are compatible, we recommend requesting a backfill to get historical AWS FOCUS data in version 1.2.
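To confirm the FOCUS 1.2 export exists and data is arriving, you can query the Data Exports API and the destination S3 prefix. A hedged sketch — the export name matches the default CloudFormation deployment, while the bucket name and prefix are assumptions you should replace with your stack outputs:

```shell
# List data exports in the management account; look for the FOCUS export
# (named cid-focus in the default deployment). Data Exports is served from us-east-1.
aws bcm-data-exports list-exports --region us-east-1

# Check that fresh FOCUS files are being delivered to the destination bucket.
# Bucket and prefix are placeholders; use the values from your own deployment.
aws s3 ls s3://<your-data-collection-bucket>/focus/ --recursive | tail -5
```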

### Backfill Historical Data


You can [create a Support Case](https://support.console.aws.amazon.com/support/home#/case/create) requesting a [backfill](https://docs.aws.amazon.com/cur/latest/userguide/troubleshooting.html#backfill-data) of your FOCUS report with up to 36 months of historical data. The case must be created from each of your Source Accounts (typically Management/Payer Accounts).

Support ticket example:

```
Service: Billing
Category: Other Billing Questions
Subject: Backfill Data

Hello Billing Team,
Could you please backfill the data in the Data Export named `cid-focus` for the last 12 months?
Thanks in advance,
```

You can also use the following command in CloudShell to create this case via command line:

```
aws support create-case \
    --subject "Backfill Data" \
    --service-code "billing" \
    --severity-code "normal" \
    --category-code "other-billing-questions" \
    --communication-body "
        Hello Billing Team,
        Could you please backfill the data in the Data Export named 'cid-focus' for the last 12 months?
        Thanks in advance"
```

Make sure you create the case from your Source Accounts (typically Management/Payer Accounts).

### Verify Dashboard Status


Run the following command to check dataset status in Amazon QuickSight:

```
cid-cmd status --dashboard-id focus-dashboard
```

## Feedback


Please [contact the team](feedback-support.md) if you encounter any issues.

# CORA Dashboard
CORA Dashboard

## Introduction


The CORA (Cost Optimization Recommended Actions) Dashboard provides FinOps professionals, executives, and engineering leads with a comprehensive visualization of data from the [AWS Cost Optimization Hub](https://aws.amazon.com/aws-cost-management/cost-optimization-hub/).

AWS Cost Optimization Hub offers customers a range of `Usage Optimization` and `Rate Optimization` recommendations aggregated from AWS Compute Optimizer and the AWS Billing Console.

Usage Optimizations include:
+ Rightsizing suggestions
+ Idle resource detection
+ Migration and Upgrade recommendations

Rate Optimizations consist of:
+ Savings Plans Recommendations
+ Reserved Instances recommendations

This dashboard leverages the [AWS Data Export of Cost Optimization Recommendations](data-exports.md) and enables additional capabilities:
+ Visualization of savings opportunities across multiple AWS Organizations (Payer Accounts) in one place.
+ Tracking of savings opportunities over time, including daily updates and a historical view.
+ Integration with customer business unit information to connect savings with workload owners (business units, teams, etc.).

Using the RLS (Row-Level Security) mechanism, this dashboard can be shared within your organization, allowing owners to track and take actions locally to maximize cost savings.

## Demo Dashboard


Explore the live, interactive demo dashboard via this [link](https://cid.workshops.aws.dev/demo?dashboard=cora).

![\[CORA Dashboard Screenshot\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cora.png)


## Prerequisites


Before installing the CORA Dashboard, you need to enable the Cost Optimization Hub (COH) Data Export in your Management (Payer) Accounts and consolidate it into your Data Collection Account.

![\[Data Exports\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/cora-archi.png)


1. Create the COH Data Export by following the steps on the [Data Export](data-exports.md) page, then return here once completed.

1. Enable [AWS Cost Optimization Hub](https://aws.amazon.com/aws-cost-management/cost-optimization-hub/faqs/#:~:text=How%20do%20I%20enable%20Cost%20Optimization%20Hub).
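You can verify Cost Optimization Hub enrollment from the command line before proceeding. A hedged sketch using the AWS CLI, run from the Management (Payer) Account — the flag names follow the `cost-optimization-hub` service as currently documented:

```shell
# List enrollment status for the organization's accounts; enrolled accounts
# report a status of "Active". Cost Optimization Hub is served from us-east-1.
aws cost-optimization-hub list-enrollment-statuses \
    --include-organization-info \
    --region us-east-1
```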

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard).

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in the CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=CORA-Dashboard&param_DashboardId=cora&param_RequiresDataExports=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=CORA-Dashboard&param_DashboardId=cora&param_RequiresDataExports=yes)

1. You can change the **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE\_IN\_PROGRESS** status. Once complete, it will show **CREATE\_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id cora
   ```

   Follow the instructions in the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Note that dashboards are not updated when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id cora
```

## Usage Guide


The dashboard has several tabs, and we recommend starting with the Summary. Here you can see the biggest recommendation categories, the potential savings they represent, the effort required for each resource, and the number of resources.

We recommend first estimating your Rightsizing, Upgrade, Migration, and Stop-Idle opportunities. Implementing these recommendations typically takes time, especially at scale. Then proceed with commitments (Reserved Instances and Savings Plans) for a volume that takes your rightsizing plan into account.

## Using Cost Allocation Tags


The dashboard also utilizes [Cost Allocation Tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html).
+ Use the `FinopsException` tag to exclude workloads from the view if they are not eligible for cost optimization recommendations.
+ Configure `Name` as Cost Allocation tag to have this visibility on the dashboard by default.
+ Use `TagName` to specify a tag that you are interested in for GroupBy.
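Cost allocation tags can be activated in the Billing console or via the AWS CLI. A hedged sketch, run from the Management (Payer) Account — note that a tag key must already appear in your billing data before it can be activated:

```shell
# Activate Name (and FinopsException, if you use it) as cost allocation tags.
# Cost Explorer is served from us-east-1.
aws ce update-cost-allocation-tags-status \
    --region us-east-1 \
    --cost-allocation-tags-status \
        TagKey=Name,Status=Active \
        TagKey=FinopsException,Status=Active
```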

## Difference with UI


The AWS Cost Optimization Hub Console UI displays opportunities in a donut chart with special [deduplication](https://aws.amazon.com/aws-cost-management/cost-optimization-hub/faqs/#:~:text=How%20does%20Cost%20Optimization%20Hub%20ensure%20savings%20from%20multiple%20recommendations%20are%20not%20double%20counted%3F) logic: commitment savings are shown accounting for the potential impact of implementing rightsizing recommendations. However, the Cost Optimization Recommendations Data Export does not contain data for this level of adjustment.

The dashboard deduplicates recommendations by resource ID. Note that Rate Optimization and Usage Optimization recommendations are deduplicated separately, and Rate Optimization recommendations do NOT take into account the results of Usage Optimization (Rightsizing, Stop, Upgrade, and so on).

You may therefore see discrepancies between the donut chart in the AWS Cost Optimization Hub Console and the data this service exports for analysis. The Console shows Rate Optimization savings adjusted for the results of Usage Optimization and can thus be more realistic. When using the dashboard, estimate the results of Usage Optimization and readjust the Rate Optimization forecast accordingly; for example, if rightsizing would reduce an instance family's usage by 30%, scale the corresponding commitment recommendation down by a similar amount.

## Authors

+ Iakov Gan, Ex-Amazonian
+ Yuriy Prykhodko, Principal Technical Account Manager, AWS

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.


# Trends Dashboard
Trends Dashboard

## Introduction


The Trends Dashboard provides Financial and Technology organizational leaders access to proactive trends, signals, insights and anomalies to understand and analyze their AWS cloud usage.

## Demo Dashboards


![\[Trends Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/Trends.png)


## Prerequisites


1. Deploy one or more of the foundational dashboards: [CUDOS, Cost Intelligence, or KPI Dashboard.](cudos-cid-kpi.md) 

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard).

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in the CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Trends-Dashboard&param_DashboardId=trends-dashboard](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Trends-Dashboard&param_DashboardId=trends-dashboard)

1. You can change the **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE\_IN\_PROGRESS** status. Once complete, it will show **CREATE\_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id trends-dashboard
   ```

   Follow the instructions in the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Note that dashboards are not updated when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id trends-dashboard
```

## Authors

+ Manik Chopra, Principal Technical Account Manager

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.


# DataTransfer Cost Analysis Dashboard
DataTransfer Cost Analysis Dashboard

## Introduction


The Data Transfer Dashboard is an interactive, customizable, and accessible Amazon QuickSight dashboard that helps customers gain insights into their data transfer. It analyzes any data transfer that incurs a cost, such as outbound/internet, inter-Region, and inter-AZ data transfer, across all services.

This dashboard contains data transfer breakdowns with the following visuals:
+ Data Transfer Summary
+ Internet Data transfer, AWS Global Accelerator cost estimation details
+ Regional Data transfer Details
+ Data transfer AZ
+ CloudFront Cost and Usage Analysis
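Under the hood, these visuals aggregate CUR line items whose usage type indicates data transfer. A hedged Athena sketch of the kind of query involved — the database and table names (`cid_cur.cur`) are assumptions to replace with your CUR setup, and it assumes your workgroup has a query result location configured:

```shell
# Rough cost-by-usage-type breakdown for data transfer over the last month.
aws athena start-query-execution \
    --work-group primary \
    --query-string "
        SELECT line_item_usage_type,
               SUM(line_item_unblended_cost) AS cost
        FROM cid_cur.cur
        WHERE line_item_usage_type LIKE '%DataTransfer%'
          AND line_item_usage_start_date >= date_add('month', -1, now())
        GROUP BY 1
        ORDER BY 2 DESC"
```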

## Demo Dashboard


Explore the live, interactive demo dashboard via this [link](https://cid.workshops.aws.dev/demo?dashboard=datatransfer-cost-analysis-dashboard).

![\[Image of a Data transfer dashboard in Quick Sight\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data_transfer_dashboard.png)


## Prerequisites


1. Deploy one or more of the foundational dashboards: [CUDOS, Cost Intelligence, or KPI Dashboard.](cudos-cid-kpi.md) 

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard).

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in the CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=DataTransfer-Cost-Analysis-Dashboard&param_DashboardId=datatransfer-cost-analysis-dashboard](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=DataTransfer-Cost-Analysis-Dashboard&param_DashboardId=datatransfer-cost-analysis-dashboard)

1. You can change the **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE\_IN\_PROGRESS** status. Once complete, it will show **CREATE\_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id datatransfer-cost-analysis-dashboard
   ```

   Follow the instructions in the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Note that dashboards are not updated when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id datatransfer-cost-analysis-dashboard
```

## Authors

+ Chaitanya Shah, Principal Technical Account Manager

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.


# AWS Marketplace Single Pane of Glass (SPG) Dashboard
AWS Marketplace Single Pane of Glass (SPG) Dashboard

## Introduction


The AWS Marketplace Single Pane of Glass (SPG) dashboard delivers comprehensive procurement insights through an interactive, out-of-the-box Amazon Quick Sight interface. Designed for AWS Marketplace buyers, this solution offers detailed visibility into third-party software subscriptions, including AI Agents, data products, and professional services. Users can access visualizations spanning all AWS Marketplace offerings, encompassing both self-service public offers and custom seller private offers. The dashboard provides detailed analytics on AWS Marketplace spend and usage across multiple AWS Organizations, displays software license grants and entitlements, and surfaces critical agreement information such as "Deployed on AWS" badge status and contract terms. This unified view enables procurement teams to monitor and analyze their AWS Marketplace investments effectively.

The dashboard has five tabs:
+  **Spend Summary**:
  + AWS Marketplace Cumulative Spend by Seller
  + AWS Marketplace Cumulative Spend by Product
  + AWS Marketplace Spend by Seller
  + AWS Marketplace Spend and Usage by Seller Product
  + Marketplace Invoice Tracker
+  **Spend Deep Dive**:
  + Spend by Product
  + Spend by AWS Account ID
  + Spend Mapping by Seller
  + Spend Details by Invoice
+  **Bedrock 3P Foundational Model (FM) Spend** 
  + 3P FM Spend by Seller
  + Spend and Usage by FM Product
+  **Granted and Entitled Licenses** 
  + Upcoming Contract Expirations
  + Org View of Licenses
  + License Summary by Product
  + License Grant and Sharing Details
  + Product mapping to License Grants
+  **Marketplace Agreements** 
  + Active Agreement Count by Deployment Status
  + Active Agreement Value by Deployment Status
  + Agreement Information
  + Active Agreement Acceptances
  + Agreement Charges by Month
  + Agreement Charge Details
  + Agreement Legal Terms

## Demo Dashboard


Explore the live, interactive demo dashboard via this [link](https://cid.workshops.aws.dev/demo?dashboard=aws-marketplace).

![\[Image of a AWS Marketplace SPG dashboard in Quick Sight\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/aws-marketplace-spg-pie.png)


## Prerequisites


1. Deploy one or more of the foundational dashboards: [CUDOS, Cost Intelligence, or KPI Dashboard](cudos-cid-kpi.md).

1. (Optional, only for the Licenses page) Deploy the [Data Collection](data-collection-deployment.md) with the *Include Marketplace Licensing Collection* parameter enabled.

1. (Optional, only for the Agreements page) Deploy the [Data Collection](data-collection-deployment.md) with the *Include Marketplace Data Collection Module* parameter enabled.

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard).

1. Log in to to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=AWS-Marketplace-SPG-Dashboard&param_DashboardId=aws-marketplace&param_RequiresDataCollection=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=AWS-Marketplace-SPG-Dashboard&param_DashboardId=aws-marketplace&param_RequiresDataCollection=yes) 

1. You can change **Stack name** for your template if you wish.

1. Leave **Parameters** values as it is.

1. Review the configuration and click **Create stack**.

1. You will see the stack will start in **CREATE\$1IN\$1PROGRESS**. Once complete, the stack will show **CREATE\$1COMPLETE** 

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id aws-marketplace
   ```

   Follow the instructions in the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.
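The wizard prompts can also be answered up front with command-line options, which is useful for scripted or repeatable deployments. A minimal sketch, assuming the option names `--athena-database` and `--quicksight-user` from the cid-cmd README; confirm them against `cid-cmd --help` for your installed version:

```shell
# Scripted deployment sketch -- option names assumed from the cid-cmd
# README; verify with `cid-cmd --help` before relying on them.
DASHBOARD_ID="aws-marketplace"
ATHENA_DB="cid_cur"                  # your CID Athena database
QS_USER="my-quicksight-user"         # hypothetical Quick Sight user name

# Only attempt the deployment if cid-cmd is installed
if command -v cid-cmd >/dev/null 2>&1; then
  cid-cmd deploy \
    --dashboard-id "$DASHBOARD_ID" \
    --athena-database "$ATHENA_DB" \
    --quicksight-user "$QS_USER"
fi
```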

## Update


Please note that dashboards are not updated automatically when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id aws-marketplace
```
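For major version upgrades, the cid-cmd README also documents the `--force` and `--recursive` flags (to overwrite local template modifications and to update the dashboard's datasets and views as well); treat the exact semantics as something to confirm with `cid-cmd update --help` for your version:

```shell
# Update sketch: --force and --recursive are flags documented in the
# cid-cmd README; confirm their behavior with `cid-cmd update --help`.
DASHBOARD_ID="aws-marketplace"
if command -v cid-cmd >/dev/null 2>&1; then
  cid-cmd update --dashboard-id "$DASHBOARD_ID" --force --recursive
fi
```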

## Visualizing Third-Party Software License Procurement


In the SPG Dashboard, to view the AWS Marketplace licenses and grants in the **Granted and Entitled Licenses** tab, follow these steps:
+ Deploy the [Data Collection](data-collection-deployment.md), make sure to select **yes** for the *Include Marketplace Licensing Collection* parameter.
+ Go to the [Amazon Athena](https://console.aws.amazon.com/athena/) Query Editor.
+ Select the database that has the views for CID. By default, this is the **cid_cur** database.
+ Run the following query to update the **marketplace_licenses_grants_view** view in Amazon Athena.

```
CREATE OR REPLACE VIEW "marketplace_licenses_grants_view" AS
SELECT DISTINCT
  lt.payer_id management_account_id
, SPLIT_PART(lt.beneficiary, ':', 5) "subscribed_account_id"
, SPLIT_PART(gt.granteeprincipalarn, ':', 5) "grantee_account_id"
, FROM_UNIXTIME(CAST(lt.createtime AS DOUBLE)) "license_create_time"
, date_parse(substr(lt.validity.begin, 1, 10), '%Y-%m-%d') "license_start_date"
, date_parse(substr(lt.validity."end", 1, 10), '%Y-%m-%d') "license_end_date"
, SPLIT_PART(lt.licensearn, ':', 7) "license_id"
, lt.productname
, data.value "seller"
, SPLIT_PART(agreement.value, ':', 7) "agreement_id"
, lt.status "license_status"
, lt.productsku
, lt.issuer.NAME "license_issuer"
, lt.homeregion
, lt."version" "license_version"
, gt.grantname "grant_name"
, SPLIT_PART(gt.grantarn, ':', 7) "grant_id"
, gt.grantstatus "grant_status"
, gt.version "grant_version"
, ARRAY_JOIN(gt.grantedoperations, ',') "granted_operations"
, gt.options.activationoverridebehavior "activation_override_behavior"
FROM
  optimization_data.license_manager_licenses lt
, optimization_data.license_manager_grants gt
, UNNEST(licensemetadata) t ("data")
, UNNEST(licensemetadata) s (agreement)
WHERE ((lt.licensearn = gt.licensearn) AND (t."data".NAME = 'sellerOfRecord'))
```
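If you prefer the command line to the Athena console, the same DDL can be submitted with the AWS CLI's `athena start-query-execution` command. A sketch, assuming you saved the statement above to a local file; the database name and the S3 results bucket are placeholders to replace with your own values:

```shell
# Sketch: apply the view DDL from the command line instead of the
# Athena console. Save the CREATE OR REPLACE VIEW statement above to a
# local file first. Database name and output bucket are assumptions.
ATHENA_DB="cid_cur"
OUTPUT_S3="s3://my-athena-results/"               # hypothetical bucket
DDL_FILE="marketplace_licenses_grants_view.sql"   # the DDL saved locally

# The file:// prefix makes the CLI read the query text from the file
if command -v aws >/dev/null 2>&1 && [ -f "$DDL_FILE" ]; then
  aws athena start-query-execution \
    --query-string "file://$DDL_FILE" \
    --query-execution-context "Database=$ATHENA_DB" \
    --result-configuration "OutputLocation=$OUTPUT_S3"
fi
```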

## Visualizing AWS Marketplace Agreements


In the SPG Dashboard, to view the AWS Marketplace Agreements in the **Marketplace Agreements** tab, do the following:
+ Deploy the [Data Collection](data-collection-deployment.md), make sure to select **yes** for the *Include Marketplace Data Collection Module* parameter.
+ Go to the [Amazon Athena](https://console.aws.amazon.com/athena/) Query Editor.
+ Select the database that has the views for CID. By default, this is the **cid_cur** database.
+ Run the following queries to update the **terms** and **agreements** views in Amazon Athena.

```
CREATE OR REPLACE VIEW "terms" AS
SELECT *
FROM optimization_data.terms
```

```
CREATE OR REPLACE VIEW "agreements" AS
SELECT *
FROM optimization_data.agreements
```

**Note**  
The Licenses and Agreements data in the dashboard refreshes once a day. If you would like to customize the schedule, please review [Tailoring Data Collector schedules](tailor-data-collector.md).

## Learn more


Explore more in the [Buyer Workshop](https://catalog.workshops.aws/aws-marketplace-buyer/en-US/costmanagement/analysis/quicksight) 

## Authors

+ Ramya Vijayaraghavan, Ex-Amazonian
+ Kaushik Raha, Principal Specialist, AWS Marketplace
+ Soumya Vanga, Solutions Architect, AWS Marketplace

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.

Have a success story to share with the Team, suggest an improvement or report an error?
+ Please email: [aws-marketplace-cid-spg-dashboard@amazon.com](mailto:aws-marketplace-cid-spg-dashboard@amazon.com) 

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Kubecost Containers Cost Allocation Dashboard
Kubecost Containers Cost Allocation Dashboard

## Introduction


The Kubecost Containers Cost Allocation Dashboard provides insights into Kubernetes in-cluster cost and usage based on data collection from a self-hosted Kubecost (any Kubecost tier is supported - the Kubecost free tier, the Kubecost EKS-optimized bundle, and the Kubecost enterprise tier). DevOps teams, FinOps teams, or any relevant stakeholder can gain insights into the cost and usage of workloads inside their Kubernetes clusters, down to the container level, aggregated by different Kubernetes constructs (pod, namespace, controller, and more). You can implement showback and chargeback methodologies for multi-tenant Kubernetes clusters, and also understand the efficiency (usage vs. requests) of your Kubernetes clusters. The dashboard’s visualizations include high-level KPI visuals to understand general spend, interactive visuals that make it easy to drill down into Kubernetes in-cluster cost and usage, and customizable visuals per cost metric.

The dashboard has three tabs:
+ Executive Summary:
  + KPI visuals per cost metric (CPU cost, RAM cost, total cost, efficiency metrics, and more)
  + Total Cost by Account ID
  + Top Spending Clusters
+ Workloads Explorer:
  + Interactive stacked-bar chart and pivot table visuals that show cost by different dimensions based on in-dashboard aggregations and filters
+ EKS Breakdown
  + Distribution Graphs Area - a collection of pie charts showing pod distribution by different dimensions (capacity type, instance type, and more)
  + Coverage Graphs Area - a collection of stacked-bar charts showing pod coverage by different dimensions (capacity type, instance type)
  + Drill-down Graphs Area - a collection of charts showing in-cluster cost by namespace, based on different cost metrics (CPU cost, RAM cost, and more)

You can also check the AWS-native [SCAD Containers Cost Allocation Dashboard](scad-containers-dashboard.md) and the [Comparison Table in FAQ](faq.md#faq-scad-kubecost-dashboard-difference).

## Demo Dashboard


Get more familiar with the dashboard using the live, interactive demo dashboard by following this [link](https://cid.workshops.aws.dev/demo/?dashboard=containers-cost-allocation) 

![\[Kubecost - Containers Cost Allocation Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/kubecost_containers_cost_allocation.png)


## CID’s Containers Cost Allocation Dashboards Comparison


The CID framework has two Containers Cost Allocation dashboards:
+ This one, which is based on data collection from Kubecost
+ The [SCAD Containers Cost Allocation Dashboard](scad-containers-dashboard.md), which is based on CUR’s Split Cost Allocation Data (SCAD)

Please review the [Containers Cost Allocation dashboards comparison in the FAQs](faq.md#faq-scad-kubecost-dashboard-difference) for more information.

## Deployment


Please follow the [instructions in the Containers Cost Allocation Dashboard GitHub repo](https://github.com/awslabs/containers-cost-allocation-dashboard/blob/main/README.md).

## Update


Please follow the [update instructions in the Containers Cost Allocation Dashboard GitHub repo](https://github.com/awslabs/containers-cost-allocation-dashboard/blob/main/UPDATE.md).

## Learn more

+ Find more information in the [solution’s GitHub repo](https://github.com/awslabs/containers-cost-allocation-dashboard) 
+ Explore more on Kubecost on the [Kubecost website](https://www.kubecost.com/) and read more on the [self-hosted Kubecost deployment option](https://www.kubecost.com/products/self-hosted) 
+ Explore more on the Kubecost EKS-optimized bundle in the [launch blog post](https://aws.amazon.com/blogs/containers/aws-and-kubecost-collaborate-to-deliver-cost-monitoring-for-eks-customers/) 
  + Read about more features of this bundle such as [AMP integration](https://aws.amazon.com/blogs/mt/integrating-kubecost-with-amazon-managed-service-for-prometheus/), [multi-cluster visibility](https://aws.amazon.com/blogs/containers/multi-cluster-cost-monitoring-using-kubecost-with-amazon-eks-and-amazon-managed-service-for-prometheus/) and [securing access with Amazon Cognito](https://aws.amazon.com/blogs/containers/securing-kubecost-access-with-amazon-cognito/) 
  + See EKS-optimized bundle installation steps in the [EKS cost monitoring user guide](https://docs.aws.amazon.com/eks/latest/userguide/cost-monitoring.html) 
  + Review comparison between the EKS-optimized bundle and other Kubecost tiers in the first question in the [EKS cost monitoring FAQs](https://docs.aws.amazon.com/eks/latest/userguide/cost-monitoring.html#cost-monitoring-faq) 

## Authors

+ Udi Dahan, Technical Account Manager

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.

Have a success story to share with the Team, suggest an improvement or report an error?
+ Please email: [containers-cost-allocation-dashboard@amazon.com](mailto:containers-cost-allocation-dashboard@amazon.com) 

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# SCAD Containers Cost Allocation Dashboard
SCAD Containers Cost Allocation Dashboard

## Introduction


The SCAD Containers Cost Allocation Dashboard provides insights into EKS and ECS in-cluster cost based on data from CUR’s Split Cost Allocation Data (SCAD) feature. DevOps teams, FinOps teams, or any relevant stakeholder can gain insights into the cost of workloads inside their EKS and ECS clusters, down to the EKS pod/ECS task level, aggregated by different Kubernetes constructs (pod, namespace, controller, and more) or ECS and Batch dimensions. You can use it to implement showback and chargeback methodologies for multi-tenant EKS and ECS clusters. The dashboard’s visualizations include high-level KPI visuals to understand general spend, and interactive visuals that make it easy to drill down into EKS and ECS in-cluster cost.

The dashboard has five tabs:
+ Executive Summary:
  + KPI visuals per cost metric (CPU cost, GPU cost, RAM cost, shared cost, total cost)
  + Total Cost by Account ID
  + Top Spending Clusters
+ Workloads Explorer:
  + Interactive stacked-bar chart and pivot table visuals that show cost by different dimensions based on in-dashboard aggregations and filters
+ Cluster Breakdown:
  + Coverage and drill-down visuals
+ Labels/Tags Explorer:
  + Drill down into your pods/tasks split cost by dimensions that are customized using K8s pod labels/AWS ECS tasks tags, and combine them with tagged AWS resources costs to implement Total Cost of Ownership (TCO)
+ Data on EKS:
  + Allocate costs to Spark and Flink applications running on EKS (directly or using EMR on EKS), with ability to combine EMR on EKS service cost and split cost

## Demo Dashboard


Get more familiar with the dashboard using the live, interactive demo dashboard by following [this link](https://cid.workshops.aws.dev/demo/?dashboard=scad-containers-cost-allocation) 

 **SCAD - Containers Cost Allocation Dashboard** 

![\[SCAD - Containers Cost Allocation Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_containers_cost_allocation.png)


## CID’s Containers Cost Allocation Dashboards Comparison


The CID framework has two Containers Cost Allocation dashboards:
+ This one, which is based on CUR’s Split Cost Allocation Data (SCAD)
+ The [Kubecost Containers Cost Allocation Dashboard](kubecost-containers-dashboard.md), which is based on data collection from Kubecost

Please review the [Containers Cost Allocation dashboards comparison in the FAQs](faq.md#faq-scad-kubecost-dashboard-difference) for more information.
+  [Prerequisites](scad-containers-dashboard-prerequisites.md) 
+  [Deployment](scad-containers-dashboard-deployment.md) 
+  [Post Deployment](scad-containers-dashboard-post-deployment.md) 
  +  [Adding K8s Pods Labels or Amazon ECS Tasks Tags to the Dashboard](scad-containers-dashboard-add-labels-tags.md) 
  +  [Total Cost of Ownership Using Kubernetes Labels and AWS Tags](scad-containers-dashboard-tco.md) 
  +  [Data on EKS - Cost Allocation for Spark and Flink Applications Running on EKS](scad-containers-dashboard-data-on-eks.md) 

## Learn more

+ Split Cost Allocation Data for EKS documentation:
  +  [SCAD EKS what’s new post](https://aws.amazon.com/about-aws/whats-new/2024/04/aws-split-cost-allocation-data-amazon-eks/) 
  +  [SCAD ECS and AWS Batch what’s new post](https://aws.amazon.com/about-aws/whats-new/2023/04/aws-split-cost-allocation-data-amazon-ecs-batch/) 
  +  [SCAD EKS Launch blog post](https://aws.amazon.com/blogs/aws-cloud-financial-management/improve-cost-visibility-of-amazon-eks-with-aws-split-cost-allocation-data/) 
  +  [SCAD ECS Launch blog post](https://aws.amazon.com/blogs/aws-cloud-financial-management/la-improve-cost-visibility-of-containerized-applications-with-aws-split-cost-allocation-data-for-ecs-and-batch-jobs/) 
  +  [Understanding split cost allocation data](https://docs.aws.amazon.com/cur/latest/userguide/split-cost-allocation-data.html) 
  +  [EKS Cost Monitoring](https://docs.aws.amazon.com/eks/latest/userguide/cost-monitoring.html#cost-monitoring-aws) 
  +  [Legacy CUR Dictionary - Split Cost Allocation Data line items](https://docs.aws.amazon.com/cur/latest/userguide/split-line-item-columns.html) 
  +  [CUR 2.0 Dictionary - Split Cost Allocation Data line items](https://docs.aws.amazon.com/cur/latest/userguide/table-dictionary-cur2-split-line-item.html) 
+  [CUR Query Library - sample queries](https://catalog.workshops.aws/cur-query-library/en-US/queries/container#amazon-eks-split-cost-allocation-data) 

## Authors

+ Udi Dahan, Senior Technical Account Manager

## Feedback & Support


Follow the [Feedback & Support](feedback-support.md) guide.

Have a success story to share with the Team, suggest an improvement or report an error?
+ Please email: [containers-cost-allocation-dashboard@amazon.com](mailto:containers-cost-allocation-dashboard@amazon.com) 

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Prerequisites
Prerequisites

## Prerequisites


1. Enable AWS Split Cost Allocation Data (SCAD) in Cost Management Preferences:

![\[Enable SCAD - Cost Management Preferences\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/enable_scad_cost_management_preferences.png)


You can enable SCAD for ECS, SCAD for EKS, or both. If you enable SCAD for EKS, selecting "Resource requests" will include only resource requests data, without actual usage. To have actual usage data for your pods in CUR, either select the "Amazon Managed Service for Prometheus" option and follow [this guide](https://docs.aws.amazon.com/cur/latest/userguide/split-cost-allocation-data-resource-amp.html), or select the "Amazon CloudWatch Container Insights" option and follow [this guide](https://docs.aws.amazon.com/cur/latest/userguide/split-cost-allocation-data-cloudwatch.html).

1. Deploy the [foundational dashboards](cudos-cid-kpi.md), and make sure the parameter "Enable Split Cost Allocation Data (SCAD) in CUR 2.0" is set to "yes". As part of deploying the foundational dashboards with the parameter "Enable Split Cost Allocation Data (SCAD)" set to "yes", a new CUR will be created, with Split Cost Allocation Data enabled.

**Note**  
Split Cost Allocation Data cannot be enabled or disabled in an existing CUR 2.0. Enabling or disabling Split Cost Allocation Data in an existing CUR is supported only in Legacy CUR.

1. Make sure that the following AWS-generated cost allocation tags are active:  
**Example**  

------
#### [ Amazon EKS ]

![\[SCAD EKS Cost Allocation Tags\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_eks_cost_allocation_tags.png)


------
#### [ Amazon ECS ]

![\[SCAD ECS Cost Allocation Tags\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_ecs_cost_allocation_tags.png)


------
#### [ AWS Batch on Amazon ECS ]

![\[SCAD ECS AWS Batch Cost Allocation Tags\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_ecs_aws_batch_cost_allocation_tags.png)


------
#### [ AWS Batch on Amazon EKS ]

![\[SCAD EKS AWS Batch Cost Allocation Tags\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_cost_allocation_tags_batch_eks.png)


------

   Please note that some of these cost allocation tags appear only after you enable SCAD for the relevant service (EKS/ECS), and that it may take some time for them to appear after enabling SCAD. The cost allocation tags may not be present if you don’t use the respective service.
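You can also check the activation state of the AWS-generated cost allocation tags from the command line, using the Cost Explorer `list-cost-allocation-tags` API (run from the management account). A sketch:

```shell
# List active AWS-generated cost allocation tags (management account).
# The aws_eks / aws_ecs / aws_batch tag keys should appear here once
# SCAD is enabled and the tags have been activated.
TAG_TYPE="AWSGenerated"
if command -v aws >/dev/null 2>&1; then
  aws ce list-cost-allocation-tags \
    --type "$TAG_TYPE" \
    --status Active \
    --query 'CostAllocationTags[].TagKey'
fi
```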

1. Wait until the SCAD data is updated in Athena

After enabling Split Cost Allocation Data for EKS, ECS, or both, and activating the AWS-generated cost allocation tags, allow at least 24 hours (and up to 48 hours) for the new columns and data to be reflected in the Athena CUR table.

Also, please note that CUR Backfill isn’t supported for SCAD. Even if you request the CUR Backfill from AWS Support, the SCAD fields won’t be populated. Data will only be populated for the current month onward, as stated in the [SCAD documentation](https://docs.aws.amazon.com/cur/latest/userguide/enabling-split-cost-allocation-data.html):

 *Once activated, split cost allocation data automatically scans for tasks and containers. It ingests the telemetry usage data for your container workloads and prepares the granular cost data for the current month.* 

To validate that the new Split Cost Allocation Data columns exist in CUR:

**Example**  

1. Open the Athena console and switch to the CUR database

1. Expand the CUR table and filter it as shown in the screenshots below to view the columns. Split line item columns (relevant for EKS and ECS):

    ![\[SCAD CUR Athena Table Split Columns\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_cur_athena_table_split_columns.png) 

   EKS cost allocation tags (relevant only if you’re using EKS):

    ![\[SCAD CUR Athena Table EKS Tags Columns\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_cur_athena_table_eks_tags_columns.png) 

   ECS cost allocation tags (relevant only if you’re using ECS):

    ![\[SCAD CUR Athena Table ECS Tags Columns\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_cur_athena_table_ecs_tags_columns.png) 

   AWS Batch cost allocation tags (relevant only if you’re using AWS Batch on ECS):

    ![\[SCAD CUR Athena Table AWS Batch Tags Columns\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_cur_athena_table_batch_tags_columns.png) 
Alternatively, run the following Athena queries against the CUR 2.0 table:  
EKS cost allocation tags columns (relevant only if you’re using EKS):  

```
SELECT DISTINCT "key"
FROM "<table_name>"
CROSS JOIN UNNEST(MAP_KEYS("resource_tags")) AS "t"("key")
WHERE "key" LIKE 'aws_eks%'
```
Expected result:  

```
+---+-----------------------+
| # |          key          |
+---+-----------------------+
| 1 | aws_eks_node          |
| 2 | aws_eks_deployment    |
| 3 | aws_eks_namespace     |
| 4 | aws_eks_cluster_name  |
| 5 | aws_eks_workload_name |
| 6 | aws_eks_workload_type |
+---+-----------------------+
```
ECS cost allocation tags columns (relevant only if you’re using ECS):  

```
SELECT DISTINCT "key"
FROM "<table_name>"
CROSS JOIN UNNEST(MAP_KEYS("resource_tags")) AS "t"("key")
WHERE "key" LIKE 'aws_ecs%'
```
Expected result:  

```
+---+----------------------+
| # |         key          |
+---+----------------------+
| 1 | aws_ecs_cluster_name |
| 2 | aws_ecs_service_name |
+---+----------------------+
```
AWS Batch cost allocation tags columns (relevant only if you’re using AWS Batch on ECS):  

```
SELECT DISTINCT "key"
FROM "<table_name>"
CROSS JOIN UNNEST(MAP_KEYS("resource_tags")) AS "t"("key")
WHERE "key" LIKE 'aws_batch%'
```
Expected result:  

```
+---+-------------------------------+
| # |              key              |
+---+-------------------------------+
| 1 | aws_batch_job_definition      |
| 2 | aws_batch_job_queue           |
| 3 | aws_batch_compute_environment |
+---+-------------------------------+
```

Only once you see all of these columns (those relevant to the services you use) should you proceed to the dashboard deployment in the [Deployment](scad-containers-dashboard-deployment.md) chapter.
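The validation queries above can also be run from the command line and polled to completion with the AWS CLI. A sketch for the EKS query; the database name and output bucket are assumptions, and `<table_name>` stays a placeholder for your CUR table:

```shell
# Sketch: run one of the validation queries above via the Athena CLI
# and poll until it finishes. Substitute your own database, output
# bucket, and table name.
ATHENA_DB="cid_cur"
OUTPUT_S3="s3://my-athena-results/"   # hypothetical bucket
QUERY=$(cat <<'SQL'
SELECT DISTINCT "key"
FROM "<table_name>"
CROSS JOIN UNNEST(MAP_KEYS("resource_tags")) AS "t"("key")
WHERE "key" LIKE 'aws_eks%'
SQL
)

if command -v aws >/dev/null 2>&1; then
  QID=$(aws athena start-query-execution \
    --query-string "$QUERY" \
    --query-execution-context "Database=$ATHENA_DB" \
    --result-configuration "OutputLocation=$OUTPUT_S3" \
    --output text --query QueryExecutionId)
  # Poll until the query leaves the QUEUED/RUNNING states
  while :; do
    STATE=$(aws athena get-query-execution --query-execution-id "$QID" \
      --output text --query QueryExecution.Status.State)
    [ "$STATE" = "RUNNING" ] || [ "$STATE" = "QUEUED" ] || break
    sleep 2
  done
  aws athena get-query-results --query-execution-id "$QID"
fi
```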

**Note**  
If you’d like to use EKS K8s pod labels or ECS task tags for cost allocation or Total Cost of Ownership (TCO), there are additional prerequisites listed in [Total Cost of Ownership Using Kubernetes Labels and AWS Tags](scad-containers-dashboard-tco.md). If you’re running Spark or Flink applications on EKS or on EMR on EKS, and you’d like to allocate cost to those applications, there are additional prerequisites listed in [Data on EKS - Cost Allocation for Spark and Flink Applications Running on EKS](scad-containers-dashboard-data-on-eks.md). These prerequisites can be completed now or after the deployment, when you reach the post-deployment section.

# Deployment
Deployment

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, installed as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in the CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=SCAD-Containers-Dashboard&param_DashboardId=scad-containers-cost-allocation](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=SCAD-Containers-Dashboard&param_DashboardId=scad-containers-cost-allocation) 

1. You can change **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. You will see the stack start in **CREATE_IN_PROGRESS** status. Once complete, the stack will show **CREATE_COMPLETE** status.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id scad-containers-cost-allocation
   ```

   Follow the instructions in the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

**Note**  
Please note that Data Exports can take 24-48 hours to deliver the first reports. If you just installed Data Exports, the dashboard will most likely be empty. Please check back after 24 hours.

### Update


Please note that dashboards are not updated automatically when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id scad-containers-cost-allocation
```

**Note**  
Starting from version v2.0.0, the `scad_cca_hourly_resource_view` Quick Sight dataset and Athena view are no longer used by the dashboard, and can be deleted. Please check the [SCAD Containers Cost Allocation Dashboard v2.0.0 changelog](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/changes/CHANGELOG-scad-cca.md#scad-containers-cost-allocation-dashboard---v200) for more information.

# Post Deployment
Post Deployment

Now that you have deployed the dashboard, you may proceed to the following optional post-deployment steps, if they’re relevant to your use case:
+  [Adding K8s Pods Labels or Amazon ECS Tasks Tags to the Dashboard](scad-containers-dashboard-add-labels-tags.md) 
+  [Total Cost of Ownership Using Kubernetes Labels and AWS Tags](scad-containers-dashboard-tco.md) 
+  [Data on EKS - Cost Allocation for Spark and Flink Applications Running on EKS](scad-containers-dashboard-data-on-eks.md) 

# Adding K8s Pods Labels or Amazon ECS Tasks Tags to the Dashboard
Adding K8s Pods Labels or Amazon ECS Tasks Tags to the Dashboard

## Introduction


When creating pods in Kubernetes (K8s) clusters or tasks in ECS clusters, you can also apply K8s labels (pods) or AWS tags (ECS tasks) to them. This is useful for cost allocation, and can help you identify the costs of your EKS/ECS workloads by definitions that are relevant to your organization. SCAD supports K8s pod labels and ECS task tags as user-defined cost allocation tags, and you can use these labels/tags in the dashboard to visualize your costs based on them. Please follow the instructions below to add K8s pod labels/ECS task tags to the dashboard.

## Adding Labels/Tags


Follow the process below to add cost allocation tags to the dashboard.

### Activating Cost Allocation Tags


 [Activate the user-defined cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/activating-tags.html) that represent the K8s pods labels or the ECS tasks tags that you wish to add to the dashboard.
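Activation can also be scripted with the Cost Explorer `update-cost-allocation-tags-status` API (run from the management account). A sketch; the tag key `business_unit` is a hypothetical example:

```shell
# Activate a user-defined cost allocation tag from the CLI instead of
# the Billing console (management account). "business_unit" is a
# hypothetical tag key -- substitute your own.
TAG_KEY="business_unit"
if command -v aws >/dev/null 2>&1; then
  aws ce update-cost-allocation-tags-status \
    --cost-allocation-tags-status "TagKey=$TAG_KEY,Status=Active"
fi
```

Note that, as with console activation, it can take up to 24 hours for newly created tag keys to become available for activation.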

### Adding Cost Allocation Tags to the Athena View


**Note**  
The Athena view already includes the [Kubernetes Recommended Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/), and the labels `app`, `chart`, `release`, `version`, `component`, `type` and `created-by`. The dashboard also already includes these labels, in the "Workloads Explorer" sheet (as group-by dimensions and as filters) and in the "Labels/Tags Explorer" sheet. If you’d like to use them in the dashboard, there’s no need for any additional action except for activating the respective cost allocation tags. If you have other labels or tags you’d like to use, please continue reading below to learn how to add them to the Athena view.
+ In Athena, open the `scad_cca_summary_view` Athena view by clicking "Show/edit query"
+ Add the label/tag to the left table in the view. The easiest way is to find an existing cost allocation tag and add yours below it. For example, find the following line:

```
, COALESCE("resource_tags"['user_created_by'], 'No K8s label/AWS tag key: created-by') "cat_created_by"
```

Add your cost allocation tag below it. Here’s an example, assuming your cost allocation tag is `business_unit`:

```
, COALESCE("resource_tags"['user_business_unit'], 'No K8s label/AWS tag key: business_unit') "cat_business_unit"
```
+ Append the next number to the `GROUP BY` clause of the left table. For example, if the `GROUP BY` looks like this:

```
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47
```

You should add the number 48 at the end, like this:

```
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48
```
+ Repeat the same process in the `UNION ALL` section - find the last existing cost allocation tag line, add your cost allocation tag below it, and extend the `GROUP BY` numbers as shown above
+ Run the query, it should quickly complete
+ Log into Quick Sight, edit the `scad_cca_summary_view` dataset, click `Save & publish` on the top right, and wait for the dataset to finish refreshing
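The manual edits above follow a fixed pattern, so they can be generated mechanically. Below is an illustrative Python sketch (the helper names are ours, not part of the dashboard tooling) that produces the `COALESCE` line and the bumped `GROUP BY` clause for a new cost allocation tag, assuming a simple tag key like `business_unit`:

```python
# Illustrative helpers (not part of the dashboard tooling): generate the
# snippets to paste into the scad_cca_summary_view edit for a new
# cost allocation tag key. Assumes a simple tag key such as "business_unit".

def coalesce_line(tag_key: str) -> str:
    # CUR exposes user-defined tags as resource_tags['user_<key>'];
    # the view aliases the resulting column as cat_<key>.
    return (f", COALESCE(\"resource_tags\"['user_{tag_key}'], "
            f"'No K8s label/AWS tag key: {tag_key}') \"cat_{tag_key}\"")

def group_by_clause(num_columns: int) -> str:
    # Bump num_columns by one each time you add a tag column.
    return "GROUP BY " + ", ".join(str(i) for i in range(1, num_columns + 1))

print(coalesce_line("business_unit"))
print(group_by_clause(48))
```

Paste the generated lines into both the left table and the `UNION ALL` section of the view, as described in the steps above.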

Once the Quick Sight dataset refresh completes successfully, the new cost allocation tag column will be available in the Quick Sight analysis, for you to add to visuals. You may now save the dashboard as an analysis and create visuals with the labels/tags, or continue to the [Total Cost of Ownership Using Kubernetes Labels and AWS Tags](scad-containers-dashboard-tco.md) chapter for two examples of using K8s pods labels/ECS tasks tags in the dashboard.

# Total Cost of Ownership Using Kubernetes Labels and AWS Tags

## Introduction


When allocating costs to workloads in EKS and ECS clusters, organizations often need to track costs using their own definitions that aren’t necessarily available using the common Kubernetes (K8s) objects or ECS constructs. In these cases, organizations use K8s labels (on K8s pods) and AWS tags (on ECS tasks) to identify the aspects by which they want to allocate costs (for example, project, team, or business unit). AWS Split Cost Allocation Data supports EKS K8s pods labels and ECS tasks tags, as cost allocation tags, for this purpose. In addition to using EKS K8s pods labels and ECS tasks tags to allocate costs to the containerized workloads themselves, you can also allocate the costs of AWS resources used by those workloads, by consistently tagging the AWS resources with the same labels/tags applied to the pods that use them. This is referred to as Total Cost of Ownership (TCO). Starting with SCAD Containers Cost Allocation Dashboard v4.0.0, the dashboard can be used to view the cost of your workloads based on EKS K8s pod labels or ECS tags, covering both use-cases mentioned above (cost allocation for your containerized workloads, and TCO).

While you can use K8s pods labels and ECS tasks tags in any visual in the dashboard through customization, the following sheets include pre-built functionality:
+ The "Workloads Explorer" sheet includes the [Kubernetes Recommended Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/) as dimensions in the "Group By" control and as filters on the left. The common labels `app`, `chart`, `release`, `version`, `component`, `type` and `created-by` are also included
+ The "Labels/Tags Explorer" sheet can be used to implement Total Cost of Ownership (TCO), allocating costs not only to the EKS K8s pods/ECS tasks, but also to AWS resources they’re using, by consistent labeling/tagging

## Prerequisites


Apart from the general dashboard prerequisites, below are additional prerequisites for allocating costs with EKS K8s pods labels and ECS tasks tags in the SCAD dashboard.

### Activate Cost Allocation Tags

+ For EKS users, activate the user-defined cost allocation tags for the [Kubernetes Recommended Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/), if you’re using them. Also activate the user-defined cost allocation tags for the labels `app`, `chart`, `release`, `version`, `component`, `type` and `created-by`, if you’re using them. Note that at least some of these labels are likely assigned to your pods upon creation, as some third-party applications use these labels by default in their Helm charts/K8s manifests. So even if you’re not sure you’re using them, activating them is good practice
+ For all users (EKS and ECS), if you have your own custom K8s pods labels/ECS tasks tags, activate them too
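Cost allocation tags can be activated in the Billing console, or programmatically via the Cost Explorer API. Here’s a minimal Python sketch that only builds the request; actually sending it (shown commented out) assumes boto3 and the `ce:UpdateCostAllocationTagsStatus` permission in the management account:

```python
# Build the activation request for the labels listed above. A tag key only
# appears as a cost allocation tag after it has been used at least once.
RECOMMENDED_LABELS = [
    "app.kubernetes.io/name", "app.kubernetes.io/instance",
    "app.kubernetes.io/version", "app.kubernetes.io/component",
    "app.kubernetes.io/part-of", "app.kubernetes.io/managed-by",
]
COMMON_LABELS = ["app", "chart", "release", "version",
                 "component", "type", "created-by"]

tags_status = [{"TagKey": key, "Status": "Active"}
               for key in RECOMMENDED_LABELS + COMMON_LABELS]

# To submit the request (requires boto3 and permissions):
# import boto3
# boto3.client("ce").update_cost_allocation_tags_status(
#     CostAllocationTagsStatus=tags_status)
```

Add your own custom K8s pods label/ECS tasks tag keys to the list as needed.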

### Add Cost Allocation Tags to the Athena View


The Athena view has some K8s labels included by default, as mentioned above. There’s no need to add them to the Athena view or to the dashboard, they’re already there. If you want to use other K8s pod labels/ECS tasks tags that aren’t listed above, please follow the [Adding K8s Pods Labels or Amazon ECS Tasks Tags to the Dashboard](scad-containers-dashboard-add-labels-tags.md) chapter to add them to the Athena view.

## How to Use EKS K8s Pods Labels/ECS Tasks Tags for Cost Allocation


Here we’ll explore two use-cases and examples for cost allocation using EKS K8s pods labels/ECS tasks tags. All examples use EKS K8s pods labels, but they apply to ECS tasks tags too.

### Cost Allocation Using K8s Pods Labels/ECS Tasks Tags


Let’s start with a simple use-case: allocating costs to pods/tasks. This is a common use-case relevant to many organizations that would like to allocate cost to pods/tasks using their own definitions that aren’t necessarily available using the common Kubernetes (K8s) objects or ECS constructs. For example, you may want to allocate costs to projects, teams, business units, applications, and more. Let’s walk through an example of using the "Workloads Explorer" sheet to allocate costs by K8s pods labels.

Let’s first do it using the [Kubernetes Recommended Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/). Once in the dashboard, navigate to the "Workloads Explorer" sheet, and open the "Group By" control. Start searching for the string "K8s Label" - you’ll see a list of labels; select `Amazon EKS: K8s Label app.kubernetes.io/name`. You’ll see that the stacked-bar chart and the pivot table below it are grouped by the values of this label. They should look similar to the following:

![\[SCAD - Containers Cost Allocation Dashboard - Group By App Name Label - Chart\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_containers_cost_allocation_workloads_explorer_group_by_app_name_label_chart.png)


![\[SCAD - Containers Cost Allocation Dashboard - Group By App Name Label - Pivot\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_containers_cost_allocation_workloads_explorer_group_by_app_name_label_pivot.png)


Let’s say we now want to filter the visuals by a specific value of the `app.kubernetes.io/name` label. On the left part of the "Workloads Explorer" sheet, find the "K8s Labels Filters" section, and in it, find the control filter titled "K8s Label: app.kubernetes.io/name". Select or unselect one or more values, and see how the chart and pivot change to show data only for the label value(s) you selected. You can now continue grouping and filtering the visuals by other dimensions.

Let’s see another example, using your own K8s pod label. We’ll use the `business_unit` label that we added to the Athena view earlier, as an example. First save the dashboard as an analysis, and then edit the calculated field named `group_by_workloads_explorer`. Like with the Athena view, find a line of an existing cost allocation tag, and add yours below it. As an example, here’s an existing line:

```
${GroupByWorkloadsExplorer} = "Amazon EKS: K8s Label created-by", {cat_created_by},
```

Add your label cost allocation tag below it. Here’s how the new line will look, with the `business_unit` label:

```
${GroupByWorkloadsExplorer} = "Amazon EKS: K8s Label business_unit", {cat_business_unit},
```

Save the calculated field. Now, edit the `aggregation_include_exclude_workloads_explorer` calculated field. Find a line of an existing cost allocation tag, and add yours below it. As an example, here’s an existing line:

```
${GroupByWorkloadsExplorer} = "Amazon EKS: K8s Label created-by" AND {cat_created_by} = "No K8s label/AWS tag key: created-by", "Exclude",
```

Add your label cost allocation tag below it. Here’s how the new line will look, with the `business_unit` label:

```
${GroupByWorkloadsExplorer} = "Amazon EKS: K8s Label business_unit" AND {cat_business_unit} = "No K8s label/AWS tag key: business_unit", "Exclude",
```

Save the calculated field. Now, we’ll add the label to the "Group By" control. Edit the "Group By" control, and add the text "Amazon EKS: K8s Label business_unit" (the same string used in the calculated fields) anywhere in the control options (below one of the existing options). No need to save. Now, open the "Group By" control and select your label. You should see the chart and pivot table change to show the values for this label key.

Now, let’s add a filter. We start by creating a new parameter. Here’s how it should be configured if your label is `business_unit`:

![\[SCAD - Containers Cost Allocation Dashboard - Add Parameter for Label\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_add_parameter_for_label.png)


When prompted, proceed to add the control. Here’s how it should be configured (note, you need to select the field for your cost allocation tag):

![\[SCAD - Containers Cost Allocation Dashboard - Add Control for Label\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_add_control_for_label.png)


Once created, you can keep the control either on top of the sheet or inside the sheet - arrange it to your convenience. Lastly, we’ll add a filter, which uses the control, to both visuals. Open the chart visual filters, and add a new filter. The filter should look like this (change the parameter and field to the ones you created for your cost allocation tag field):

![\[SCAD - Containers Cost Allocation Dashboard - Add Filter for Label\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_add_filter_for_label.png)


You’re now ready to use the control filter. Open it, select or unselect one or more values, and watch the visuals change to reflect your selection.

### Total Cost of Ownership (TCO) for K8s Applications


Organizations run applications on K8s, and those applications may be using other AWS resources. When allocating costs to the application using labels, you may want to also include the cost of other AWS services that the application uses. Assuming the AWS resources are single-tenant (meaning they’re used only by the application in question), you can achieve TCO by consistently labeling your pods and tagging the AWS resources with the same label/tag key-value pair.

Let’s first go through an example using the `app.kubernetes.io/name` label, which is already included in the Athena view and dashboard. First, let’s see how the pods and AWS resources are consistently labeled and tagged. Let’s start with the application pod. Here’s a truncated output of the `kubectl describe pod` command for the application pod in question:

```
kubectl describe pod/kubecost-eks-cost-analyzer-xxx-xxx -n kubecost-eks
Name:             kubecost-eks-cost-analyzer-xxx-xxx
Namespace:        kubecost-eks
Priority:         0
Service Account:  kubecost-eks-cost-analyzer
Node:             ip-192-xxx-xxx-xxx.ec2.internal/192.xxx.xxx.xxx
Start Time:       Sat, 21 Jun 2025 19:57:54 +0300
Labels:           app=cost-analyzer
                  app.kubernetes.io/instance=kubecost-eks
                  app.kubernetes.io/name=cost-analyzer
                  pod-template-hash=xxx
Annotations:      checksum/configs: xxxx
```

We can see the pod is labeled with the label key `app.kubernetes.io/name` having label value `cost-analyzer`. Let’s now see one example of an AWS resource that this application uses, and see how it’s tagged. Here’s an AWS Secrets Manager secret that this application is using:

![\[SCAD - Containers Cost Allocation Dashboard - AWS Secrets Manager Secret Tags\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_aws_secrets_manager_secret_tags.png)


You can see that it has a tag with the same tag key-value pair as the pod label (tag key `app.kubernetes.io/name` with value `cost-analyzer`). K8s pods labels and AWS resource tags are both reflected as cost allocation tags. If you have a label and a tag with the same key, they’ll be reflected as a single cost allocation tag. This means that consistent labeling and tagging of your pods and AWS resources allows you to easily sum the split cost of your pods and the amortized/unblended cost of their associated AWS resources.
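As a toy illustration with made-up numbers (not the dashboard’s actual query): once pods and AWS resources share the same cost allocation tag value, the TCO of an application is simply a sum over that value:

```python
# Toy rows standing in for the dataset: the pod split cost and the
# amortized cost of a consistently tagged resource share one
# cost allocation tag value ("cost-analyzer").
rows = [
    {"cat_app_name": "cost-analyzer", "type": "EKS Pods",           "cost": 12.50},
    {"cat_app_name": "cost-analyzer", "type": "Other AWS Services", "cost": 3.10},
    {"cat_app_name": "other-app",     "type": "EKS Pods",           "cost": 7.00},
]

tco = sum(r["cost"] for r in rows if r["cat_app_name"] == "cost-analyzer")
print(round(tco, 2))  # 15.6
```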

Now that we saw how the pod and its associated AWS resources (an AWS Secrets Manager secret in this case) are consistently labeled and tagged, let’s see how we can use the dashboard to achieve TCO. Navigate to the "Labels/Tags Explorer" sheet. In the top "Controls" section, open the "Select Label/Tag Key" control, and select the "app.kubernetes.io/name" option. In the first visual, you can see a mapping of the different values of this label key to workload types. Here’s an example:

![\[SCAD - Containers Cost Allocation Dashboard - TCO - App to Workload Mapping\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_tco_app_resource_workload_type_mapping.png)


From this visual, we can learn that for the label key we selected, the label value `cost-analyzer` (on the left part of the visual) is applied to EKS pods (designated as "EKS Pods" on the right side of the visual) and AWS resources (designated as "Other AWS Services", also on the right side of the visual). We can also learn that other values of this label key are only applied to EKS pods. If you hover over a section of the visual, you can see the cost value. On the pie chart right next to the Sankey diagram visual, you can see the cost distribution between EKS pods and other AWS services. The Sankey diagram visual is interactive - click on the `cost-analyzer` value, and it’ll filter the other visuals in this sheet.

Let’s continue to the second Sankey diagram visual below, where we can see a mapping of the different values of the label key we selected to AWS services. Here’s how it looks after we filtered it by clicking on the `cost-analyzer` value in the previous visual:

![\[SCAD - Containers Cost Allocation Dashboard - TCO - App to AWS Service Mapping\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_tco_app_aws_services_mapping.png)


From this visual, we can learn that for the label key we selected, the label value `cost-analyzer` is using several services such as AWS Secrets Manager, AWS Glue, and more. This is possible thanks to consistent tagging. In one of the examples above, we saw an AWS Secrets Manager secret tagged with the same key-value pair as the K8s pod label. Now we can see this service is shown in the Sankey diagram visual, as being used by the pods labeled with the same label key-value pair. Here too, you can hover with your mouse to see the cost values, and you can click any part of the visual to filter other visuals.

You can continue scrolling down the sheet to see other visuals, breaking down the costs by different dimensions, allowing you to investigate the cost of your application, both from the K8s perspective and the AWS services perspective.

The same process can be repeated with your own labels. To use your own labels, repeat the same process outlined earlier in this page, to add your labels to the Athena view. Then in the dashboard, the process is the same, but in this case, the calculated fields are named `cat_selector_labels_tags_explorer` and `cat_selector_include_exclude_labels_tags_explorer`.

# Data on EKS - Cost Allocation for Spark and Flink Applications Running on EKS

## Introduction


Organizations that process big data and run Spark and Flink applications, often choose to run those applications on Kubernetes (K8s), specifically EKS in this context, leveraging the scheduling flexibility, autoscaling, scalability, and other advantages that come with running applications on K8s.

However, these advantages come with tradeoffs, such as allocating costs to your Spark and Flink applications, which becomes more challenging. The "Data on EKS" sheet in this dashboard provides a pre-built solution for this challenge. It uses K8s labels that are automatically applied to Spark and Flink application pods upon submission, to allocate costs to those applications. This solution applies to Spark and Flink applications running either directly on EKS or on EMR on EKS. In this guide, we’ll explore how this dashboard can be used to allocate costs to Spark and Flink applications.

When running Spark or Flink applications on EKS or on EMR on EKS, some labels are automatically applied to the pods running those applications, and they can be used to identify Spark or Flink applications, or other constructs related to the applications, for the purpose of cost allocation. You don’t need to apply the labels yourself to the pods (as they’re already applied when you submit the job), and you don’t need to add them to the Athena view and dashboard, as opposed to your own custom labels. These labels are already included in the Athena view and in the respective dashboard visuals.

## Prerequisites


Apart from the general dashboard prerequisites, below are additional prerequisites for allocating costs to Spark and Flink applications running on EKS or EMR on EKS.

### Activate Cost Allocation Tags


As mentioned above, there’s no need to label the pods or add labels to the Athena view. All that is required is to activate the cost allocation tags representing the labels. The following K8s pods labels cost allocation tags should be activated (some of them might not be available as cost allocation tags, depending on which framework you’re using and how you submit the jobs):

 `spark-app-selector`, `spark-app-name`, `spark-exec-id`, `spark-exec-resourceprofile-id`, `spark-role`, `spark-version`, `sparkoperator.k8s.io/launched-by-spark-operator`, `sparkoperator.k8s.io/submission-id`, `created-by`, `spark-app-tag`, `emr-containers.amazonaws.com/virtual-cluster-id`, `emr-containers.amazonaws.com/job.id`, `eks-subscription.amazonaws.com/emr.internal.id`, `emr-containers.amazonaws.com/resource.type`, `emr-containers.amazonaws.com/component`, `type`, `app`, `component` 

Here’s a breakdown of the labels cost allocation tags you need to activate, for each use-case (in case you want to selectively activate only the labels cost allocation tags you need):
+ If you’re running Spark applications (regardless of whether they’re running directly on EKS or on EMR on EKS, and regardless of how you submit them), activate the following labels: `spark-app-selector`, `spark-app-name`, `spark-exec-id`, `spark-exec-resourceprofile-id`, `spark-role`, `spark-version` 
+ If you’re running Spark applications (regardless of whether they’re running directly on EKS or on EMR on EKS) and are submitting them using Spark Operator, activate all the cost allocation tags from the first bullet, and also: `sparkoperator.k8s.io/launched-by-spark-operator`, `sparkoperator.k8s.io/submission-id` 
+ If you’re running Spark applications (regardless of whether they’re running directly on EKS or on EMR on EKS) and are submitting them using Apache Livy, activate all the cost allocation tags from the first bullet, and also: `created-by`, `spark-app-tag` 
+ If you’re running Spark applications (regardless of whether they’re running directly on EKS or on EMR on EKS) and are submitting them using Spark Submit, activate all the cost allocation tags from the first bullet
+ If you’re using EMR on EKS (regardless of which framework, Spark or Flink, you’re using, and regardless of how you’re submitting them), activate all the cost allocation tags from the first bullet and from other bullets based on submission method, and also: `emr-containers.amazonaws.com/virtual-cluster-id`, `emr-containers.amazonaws.com/job.id`, `eks-subscription.amazonaws.com/emr.internal.id`, `emr-containers.amazonaws.com/resource.type`, `emr-containers.amazonaws.com/component` 
+ If you’re running Flink applications (regardless of whether they’re running directly on EKS or on EMR on EKS, and regardless of how you submit them), activate the following labels: `type`, `app`, `component` 
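The bullets above can be encoded as a small lookup. Here’s an illustrative Python sketch (the function and variable names are ours) for assembling the set of label cost allocation tags to activate for a given setup:

```python
# Assemble the label cost allocation tags to activate for a given
# framework/submission-method combination, following the bullets above.
BASE_SPARK = ["spark-app-selector", "spark-app-name", "spark-exec-id",
              "spark-exec-resourceprofile-id", "spark-role", "spark-version"]
BY_SUBMISSION = {
    "spark-operator": ["sparkoperator.k8s.io/launched-by-spark-operator",
                       "sparkoperator.k8s.io/submission-id"],
    "livy": ["created-by", "spark-app-tag"],
    "spark-submit": [],  # base Spark labels only
}
EMR_ON_EKS = ["emr-containers.amazonaws.com/virtual-cluster-id",
              "emr-containers.amazonaws.com/job.id",
              "eks-subscription.amazonaws.com/emr.internal.id",
              "emr-containers.amazonaws.com/resource.type",
              "emr-containers.amazonaws.com/component"]
FLINK = ["type", "app", "component"]

def tags_to_activate(framework, submission="spark-submit", emr_on_eks=False):
    tags = []
    if framework == "spark":
        tags += BASE_SPARK + BY_SUBMISSION[submission]
    elif framework == "flink":
        tags += FLINK
    if emr_on_eks:
        tags += EMR_ON_EKS
    return tags

# Example: Spark submitted via Spark Operator, running on EMR on EKS
print(len(tags_to_activate("spark", "spark-operator", emr_on_eks=True)))  # 13
```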

## How to Allocate Costs to Spark and Flink Applications


Here we’ll explore use-cases and examples for cost allocation of Spark and Flink applications.

### Allocating Cost to a Spark Application Running on EMR on EKS


Let’s see an example of a Spark Job submitted using `StartJobRun` on EMR on EKS, and work backwards from the EMR on EKS console to the dashboard. Working backwards from the EMR on EKS console is meant to show how you can use the information on the original native console to identify the cost of your Spark application. This can help you shift cost management left, to the developers, data engineers, DevOps engineers, or other teams using these clusters.

Navigate to the EMR on EKS console, and view the list of virtual clusters. In this case, we have one virtual cluster, as shown below:

![\[SCAD - Containers Cost Allocation Dashboard - EMR on EKS Virtual Clusters\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_emr_on_eks_virtual_clusters.png)


We’ll click on the Virtual cluster ID, which will redirect us to the list of EMR on EKS jobs that were (or are) running on this virtual cluster:

![\[SCAD - Containers Cost Allocation Dashboard - EMR on EKS Jobs\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_emr_on_eks_jobs.png)


Since new data in CUR isn’t updated in real time, we’ll choose a job id that was running around 2 days before the time this guide was written. We’ll use job id `000000036gr7qbcelvv`. Copy the job id, then navigate to the "Data on EKS" sheet in the SCAD dashboard, and in it, to the "Data on EKS Workloads Explorer - Interactive Spark/Flink Jobs Visuals" section. This section provides a set of interactive visuals to easily drill down into your Spark and Flink applications costs:

![\[SCAD - Containers Cost Allocation Dashboard - Data on EKS Workloads Explorer Part 1\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_data_on_eks_workloads_explorer_part1.png)


![\[SCAD - Containers Cost Allocation Dashboard - Data on EKS Workloads Explorer Part 2\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_data_on_eks_workloads_explorer_part2.png)


Open the "EMR on EKS Job ID" filter control above the stacked-bar chart, and paste the job id you copied into it, to filter the visuals based on this job id. Once done, all visuals on the sheet will be filtered. The two visuals in the "Data on EKS Workloads Explorer - Interactive Spark/Flink Jobs Visuals" section (the stacked-bar chart and pivot table) are grouped by cluster name by default, so they’ll still show the cluster name, but will only show the cost of job id `000000036gr7qbcelvv`. You can then group by another dimension which may be interesting to you, but in this example, we’ll scroll down to the "Data on EKS Breakdown" section in the same sheet, to view more details on the job in question:

![\[SCAD - Containers Cost Allocation Dashboard - EMR on EKS Cost Dimensions\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/data_on_eks_cost_dimensions_visual.png)


The first visual in this section shows the different cost dimensions which are relevant when running jobs on EMR on EKS (the EMR on EKS service cost and the EKS pods split cost). This gives you additional visibility into what you’re charged for, and is relevant only when running jobs on EMR on EKS (EMR on EKS service cost isn’t applicable when running Spark/Flink applications directly on EKS). In the specific screenshot above, the visual is unfiltered (meaning, before applying the filter mentioned above), as the visual is more informative this way in the specific environment used for this demonstration (due to the jobs being short-running ones). If you use the filters next to the "Data on EKS Workloads Explorer" visuals above, it’ll show only the cost based on the filters. Further down, we can find a pivot table which breaks down the Spark job cost by several relevant dimensions, as shown below:

![\[SCAD - Containers Cost Allocation Dashboard - Spark Job Cost Breakdown\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_spark_job_cost_breakdown.png)


In this screenshot, the data is shown after applying the EMR on EKS Job ID filter mentioned above. This pivot table is very useful if you want to drill down into more details of the Spark job components and additional information. For example, here you can find the Spark app version, Spark app name, Spark app id, and even the ID of each executor, along with the pod name and pod UID. A similar pivot table is available further down, breaking down the cost of Flink jobs by different relevant dimensions.

Let’s now go back to the EMR on EKS console, and click on the "Spark UI" link on the right-most part of the line representing the job we chose, to open the Spark History Server console. On the landing page, we can see general information on the Spark job in question:

![\[SCAD - Containers Cost Allocation Dashboard - Spark History Server Landing Page\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_spark_history_server_landing_page.png)


Check the "Version", "App ID" and "App Name" columns in the Spark History Server console. They exactly correlate to the equivalent columns in the Spark jobs cost breakdown pivot table in the dashboard ("Spark App Version", "Spark App ID", and "Spark App Name", respectively). On the Spark History Server console, click on the link of the app id (below the "App ID" column). You’ll land on a page which shows more details on the Spark application in question. Then, click on the "Executors" menu on the top, which will show more details on the executors that were running as part of this Spark application:

![\[SCAD - Containers Cost Allocation Dashboard - Spark History Server Executors Page\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_spark_history_server_executors.png)


Now go back to the dashboard, to the Spark jobs cost breakdown pivot table. You can see exactly the same executor IDs (1 and 2) under the "Spark Executor ID" column. This is helpful if you want to drill down into specific components of your Spark applications, for example if one executor took more time to run, you may want to know how much it costs. You can also see the pod name and UID of each executor, and the driver.

To summarize, this example shows how you can work backwards from the native console of the application (in this case, the EMR on EKS console) to the dashboard, based on a specific application-run, to see how much it costs. This can help you shift your FinOps practice left, towards your developers, data engineers, DevOps engineers, or other teams who use these clusters, as they can work with the dashboard using the same terminology they’re used to when working with the native console of the application. It works the same way when you work backwards from the Spark History Server.

### Drilling Down to Spark Applications from Top-Level Dimensions


In the previous example, we worked backwards from a specific job-run, starting from the native application console. In this example, we’ll do it the other way around - starting with the dashboard, and working top-down from a high-level construct.

The first two Sankey visuals on the "Data on EKS" sheet, in the "General Overview" section, map EKS cluster ARNs to EMR on EKS virtual cluster IDs and to job submission methods:

![\[SCAD - Containers Cost Allocation Dashboard - Sankey Visuals\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/scad_data_on_eks_sankey_visuals.png)


You can use this information to see the spend of each high-level component, and then continue to the "Data on EKS Workloads Explorer - Interactive Spark/Flink Jobs Visuals" section to drill down further. The visuals in this section are grouped by cluster name by default. If you’re interested in investigating the costs of your Spark applications, you may want to start drilling down from this level. For example, you can take the highest-spending cluster, and use the "Cluster Name" filter (on the top part of the "Data on EKS" sheet) to filter the visuals based on it. Then, you can open the "Group By" control and select "Amazon EKS: Namespace" to group the visuals by namespace (only for the cluster that was selected in the filter). You can continue in this way, for example from namespace to "Spark App ID", and at this point, you can use the "Top Allocations" control to list the top 10 applications (for example). From here, the interactive nature of the visual is useful: you can click on any line in the pivot table, and Quick Sight will filter the rest of the visuals in the sheet. The same approach as before then applies - you can correlate the data seen in the visuals with the native console of your application (whether it’s the EMR on EKS console or the Spark History Server console).

# Amazon Connect Cost Insight Dashboard

## Introduction


The Amazon Connect Cost Insight Dashboard leverages AWS Cost and Usage Report data to provide visualizations that help optimize cloud spending and enhance operational efficiency within your [Amazon Connect contact center](https://aws.amazon.com/pm/connect) infrastructure.

![\[Image of Amazon Connect Cost Insight Dashboard architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/CID_Connect_archi.png)


The Amazon Connect Cost Insight Dashboard is organized into 7 intuitive tabs:

1.  **Overview** A high-level summary of Amazon Connect and Contact Center Telecom charges.

1.  **Contact Center Analysis** Focus on cost and usage metrics exclusively for accounts running Amazon Connect and associated contact center services, enabling targeted monitoring of contact center operations.

1.  **Connect** Detailed view of Amazon Connect Voice service usage and costs.

1.  **Telecom Spend** Breakdown of contact center Telecommunications costs by number types and countries.

1.  **Daily Usage** 30-day trending data for costs and usage patterns with drill downs to inbound/outbound minutes and phone numbers usage.

1.  **Call Details** Key metrics about call patterns, durations, and regional distribution.

1.  **Contact Search** Detailed analysis of individual contacts and their characteristics. You can focus on a particular contact and see detailed information.

Each tab progressively moves from broad insights to specific details, helping you effectively monitor your contact center operations.

## Demo Dashboard


Get more familiar with the dashboard using the live, interactive demo dashboard by following this [link](https://cid.workshops.aws.dev/demo?dashboard=amazon-connect-cost-insight-dashboard) 

![\[Image of Amazon Connect Cost Insight Dashboard in Quick Sight\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/Amazon_Connect_dash.png)


## Prerequisites


1. Deploy one or more of the foundational dashboards: [CUDOS, Cost Intelligence, or KPI Dashboard.](cudos-cid-kpi.md) 

## Deployment


**Example**  
 **Prerequisite**: To install this dashboard using CloudFormation, you need to deploy the Foundational Dashboards CloudFormation stack, version v4.0.0 or above, as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard) 

1. Log in to your **Data Collection** Account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in your CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Amazon-Connect-Cost-Insight-Dashboard&param_DashboardId=amazon-connect-cost-insight-dashboard](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Amazon-Connect-Cost-Insight-Dashboard&param_DashboardId=amazon-connect-cost-insight-dashboard) 

1. You can change **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE_IN_PROGRESS** status. Once complete, the stack will show **CREATE_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed prerequisite steps.
An alternative method to install dashboards is the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool.  

1. Log in to your **Data Collection** Account.

1. Open up a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id amazon-connect-cost-insight-dashboard
   ```

   Please follow the instructions from the deployment wizard. More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

## Update


Please note that dashboards are not updated automatically when the CloudFormation stack is updated. When a new version of the dashboard template is released, you can update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id amazon-connect-cost-insight-dashboard
```

## Dashboard Customization


1. Unleash your data creativity! Dive into custom analysis by creating your own visuals from this dashboard. Follow our quick [guide](create-analysis.md) to get started.

1. To integrate CID with AWS Organizations for enhanced cost visibility across multiple accounts and organizational units follow this [documentation](add-org-taxonomy.md) 

1. To replace Amazon Connect instance IDs with more readable custom labels in your dashboard, see the following section: [link](#replace-connect-instance-id-with-custom-names)

1. To set up granular billing for a detailed view of your Amazon Connect usage follow this [documentation](https://docs.aws.amazon.com/connect/latest/adminguide/granular-billing.html) 

### Replacing Connect Instance IDs with Custom Names in Amazon Connect Dashboard


This process allows you to replace Amazon Connect instance IDs with more readable custom labels in your dashboard. This is a one-time setup that needs to be done after dashboard deployment.

#### Expand for the steps to follow


 **Steps** 

1. Create an Analysis. Refer to [How do I edit or customize the dashboards](faq.md#faq-how-do-i-edit-or-customize-the-dashboards) 

1. Edit the Calculated Field: Under Data >> Dataset 'resource_connect_view' edit the **#connect_instance_name** field  
![\[Connect Instance Name\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/connect/instance_name.png)

You’ll find an example that you can uncomment to provide your instance ID and preferred label:

```
ifelse (
  {#connect_instance_id}="bb83be25-8c15-4696-a583-5dejlk12","EuropeProd",
//   {#connect_instance_id}="<instance_id>","<instance_name2>",
//   {#connect_instance_id}="<instance_id>","<instance_name3>",
   contains({usage_type},'-numbers'), 'phone numbers'
   ,
   {#connect_instance_id}
    )
```

Save the calculated field and verify the changes in the Overview tab’s verification table (bottom right).

![\[Connect Instance Label\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/connect/instance_label.png)


1. Publish your Analysis as Dashboard.

 **Notes** 
+ Each instance ID mapping should follow the format: `{#connect_instance_id}="instance-id","custom-name"`
+ Maintain the default 'phone numbers' handling and fallback options
+ Multiple instances can be added by repeating the mapping line
+ Remember to include the comma between each condition

## Authors

+ Alex Yankovskyy, Solutions Architect
+ Baraa Elkosh, Sr Technical Account Manager
+ Mariia Poliakh, Technical Account Manager

## Feedback & Support


Follow [Feedback & Support](feedback-support.md) guide

Have a success story to share with the team, want to suggest an improvement, or need to report an error?
+ Please email: [cloud-intelligence-dashboards-amazon-connect@amazon.com](mailto:cloud-intelligence-dashboards-amazon-connect@amazon.com) 

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# AWS Config Resource Compliance Dashboard
AWS Config Resource Compliance Dashboard

## Introduction


AWS Config is a fully managed service that provides you with resource inventory, configuration history, and configuration change notifications for security and governance.

The Amazon Web Services (AWS) Config Resource Compliance Dashboard (CRCD) shows the inventory of your AWS resources, along with their compliance status, across multiple AWS accounts and Regions by leveraging your AWS Config data.

[![AWS Videos](http://img.youtube.com/vi/709-y8q81Q8/0.jpg)](https://www.youtube.com/watch?v=709-y8q81Q8)


## Links


### Demo Dashboard


Get more familiar with the dashboard using the live, interactive demo dashboard following this [link](https://cid.workshops.aws.dev/demo/?dashboard=cid-crcd).

### GitHub Project


See the source code and the changelog at our GitHub [project](https://github.com/aws-samples/config-resource-compliance-dashboard).

## Advantages


### Compliance tracking


Track compliance of your AWS Config rules and conformance packs per service, AWS Region, account, and resource. Identify resources that require compliance remediation and establish a process for continuous compliance review. Verify that your tagging strategy is consistently applied across accounts and Regions.

### Democratize security and compliance visibility


The AWS Config Dashboard helps security teams establish a compliance practice and offers visibility over security compliance to field teams, without them needing access to the AWS Config service or dedicated security tooling accounts.

### Shift-left security and compliance practices


Field teams will see their non-compliant resources as quickly as security teams. This creates a short feedback loop that helps keep non-compliant resources to a minimum and helps organizations establish a consistent compliance review process with a shorter path to get to green compliance.

### A simplified Configuration Management Database (CMDB) experience in AWS


Avoid investment in a dedicated external CMDB system or third-party tools. Access the inventory of resources in a single pane of glass, without accessing the AWS Management Console on each account and Region. Filter resources by account, Region, and fields that are specific to the resource, such as IP address. If you tag your resources consistently, for example to map them to an application, owning team, and environment, specify those tags to the dashboard and they will be displayed alongside other resource-specific information and used for filtering your configuration items. Manage and plan the upgrade of Amazon RDS DB engines and AWS Lambda runtimes.

## Dashboard features


### AWS Config compliance

+ At-a-glance status of compliant and non-compliant resources and AWS Config rules.
+ Month-by-month compliance trend for resources and AWS Config rules.
+ Compliance breakdown by service, account, and Region.
+ Compliance tracking for AWS Config rules and conformance packs.

### Resource inventory management


![\[CRCD Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/dashboards/crcd-ec2-inventory.png)


Inventory of Amazon EC2, Amazon EBS, Amazon S3, Amazon Relational Database Service (RDS) and AWS Lambda resources with filtering on account, Region and resource-specific fields (e.g. IP addresses for EC2, Lambda runtime, RDS database engine). Furthermore, the dashboard supports filtering of these resources by the custom tags that you use to categorize workloads and resources, such as Application, Owner and Environment. The name of the tags will be provided by you during installation.

#### Resource inventory and EC2 Availability Zone dashboards


Graphs that report summarized insights about resource configuration data, including detailed information about EC2 and EBS. Evaluate your resilience to AZ-level events by checking the distribution of your EC2 instances across Availability Zones.

### Tag compliance


Visualize the results of AWS Config Managed Rule [required-tags](https://docs.aws.amazon.com/config/latest/developerguide/required-tags.html). You can deploy this rule to find resources in your accounts that were not launched with your desired tag configurations by specifying which resource types should have tags and the expected value for each tag. The rule can be deployed multiple times in AWS Config. To display data on the dashboard, the rules must have a name that starts with `required-tags`, `required-tag`, `requiredtags` or `requiredtag` (this is case insensitive).
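The prefix matching described above can be sketched as a small predicate. This is an illustrative Python sketch of the documented naming rule, not the dashboard's actual code:

```python
# Illustrative sketch of the rule-name matching described above; the
# dashboard applies this prefix check internally when selecting rules.
ACCEPTED_PREFIXES = ("required-tags", "required-tag", "requiredtags", "requiredtag")

def is_tag_compliance_rule(rule_name: str) -> bool:
    """Return True if an AWS Config rule name qualifies for the
    Tag Compliance visuals (case-insensitive prefix match)."""
    return rule_name.lower().startswith(ACCEPTED_PREFIXES)
```

For example, a rule named `Required-Tags-EC2` qualifies, while one named `tagging-required` would not be picked up.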

![\[CRCD Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/dashboards/crcd-tag-compliance-summary.png)


### Contributors to AWS Config costs


AWS Config cost is driven by the number of rule evaluations and configuration item changes being recorded. AWS Config costs are complex, and calculating them precisely is outside the scope of this dashboard. To help you analyze AWS Config cost contributors and reduce operational costs while maintaining robust security and compliance monitoring, the dashboard reports the number of configuration item changes recorded and the number of AWS Config rule evaluations over time. The dashboard also covers other use cases that contribute to unnecessary AWS Config costs:
+  **Conformance pack rules that cannot be evaluated.** Conformance pack rules that have a compliance status of INSUFFICIENT_DATA do not have AWS resources in scope. Since you are charged for each rule evaluation regardless of the outcome, rules that return INSUFFICIENT_DATA still incur costs without delivering any compliance information.
+  **Redundant AWS Config rules.** While AWS Config provides multiple deployment options—including individual rules, conformance packs, Security Hub standards, and AWS Control Tower controls—many customers inadvertently implement duplicate rules across these services. This duplication leads to significant disadvantages: unnecessary costs from redundant evaluations, governance complexity that complicates compliance management, and potentially inconsistent remediation actions for the same compliance issues. To optimize compliance efforts and reduce costs, organizations should develop a strategic approach that eliminates rule duplication across their AWS environments. The dashboard will help you identify the rules that are deployed multiple times.

![\[CRCD Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/dashboards/crcd-cost-drivers.png)


### Configuration Item events


The AWS Config Dashboard shows the timeline of your configuration changes. Find which resources were recently created, updated or deleted and see which accounts and Regions are delivering AWS Config data. Visualize the latest data imported into the dashboard and confirm that you are receiving data from all accounts and Regions.

![\[CRCD Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/dashboards/crcd-ci-events.png)


## Steps


There are two possible ways to deploy the AWS Config dashboard on AWS Organizations. Read the [Prerequisites](config-resource-prerequisites.md) page to understand which deployment setup is better for you. If you install the dashboard on a standalone account that is not part of an AWS Organization, follow the installation instructions for the Log Archive account.
+  [Prerequisites](config-resource-prerequisites.md) 
+  [Deployment: Log Archive account](config-resource-log-archive.md) 
+  [Deployment: Dashboard account](config-resource-dashboard-account.md) 
+  [Optional post-deployment activities and FAQ](config-resource-post-deployment.md) 
+  [Teardown](config-resource-teardown.md) 

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

## Update instructions


If you have already installed the AWS Config Dashboard, you can check our [GitHub repository upgrade page](https://github.com/aws-samples/config-resource-compliance-dashboard/blob/main/documentation/upgrade.md) to see if there are instructions on how to upgrade to the latest version.

## Authors

+ Luca Casarini, Senior Technical Account Manager, AWS

## Contributors

+ Iakov Gan, Ex-Amazonian

## Feedback Support


Follow [Feedback & Support](feedback-support.md) guide.

# Prerequisites
Prerequisites

## Architecture


The AWS Config Resource Compliance Dashboard (CRCD) solution can be deployed in standalone AWS accounts or AWS accounts that are members of an AWS Organization.

You can deploy the dashboard in a standalone account with [AWS Config enabled](https://docs.aws.amazon.com/config/latest/developerguide/getting-started.html). This option may be useful for proof of concept or testing purposes. In this case, all dashboard resources are deployed within the same AWS account.

If you use AWS Organizations, AWS Config must be enabled with an [AWS Config delivery channel](https://docs.aws.amazon.com/config/latest/developerguide/manage-delivery-channel.html) sending files to a centralized Amazon S3 bucket (which we will call the Log Archive bucket) in a dedicated account (which we will call the Log Archive account). In this case, there are two possible ways to deploy the CRCD dashboard.

1.  **Deploy in the Log Archive account** You can deploy the dashboard resources in the same Log Archive account where your AWS Config configuration files are delivered. The architecture in this case looks like this:

![\[CRCD Dashboard: deployment on AWS Organization\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/dashboards/crcd-architecture-log-archive-account.png)


1.  **Deploy in a separate Dashboard account** Alternatively, you can create a separate Dashboard account to deploy the dashboard resources. In this case, objects from the Log Archive bucket in the Log Archive account are replicated to another bucket in the Dashboard account.

![\[CRCD Dashboard: deployment on AWS Organization\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/dashboards/crcd-architecture-dashboard-account.png)


An Amazon Athena table is used to extract data from the AWS Config configuration files delivered to Amazon S3. Whenever a new object is added to the bucket, the Lambda Partitioner function is triggered. This function checks if the object is an AWS Config configuration update. If it is, the function adds a new partition to the corresponding Athena table with the new data; otherwise, the function ignores it. The solution provides Athena views, which are SQL queries that extract data from Amazon S3 using the schema defined in the Athena table. Finally, you can visualize the data in a Quick Sight dashboard that uses these views through Amazon Quick Sight datasets.
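As a rough illustration of that filtering step, the sketch below classifies S3 object keys using the standard AWS Config delivery layout (`AWSLogs/<account-id>/Config/<region>/<yyyy>/<m>/<d>/...`). It is a hypothetical example, not the actual Lambda Partitioner code, and the pattern would need adapting if your delivery channel uses an S3 key prefix:

```python
import re

# Illustrative sketch of the check the Lambda Partitioner function performs:
# only AWS Config configuration files should trigger a new Athena partition.
# Assumes the standard AWS Config delivery key layout; adjust if your
# delivery channel adds an S3 key prefix.
CONFIG_KEY_RE = re.compile(
    r"AWSLogs/(?P<account>\d{12})/Config/(?P<region>[a-z0-9-]+)/"
    r"(?P<year>\d{4})/(?P<month>\d{1,2})/(?P<day>\d{1,2})/"
    r"(ConfigSnapshot|ConfigHistory)/"
)

def classify_config_object(s3_key: str):
    """Return (account, region, date) for a Config delivery object, or None."""
    m = CONFIG_KEY_RE.search(s3_key)
    if not m:
        return None  # not an AWS Config file: ignore the event
    return m["account"], m["region"], f'{m["year"]}-{m["month"]}-{m["day"]}'
```

A CloudTrail object under the same `AWSLogs/` prefix, for example, would return `None` and be skipped.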

### Log Archive bucket encrypted with an AWS Key Management Service (KMS) key


The deployment process supports Log Archive buckets encrypted using a customer-managed KMS key (SSE-KMS).

In case of Log Archive account deployment:
+ Amazon Quick Sight will be granted permissions to use the KMS key for decrypt operations. This is done with an IAM policy. If you prefer, you can manually grant this permission directly on the key policy. See below for instructions.

In case of Dashboard account deployment:
+ S3 replication must occur between buckets with the same type of encryption.
+ The Dashboard bucket will be encrypted with a KMS key which is created by the AWS CloudFormation template.
+ The S3 replication policy will have permissions to use the KMS keys of both buckets.

**Note**  
If your Log Archive bucket is SSE-KMS encrypted, and you do not provide the ARN of the corresponding KMS key in the CloudFormation parameters, the dashboard resources will not have the necessary permissions to function correctly.

## Prerequisites


1. AWS Config enabled in the accounts and AWS Regions you want to track, with an AWS Config delivery channel sending files to a centralized Amazon S3 bucket (which we will call the Log Archive bucket) in a dedicated account (which we will call the Log Archive account).
   + We recommend that your AWS Config delivery channel delivers AWS Config configuration snapshot files every 24 hours for all accounts and Regions where AWS Config is active (see below for more information).

1. An AWS account where you’ll deploy the dashboard.

1. An IAM Role or IAM User with permissions to deploy the infrastructure using CloudFormation.

1. Sign up for [Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/signing-up.html) and create a user:

   1. Select **Enterprise** edition.

   1. For the **Get Paginated Reports add-on**, choose the option you prefer (this is not required for deploying the CRCD dashboard).

   1.  **Use IAM federated identities and Quick Sight-managed users**.

   1. Select the Region where to deploy the dashboard. We recommend using the same Region as your Amazon S3 bucket.

   1. Add a username and an e-mail where you’ll receive notifications about failed Quick Sight dataset updates.

   1. Use the **Quick Sight-managed role (default)**.

   1. Don’t modify the **Allow access and autodiscovery for these resources** section and click **Finish**.

1. Ensure you have SPICE capacity available in the Region where you’re deploying the dashboard.

### Account Names


If you deployed other CUDOS dashboards, the dashboard will display account names.

## Before you start


### AWS Config considerations


 *Skip this paragraph if you have AWS Config enabled.* 

The solution leverages AWS Config data to build the visualizations on the dashboard. If you **do not** have AWS Config enabled, we strongly recommend building your strategy first:
+ Decide which accounts, Regions, and resources to monitor.
+ Define what "compliance" means to your organization, i.e. which [AWS Config rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) or [conformance packs](https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html) to activate.
+ Identify the account that will be delegated admin for AWS Config.
+ Keep in mind the paragraphs below when enabling AWS Config.

**Note**  
Only when the AWS Config setup matches your needs should you consider deploying this dashboard.

### AWS Config delivery channel considerations


The AWS Config delivery channel is a crucial component for managing and controlling where configuration updates are sent. It consists of an Amazon S3 bucket and an optional Amazon SNS topic, which is not needed by the AWS Config dashboard. The S3 bucket is used to store AWS Config configuration history and configuration snapshots files, while the SNS topic can be used for streaming configuration changes. A delivery channel is required to use AWS Config and is limited to one per Region per AWS account. When setting up a delivery channel, you can specify the name, the S3 bucket for file delivery, and the frequency of configuration snapshot delivery.

A configuration **snapshot** provides a comprehensive view of all currently active recorded configuration items within a customer’s AWS account. In contrast, AWS Config automatically delivers a configuration **history** file to the S3 bucket every 6 hours. This file contains changes detected for each resource type since the last history file was delivered. Check this [blog post](https://aws.amazon.com/blogs/mt/configuration-history-configuration-snapshot-files-aws-config/) for more information on the difference between AWS Config configuration history and configuration snapshot files.

The dashboard does not support [oversized configuration item change notifications](https://docs.aws.amazon.com/config/latest/developerguide/oversized-notification-example.html).

To check your AWS Config delivery channel setup, you can use the AWS CLI command:

```
aws configservice describe-delivery-channels
```

This command will provide information about the delivery channel configuration on the account and Region where it is launched, including the S3 bucket where configuration updates are sent and the configuration snapshot delivery properties. Ensure the configuration is consistent across all accounts and Regions you want to record. The output of the CLI command should look like this:

```
{
    "DeliveryChannels": [
        {
            "name": "[YOUR-DELIVERY-CHANNEL-NAME]",
            "s3BucketName": "[YOUR-LOG-ARCHIVE-BUCKET-NAME]",
            "s3KeyPrefix": "[OPTIONAL-S3-PREFIX-FOR-AWS-CONFIG-FILES]",
            "configSnapshotDeliveryProperties": {
                "deliveryFrequency": "TwentyFour_Hours"
            }
        }
    ]
}
```

We recommend having `configSnapshotDeliveryProperties` configured on your delivery channel with a delivery frequency of 24 hours; run the CLI command above to verify your setup.
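If you want to script that verification across many accounts, a minimal sketch that parses the JSON printed by the command above (a hypothetical helper, not part of the dashboard tooling):

```python
import json

def snapshot_delivery_ok(describe_output: str) -> bool:
    """Given the JSON printed by `aws configservice describe-delivery-channels`,
    return True if every channel delivers configuration snapshots every 24 hours."""
    channels = json.loads(describe_output).get("DeliveryChannels", [])
    if not channels:
        return False  # no delivery channel configured in this account/Region
    return all(
        ch.get("configSnapshotDeliveryProperties", {}).get("deliveryFrequency")
        == "TwentyFour_Hours"
        for ch in channels
    )
```

You could pipe the CLI output into such a check for each account and Region you record, flagging any that are missing the snapshot schedule.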

**Note**  
AWS Control Tower configures the AWS Config delivery channel with a 24-hour delivery frequency for configuration snapshot files.

#### Click here to read what to do if your delivery channel is not configured to deliver configuration snapshots


 **How to add daily delivery of configuration snapshot files to your delivery channel** 

You have to configure this on every account and Region where you have AWS Config active. We’ll give an example below of how this can be achieved with the AWS CLI, but if your environment consists of several AWS accounts and Regions, we recommend using CloudFormation StackSets to ensure a consistent configuration.

Here’s how you can use the AWS CLI to modify the existing settings and schedule the delivery of configuration snapshot files to your delivery channel configuration.

1. Log into the AWS Console in any account and Region, open AWS CloudShell.

1. Run the AWS CLI command `aws configservice describe-delivery-channels` and save the resulting JSON to a local file. Name it `deliveryChannel.json`. For example, your file may look like the one below.

```
{
  "name": "default",
  "s3BucketName": "config-bucket-123456789012",
  "snsTopicARN": "arn:aws:sns:us-east-1:123456789012:config-topic",
  "s3KeyPrefix": "my-prefix"
}
```

1. Verify the S3 bucket in `s3BucketName` is the name of your Log Archive bucket.

1. Edit the file to add the `configSnapshotDeliveryProperties` section:

```
{
  "name": "default",
  "s3BucketName": "config-bucket-123456789012",
  "snsTopicARN": "arn:aws:sns:us-east-1:123456789012:config-topic",
  "s3KeyPrefix": "my-prefix",
  "configSnapshotDeliveryProperties": {
    "deliveryFrequency": "TwentyFour_Hours"
  }
}
```
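If you prefer to script the edit instead of modifying the file by hand, a minimal sketch (assuming the `deliveryChannel.json` file name used above):

```python
import json

# Minimal sketch: add the 24-hour snapshot schedule to the delivery channel
# settings loaded from deliveryChannel.json, preserving the other fields.
def add_snapshot_schedule(channel: dict) -> dict:
    updated = dict(channel)  # leave the original dict untouched
    updated["configSnapshotDeliveryProperties"] = {
        "deliveryFrequency": "TwentyFour_Hours"
    }
    return updated

# Usage:
#   with open("deliveryChannel.json") as f:
#       channel = json.load(f)
#   with open("deliveryChannel.json", "w") as f:
#       json.dump(add_snapshot_schedule(channel), f, indent=2)
```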

You have to follow these steps consistently in every account and Region:

1. Log into the AWS Console of one account and Region, open AWS CloudShell.

1. Upload the `deliveryChannel.json` file containing the delivery channel configuration.

1. Use the `put-delivery-channel` AWS CLI [command](https://docs.aws.amazon.com/cli/latest/reference/configservice/put-delivery-channel.html) to update your delivery channel configuration according to the content of the JSON file. This command allows you to update or modify your current delivery channel settings.

```
aws configservice put-delivery-channel --delivery-channel file://deliveryChannel.json
```

Ensure this is done consistently in every account and Region.

### Regional considerations


**Note**  
Data transfer costs will be incurred when Amazon Athena queries an Amazon S3 bucket across Regions.

To avoid cross-region data transfer, Amazon Quick Sight and the Amazon S3 bucket containing AWS Config files must be deployed in the same Region.
+ If you have already deployed either resource, the other must use the same Region. If you haven’t deployed anything yet, you can choose a Region of your preference.
+ If you have deployed both resources in different Regions, we strongly recommend making changes so that both are in the same Region.
+ Once you have decided on the Region, deploy AWS resources supporting the dashboard (via CloudFormation) in the same Region.

### Tag Compliance: naming convention on the AWS Config rule


This part of the dashboard visualizes the evaluation results of AWS Config Managed Rule [required-tags](https://docs.aws.amazon.com/config/latest/developerguide/required-tags.html). You can deploy this rule to find resources in your accounts that were not launched with your desired tag configurations by specifying which resource types should have tags and the expected value for each tag. The rule can be deployed multiple times in AWS Config. To display data on the dashboard, the rules must have a name that starts with `required-tags`, `required-tag`, `requiredtags` or `requiredtag` (this is case insensitive).

### Deployment architecture


The most important decision is whether to deploy the dashboard on a dedicated Dashboard account or directly into the Log Archive account. These are the implications of each architecture.

#### Log Archive account architecture



| Pros | Cons | 
| --- | --- | 
|  Keep your logs secure in the Log Archive account.  |  Your security team must deploy and maintain the AWS Config Dashboard resources, including user access to Quick Sight. Alternatively, you have to share access to the Log Archive account with other teams that will manage these resources.  | 
|  Avoid cost for data transfer and storing data on the Dashboard account.  |  The CRCD Dashboard adds complexity in user management if you already have Quick Sight dashboards deployed in the Log Archive account.  | 
|  |  If you already have S3 object notification configured on your Config bucket, a part of the deployment process must be done manually.  | 

#### Dashboard account architecture



| Pros | Cons | 
| --- | --- | 
|  Allow your DevOps or Platform teams independence in installing and maintaining the dashboard, as well as regulating user access.  |  Your security data will be copied to another AWS account.  | 
|  A limited number of resources must be deployed on Log Archive account.  |  Control Tower default installations may collect AWS Config and AWS CloudTrail on the same bucket. This means that all your security logs will be replicated to another account.  | 
|  |  You will incur costs for the replication and storing a copy of your data on another Amazon S3 bucket. CloudTrail logs will increase those costs needlessly, as they are not used by the dashboard.  | 
|  |  If you already have S3 replication configured on your Log Archive bucket, a part of the deployment process must be done manually.  | 

### Deployment instructions

+ Follow [these instructions](config-resource-log-archive.md) to deploy the dashboard in the **Log Archive** account, or in a standalone AWS account.
+ Follow [these instructions](config-resource-dashboard-account.md) to deploy the dashboard in the **Dashboard** account.

# Deployment: Log Archive account
Deployment: Log Archive account

## Deployment Instructions


The infrastructure needed to collect and process the data is defined in AWS CloudFormation. The dashboard resources are defined in a template file that can be installed using the [CID-CMD](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md) tool.

### Deployment on standalone account


Follow the same installation instructions as for the Log Archive account.

### Deployment on Log Archive account


The installation process consists of two steps:

1. Data pipeline resources for the dashboard, via CloudFormation stack.

1. Quick Sight resources for the dashboard and the necessary Athena views, using the [CID-CMD](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md) command line tool.

![\[CRCD Dashboard: deployment steps on Log Archive account\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/dashboards/crcd-deployment-steps-log-archive-account.png)


#### Deployment Steps


**Note**  
Ensure you are in the Region where both your Log Archive bucket and Amazon Quick Sight are deployed.

##### Step 1


1. Log into the AWS Management Console for your Log Archive account.

1. Click the Launch Stack button below to open the stack template in your CloudFormation console. This Stack will create the data pipeline resources for the dashboard.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-crcd-resources.yaml&stackName=config-dashboard-resources](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-crcd-resources.yaml&stackName=config-dashboard-resources) 

1. Specify the following parameters:
   +  `Log Archive account ID` Enter the AWS account ID where you are currently logged in (Required).
   +  `Log Archive bucket` Enter the name of the Amazon S3 bucket that collects AWS Config data (Required).
   +  `ARN of the KMS key that encrypts the Log Archive bucket` Leave empty if the bucket is not encrypted with a KMS key. If you encrypt the bucket with a KMS key, copy the key’s ARN here.
     + CloudFormation will create an IAM policy that grants Amazon Quick Sight permissions to use the key for decrypt operations.
     + You may prefer managing key permissions in the key policy, rather than through IAM. In this case, leave the parameter empty. You’ll have to manually grant key permissions in the key policy (more details below).
   +  `Dashboard account ID` Enter the same value as the `Log Archive account ID` (Required).
   +  `Dashboard bucket` Enter the same value as the `Log Archive bucket` (Required).
   +  `ARN of the KMS key that encrypts the Dashboard bucket` Leave empty, this parameter is ignored in this deployment mode.
   +  `Configure S3 event notification` This will configure S3 event notifications to trigger the Partitioner Lambda function, which creates the corresponding partition on Amazon Athena when a new AWS Config file is delivered to the Log Archive bucket (Required).
     + Select `yes` to configure S3 event notifications.
     + Select `no` if you have already configured event notifications on the Log Archive bucket. You’ll have to manually configure S3 event notifications (more details below).
     + The S3 event notification configuration is performed by an ad-hoc Lambda function (called `crcd-support-configure-s3-event-notification`) that will be called by the CloudFormation template automatically.
**Note**  
The `crcd-support-configure-s3-event-notification` function will return an error (and the entire stack will fail) if you have already configured event notifications on the Log Archive bucket. In this case you must select `no` and run the stack again.
   +  `Configure cross-account replication` Leave at the default value. This parameter is ignored in this deployment mode.
   + Leave all other parameters at their default value.

1. Run the CloudFormation template.

1. Note down the output values of the CloudFormation template.

##### Click here if you need to perform parts of this installation manually


 **Manual setup of KMS key permissions** 

**Note**  
Skip this section if you do not utilize a KMS key to encrypt your Dashboard bucket, or if you specified the key ARN in the CloudFormation parameter `ARN of the KMS key that encrypts the Dashboard bucket` in Step 1.

Follow these steps to [edit](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html) the key policy and grant the Quick Sight role permissions to use the key for decrypt operations.

1. Ensure you are logged into the AWS Management Console on the Log Archive account and Region where you created the KMS key that encrypts the Log Archive bucket.

1. Open the AWS Key Management Service console and click on the KMS key.

1. Add the following statement to the key policy:

```
{
    "Sid": "CRCD Dashboard allow Quick Sight Role access",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID:role/QUICKSIGHT_DATASOURCE_ROLE"
    },
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": "*"
}
```

Where:
+  `ACCOUNT_ID` is the AWS account ID where you installed the dashboard.
+  `QUICKSIGHT_DATASOURCE_ROLE` is the value of the output `QuickSightDataSourceRole` from the CloudFormation template.
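If you prefer working from the CLI, a minimal sketch of staging this statement looks as follows. The account ID and role name below are placeholders; substitute your own values, and note that `put-key-policy` replaces the entire policy document, so the staged statement must be merged into the existing policy before applying it:

```
# Placeholder values -- substitute your account ID and the role name
# reported in the CloudFormation output.
ACCOUNT_ID="111122223333"
QS_ROLE="CidCmdQuickSightDataSourceRole"

# Stage the statement to merge into the existing key policy document.
cat > crcd-key-statement.json <<EOF
{
    "Sid": "CRCD Dashboard allow Quick Sight Role access",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::${ACCOUNT_ID}:role/${QS_ROLE}"
    },
    "Action": ["kms:Decrypt"],
    "Resource": "*"
}
EOF

# Fetch the current policy, add the statement above to its "Statement"
# array, then apply the merged document:
#   aws kms get-key-policy --key-id <KEY_ARN> --policy-name default \
#       --query Policy --output text > key-policy.json
#   aws kms put-key-policy --key-id <KEY_ARN> --policy-name default \
#       --policy file://key-policy.json
python3 -m json.tool crcd-key-statement.json > /dev/null && echo "statement OK"
```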

 **Manual setup of S3 event notification** 

**Note**  
Skip this section if you selected `yes` in CloudFormation parameter `Configure S3 event notification` in Step 1.

If you selected `no`, you must configure the Log Archive S3 bucket event notification to trigger the Lambda Partitioner function when objects are added to the bucket. CloudFormation has already deployed the necessary permissions for the Lambda function to access the Log Archive bucket. You can find the ARN of the Lambda Partitioner function in the output values of the CloudFormation template.

We recommend that you configure your event notification to an SNS topic in these cases:

1. If your bucket already publishes events notifications to an SNS topic, [subscribe](https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html#sns-trigger-console) the Lambda Partitioner function to the topic.

1. If your bucket sends event notifications to another Lambda function, change the notification to an SNS topic and [subscribe](https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html#sns-trigger-console) both the existing function and the Lambda Partitioner function to that SNS topic.

The S3 event notifications for this dashboard must meet the following requirements:

1. All object create events.

1. All prefixes.

This may be a challenge depending on your current S3 event notification setup, as Amazon S3 [cannot have](https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-filtering.html#notification-how-to-filtering-examples-invalid) overlapping prefixes in two rules for the same event type.

Follow [these instructions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-enable-disable-notification-intro.html) to add a notification configuration to your bucket using an Amazon SNS topic. Also, ensure that the Log Archive bucket is [granted permissions to publish event notification messages to your SNS topic](https://docs.aws.amazon.com/AmazonS3/latest/userguide/grant-destinations-permissions-to-s3.html).
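As a sketch of the SNS-based setup, you can stage the notification configuration as JSON and apply it with the S3 API. The bucket and topic names below are placeholders; the `aws` commands are shown as comments so you can review them before running:

```
# Placeholder names -- substitute your Log Archive bucket and SNS topic.
BUCKET="my-log-archive-bucket"
TOPIC_ARN="arn:aws:sns:eu-west-1:111122223333:crcd-config-events"

# All object-create events, no prefix or suffix filter.
cat > notification.json <<EOF
{
    "TopicConfigurations": [
        {
            "TopicArn": "${TOPIC_ARN}",
            "Events": ["s3:ObjectCreated:*"]
        }
    ]
}
EOF

# Apply the configuration, then subscribe the Lambda Partitioner function
# (its ARN is in the CloudFormation outputs) to the topic:
#   aws s3api put-bucket-notification-configuration --bucket "$BUCKET" \
#       --notification-configuration file://notification.json
#   aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol lambda \
#       --notification-endpoint <LAMBDA_PARTITIONER_ARN>
python3 -m json.tool notification.json > /dev/null && echo "notification config staged"
```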

##### Step 2


Remain logged into the AWS Management Console for your Log Archive account.

**Note**  
At this step you will specify the tags to be used to display resources in the [Resource inventory management](config-resource-compliance-dashboard.md#config-resource-compliance-dashboard-inventory-management) part of the dashboard. Use the tags that classify workloads and resources in your organization.

1. Navigate to the AWS Management Console and open [AWS CloudShell](https://console.aws.amazon.com/cloudshell). Ensure you are in the correct Region.

1. Install the latest pip package of the [CID-CMD](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md) tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. Deploy the dashboard by running the following command (replace the parameters accordingly):
   +  `--tag1` The name of the first tag you use to categorize resources.
   +  `--tag2` The name of the second tag you use to categorize resources.
   +  `--tag3` The name of the third tag you use to categorize resources.
   +  `--tag4` The name of the fourth tag you use to categorize resources.
   + Notice that tag parameters are case sensitive and cannot be empty. If you do not use a tag, pass a short default value, e.g. `--tag4 'NA'`.
   + Leave all other parameters at their default value.

```
cid-cmd deploy \
   --resources 'https://raw.githubusercontent.com/aws-samples/config-resource-compliance-dashboard/refs/heads/main/dashboard_template/cid-crcd.yaml' \
   --dashboard-id 'cid-crcd' \
   --athena-database 'cid_crcd_database' \
   --athena-workgroup 'crcd-dashboard' \
   --tag1 'REPLACE_WITH_CUSTOM_TAG_1' \
   --tag2 'REPLACE_WITH_CUSTOM_TAG_2' \
   --tag3 'REPLACE_WITH_CUSTOM_TAG_3' \
   --tag4 'REPLACE_WITH_CUSTOM_TAG_4'
```

1. The CID-CMD tool will prompt you to select a datasource: `[quicksight-datasource-id] Please choose DataSource (Select the first one if not sure):`.
   + If you have installed other CID/CUDOS dashboards, select the existing datasource `CID-CMD-Athena`.
   + Otherwise select `CID-CMD-Athena <CREATE NEW DATASOURCE>`.

1. When prompted `[quicksight-datasource-role] Please choose a Quick Sight role. It must have access to Athena:` select `CidCmdQuickSightDataSourceRole <ADD NEW ROLE>` or `CidCmdQuickSightDataSourceRole` (the second option appears by default if you have other CID/CUDOS dashboards).

1. In certain cases the installer will show updated IAM policy JSON and prompt `? [confirm-policy-AthenaAccess] Please confirm:`. Select `yes`.

1. When prompted `[timezone] Please select timezone for datasets scheduled refresh.:` select the time zone for dataset scheduled refresh in your Region (it is already preselected).

1. When prompted `Select taxonomy fields to add as dashboard filters and group by fields` select `Looks good` without adding taxonomy items. Taxonomy is not yet supported by the dashboard.

1. When prompted `[share-with-account] Share this dashboard with everyone in the account?:` select the option that works for you.

## Visualize the dashboard


1. Navigate to Quick Sight and then `Dashboards`.

1. Ensure you are in the correct Region.

1. Click on the **AWS Config Resource Compliance Dashboard (CRCD)** dashboard.

# Deployment: Dashboard account
Deployment: Dashboard account

## Deployment Instructions


The infrastructure needed to collect and process the data is defined in AWS CloudFormation. The dashboard resources are defined in a template file that can be installed using the [CID-CMD](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md) tool.

## Installation on dedicated Dashboard account


The installation process consists of three steps:

1. On the Dashboard account, deploy data pipeline resources for the dashboard using a CloudFormation stack.

1. On the Log Archive account, configure the S3 replication rule that copies AWS Config files from the Log Archive bucket to the Dashboard bucket using a CloudFormation stack.

1. On the Dashboard account, deploy Quick Sight resources for the dashboard and the necessary Athena views using the [CID-CMD](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md) command line tool.

![\[CRCD Dashboard: deployment steps on Dashboard account\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/images/dashboards/crcd-deployment-steps-dashboard-account.png)


**Note**  
The S3 replication rule configured at step 2 is valid only for new AWS Config files delivered to the Log Archive bucket, i.e. it **will not** replicate files that previously existed on the Log Archive bucket.

## Deployment Steps


**Note**  
Ensure you are in the AWS Region where both your Log Archive bucket and Amazon Quick Sight are deployed.

## Step 1 [in Dashboard account]


1. Log into the AWS Management Console for your Dashboard account.

1. Ensure you are in the same Region as the Log Archive bucket.

1. Click the Launch Stack button below to open the stack template in your CloudFormation console. This Stack will create the data pipeline resources for the dashboard.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-crcd-resources.yaml&stackName=config-dashboard-resources](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-crcd-resources.yaml&stackName=config-dashboard-resources) 

1. Specify the following parameters:
   +  `Log Archive account ID` Enter the AWS account ID of the Log Archive account. Note that this is **not** the account where you are currently logged in (Required).
   +  `Log Archive bucket` Enter the name of the Amazon S3 bucket that collects AWS Config data (Required).
   +  `ARN of the KMS key that encrypts the Log Archive bucket` If you encrypt the Log Archive bucket with a KMS key, copy the key’s ARN here.
     + If a KMS key ARN is passed here, the CloudFormation template will create a new KMS key and use it to encrypt the Dashboard bucket.
   +  `Dashboard account ID` Enter the AWS account ID where you are currently logged in (Required).
   +  `Dashboard bucket` Enter the name of the Amazon S3 bucket that will collect AWS Config data. The CloudFormation template will create this bucket on the Dashboard account (Required).
   +  `ARN of the KMS key that encrypts the Dashboard bucket` Leave empty. This parameter is ignored in this deployment mode.
   +  `Configure S3 event notification` Leave at the default value. This parameter is ignored in this deployment mode.
   +  `Configure cross-account replication` Leave at the default value. This parameter is ignored in this deployment mode.
   + Leave all other parameters at their default value.

1. Run the CloudFormation template.

1. Note down the output values of the CloudFormation template.

1. If you encrypt the Log Archive bucket with a KMS key, the template will create a KMS key to encrypt the Dashboard bucket. Note down its ARN from the output value `DashboardBucketKmsKeyArn`. You will use it at the next step.

## Step 2 [in Log Archive account]


1. Log into the AWS Management Console for your Log Archive account.

1. Click the Launch Stack button below to open the stack template in your CloudFormation console. This Stack will create the data pipeline resources for the dashboard.

 [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-crcd-resources.yaml&stackName=config-dashboard-resources](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?&templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-crcd-resources.yaml&stackName=config-dashboard-resources) 

1. Specify the following parameters:
   +  `Log Archive account ID` Enter the AWS account ID where you are currently logged in (Required).
   +  `Log Archive bucket` Enter the name of the Amazon S3 bucket that collects AWS Config data (Required).
   +  `ARN of the KMS key that encrypts the Log Archive bucket` If you encrypt the Log Archive bucket with a KMS key, copy the key’s ARN here.
   +  `Dashboard account ID` Insert the ID of the Dashboard account that you specified in this field at Step 1 (Required).
   +  `Dashboard bucket` Insert the bucket name that you specified in this field at Step 1 (Required).
   +  `ARN of the KMS key that encrypts the Dashboard bucket` This parameter is used only at this step of the Dashboard account deployment. If you encrypt the Log Archive bucket with a KMS key, insert the ARN of the KMS key created in Step 1 (it’s `DashboardBucketKmsKeyArn` on the CloudFormation Outputs).
   +  `Configure S3 event notification` Leave at the default value. This parameter is ignored in this deployment mode.
   +  `Configure cross-account replication` (Required)
     + Select `yes` to configure S3 replication from the Log Archive bucket to the Dashboard bucket.
     + Select `no` if you already have configured S3 replication rules on the Log Archive bucket. You will have to setup S3 replication manually (see below).
     + The S3 replication configuration is performed by an ad-hoc Lambda function (**Configure bucket replication** in the diagram above) that will be called by the CloudFormation template automatically.
**Note**  
If you select `yes`, and you have existing S3 replication configurations, the **Configure bucket replication** function will return an error and the entire stack will fail. In this case you must select `no` and run the stack again.
   + Leave all other parameters at their default value.

1. Run the CloudFormation template.

1. Note down the output values of the CloudFormation template.

### Click here if you need to manually set up S3 replication


 **Manual setup of S3 replication** 

1. Log onto the Log Archive account and open the Amazon S3 console.

1. You can replicate AWS Config files from the centralized Log Archive bucket to the Dashboard bucket through an Amazon S3 Replication configuration. Follow these [instructions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-walkthrough-2.html).

1. Specify the IAM role created by the CloudFormation template at Step 2, as reported in the output value `ReplicationRoleArn` of the template.

If your Log Archive bucket is SSE-KMS encrypted, the replication role already has the necessary permissions; no additional steps are needed.
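The manual replication setup can be sketched with the S3 API as follows. The bucket names are placeholders, the role ARN comes from the `ReplicationRoleArn` CloudFormation output, and the `put-bucket-replication` call is shown as a comment for review before running:

```
# Placeholder values -- substitute your buckets and the ReplicationRoleArn output.
SOURCE_BUCKET="my-log-archive-bucket"
DEST_BUCKET="my-dashboard-bucket"
ROLE_ARN="arn:aws:iam::111122223333:role/crcd-replication-role"

# Replicate every new object (empty filter) to the Dashboard bucket.
cat > replication.json <<EOF
{
    "Role": "${ROLE_ARN}",
    "Rules": [
        {
            "ID": "crcd-config-replication",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": { "Status": "Disabled" },
            "Destination": { "Bucket": "arn:aws:s3:::${DEST_BUCKET}" }
        }
    ]
}
EOF

# Apply it on the Log Archive bucket:
#   aws s3api put-bucket-replication --bucket "$SOURCE_BUCKET" \
#       --replication-configuration file://replication.json
python3 -m json.tool replication.json > /dev/null && echo "replication config staged"
```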

**Note**  
The S3 replication rule configured at step 2 is valid only for new AWS Config files delivered to the Log Archive bucket, i.e. it **will not** replicate files that previously existed on the Log Archive bucket.

## Step 3 [in Dashboard account]


Log back into the AWS Management Console for your Dashboard account.

**Note**  
At this step you will specify the tags to be used to display resources in the [Resource inventory management](config-resource-compliance-dashboard.md#config-resource-compliance-dashboard-inventory-management) part of the dashboard. Use the tags that classify workloads and resources in your organization.

1. Navigate to the AWS Management Console and open [AWS CloudShell](https://console.aws.amazon.com/cloudshell). Ensure you are in the correct Region.

1. Install the latest pip package of the [CID-CMD](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md) tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. Deploy the dashboard by running the following command (replace the parameters accordingly):
   +  `--tag1` The name of the first tag you use to categorize resources.
   +  `--tag2` The name of the second tag you use to categorize resources.
   +  `--tag3` The name of the third tag you use to categorize resources.
   +  `--tag4` The name of the fourth tag you use to categorize resources.
   + Notice that tag parameters are case sensitive and cannot be empty. If you do not use a tag, pass a short default value, e.g. `--tag4 'NA'`.
   + Leave all other parameters at their default value.

     ```
     cid-cmd deploy \
        --resources 'https://raw.githubusercontent.com/aws-samples/config-resource-compliance-dashboard/refs/heads/main/dashboard_template/cid-crcd.yaml' \
        --dashboard-id 'cid-crcd' \
        --athena-database 'cid_crcd_database' \
        --athena-workgroup 'crcd-dashboard' \
        --tag1 'REPLACE_WITH_CUSTOM_TAG_1' \
        --tag2 'REPLACE_WITH_CUSTOM_TAG_2' \
        --tag3 'REPLACE_WITH_CUSTOM_TAG_3' \
        --tag4 'REPLACE_WITH_CUSTOM_TAG_4'
     ```

1. The CID-CMD tool will prompt you to select a datasource: `[quicksight-datasource-id] Please choose DataSource (Select the first one if not sure):`.
   + If you have installed other CID/CUDOS dashboards, select the existing datasource `CID-CMD-Athena`.
   + Otherwise select `CID-CMD-Athena <CREATE NEW DATASOURCE>`.

1. When prompted `[quicksight-datasource-role] Please choose a Quick Sight role.` select `CidCmdQuickSightDataSourceRole <ADD NEW ROLE>` or `CidCmdQuickSightDataSourceRole` (the second option appears as default if you have other CID/CUDOS dashboards).

1. In certain cases the installer will show updated IAM policy JSON and prompt `? [confirm-policy-AthenaAccess] Please confirm:`. Select `yes`.

1. When prompted `[timezone] Please select timezone for datasets scheduled refresh.:` select the time zone for dataset scheduled refresh in your Region (it is already preselected).

1. When prompted `Select taxonomy fields to add as dashboard filters and group by fields` select `Looks good` without adding taxonomy items. Taxonomy is not yet supported by the dashboard.

1. When prompted `[share-with-account] Share this dashboard with everyone in the account?:` select the option that works for you.

## Visualize the dashboard


1. Navigate to Quick Sight and then `Dashboards`.

1. Ensure you are in the correct Region.

1. Click on the **AWS Config Resource Compliance Dashboard (CRCD)** dashboard.

# Optional post-deployment activities and FAQ
Optional post-deployment activities and FAQ

## Post-deployment configuration


### Lambda Partitioner function


#### Amazon CloudWatch Log Group Retention


The logs of the Partitioner function are kept for 14 days. If needed, [change the retention period](https://docs.aws.amazon.com/solutions/latest/security-insights-on-aws/change-the-cloudwatch-log-group-retention-period.html) directly on the Amazon CloudWatch console.
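The retention period can also be changed from the CLI. The log group name below is a placeholder, and the command is printed for review rather than executed (drop the `echo` to run it):

```
# Placeholder log group name -- use your Partitioner function's log group.
echo aws logs put-retention-policy \
    --log-group-name "/aws/lambda/YOUR-PARTITIONER-FUNCTION-NAME" \
    --retention-in-days 30
```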

### Quick Sight


#### Configure dataset refresh schedule


By default, the datasets for the CRCD dashboard are refreshed once a day. You can optionally configure the Refresh Schedule in Quick Sight with a different frequency:

1. Navigate to Quick Sight and then `Datasets`.

1. All the datasets for this dashboard have the prefix `config_`.

1. Click on a dataset, and then open the `Refresh` tab.

1. Click on `ADD NEW SCHEDULE`, and configure as needed.

## FAQ


### I installed the dashboard successfully, but there’s no data


If you followed our recommendations in the [prerequisites](config-resource-prerequisites.md), AWS Config delivers a configuration snapshot file every 24 hours, so you will probably start seeing data in a couple of days, depending on when the configuration snapshot files are generated and when the Quick Sight datasets are refreshed.

AWS Config generates history records approximately 6 hours after a resource is changed. These records are loaded into the dashboard faster and are visible on the **Configuration Item Events** tab.

Follow these steps to have AWS Config generate a configuration snapshot and visualize its data on the dashboard:

1. Log into the AWS Management Console of an account of your organization.

1. Open [AWS CloudShell](https://console.aws.amazon.com/cloudshell) in the AWS Region whose data you want to export.

1. Run the following command:

   ```
   aws configservice describe-delivery-channels
   ```

1. This command will provide information about your current delivery channel configuration, including the S3 bucket where configuration updates are sent and the configuration snapshot delivery properties. The output of the CLI command should look like this:

   ```
   {
        "DeliveryChannels": [
            {
                "name": "[YOUR-DELIVERY-CHANNEL-NAME]",
                "s3BucketName": "[YOUR-LOG-ARCHIVE-BUCKET-NAME]",
                "s3KeyPrefix": "[OPTIONAL-S3-PREFIX-FOR-AWS-CONFIG-FILES]",
                "configSnapshotDeliveryProperties": {
                    "deliveryFrequency": "TwentyFour_Hours"
                }
            }
        ]
    }
   ```

1. Note down the name of your delivery channel.

1. Run this command to generate an AWS Config snapshot (replace `"YOUR-DELIVERY-CHANNEL-NAME"` with the name reported above):

   ```
   aws configservice deliver-config-snapshot --delivery-channel-name "YOUR-DELIVERY-CHANNEL-NAME"
   ```

   The snapshot file will be delivered to the Log Archive bucket, optionally replicated to the Dashboard bucket, and indexed by the Lambda Partitioner function.

1. Optionally repeat these steps on other AWS accounts/Regions. We recommend doing this only for test purposes, or for rapidly checking the AWS Config data of a few accounts of your interest. AWS Config will deliver a snapshot file for all your resources within 24 hours.
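For a quick test across several Regions, the repetition above can be sketched as a loop. The Region list is a placeholder, and the loop only prints the commands so you can review them first (remove the `echo` to execute; look up each Region's delivery channel name with `describe-delivery-channels`):

```
# Placeholder Region list -- use the Regions where AWS Config is enabled.
for region in eu-west-1 us-east-1; do
    echo aws configservice deliver-config-snapshot \
        --region "$region" \
        --delivery-channel-name "YOUR-DELIVERY-CHANNEL-NAME"
done
```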

1. Open Athena and query the table (or any view) to check whether the data has been indexed. Note that some dashboard elements will still need time to visualize your data.

```
SELECT * FROM "cid_crcd_database"."cid_crcd_config" limit 10;
```
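The same check can be sketched from the CLI using the Athena workgroup created at deployment; the command is printed for review (remove the `echo` to run it):

```
# Uses the workgroup passed to cid-cmd ('crcd-dashboard'); printed for review.
echo aws athena start-query-execution \
    --work-group 'crcd-dashboard' \
    --query-string 'SELECT * FROM "cid_crcd_database"."cid_crcd_config" LIMIT 10;'
```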

1. Log onto Quick Sight and [refresh](https://docs.aws.amazon.com/quicksight/latest/user/refreshing-imported-data.html) your datasets before opening the dashboard.

# Teardown
Teardown

## Remove the AWS Config Resource Compliance Dashboard (CRCD) resources


Follow these steps to remove the dashboard.

### Step 1: all deployment architectures


1. Log into the AWS Console of the account where you deployed the dashboard. This is the AWS account ID that you specified in the `Dashboard account ID` parameter of the CloudFormation template.

1. Open AWS CloudShell in the AWS Region where the dashboard is deployed.

1. You need to use the dashboard YAML files corresponding to your [CRCD release](https://github.com/aws-samples/config-resource-compliance-dashboard/releases). Upload both `cid-crcd.yaml` and `cid-crcd-definition.yaml` from the `dashboard_template` directory to AWS CloudShell.

1. Execute the following command to delete the dashboard:

```
cid-cmd delete --resources cid-crcd.yaml
```

1. When prompted:
   + Select the `[cid-crcd] AWS Config Resource Compliance Dashboard (CRCD)` dashboard.
   + For each Quick Sight dataset, choose `yes` to delete the dataset.
   + If prompted, accept the default values for the S3 Path for the Athena table.
   + If prompted, accept the default values for the tags.

### Step 2: only for deployment on Log Archive or standalone account


**Note**  
Follow these steps if you deployed the dashboard on the Log Archive account or a standalone AWS account.

1. Log into the AWS Console of the account where you deployed the dashboard resources with CloudFormation. This is the AWS account ID that you specified both in the `Log Archive account ID` and the `Dashboard account ID` parameters of the CloudFormation template.

1. Revert any manual configuration made during setup.

1. Open the S3 console and empty the Amazon S3 bucket for the Athena Query results. The bucket name is in the CloudFormation stack output.

1. In the same account, open CloudFormation and delete the stack that installed the data pipeline resources for the dashboard.
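The last two steps can be sketched on the command line. The bucket and stack names below are placeholders (take the real bucket name from the CloudFormation stack output); the commands are printed for review rather than executed, since stack deletion fails while its buckets are non-empty:

```
# Placeholder names -- substitute the bucket name from the stack output
# and the stack name you chose at deployment.
ATHENA_RESULTS_BUCKET="cid-crcd-athena-results-111122223333"
STACK_NAME="config-dashboard-resources"

# Printed for review; remove the 'echo' to execute.
echo aws s3 rm "s3://${ATHENA_RESULTS_BUCKET}" --recursive
echo aws cloudformation delete-stack --stack-name "$STACK_NAME"
```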

### Step 2: only for deployment on dedicated Dashboard account


**Note**  
Follow these steps if you deployed the dashboard on a dedicated Dashboard account.

#### Remove resources on Log Archive account


1. Log into the AWS Console of the Log Archive account. This is the AWS account ID that you specified in the `Log Archive account ID` parameter of the CloudFormation template.

1. Revert any manual configuration made during setup.

1. Open CloudFormation and delete the stack that installed the resources for the dashboard.

#### Remove resources on Dashboard account


1. Log into the AWS Console of the account where you deployed the dashboard resources with CloudFormation. This is the AWS account ID that you specified in the `Dashboard account ID` parameter of the CloudFormation template.

1. Revert any manual configuration made during setup.

1. Open the S3 console and empty the Amazon S3 bucket for the Athena Query results. The bucket name is in the CloudFormation stack output.

1. Empty the Dashboard bucket, as well. This bucket contains a copy of the AWS Config files from the Log Archive account. The bucket name is in the CloudFormation stack output.

1. In the same account, open CloudFormation and delete the stack that installed the data pipeline resources for the dashboard.

# Sustainability Proxy Metrics and Carbon Emissions Dashboard
Sustainability Proxy Metrics and Carbon Emissions Dashboard

## Introduction


The [Sustainability Proxy Metrics](https://aws.amazon.com/blogs/aws-cloud-financial-management/measure-and-track-cloud-efficiency-with-sustainability-proxy-metrics-part-i-what-are-proxy-metrics/) and Carbon Emissions Dashboard helps customers look for opportunities to reduce their sustainability impact by making changes to their AWS infrastructure. This dashboard shows resource use in key areas defined in the Sustainability Pillar of the AWS Well-Architected Framework. It helps customers implement an impact aware architecture and acts as a starting point for customers to implement business metrics as defined in the Well-Architected Framework.

The dashboard provides Amazon Quick Sight visualizations with sustainability proxy metrics for commonly used AWS technologies. You can use these visualizations to set workload-level sustainability targets and technical resource plans to reduce resource use in your workloads. The dashboard helps you identify proxy metrics that best reflect the type of improvement you are assessing and the resources targeted for improvement, such as vCPU hours for compute resources, storage usage, and data transfer metrics. It also helps visualize carbon emission data taken from the carbon data export.

## Demo Dashboards


Get more familiar with the Sustainability Proxy Metrics and Carbon Emissions Dashboard using the [live interactive demo dashboard](https://cid.workshops.aws.dev/demo?dashboard=sustainability-proxy-metrics) :

![\[Sustainability Proxy Metrics Dashboard\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/SPMD.png)


## Prerequisites


1. Deploy one or more of the foundational dashboards: [CUDOS, Cost Intelligence, or KPI Dashboard](cudos-cid-kpi.md) as explained in the [deployment guide](deployment-in-global-regions.md). Make sure to select "yes" to include the data export creation for carbon emissions (Step 1 in the guide).

## Deployment


**Example**  
Install the dashboard using the [cid-cmd](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) tool:  

1. Log in to your **Data Collection** Account.

1. Open up a command-line interface with permissions to run API requests in your AWS account. We recommend using [CloudShell](https://console.aws.amazon.com/cloudshell).

1. Ensure the Region setting is correct by overriding it to the Region where you deployed the previous CloudFormation templates (example: `us-east-1`):

   ```
   export AWS_DEFAULT_REGION=us-east-1
   ```

1. In your command-line interface run the following command to download and install the CID CLI tool:

   ```
   pip3 install --upgrade cid-cmd
   ```

   If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:

   ```
   sudo yum install python3.11-pip -y
   python3.11 -m pip install -U cid-cmd
   ```

1. In your command-line interface run the following command to deploy the dashboard:

   ```
   cid-cmd deploy --dashboard-id sustainability-proxy-metrics
   ```

   Please follow the instructions from the deployment wizard. During deployment, you may be asked for the following details:
   +  **athena-workgroup**: The Athena workgroup used to access Athena (default: `CID`)
   +  **datasource**: The Athena datasource created in previous steps (default: `AwsDataCatalog`)
   +  **cur-table-name**: The CUR table name (default: `cur`)
   +  **AWS Athena database**: The database within the Datasource (default: `cid_cur`)
   +  **Tag**: A tag name used to categorize workloads. This gives you a list of all cost allocation tags. Select a tag that you apply to categorize workloads, like "workloadId". If you do not tag workloads, you can select "none".

     You will also be asked if you want to "Share the dashboard". This shares the dashboard with all Quick Sight users set up in your AWS account. If you want to restrict access, you can say no, which means only the current user can see it. You can share with selected users later using [QuickSight sharing features](https://docs.aws.amazon.com/quicksight/latest/user/sharing-a-dashboard.html).

     More information about command-line options is available in the [Readme](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md#command-line-tool-cid-cmd) or via `cid-cmd --help`.

 **Prerequisite**: To install this dashboard using CloudFormation, you need the Foundational Dashboards CFN with version v4.0.0 or above, as described [here](deployment-in-global-regions.md#deployment-in-global-region-deploy-dashboard).

1. Log in to your **Data Collection** account.

1. Click the Launch Stack button below to open the **pre-populated stack template** in the CloudFormation console.

    [https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Sustainability-Proxy-Metrics-Dashboard&param_DashboardId=sustainability-proxy-metrics&param_RequiresDataExports=yes](https://console.aws.amazon.com/cloudformation/home#/stacks/create/review?templateURL=https://aws-managed-cost-intelligence-dashboards.s3.amazonaws.com/cfn/cid-plugin.yml&stackName=Sustainability-Proxy-Metrics-Dashboard&param_DashboardId=sustainability-proxy-metrics&param_RequiresDataExports=yes) 

1. You can change the **Stack name** for your template if you wish.

1. Leave the **Parameters** values as they are.

1. Review the configuration and click **Create stack**.

1. The stack will start in **CREATE_IN_PROGRESS** status. Once complete, it will show **CREATE_COMPLETE**.

1. You can check the stack output for dashboard URLs.
**Note**  
 **Troubleshooting:** If you see the error "No export named cid-CidExecArn found" during stack deployment, make sure you have completed the prerequisite steps.

## Update


When a new version of the dashboard template is released, update your dashboard by running the following command in your command-line interface:

```
cid-cmd update --dashboard-id sustainability-proxy-metrics --force --recursive
```

**Note**  
Please note that updating the dashboard might affect customizations you made on the dashboards. The tool provides an interactive prompt when it detects differences, and you can accept the changes or keep your existing modifications.

## Authors

+ Tom Coombs, Principal Technical Account Manager
+ Steffen Grunwald, Principal Solutions Architect
+ Katja Philipp, Ex-Amazonian

## Feedback & Support


Follow [Feedback & Support](feedback-support.md) guide

**Note**  
These dashboards and their content: (a) are for informational purposes only, (b) represent current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS content, products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

# Share Dashboards
Share Dashboards

**Note**  
Adding users can have [cost implications for Quick Sight](https://aws.amazon.com/quicksight/pricing/?nc=sn&loc=4) 

Secure sharing and distribution of data is a key feature offered by Amazon Quick Sight. Consider other groups of users within your organization that would benefit from viewing the dashboard data. After you deploy a Quick Sight dashboard, you can share it with other users or groups and choose the level of access to grant them. You can also choose to share with all users in your Amazon Quick Sight subscription.

Users who are dashboard **viewers** can view and filter the dashboard data. Any selections to filters, controls, or sorting that users apply while viewing the dashboard exist only while the user is viewing the dashboard, and aren’t saved once it’s closed. Users who are dashboard **owners/co-owners** can edit and share the dashboard, and optionally can edit and share the analysis.

1. Go to the **Quick Sight** service homepage inside your account. Be sure to select the correct region from the top-right user menu, or you will not see your expected tables.

1. From the left hand menu, choose **Dashboards** 

1. On the dashboard page, **select the dashboard you wish to share**.

1. Select **Share** on the application bar.

![\[Quick Sight top navigation with the share button highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/share_dashboard.png)


1. Select **Share dashboard** 

![\[Quick Sight top navigation with the share button dropdown and share dashboard item highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/share_dashboard2.png)


1. Do one of the following:
   + Check what permissions already exist by choosing **Manage dashboard access**. Then choose **Add users** to return to this screen.
   + You have the option to share with all the users in your Amazon Quick Sight subscription. To do this, select the option **Share with all users in this account**. When you manage dashboard access through the **Manage dashboard access** screen, you see that the **Share with all users in this account** option is enabled; individual users aren’t listed on this screen.
   + To share with an individual user or group, type the user or group into the search box. Then choose the user or group from the list that appears. Only active users and groups appear in the list.

![\[Amazon Quick Sight share dashboard dialog with the three main elements of sharing a dashboard highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/share_dashboard_with_users.png)


1. After you have entered all the users that you want to share with, choose **ADD** and select the permission of **Viewer** or **Co-owner** to confirm your choices. You can see the username, email, permission level, user role, and privileges. You can also remove a user by using the delete icon.

1. Choose permissions for each user. **Note:** Users in the Reader role cannot have permissions modified from Viewer, and cannot have Save as privileges.

![\[Amazon Quick Sight share dashboard with add users drop down displayed\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/manage_dashboard_sharing.png)


## Read more about Quick Sight Viewers and Co-Owners roles


 **Viewer** 

Viewers can view, filter, and sort the dashboard data. They can also use any controls or custom actions that are on the dashboard. Any changes they make to the dashboard exist only while they are viewing it, and aren’t saved once they close the dashboard.

 **Co-owner** 

Co-owners can edit and share the dashboard. You have the option to provide them with the same permissions to the analysis. If you want them to also edit and share the dataset, you can set that up inside the analysis.

1. Choose whether to enable a user’s privilege to **Save as** in order to create a new dashboard from a copy of this one. This privilege grants read-only access to the datasets, so the user or group can create new analyses from it.

 [AWS Documentation For These Steps](https://docs.aws.amazon.com/quicksight/latest/user/sharing-a-dashboard.html) 
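
The console steps above can also be scripted with the QuickSight `UpdateDashboardPermissions` API (via boto3's `quicksight` client). The sketch below uses a stub client so it runs standalone; the viewer action list follows AWS's documented read-only set, but verify it against the current QuickSight API reference before using it:

```python
# Read-only (viewer) actions per the QuickSight permissions documentation.
VIEWER_ACTIONS = [
    "quicksight:DescribeDashboard",
    "quicksight:ListDashboardVersions",
    "quicksight:QueryDashboard",
]

def share_dashboard(qs_client, account_id, dashboard_id, user_arn):
    """Grant viewer access on a dashboard to a single QuickSight user."""
    qs_client.update_dashboard_permissions(
        AwsAccountId=account_id,
        DashboardId=dashboard_id,
        GrantPermissions=[{"Principal": user_arn, "Actions": VIEWER_ACTIONS}],
    )

# In a real session you would pass boto3.client("quicksight"); here a
# stub records the call so the sketch stays self-contained.
class _StubQuickSight:
    def update_dashboard_permissions(self, **kwargs):
        self.last_call = kwargs

stub = _StubQuickSight()
share_dashboard(stub, "111111111111", "cudos-v5",
                "arn:aws:quicksight:us-east-1:111111111111:user/default/alice")
print(stub.last_call["DashboardId"])
```

The dashboard ID, account ID, and user ARN are placeholders; substitute your own values when calling the real client.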

# Add organizational taxonomy
Add organizational taxonomy

## Introduction


AWS customers use various methods to allocate costs of AWS resources, ranging from a single level of business units to complex, multi-dimensional organizational [taxonomies](https://en.wikipedia.org/wiki/Taxonomy).

This document presents the architectural framework for resource ownership attribution and cost allocation in an AWS environment, including automated integration with Cloud Intelligence Dashboards.

The following video introduces the concept of taxonomy in the context of AWS resource management. It will guide you through the main concepts using the example of Unicorn Rental Corporation.

[![AWS Videos](http://img.youtube.com/vi/8-OMF9sca2E/0.jpg)](https://www.youtube.com/watch?v=8-OMF9sca2E)


## Architectural Foundations


Resource allocation in AWS typically follows two primary architectural patterns:

1.  [Account-Level Allocation](#add-org-taxonomy-account-level-cost-allocation) - Uses information at the AWS account level, such as:

   1. AWS Account Names

   1. Organizational Unit (OU) hierarchical structure

   1. Account-level or OU level tagging in AWS Organizations

   1. External sources (e.g. CMDB) as a source of truth to allocate account ownership based on organizational policies

1.  [Resource-Level Allocation](#add-org-taxonomy-resource-level-cost-allocation) - Enables more granular allocation through:

   1. AWS Cost Allocation Tags (user-defined and AWS-generated)

   1. AWS Cost Categories

Cloud Intelligence Dashboards provide a comprehensive way to integrate organizational taxonomy into your dashboards, enabling precise filtering and cost allocation tracking. You can define a set of taxonomy fields to be used as Filters and GroupBy dimensions within the CID dashboards. Additionally, you can configure Row-Level Security (RLS) to restrict access based on one of these taxonomy fields, ensuring that users see only the data relevant to their scope of responsibility.

### Components for building Organizational Taxonomy


It is essential to establish reporting that accurately reflects the current structure of your organization while also accounting for potential future changes. Once the organizational taxonomies are clear, the next step is to map them to the appropriate technical data.

![\[taxonomy\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/taxonomy-pipe.svg)



| Name | Level | Source | Prerequisite | Comment | 
| --- | --- | --- | --- | --- | 
|  Account Tag  |  Account Level  |  AWS Organizations or CUR2  |  CID Data Collection or CUR2  |  A simple way to achieve account level taxonomy and cost allocation from CUR2 or AWS Organizations  | 
|  OU Name  |  Account Level  |  AWS Organizations  |  CID Data Collection  |  | 
|  OU Tag  |  Account Level  |  AWS Organizations  |  CID Data Collection  |  [RECOMMENDED] More flexible than Account Tag. CID Data Collection allows collecting hierarchical tags, where lower-level tags can override higher-level ones. This creates a flexible system that does not require setting tags on the individual account level  | 
|  Account Name  |  Account Level  |  AWS Organizations or CUR2  |  CID Data Collection or CUR2  |  Some organizations can have an established naming convention for AWS Accounts. A part of this name can be used for a business unit taxonomy.  | 
|  AWS Cost Allocation Tags  |  Resource Level  |  CUR  |  AWS Data Exports  |  | 
|  AWS Cost Categories  |  Resource Level or Account Level  |  CUR  |  AWS Data Exports  |  | 

Additional recommendations:

1.  **Anticipate Organizational Change**. Using AWS Account Tags or OU (Organizational Unit) Tags can be a flexible and maintainable approach, especially in environments where organizational changes are frequent. Account-level tagging allows you to adapt to these changes with minimal operational overhead. However, if an account contains shared resources used across multiple teams or business units, Resource-Level Attribution becomes necessary to accurately reflect usage and costs at a more granular level.

1.  **Balance Granularity and Effort**. Resource-Level Attribution, such as Tags and Cost Categories, provides detailed visibility into per resource usage and cost and is essential when shared resources exist within a single account. It requires consistent and disciplined tagging practices across teams and services, along with ongoing governance to ensure tags remain accurate and meaningful over time. In contrast, Account-Level Attribution is easier to implement and maintain but offers less granularity.
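
The hierarchical-tag inheritance described above (lower-level tags overriding higher-level ones) can be sketched in Python. The field names and hierarchy shape here are illustrative, not the actual CID data-collection schema:

```python
def resolve_tag(hierarchy_tags, key):
    """Resolve a tag along an OU hierarchy ordered root-first:
    the deepest level that sets the key wins."""
    value = None
    for _level, tags in hierarchy_tags:
        if key in tags:
            value = tags[key]  # lower levels override higher ones
    return value

# Example: root OU -> regional OU -> account
hierarchy = [
    ("root",         {"MyBusinessLine": "Retail"}),
    ("ou-retail-eu", {"MyBusinessLine": "Retail-EU"}),
    ("111111111111", {}),  # the account itself sets no override
]
print(resolve_tag(hierarchy, "MyBusinessLine"))  # Retail-EU
```

Because the account inherits `Retail-EU` from its OU, no tag needs to be set on the individual account.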

## Account Level Cost Allocation


### Using Account Tags as Cost Allocation Tags in CUR2


In December 2025, AWS added support for enabling Account Tags from AWS Organizations as Cost Allocation Tags.

The main benefit of Account Tags as Cost Allocation Tags is simplicity: all required information comes directly from CUR2, eliminating the need for additional data sources or collection mechanisms.

 **Implementation Steps:** 

1. In AWS Organizations, add tags to your AWS accounts (e.g., `Owner`, `BusinessUnit`, `CostCenter`)

1. In the AWS Billing Console, activate these account tags as Cost Allocation Tags

1. Upgrade your data export stack on payer account(s) to `v0.9.0` or later if you are running an older version

1. Wait 24 hours for the tags to appear in your Cost and Usage Report

1. Run `cid-cmd update --force --recursive` to discover and configure the tags

1. Select the account-level tags you want to use as taxonomy dimensions when prompted
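
Step 1 can also be done programmatically with the AWS Organizations `TagResource` API (boto3's `organizations` client). A minimal sketch with a stub client so it runs without AWS credentials; the tag keys are examples:

```python
def tag_account(org_client, account_id, tags):
    """Apply account-level tags via AWS Organizations.
    `tags` is a plain dict, e.g. {"BusinessUnit": "Retail"}."""
    org_client.tag_resource(
        ResourceId=account_id,
        Tags=[{"Key": k, "Value": v} for k, v in tags.items()],
    )

# In a real session you would pass boto3.client("organizations");
# here a stub records the call so the sketch stays self-contained.
class _StubOrganizations:
    def tag_resource(self, **kwargs):
        self.last_call = kwargs

stub = _StubOrganizations()
tag_account(stub, "111111111111", {"BusinessUnit": "Retail", "CostCenter": "1001"})
print(stub.last_call["ResourceId"])
```

Remember that tagging an account is not enough on its own: the tags must still be activated as Cost Allocation Tags in the Billing Console (step 2).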

 **When to Use Data Collection Method Instead:** 

While Account Tags as Cost Allocation Tags in CUR2 are recommended for simple account level taxonomy use cases, you should use the [Advanced account map](#add-org-taxonomy-account-map-based-on-org-example) method if:
+ You need to use OU Tags as part of your taxonomy (OU Tags as Cost Allocation Tags are not yet supported)
+ You require hierarchical tag inheritance from OUs to accounts
+ You need more complex organizational hierarchy mapping

### Using Static Account Map View


To implement account-level mapping, CID provides a special view in Amazon Athena called `account_map`. This Athena view is specifically designed to be customizable by customers. You can modify it using various sources:

1. Cost and Usage Report (CUR) data. CUR2 data contains Account Names.

1. AWS Organizations metadata collected via [CID Data Collection](data-collection.md).

1. External sources such as external [CMDB](https://en.wikipedia.org/wiki/Configuration_management_database) systems or just a spreadsheet or csv file on Amazon S3.

![\[Architecture\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/ou-integration-architecture.png)


### Example of default account map based on CUR 2.0


#### Click here to see the query


```
CREATE OR REPLACE VIEW account_map AS
SELECT DISTINCT
    line_item_usage_account_id                                        account_id,
    MAX_BY(line_item_usage_account_name, line_item_usage_start_date)  account_name,
    MAX_BY(bill_payer_account_id,        line_item_usage_start_date)  parent_account_id,
    MAX_BY(bill_payer_account_name,      line_item_usage_start_date)  parent_account_name
FROM
    "${cur_database}"."${cur_table_name}"
GROUP BY
    line_item_usage_account_id
```
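
The `MAX_BY(..., line_item_usage_start_date)` aggregations above keep the most recently observed name for each account. A rough Python equivalent of that logic, using illustrative row tuples rather than real CUR columns:

```python
def latest_account_names(rows):
    """rows: iterable of (account_id, account_name, usage_start_date).
    Returns {account_id: name seen at the latest usage_start_date},
    mirroring SQL's MAX_BY(name, date) grouped by account_id."""
    latest = {}
    for account_id, name, date in rows:
        if account_id not in latest or date > latest[account_id][0]:
            latest[account_id] = (date, name)
    return {acc: name for acc, (_, name) in latest.items()}

rows = [
    ("111111111111", "old-name", "2024-01-01"),
    ("111111111111", "new-name", "2024-06-01"),  # renamed later: this wins
    ("222222222222", "team-b",   "2024-03-01"),
]
print(latest_account_names(rows))
```

This matters because CUR retains historical line items: an account renamed mid-year appears under both names, and `MAX_BY` ensures the map reflects only the current one.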

### Advanced example of account map based on AWS Organizations data (Recommended)


This example leverages the AWS Organizations data collected by [CID Data Collection](data-collection.md) and allows you to add account- and OU-level tags into `account_map`. Please note that you do not need to use all the fields from this example; you can adjust it to your specific business and organizational requirements.

#### Click here to see the query


```
CREATE OR REPLACE VIEW "account_map" AS
SELECT DISTINCT
    -- Mandatory
      id account_id
    , Name account_name

    -- Optional
    , email account_email_id
    , ManagementAccountId parent_account_id
    , Parent "o_u" -- The Name of the lowest level OU of the Account

    -- Part of Account Name.
    -- Account names are sometimes structured in a way that allows us to use part of the name as a taxonomy identifier
    , TRY(split_part(Name, '-', 1)) "account_prefix"
    , TRY(split_part(Name, '-', 2)) "account_suffix"

    -- A simple mapping of Management Account Ids to user-friendly names
    , CASE ManagementAccountId
        WHEN '111111111111' THEN 'My Management Org'
        WHEN '222222222222' THEN 'My Test Org'
        ELSE ManagementAccountId
    END parent_account_name

    -- Full path separated with '>'
    , HierarchyPath as o_u_hierarchy

    -- Levels of OU hierarchy
    , TRY(hierarchy[1].name) o_u_level1 -- root
    , TRY(hierarchy[2].name) o_u_level2
    , TRY(hierarchy[3].name) o_u_level3
    , TRY(hierarchy[4].name) o_u_level4
    , TRY(hierarchy[5].name) o_u_level5

    -- Hierarchical Tags
    -- You can set on OU level and override on lower OU or set Account level Tags
    , TRY(FILTER(HierarchyTags, x -> x.key = 'MyEnterprise')[1].value) as ou_tag_enterprise
    , TRY(FILTER(HierarchyTags, x -> x.key = 'MyBusinessLine')[1].value) as ou_tag_business_line
    , TRY(FILTER(HierarchyTags, x -> x.key = 'MyBusinessUnit')[1].value) as ou_tag_business_unit
FROM
    "optimization_data"."organization_data"
```

### Example of static Account Map


A static account map is useful when you maintain a custom mapping outside of AWS (for example, in a CMDB) and want to bring it into the account map.

#### Click here to see query


```
CREATE OR REPLACE VIEW account_map AS
SELECT *
FROM
  (
 VALUES
     ROW ('111111111111', 'Account1', 'Company 1', 'Business Unit 11')
   , ROW ('222222222222', 'Account2', 'Company 1', 'Business Unit 11')
   , ROW ('333333333333', 'Account3', 'Company 2', 'Business Unit 21')
   , ROW ('444444444444', 'Account4', 'Company 2', 'Business Unit 22')
)  ignored_table_name (
    account_id,   --mandatory
    account_name, --mandatory
    company,      --custom
    business_unit --custom
)
```

You can also leverage the `cid-cmd csv2view` command, which accepts a csv file and generates the code of a view like the one above.
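
As an illustration of what such a csv-to-view conversion produces (the real `cid-cmd csv2view` command may differ in flags and output formatting), here is a minimal generator:

```python
import csv
import io

def csv_to_view(view_name, csv_text):
    """Generate a CREATE OR REPLACE VIEW statement with inline VALUES
    from CSV text whose header row supplies the column names."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    rows = [
        "ROW (" + ", ".join("'%s'" % v.replace("'", "''") for v in row) + ")"
        for row in reader
    ]
    return (
        f"CREATE OR REPLACE VIEW {view_name} AS\n"
        "SELECT * FROM (\n VALUES\n      "
        + "\n    , ".join(rows)
        + f"\n) ignored_table_name ({', '.join(header)})"
    )

print(csv_to_view(
    "account_map",
    "account_id,account_name\n111111111111,Account1\n222222222222,Account2\n",
))
```

The generated SQL mirrors the static account map example above: the header row becomes the column list, and each data row becomes a `ROW (...)` entry.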

### How to update Account Map


#### Click here for instructions for update


1. Navigate to Amazon Athena and the cur database (default: `cid_cur`).

1. Locate the `account_map` view, click the vertical ellipsis (`⋮`) next to it, and select *Show/edit query* from the context menu.

1. First, make a copy of the view as a backup, naming the new view something like `account_map_original`.

1. Select the entire view and replace it with this query adjusted to your needs.

1. Click `Run` to execute the query and create/update the view.

1. Run `cid-cmd update --force --recursive` in CloudShell.

## Resource Level Cost Allocation (Tags and Cost Categories)


Some data sources, such as the Cost and Usage Report (CUR) v2.0, Cost Optimization Hub, and FOCUS, already include tag information. The `cid-cmd` deployment tool allows you to specify a list of tags to be included in the dashboard during initial deployment or during a recursive update.

Please note that introducing tags with many unique values can significantly expand the dataset. For example, using `tag_a` with 10 unique values and `tag_b` with 100 independent values could result in up to a 1,000-fold increase in dataset size due to combinatorial growth. Avoid using high-cardinality tags, such as `Name`, especially when working with large datasets based on the Cost and Usage Report (CUR).

**Warning**  
Each tag added to the dashboard increases the cardinality of data and increases the size of aggregated SPICE datasets in Amazon Quick Sight. This architectural decision directly impacts SPICE caching size and performance. We recommend selecting only the minimum necessary tags to satisfy business requirements.

If your Quick Sight datasets take too long to refresh after configuring the tags, please remove all tags (`cid-cmd update --force --recursive -y`) and try re-adding them one by one.
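
The combinatorial growth described above has a simple worst-case bound: the product of each tag's unique-value count. A quick back-of-the-envelope check:

```python
from math import prod

def worst_case_row_multiplier(tag_cardinalities):
    """Upper bound on aggregated-dataset growth when adding tags whose
    values are independent: the product of unique-value counts."""
    return prod(tag_cardinalities)

# tag_a has 10 unique values, tag_b has 100 independent values
print(worst_case_row_multiplier([10, 100]))  # 1000
```

Real datasets usually grow less than this bound, since many tag combinations never co-occur on the same resource, but the bound is a useful pre-flight check before adding tags to SPICE datasets.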

## Adding Taxonomy to the Dashboards


The CID framework allows you to seamlessly add taxonomy fields to the dashboards including attributes from the [Account Map](#add-org-taxonomy-account-level-cost-allocation) as well as resource-level tags.

Once configured, taxonomy fields are added as Filter Controls across all sheets in the respective dashboard. They can also be used in Calculated Fields and Parameter Controls to support "Group By" aggregations.

To modify the taxonomy fields or update the set of Cost Allocation Tags, run the following commands:

1. Install the tool

   ```
   pip3 install -U cid-cmd
   ```

1. Update a dashboard and all dependency datasets

   ```
   cid-cmd update --force --recursive
   ```

After the update, Amazon Quick Sight datasets will refresh automatically. During the refresh you may see a `Dataset changed too much` error, which should disappear once the datasets are fully refreshed.

See more in [update](update-dashboards.md) documentation.

### You can also specify in the command line


The following command can be used for deployment if the taxonomy fields are already known. The `--taxonomy` and `--resource-tags` parameters are optional; if not provided, the tool will discover available fields and prompt the operator to choose them.

```
cid-cmd update --force --recursive  --resource-tags 'tag_environment' --taxonomy 'company,business_unit,tag_environment'
```

Once the dashboard is installed and all datasets are updated, you can use the filters and Group By elements in the dashboards.

![\[Screenshot\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/taxonomy-screen.png)


See more details on the live [interactive dashboard demo](https://cid.workshops.aws.dev/demo/?dashboard=cudos&sheet=Taxonomy%20Explorer).
+ You can use top level filters (for Company or Business units from Account Map or Environment Tag).
+ You can use the same taxonomy dimensions as GroupBy fields.

## FAQ


## Do all dashboards support taxonomy?


For the moment, only the Foundational Dashboards (CID, KPI, CUDOS) support adding organizational taxonomy with the `cid-cmd` tool. We plan to support this in all CID dashboards in the future.

## I need to append a taxonomy from AWS Organization Tags


1. Make sure [CID Data Collection](data-collection.md) is installed.

1. Modify your account map query ([example](#add-org-taxonomy-account-map-based-on-org-example)).

1. Run [update](#add-account-level-resource-level-taxonomy-dimension) via command line.

## I just need to add a taxonomy based on AWS Cost Allocation Tag


Run [update](#add-account-level-resource-level-taxonomy-dimension) via command line. No other actions needed.

## After updating and adding tags, the refresh of SPICE datasets times out or shows an error about resource limitations

Try to [re-update](#add-account-level-resource-level-taxonomy-dimension) using tags with lower cardinality (fewer unique values).

## Can I change taxonomy?


Yes. Just [re-update](#add-account-level-resource-level-taxonomy-dimension).

## Can CloudFormation Template manage this?


As of now, taxonomy can only be managed with the CID-CMD command-line tool, as it allows choosing values from the existing configuration interactively.

# Update Dashboards
Update Dashboards

## Update Dashboards


**Important**  
We recommend updating both the cid-cmd tool and the CID CloudFormation stack to version 4.2.3 or later.

We are always improving the Cloud Intelligence Dashboards by adding new actionable insights and recommendations. All new dashboard versions are announced in our [Changelog](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/tree/main/changes). You can find your current dashboard version on the About tab of each dashboard.

To pull the latest version of the dashboard from the public template, use the following steps.

### Simple Update


1. Open [CloudShell](https://console.aws.amazon.com/cloudshell/home) in the account where you have deployed the Cloud Intelligence Dashboards

1. Install the cid-cmd tool. Run the following command:

```
pip3 install --upgrade cid-cmd
```

If using [CloudShell](https://console.aws.amazon.com/cloudshell), use the following instead:


```
sudo yum install python3.11-pip -y
python3.11 -m pip install -U cid-cmd
```

1. Start the update. Run the following command and choose the dashboard to update:

```
cid-cmd update
```

**Note**  
After the update, Quick Sight datasets will refresh automatically. During the refresh you may see a "Dataset changed too much" error, which should disappear once the datasets are fully refreshed.

### Recursive Update


#### Click here to see how to reset dashboards to the 'factory settings'


In some cases, an update of the underlying Quick Sight datasets and views is required. This can also be useful to reset dashboards to factory settings if you run into issues. Please note that it might affect customizations you made on the dashboards. The tool provides an interactive prompt when it detects differences, and you can accept the changes or keep your existing modifications.

```
cid-cmd update --force --recursive
```

### Update from CUDOS v4 to v5


If you are looking to update to CUDOS v5 from a previous CUDOS version, please refer to the guide in the [FAQs](faq.md) 

### Update Demo


[![AWS Videos](http://img.youtube.com/vi/ub7VWL2GJ84/0.jpg)](https://www.youtube.com/watch?v=ub7VWL2GJ84)


# Teardown
Teardown

## Teardown of CloudFormation deployment (Automated)


**Note**  
 **Deleting the CloudFormation template means that CUR data will no longer flow to your destination (data collection) account. However, historical data will be retained in your destination account. To delete the CURs, go to the `<resource-prefix>-<payer-account-id>-shared` S3 bucket and manually delete the account data. Note that if you deployed following best practices with a separate destination account hosting the dashboards, you should also delete the CID-CUR-Replication stack in your Management/Payer/Source account.** 

### Click here to use the same CFN templates you used to deploy the dashboards to tear down the environment


1. Log in to the account(s) where you deployed CloudFormation templates as part of this lab.

1. Find your existing CID-related templates and choose Delete.

![\[Cloudformation stack detail CID-Multipayeraccount with the delete button highlighted\]](http://docs.aws.amazon.com/guidance/latest/cloud-intelligence-dashboards/images/teardown.png)


## Manual Teardown


### To tear down this lab manually, perform the following steps:


Please follow the instructions [here](https://github.com/aws-solutions-library-samples/cloud-intelligence-dashboards-framework/blob/main/CID-CMD.md) to install the cid-cmd tool.

You can run the **delete** command and follow the instructions in interactive mode to delete the relevant dashboard.

```
cid-cmd delete
```