

• The AWS Systems Manager CloudWatch Dashboard will no longer be available after April 30, 2026. Customers can continue to use the Amazon CloudWatch console to view, create, and manage their Amazon CloudWatch dashboards, just as they do today. For more information, see the [Amazon CloudWatch Dashboard documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html). 

# Using AWS Systems Manager tools


Systems Manager groups tools into four categories. The following documentation describes the various tools of AWS Systems Manager and how to set up and use these tools. Choose the tabs under each category to learn more about each tool.

## Node tools


A *managed node* is any machine configured for use with Systems Manager in [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environments.

------
#### [ Compliance ]

Use [Compliance](systems-manager-compliance.md) to scan your fleet of managed nodes for patch compliance and configuration inconsistencies. You can collect and aggregate data from multiple AWS accounts and AWS Regions, and then drill down into specific resources that aren’t compliant. By default, Compliance displays compliance data about Patch Manager patching and State Manager associations. You can also customize the service and create your own compliance types based on your IT or business requirements.

------
#### [ Distributor ]

Use [Distributor](distributor.md) to create and deploy packages to managed nodes. With Distributor, you can package your own software—or find AWS-provided agent software packages, such as **AmazonCloudWatchAgent**—to install on Systems Manager managed nodes. After you install a package for the first time, you can use Distributor to uninstall and reinstall a new package version, or perform an in-place update that adds new or changed files. Distributor publishes resources, such as software packages, to Systems Manager managed nodes.
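Distributor package operations are typically requested through the `AWS-ConfigureAWSPackage` document. The following sketch builds the request parameters as a plain dict so the shape is visible; the instance ID is a placeholder, and a real call would pass these fields to an SSM `SendCommand` API request.

```python
# Sketch: parameters for installing a Distributor package via the
# AWS-ConfigureAWSPackage document (the instance ID is a placeholder).
def build_package_install_request(package_name, instance_ids, version="Latest"):
    """Return SendCommand-style parameters for a Distributor package install."""
    return {
        "DocumentName": "AWS-ConfigureAWSPackage",
        "InstanceIds": instance_ids,
        "Parameters": {
            "action": ["Install"],
            "name": [package_name],
            "version": [version],
        },
    }

request = build_package_install_request("AmazonCloudWatchAgent", ["i-1234567890abcdef0"])
print(request["Parameters"]["action"])  # ['Install']
```

Changing `action` to `Uninstall` follows the same shape, which is why a single document can drive install, uninstall, and in-place update operations.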

------
#### [ Fleet Manager ]

[Fleet Manager](fleet-manager.md) is a unified user interface (UI) experience that helps you remotely manage your nodes. With Fleet Manager, you can view the health and performance status of your entire fleet from one console. You can also gather data from individual devices and instances to perform common troubleshooting and management tasks from the console. This includes viewing directory and file contents, Windows registry management, operating system user management, and more. 

------
#### [ Hybrid Activations ]

To set up non-EC2 machines in your hybrid and multicloud environment as managed nodes, create a [hybrid activation](systems-manager-hybrid-multicloud.md). After you complete the activation, you receive an activation code and ID. This code and ID combination functions like an Amazon Elastic Compute Cloud (Amazon EC2) access ID and secret key to provide secure access to the Systems Manager service from your managed instances.

You can also create an activation for edge devices if you want to manage them by using Systems Manager.
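As a minimal sketch of what a hybrid activation request carries, the dict below mirrors the main fields of the `CreateActivation` API action; the role name and instance name are hypothetical, and a real call would pass these fields to `create_activation` in an SSM client.

```python
# Sketch: the request fields for creating a hybrid activation, expressed as a
# plain dict (role and instance names are hypothetical examples).
def build_activation_request(iam_role, instance_name, registration_limit=10):
    """Return CreateActivation-style parameters for registering non-EC2 machines."""
    return {
        "Description": "Activation for on-premises servers",
        "IamRole": iam_role,
        "DefaultInstanceName": instance_name,
        "RegistrationLimit": registration_limit,
    }

request = build_activation_request("SSMServiceRole", "OnPremWebServer")
```

The response to a real request would contain the activation code and ID that each machine then uses when registering its SSM Agent.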

------
#### [ Inventory ]

[Inventory](systems-manager-inventory.md) automates the process of collecting software inventory from your managed nodes. You can use Inventory to gather metadata about applications, files, components, patches, and more.

------
#### [ Patch Manager ]

Use [Patch Manager](patch-manager.md) to automate the process of patching your managed nodes with both security-related and other types of updates. You can use Patch Manager to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for applications released by Microsoft.)

This tool allows you to scan managed nodes for missing patches and apply missing patches individually or to large groups of managed nodes by using tags. Patch Manager uses *patch baselines*, which can include rules for auto-approving patches within days of their release, and a list of approved and rejected patches. You can install security patches on a regular basis by scheduling patching to run as a Systems Manager maintenance window task, or you can patch your managed nodes on demand at any time.
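The auto-approval rule in a patch baseline can be pictured as simple date arithmetic. The sketch below shows one plausible way to evaluate an `ApproveAfterDays`-style rule; the dates are invented examples, and the actual evaluation is performed by Patch Manager.

```python
from datetime import date, timedelta

# Sketch: how an auto-approval rule with an ApproveAfterDays value might be
# evaluated (dates are invented examples).
def is_auto_approved(release_date, approve_after_days, today):
    """A patch is approved once approve_after_days have elapsed since release."""
    return today >= release_date + timedelta(days=approve_after_days)

# A patch released on June 1 with a 7-day rule is approved on or after June 8.
print(is_auto_approved(date(2025, 6, 1), 7, date(2025, 6, 8)))  # True
print(is_auto_approved(date(2025, 6, 1), 7, date(2025, 6, 5)))  # False
```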

For Linux operating systems, you can define the repositories to use for patching operations as part of your patch baseline. This ensures that updates are installed only from trusted repositories, regardless of which repositories are configured on the managed node. For Linux, you can also update any package on the managed node, not just those classified as operating system security updates. You can also generate patch reports that are sent to an S3 bucket of your choice. For a single managed node, reports include details of all patches for the machine. For a report on all managed nodes, only a summary of how many patches are missing is provided.

------
#### [ Run Command ]

Use [Run Command](run-command.md) to remotely and securely manage the configuration of your managed nodes at scale. Use Run Command to perform on-demand changes such as updating applications or running Linux shell scripts and Windows PowerShell commands on a target set of dozens or hundreds of managed nodes. 
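Targeting a set of managed nodes by tag can be sketched as follows. The dict mirrors the shape of an SSM `SendCommand` request; the tag key and value are hypothetical, and a real call would pass these parameters to the API.

```python
# Sketch: SendCommand-style parameters targeting managed nodes by tag
# (the tag key and value are hypothetical).
def build_run_command_request(document_name, tag_key, tag_value, commands):
    """Return parameters for running a command document against tagged nodes."""
    return {
        "DocumentName": document_name,
        "Targets": [{"Key": f"tag:{tag_key}", "Values": [tag_value]}],
        "Parameters": {"commands": commands},
    }

request = build_run_command_request(
    "AWS-RunShellScript", "Environment", "Production", ["uptime"]
)
```

Targeting by tag rather than by instance ID is what lets one request scale to dozens or hundreds of nodes.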

------
#### [ Session Manager ]

Use [Session Manager](session-manager.md) to manage your edge devices and Amazon Elastic Compute Cloud (Amazon EC2) instances through an interactive one-click browser-based shell or through the AWS CLI. Session Manager provides secure and auditable edge device and instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager also helps you comply with corporate policies that require controlled access to edge devices and instances, strict security practices, and fully auditable logs with edge device and instance access details, while still providing end users with simple one-click cross-platform access to your edge devices and EC2 instances. To use Session Manager with edge devices, you must enable the advanced-instances tier. For more information, see [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md).

------
#### [ State Manager ]

Use [State Manager](systems-manager-state.md) to automate the process of keeping your managed nodes in a defined state. You can use State Manager to guarantee that your managed nodes are bootstrapped with specific software at startup, joined to a Windows domain (Windows Server nodes only), or patched with specific software updates. 

------

## Change Management tools


------
#### [ Automation ]

Use [Automation](systems-manager-automation.md) to automate common maintenance and deployment tasks. You can use Automation to create and update Amazon Machine Images (AMIs), apply driver and agent updates, reset passwords on Windows Server instances, reset SSH keys on Linux instances, and apply OS patches or application updates. 

------
#### [ Change Calendar ]

[Change Calendar](systems-manager-change-calendar.md) helps you set up date and time ranges when actions you specify (for example, in [Systems Manager Automation](systems-manager-automation.md) runbooks) can or can't be performed in your AWS account. In Change Calendar, these ranges are called *events*. When you create a Change Calendar entry, you're creating a [Systems Manager document](documents.md) of the type `ChangeCalendar`. In Change Calendar, the document stores [iCalendar 2.0](https://icalendar.org/) data in plaintext format. Events that you add to the Change Calendar entry become part of the document. You can add events manually in the Change Calendar interface or import events from a supported third-party calendar using an `.ics` file.
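Because a `ChangeCalendar` document stores iCalendar 2.0 data in plaintext, its events can be read with straightforward line parsing. The sketch below extracts `VEVENT` blocks from a sample calendar; the sample data is invented, and real calendars may use features (line folding, time zones) this simple parser ignores.

```python
# Sketch: extracting events from the iCalendar 2.0 plaintext stored in a
# ChangeCalendar document (the sample data is invented; this minimal parser
# ignores iCalendar line folding and time-zone handling).
def parse_vevents(ics_text):
    """Collect KEY:VALUE properties from each VEVENT block."""
    events, current = [], None
    for line in ics_text.splitlines():
        line = line.strip()
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT" and current is not None:
            events.append(current)
            current = None
        elif current is not None and ":" in line:
            key, value = line.split(":", 1)
            current[key] = value
    return events

sample = """BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Holiday change freeze
DTSTART:20251224T000000Z
END:VEVENT
END:VCALENDAR"""
events = parse_vevents(sample)
print(events[0]["SUMMARY"])  # Holiday change freeze
```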

------
#### [ Change Manager ]

**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

[Change Manager](change-manager.md) is an enterprise change management framework for requesting, approving, implementing, and reporting on operational changes to your application configuration and infrastructure. From a single *delegated administrator account*, if you use AWS Organizations, you can manage changes across multiple AWS accounts in multiple AWS Regions. Alternatively, using a *local account*, you can manage changes for a single AWS account. Use Change Manager for managing changes to both AWS resources and on-premises resources.

------
#### [ Documents ]

A [Systems Manager document](documents.md) (SSM document) defines the actions that Systems Manager performs. SSM document types include *Command* documents, which are used by State Manager and Run Command, and Automation runbooks, which are used by Systems Manager Automation. Systems Manager includes dozens of pre-configured documents that you can use by specifying parameters at runtime. Documents can be expressed in JSON or YAML, and include steps and parameters that you specify.
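The top-level shape of a minimal Command document can be sketched as a Python dict; the step content below is illustrative rather than a complete document, and the same structure can equally be written as JSON or YAML.

```python
# Sketch: the top-level shape of a minimal SSM Command document
# (the step content is an illustrative example).
command_document = {
    "schemaVersion": "2.2",
    "description": "Example command document",
    "parameters": {
        "Message": {"type": "String", "default": "Hello from Systems Manager"}
    },
    "mainSteps": [
        {
            "action": "aws:runShellScript",
            "name": "runShellScript",
            "inputs": {"runCommand": ["echo {{ Message }}"]},
        }
    ],
}

def has_required_fields(doc):
    """A usable document declares a schema version and at least one step."""
    return "schemaVersion" in doc and bool(doc.get("mainSteps"))

print(has_required_fields(command_document))  # True
```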

------
#### [ Maintenance Windows ]

Use [Maintenance Windows](maintenance-windows.md) to set up recurring schedules for managed instances to run administrative tasks such as installing patches and updates without interrupting business-critical operations. 

------
#### [ Quick Setup ]

Use [Quick Setup](systems-manager-quick-setup.md) to configure frequently used AWS services and features with recommended best practices. You can use Quick Setup in an individual AWS account or across multiple AWS accounts and AWS Regions by integrating with AWS Organizations. Quick Setup simplifies setting up services, including Systems Manager, by automating common or recommended tasks. These tasks include, for example, creating required AWS Identity and Access Management (IAM) instance profile roles and setting up operational best practices, such as periodic patch scans and inventory collection.

------

## Application tools


------
#### [ AppConfig ]

[AppConfig](appconfig.md) helps you create, manage, and deploy application configurations and feature flags. AppConfig supports controlled deployments to applications of any size. You can use AppConfig with applications hosted on Amazon EC2 instances, AWS Lambda containers, mobile applications, or edge devices. To prevent errors when deploying application configurations, AppConfig includes validators. A validator provides a syntactic or semantic check to verify that the configuration you want to deploy works as intended. During a configuration deployment, AppConfig monitors the application to verify that the deployment is successful. If the system encounters an error or if the deployment invokes an alarm, AppConfig rolls back the change to minimize impact for your application users.
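A validator of the kind described above can be as simple as a function that checks a configuration before deployment. The sketch below performs a syntactic check (valid JSON) and a semantic check (a required key with the right type); the `enabled` key is an invented example, not an AppConfig requirement.

```python
import json

# Sketch: a syntactic-plus-semantic validator of the kind AppConfig can run
# before a deployment (the required "enabled" key is an invented example).
def validate_feature_flags(config_text):
    """Return a list of validation errors; an empty list means the config passes."""
    config = json.loads(config_text)  # syntactic check: must be valid JSON
    errors = []
    if "enabled" not in config:
        errors.append("missing 'enabled'")
    elif not isinstance(config["enabled"], bool):
        errors.append("'enabled' must be a boolean")
    return errors

print(validate_feature_flags('{"enabled": true}'))   # []
print(validate_feature_flags('{"enabled": "yes"}'))  # ["'enabled' must be a boolean"]
```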

------
#### [ Application Manager ]

[Application Manager](application-manager.md) helps DevOps engineers investigate and remediate issues with their AWS resources in the context of their applications and clusters. In Application Manager, an *application* is a logical group of AWS resources that you want to operate as a unit. This logical group can represent different versions of an application, ownership boundaries for operators, or developer environments, to name a few. Application Manager support for container clusters includes both Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Elastic Container Service (Amazon ECS) clusters. Application Manager aggregates operations information from multiple AWS services and Systems Manager tools to a single AWS Management Console.

------
#### [ Parameter Store ]

[Parameter Store](systems-manager-parameter-store.md) provides secure, hierarchical storage for configuration data and secrets management. You can store data such as passwords, database strings, Amazon Elastic Compute Cloud (Amazon EC2) instance IDs and Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name you specified when you created the parameter.
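Hierarchical names are what make path-based lookups possible. The sketch below mimics, in local Python, the spirit of a `GetParametersByPath`-style query over slash-delimited names; the parameter names and values are invented examples.

```python
# Sketch: path-based lookup over hierarchical parameter names, similar in
# spirit to GetParametersByPath (names and values are invented examples).
parameters = {
    "/myapp/dev/db-url": "dev.example.internal",
    "/myapp/prod/db-url": "prod.example.internal",
    "/myapp/prod/db-port": "5432",
}

def get_parameters_by_path(store, path):
    """Return all parameters whose names fall under the given path."""
    prefix = path.rstrip("/") + "/"
    return {name: value for name, value in store.items() if name.startswith(prefix)}

prod = get_parameters_by_path(parameters, "/myapp/prod")
print(sorted(prod))  # ['/myapp/prod/db-port', '/myapp/prod/db-url']
```

Organizing parameters this way lets an application fetch all of its environment's configuration with a single path query.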

------

## Operations tools


------
#### [ CloudWatch Dashboards ]

[Amazon CloudWatch Dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) are customizable pages in the CloudWatch console that you can use to monitor your resources in a single view, even those resources that are spread across different AWS Regions. You can use CloudWatch dashboards to create customized views of the metrics and alarms for your AWS resources. 

------
#### [ Explorer ]

[Explorer](Explorer.md) is a customizable operations dashboard that reports information about your AWS resources. Explorer displays an aggregated view of operations data (OpsData) for your AWS accounts and across AWS Regions. In Explorer, OpsData includes metadata about your Amazon EC2 instances, patch compliance details, and operational work items (OpsItems). Explorer provides context about how OpsItems are distributed across your business units or applications, how they trend over time, and how they vary by category. You can group and filter information in Explorer to focus on items that are relevant to you and that require action. When you identify high priority issues, you can use OpsCenter, a tool in Systems Manager, to run Automation runbooks and resolve those issues. 

------
#### [ Incident Manager ]

[Incident Manager](incident-manager.md) is an incident management console that helps users mitigate and recover from incidents affecting their AWS-hosted applications.

Incident Manager speeds incident resolution by notifying responders of impact, highlighting relevant troubleshooting data, and providing collaboration tools to get services back up and running. Incident Manager also automates response plans and allows responder team escalation. 

------
#### [ OpsCenter ]

[OpsCenter](OpsCenter.md) provides a central location where operations engineers and IT professionals can view, investigate, and resolve operational work items (OpsItems) related to AWS resources. OpsCenter is designed to reduce mean time to resolution for issues impacting AWS resources. This Systems Manager tool aggregates and standardizes OpsItems across services while providing contextual investigation data about each OpsItem, related OpsItems, and related resources. OpsCenter also provides Systems Manager Automation runbooks that you can use to resolve issues. You can specify searchable, custom data for each OpsItem. You can also view automatically generated summary reports about OpsItems by status and source. 

------

# Perform a node task with Systems Manager


Use this tutorial to get started with AWS Systems Manager. You'll learn how to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance that is managed by Systems Manager, and how to connect to the managed instance.

Because Systems Manager is a collection of multiple tools, no single walkthrough or tutorial can introduce the entire service. This tutorial provides an introduction to some of the tools.

## Prerequisites


Before you begin, be sure that you've completed the steps in [Managing EC2 instances with Systems Manager](systems-manager-setting-up-ec2.md).

## Launch an instance using an AMI with SSM Agent preinstalled


You can launch an Amazon EC2 instance using the AWS Management Console as described in the following procedure. This tutorial is intended to help you launch your first managed instance quickly, so it doesn't cover all possible options. 

**To launch an instance**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. From the EC2 console dashboard, in the **Launch instance** box, choose **Launch instance**, and then choose **Launch instance** from the options that appear.

1. For **Name and tags**, for **Name**, enter a descriptive name for your instance.

1. For **Application and OS Images (Amazon Machine Image)**, do the following:

   1. Choose the **Quick Start** tab, and then choose Amazon Linux. This is the operating system (OS) for your instance.

   1. For **Amazon Machine Image (AMI)**, choose an HVM version of Amazon Linux 2.

1. For **Instance type**, from the **Instance type** list, choose the hardware configuration for your instance. Choose the `t2.micro` instance type, which is selected by default. The `t2.micro` instance type is eligible for the AWS Free Tier. In AWS Regions where `t2.micro` is unavailable, you can use a `t3.micro` instance under the Free Tier. For more information, see [AWS Free Tier](https://aws.amazon.com/free).

1. For **Key pair (login)**, for **Key pair name**, choose a key pair.

1. For **Network settings**, choose **Edit**. For **Security group name**, notice that the wizard created and selected a security group for you. You can use this security group, or alternatively you can select a security group that you created previously using the following steps:

   1. Choose **Select existing security group**.

   1. From **Common security groups**, choose your security group from the list of existing security groups.

1. If you aren't using Default Host Management Configuration, expand the **Advanced details** section, and for **IAM instance profile**, choose the instance profile that you created when getting set up in [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).

1. Keep the default selections for the other configuration settings for your instance.

1. Review a summary of your instance configuration in the **Summary** pane. When you're ready, choose **Launch instance**.

1. A confirmation page informs you that your instance is launching. Choose **View all instances** to close the confirmation page and return to the console.

1. On the **Instances** screen, you can view the status of the launch. It takes a short time for an instance to launch.

1. It can take a few minutes for the instance to show as managed and be ready for you to connect to it. To check that your instance passed its status checks, view this information in the **Status check** column.

## Connect to your managed instance using Systems Manager


**To connect to your managed instance**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the instance that you want to connect to.

1. In the **Node actions** menu, choose **Start terminal session**.

1. Select **Connect**.

## Clean up your instance


If you're done working with the managed instance that you created for this tutorial, terminate it. Terminating an instance effectively deletes it.

**To terminate your instance**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Instances**. In the list of instances, select the instance.

1. Choose **Instance state**, **Terminate instance**.

1. Choose **Terminate** when prompted for confirmation.

   Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains visible on the console briefly, and then the entry is deleted automatically. You can't remove the terminated instance from the console display yourself.

# AWS Systems Manager Node tools

AWS Systems Manager provides the following tools for accessing, managing, and configuring your *managed nodes*. A managed node is any machine configured for use with Systems Manager in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment.

**Topics**
+ [AWS Systems Manager Compliance](systems-manager-compliance.md)
+ [AWS Systems Manager Distributor](distributor.md)
+ [AWS Systems Manager Fleet Manager](fleet-manager.md)
+ [AWS Systems Manager Hybrid Activations](activations.md)
+ [AWS Systems Manager Inventory](systems-manager-inventory.md)
+ [AWS Systems Manager Patch Manager](patch-manager.md)
+ [AWS Systems Manager Run Command](run-command.md)
+ [AWS Systems Manager Session Manager](session-manager.md)
+ [AWS Systems Manager State Manager](systems-manager-state.md)

# AWS Systems Manager Compliance

You can use Compliance, a tool in AWS Systems Manager, to scan your fleet of managed nodes for patch compliance and configuration inconsistencies. You can collect and aggregate data from multiple AWS accounts and Regions, and then drill down into specific resources that aren’t compliant. By default, Compliance displays current compliance data about patching in Patch Manager and associations in State Manager. (Patch Manager and State Manager are also both tools in AWS Systems Manager.) To get started with Compliance, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/compliance). In the navigation pane, choose **Compliance**.

Patch compliance data from Patch Manager can be sent to AWS Security Hub CSPM. Security Hub CSPM gives you a comprehensive view of your high-priority security alerts and compliance status. It also monitors the patching status of your fleet. For more information, see [Integrating Patch Manager with AWS Security Hub CSPM](patch-manager-security-hub-integration.md). 

Compliance offers the following additional benefits and features: 
+ View compliance history and change tracking for Patch Manager patching data and State Manager associations by using AWS Config.
+ Customize Compliance to create your own compliance types based on your IT or business requirements.
+ Remediate issues by using Run Command, another tool in AWS Systems Manager, State Manager, or Amazon EventBridge.
+ Port data to Amazon Athena and Amazon QuickSight to generate fleet-wide reports.
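The fleet-wide rollups that Compliance surfaces can be pictured as simple aggregation over per-node compliance items. The sketch below counts statuses by compliance type; the records are invented examples in the shape of `ListComplianceItems`-style results.

```python
from collections import Counter

# Sketch: aggregating per-node compliance items into fleet-wide counts,
# similar to the rollups Compliance displays (the records are invented).
items = [
    {"ResourceId": "i-111", "ComplianceType": "Patch", "Status": "COMPLIANT"},
    {"ResourceId": "i-222", "ComplianceType": "Patch", "Status": "NON_COMPLIANT"},
    {"ResourceId": "i-222", "ComplianceType": "Association", "Status": "COMPLIANT"},
]

def summarize(records):
    """Count compliance items grouped by (type, status)."""
    return Counter((r["ComplianceType"], r["Status"]) for r in records)

summary = summarize(items)
print(summary[("Patch", "NON_COMPLIANT")])  # 1
```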

**EventBridge support**  
This Systems Manager tool is supported as an *event* type in Amazon EventBridge rules. For information, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md).

**Chef InSpec integration**  
Systems Manager integrates with [Chef InSpec](https://www.chef.io/inspec/). InSpec is an open-source, runtime framework that allows you to create human-readable profiles on GitHub or Amazon Simple Storage Service (Amazon S3). You can then use Systems Manager to run compliance scans and view compliant and noncompliant managed nodes. For more information, see [Using Chef InSpec profiles with Systems Manager Compliance](integration-chef-inspec.md).

**Pricing**  
Compliance is offered at no additional charge. You only pay for the AWS resources that you use.

**Topics**
+ [Getting started with Compliance](compliance-prerequisites.md)
+ [Configuring permissions for Compliance](compliance-permissions.md)
+ [Creating a resource data sync for Compliance](compliance-datasync-create.md)
+ [Learn details about Compliance](compliance-about.md)
+ [Deleting a resource data sync for Compliance](systems-manager-compliance-delete-RDS.md)
+ [Remediating compliance issues using EventBridge](compliance-fixing.md)
+ [Assign custom compliance metadata using the AWS CLI](compliance-custom-metadata-cli.md)

# Getting started with Compliance


To get started with Compliance, a tool in AWS Systems Manager, complete the following tasks.


| Task | For more information | 
| --- | --- | 
|  Compliance works with patch data in Patch Manager and associations in State Manager. (Patch Manager and State Manager are also both tools in AWS Systems Manager.) Compliance also works with custom compliance types on managed nodes that are managed using Systems Manager. Verify that you have completed the setup requirements for your Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment.  |  [Setting up Systems Manager unified console for an organization](systems-manager-setting-up-organizations.md)  | 
|  Update the AWS Identity and Access Management (IAM) role used by your managed nodes to restrict Compliance permissions.  |  [Configuring permissions for Compliance](compliance-permissions.md)  | 
|  If you plan to monitor patch compliance, verify that you've configured Patch Manager. You must perform patching operations by using Patch Manager before Compliance can display patch compliance data.  |  [AWS Systems Manager Patch Manager](patch-manager.md)  | 
|  If you plan to monitor association compliance, verify that you've created State Manager associations. You must create associations before Compliance can display association compliance data.  |  [AWS Systems Manager State Manager](systems-manager-state.md)  | 
|  (Optional) Configure the system to view compliance history and change tracking.   |  [Viewing compliance configuration history and change tracking](compliance-about.md#compliance-history)  | 
|  (Optional) Create custom compliance types.   |  [Assign custom compliance metadata using the AWS CLI](compliance-custom-metadata-cli.md)  | 
|  (Optional) Create a resource data sync to aggregate all compliance data in a target Amazon Simple Storage Service (Amazon S3) bucket.  |  [Creating a resource data sync for Compliance](compliance-datasync-create.md)  | 

# Configuring permissions for Compliance


As a security best practice, we recommend that you update the AWS Identity and Access Management (IAM) role used by your managed nodes with the following permissions to restrict the node's ability to use the [PutComplianceItems](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutComplianceItems.html) API action. This API action registers a compliance type and other compliance details on a designated resource, such as an Amazon EC2 instance or a managed node.

If your node is an Amazon EC2 instance, you must update the IAM instance profile used by the instance with the following permissions. For more information about instance profiles for EC2 instances managed by Systems Manager, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). For other types of managed nodes, update the IAM role used by the node with the following permissions. For more information, see [Update permissions for a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_update-role-permissions.html) in the *IAM User Guide*.

------
#### [ JSON ]

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:PutComplianceItems"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:SourceInstanceARN": "${ec2:SourceInstanceARN}"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:PutComplianceItems"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ssm:SourceInstanceARN": "${ssm:SourceInstanceARN}"
                }
            }
        }
    ]
}
```

------

# Creating a resource data sync for Compliance


You can use the resource data sync feature in AWS Systems Manager to send compliance data from all of your managed nodes to a target Amazon Simple Storage Service (Amazon S3) bucket. When you create the sync, you can specify managed nodes from multiple AWS accounts, AWS Regions, and your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. Resource data sync then automatically updates the centralized data when new compliance data is collected. With all compliance data stored in a target S3 bucket, you can use services like Amazon Athena and Amazon QuickSight to query and analyze the aggregated data. Configuring resource data sync for Compliance is a one-time operation.
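Aggregated sync data is written under partition-style S3 keys, which is what makes it convenient to query with Athena. The sketch below composes such a key; the exact segment names are illustrative assumptions, not the documented layout.

```python
# Sketch: composing a partition-style S3 key of the kind a resource data sync
# writes (the segment names are illustrative assumptions).
def build_sync_key(prefix, resource_type, account_id, region, resource_id):
    """Compose an S3 object key partitioned by account and Region."""
    return (
        f"{prefix}/{resource_type}/accountid={account_id}"
        f"/region={region}/resourceid={resource_id}.json"
    )

key = build_sync_key(
    "compliance-sync", "AWS:ComplianceItem",
    "111122223333", "us-east-1", "i-1234567890abcdef0",
)
print(key)
```

Partition segments such as `accountid=` let Athena prune data by account or Region instead of scanning the whole bucket.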

Use the following procedure to create a resource data sync for Compliance by using the AWS Management Console.

**To create and configure an S3 bucket for resource data sync (console)**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Create a bucket to store your aggregated compliance data. For more information, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service User Guide*. Make a note of the bucket name and the AWS Region where you created it.

1. Open the bucket, choose the **Permissions** tab, and then choose **Bucket Policy**.

1. Copy and paste the following bucket policy into the policy editor. Replace *amzn-s3-demo-bucket* with the name of the S3 bucket you created, and replace *111122223333* with a valid AWS account ID. Optionally, replace *Bucket-Prefix* with the name of an Amazon S3 prefix (subdirectory). If you didn't create a prefix, remove *Bucket-Prefix*/ from the ARN in the policy. 

------
#### [ JSON ]

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "SSMBucketPermissionsCheck",
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "s3:GetBucketAcl",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
           },
           {
               "Sid": " SSMBucketDelivery",
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "s3:PutObject",
               "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket/Bucket-Prefix/*/accountid=111122223333/*"],
               "Condition": {
                   "StringEquals": {
                       "s3:x-amz-acl": "bucket-owner-full-control"
                   }
               }
           }
       ]
   }
   ```

------

**To create a resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Account management**, **Resource Data Syncs**, and then choose **Create resource data sync**.

1. In the **Sync name** field, enter a name for the sync configuration.

1. In the **Bucket name** field, enter the name of the Amazon S3 bucket you created at the start of this procedure.

1. (Optional) In the **Bucket prefix** field, enter the name of an S3 bucket prefix (subdirectory).

1. In the **Bucket region** field, choose **This region** if the S3 bucket you created is located in the current AWS Region. If the bucket is located in a different AWS Region, choose **Another region**, and enter the name of the Region.
**Note**  
If the sync and the target S3 bucket are located in different Regions, you might be subject to data transfer pricing. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

1. Choose **Create**.

# Learn details about Compliance


Compliance, a tool in AWS Systems Manager, collects and reports data about the status of patching in Patch Manager and associations in State Manager. (Patch Manager and State Manager are also both tools in AWS Systems Manager.) Compliance also reports on custom compliance types that you specify for your managed nodes. This section includes details about each of these compliance types, how to view Systems Manager compliance data, and how to view compliance history and change tracking.

**Note**  
Systems Manager integrates with [https://www.chef.io/inspec/](https://www.chef.io/inspec/). InSpec is an open-source runtime framework that allows you to create human-readable profiles on GitHub or Amazon Simple Storage Service (Amazon S3). Then you can use Systems Manager to run compliance scans and view compliant and noncompliant instances. For more information, see [Using Chef InSpec profiles with Systems Manager Compliance](integration-chef-inspec.md).

## About patch compliance


After you use Patch Manager to install patches on your instances, compliance status information is immediately available to you in the console or in response to AWS Command Line Interface (AWS CLI) commands or corresponding Systems Manager API operations.

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## About State Manager association compliance


After you create one or more State Manager associations, compliance status information is immediately available to you in the console or in response to AWS CLI commands or corresponding Systems Manager API operations. For associations, Compliance shows statuses of `Compliant` or `Non-compliant` and the severity level assigned to the association, such as `Critical` or `Medium`.

When State Manager executes an association on a managed node, it triggers a compliance aggregation process that updates compliance status for all associations on that node. The `ExecutionTime` value in compliance reports represents when the compliance status was captured by Systems Manager, not when the association was executed on the managed node. This means multiple associations might display identical `ExecutionTime` values even if they were executed at different times. To determine actual association execution times, refer to the association execution history using the AWS CLI command [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-association-execution-targets.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-association-execution-targets.html) or by viewing the execution details in the console.
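For example, the following AWS CLI sketch retrieves per-target execution details, including the last execution date, for one association run. The association ID and execution ID are placeholders; replace them with values from your own account:

```
aws ssm describe-association-execution-targets \
    --association-id association_ID \
    --execution-id execution_ID
```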

## About custom compliance


You can assign compliance metadata to a managed node. This metadata can then be aggregated with other compliance data for compliance reporting purposes. For example, say that your business runs versions 2.0, 3.0, and 4.0 of software X on your managed nodes. The company wants to standardize on version 4.0, meaning that instances running versions 2.0 and 3.0 are non-compliant. You can use the [PutComplianceItems](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutComplianceItems.html) API operation to explicitly note which managed nodes are running older versions of software X. You can only assign compliance metadata by using the AWS CLI, AWS Tools for Windows PowerShell, or the SDKs. The following CLI sample command assigns compliance metadata to a managed instance and specifies the compliance type in the required format `Custom:`. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm put-compliance-items \
    --resource-id i-1234567890abcdef0 \
    --resource-type ManagedInstance \
    --compliance-type Custom:SoftwareXCheck \
    --execution-summary ExecutionTime=AnyStringToDenoteTimeOrDate \
    --items Id=Version2.0,Title=SoftwareXVersion,Severity=CRITICAL,Status=NON_COMPLIANT
```

------
#### [ Windows ]

```
aws ssm put-compliance-items ^
    --resource-id i-1234567890abcdef0 ^
    --resource-type ManagedInstance ^
    --compliance-type Custom:SoftwareXCheck ^
    --execution-summary ExecutionTime=AnyStringToDenoteTimeOrDate ^
    --items Id=Version2.0,Title=SoftwareXVersion,Severity=CRITICAL,Status=NON_COMPLIANT
```

------

**Note**  
The `ResourceType` parameter only supports `ManagedInstance`. If you add custom compliance to a managed AWS IoT Greengrass core device, you must specify a `ResourceType` of `ManagedInstance`.

Compliance managers can then view summaries or create reports about which managed nodes are or aren't compliant. You can assign a maximum of 10 different custom compliance types to a managed node.

For an example of how to create a custom compliance type and view compliance data, see [Assign custom compliance metadata using the AWS CLI](compliance-custom-metadata-cli.md).

## Viewing current compliance data


This section describes how to view compliance data in the Systems Manager console and by using the AWS CLI. For information about how to view patch and association compliance history and change tracking, see [Viewing compliance configuration history and change tracking](#compliance-history).

**Topics**
+ [Viewing current compliance data (console)](#compliance-view-results-console)
+ [Viewing current compliance data (AWS CLI)](#compliance-view-data-cli)

### Viewing current compliance data (console)


Use the following procedure to view compliance data in the Systems Manager console.

**To view current compliance reports in the Systems Manager console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Compliance**.

1. In the **Compliance dashboard filtering** section, choose an option to filter compliance data. The **Compliance resources summary** section displays counts of compliance data based on the filter you chose.

1. To drill down into a resource for more information, scroll down to the **Details overview for resources** area and choose the ID of a managed node.

1. On the **Instance ID** or **Name** details page, choose the **Configuration compliance** tab to view a detailed configuration compliance report for the managed node.

**Note**  
For information about fixing compliance issues, see [Remediating compliance issues using EventBridge](compliance-fixing.md).

### Viewing current compliance data (AWS CLI)


You can view summaries of compliance data for patching, associations, and custom compliance types by using the following AWS CLI commands.

[https://docs.aws.amazon.com/cli/latest/reference/ssm/list-compliance-summaries.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-compliance-summaries.html)  
Returns a summary count of compliant and non-compliant statuses according to the filter you specify. (API: [ListComplianceSummaries](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ListComplianceSummaries.html))

[https://docs.aws.amazon.com/cli/latest/reference/ssm/list-resource-compliance-summaries.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-resource-compliance-summaries.html)  
Returns a resource-level summary count. The summary includes information about compliant and non-compliant statuses and detailed compliance-item severity counts, according to the filter criteria you specify. (API: [ListResourceComplianceSummaries](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ListResourceComplianceSummaries.html))

You can view additional compliance data for patching by using the following AWS CLI commands.

[https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-group-state.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-group-state.html)  
Returns high-level aggregated patch compliance state for a patch group. (API: [DescribePatchGroupState](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchGroupState.html))

[https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-instance-patch-states-for-patch-group.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-instance-patch-states-for-patch-group.html)  
Returns the high-level patch state for the instances in the specified patch group. (API: [DescribeInstancePatchStatesForPatchGroup](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeInstancePatchStatesForPatchGroup.html))
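For example, the following sketch returns the aggregated patch compliance state for a patch group. The patch group name is a placeholder; replace it with your own value:

```
aws ssm describe-patch-group-state \
    --patch-group "patch_group_name"
```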

**Note**  
For an illustration of how to configure patching and view patch compliance details by using the AWS CLI, see [Tutorial: Patch a server environment using the AWS CLI](patch-manager-patch-servers-using-the-aws-cli.md).

## Viewing compliance configuration history and change tracking


Systems Manager Compliance displays *current* patching and association compliance data for your managed nodes. You can view patching and association compliance history and change tracking by using [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/). AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time. To view patching and association compliance history and change tracking, you must turn on the following resources in AWS Config: 
+ `SSM:PatchCompliance`
+ `SSM:AssociationCompliance`

For information about how to choose and configure these specific resources in AWS Config, see [Selecting Which Resources AWS Config Records](https://docs.aws.amazon.com/config/latest/developerguide/select-resources.html) in the *AWS Config Developer Guide*.
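If you configure the recorder by using the AWS CLI rather than the console, the recording group you pass to `aws configservice put-configuration-recorder` must include the corresponding AWS Config resource types. The following fragment is a sketch of only the relevant portion of a recorder configuration; a complete configuration includes additional fields:

```
"recordingGroup": {
    "allSupported": false,
    "resourceTypes": [
        "AWS::SSM::PatchCompliance",
        "AWS::SSM::AssociationCompliance"
    ]
}
```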

**Note**  
For information about AWS Config pricing, see [Pricing](https://aws.amazon.com/config/pricing/).

# Deleting a resource data sync for Compliance


If you no longer want to use AWS Systems Manager Compliance to view compliance data, we recommend also deleting any resource data syncs used for Compliance data collection.

**To delete a Compliance resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Account management**, **Resource data syncs**.

1. Choose a sync in the list. 
**Important**  
Make sure you choose the sync used for Compliance. Systems Manager supports resource data sync for multiple tools. If you choose the wrong sync, you could disrupt data aggregation for Systems Manager Explorer or Systems Manager Inventory.

1. Choose **Delete**.

1. Delete the Amazon Simple Storage Service (Amazon S3) bucket where the data was stored. For information about deleting an S3 bucket, see [Deleting a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html).

# Remediating compliance issues using EventBridge


You can quickly remediate patch and association compliance issues by using Run Command, a tool in AWS Systems Manager. You can target instance or AWS IoT Greengrass core device IDs or tags and run the `AWS-RunPatchBaseline` document or the `AWS-RefreshAssociation` document. If refreshing the association or re-running the patch baseline fails to resolve the compliance issue, then you need to investigate your associations, patch baselines, or instance configurations to understand why the Run Command operations didn't resolve the problem. 

For more information about patching, see [AWS Systems Manager Patch Manager](patch-manager.md) and [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).

For more information about associations, see [Working with associations in Systems Manager](state-manager-associations.md).

For more information about running a command, see [AWS Systems Manager Run Command](run-command.md).

**Specify Compliance as the target of an EventBridge event**  
You can also configure Amazon EventBridge to perform an action in response to Systems Manager Compliance events. For example, if one or more managed nodes fail to install Critical patch updates or run an association that installs anti-virus software, then you can configure EventBridge to run the `AWS-RunPatchBaseline` document or the `AWS-RefreshAssociation` document when the Compliance event occurs. 

Use the following procedure to configure Compliance as the target of an EventBridge event.

**To configure Compliance as the target of an EventBridge event (console)**

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**.

1. Choose **Create rule**.

1. Enter a name and description for the rule.

   A rule can't have the same name as another rule in the same AWS Region and on the same event bus.

1. For **Event bus**, choose the event bus that you want to associate with this rule. If you want this rule to respond to matching events that come from your own AWS account, select **default**. When an AWS service in your account emits an event, it always goes to your account’s default event bus.

1. For **Rule type**, choose **Rule with an event pattern**.

1. Choose **Next**.

1. For **Event source**, choose **AWS events or EventBridge partner events**.

1. In the **Event pattern** section, choose **Event pattern form**.

1. For **Event source**, choose **AWS services**.

1. For **AWS service**, choose **Systems Manager**.

1. For **Event type**, choose **Configuration Compliance**.

1. For **Specific detail type(s)**, choose **Configuration Compliance State Change**.

1. Choose **Next**.

1. For **Target types**, choose **AWS service**.

1. For **Select a target**, choose **Systems Manager Run Command**.

1. In the **Document** list, choose a Systems Manager document (SSM document) to run when your target is invoked. For example, choose `AWS-RunPatchBaseline` for a non-compliant patch event, or choose `AWS-RefreshAssociation` for a non-compliant association event.

1. Specify information for the remaining fields and parameters.
**Note**  
Required fields and parameters have an asterisk (\*) next to the name. To create a target, you must specify a value for each required parameter or field. If you don't, the system creates the rule, but the rule won't be run.

1. Choose **Next**.

1. (Optional) Enter one or more tags for the rule. For more information, see [Tagging Your Amazon EventBridge Resources](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-tagging.html) in the *Amazon EventBridge User Guide*.

1. Choose **Next**.

1. Review the details of the rule and choose **Create rule**.
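The **Systems Manager**, **Configuration Compliance** selections in this procedure correspond to an event pattern similar to the following sketch:

```
{
    "source": ["aws.ssm"],
    "detail-type": ["Configuration Compliance State Change"]
}
```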

# Assign custom compliance metadata using the AWS CLI


The following procedure walks you through the process of using the AWS Command Line Interface (AWS CLI) to call the AWS Systems Manager [PutComplianceItems](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutComplianceItems.html) API operation to assign custom compliance metadata to a resource. You can also use this API operation to manually assign patch or association compliance metadata to managed nodes, as shown in the following walkthrough. For more information about custom compliance, see [About custom compliance](compliance-about.md#compliance-custom).

**To assign custom compliance metadata to a managed instance (AWS CLI)**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to assign custom compliance metadata to a managed node. Replace each *example resource placeholder* with your own information. The `ResourceType` parameter only supports a value of `ManagedInstance`. Specify this value even if you are assigning custom compliance metadata to a managed AWS IoT Greengrass core device.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-compliance-items \
       --resource-id instance_ID \
       --resource-type ManagedInstance \
       --compliance-type Custom:user-defined_string \
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value \
       --items Id=user-defined_ID,Title=user-defined_title,Severity=one_or_more_comma-separated_severities:CRITICAL, MAJOR, MINOR,INFORMATIONAL, or UNSPECIFIED,Status=COMPLIANT or NON_COMPLIANT
   ```

------
#### [ Windows ]

   ```
   aws ssm put-compliance-items ^
       --resource-id instance_ID ^
       --resource-type ManagedInstance ^
       --compliance-type Custom:user-defined_string ^
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value ^
       --items Id=user-defined_ID,Title=user-defined_title,Severity=one_or_more_comma-separated_severities:CRITICAL, MAJOR, MINOR,INFORMATIONAL, or UNSPECIFIED,Status=COMPLIANT or NON_COMPLIANT
   ```

------

1. Repeat the previous step to assign additional custom compliance metadata to one or more nodes. You can also manually assign patch or association compliance metadata to managed nodes by using the following commands:

   Association compliance metadata

------
#### [ Linux & macOS ]

   ```
   aws ssm put-compliance-items \
       --resource-id instance_ID \
       --resource-type ManagedInstance \
       --compliance-type Association \
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value \
       --items Id=user-defined_ID,Title=user-defined_title,Severity=one_or_more_comma-separated_severities:CRITICAL, MAJOR, MINOR,INFORMATIONAL, or UNSPECIFIED,Status=COMPLIANT or NON_COMPLIANT
   ```

------
#### [ Windows ]

   ```
   aws ssm put-compliance-items ^
       --resource-id instance_ID ^
       --resource-type ManagedInstance ^
       --compliance-type Association ^
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value ^
       --items Id=user-defined_ID,Title=user-defined_title,Severity=one_or_more_comma-separated_severities:CRITICAL, MAJOR, MINOR,INFORMATIONAL, or UNSPECIFIED,Status=COMPLIANT or NON_COMPLIANT
   ```

------

   Patch compliance metadata

------
#### [ Linux & macOS ]

   ```
   aws ssm put-compliance-items \
       --resource-id instance_ID \
       --resource-type ManagedInstance \
       --compliance-type Patch \
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value,ExecutionId=user-defined_ID,ExecutionType=Command  \
       --items Id=for_example, KB12345,Title=user-defined_title,Severity=one_or_more_comma-separated_severities:CRITICAL, MAJOR, MINOR,INFORMATIONAL, or UNSPECIFIED,Status=COMPLIANT or NON_COMPLIANT,Details="{PatchGroup=name_of_group,PatchSeverity=the_patch_severity, for example, CRITICAL}"
   ```

------
#### [ Windows ]

   ```
   aws ssm put-compliance-items ^
       --resource-id instance_ID ^
       --resource-type ManagedInstance ^
       --compliance-type Patch ^
       --execution-summary ExecutionTime=user-defined_time_and/or_date_value,ExecutionId=user-defined_ID,ExecutionType=Command  ^
       --items Id=for_example, KB12345,Title=user-defined_title,Severity=one_or_more_comma-separated_severities:CRITICAL, MAJOR, MINOR,INFORMATIONAL, or UNSPECIFIED,Status=COMPLIANT or NON_COMPLIANT,Details="{PatchGroup=name_of_group,PatchSeverity=the_patch_severity, for example, CRITICAL}"
   ```

------

1. Run the following command to view a list of compliance items for a specific managed node. Use filters to drill down into specific compliance data.

------
#### [ Linux & macOS ]

   ```
   aws ssm list-compliance-items \
       --resource-ids instance_ID \
       --resource-types ManagedInstance \
       --filters one_or_more_filters
   ```

------
#### [ Windows ]

   ```
   aws ssm list-compliance-items ^
       --resource-ids instance_ID ^
       --resource-types ManagedInstance ^
       --filters one_or_more_filters
   ```

------

   The following examples show you how to use this command with filters.

------
#### [ Linux & macOS ]

   ```
   aws ssm list-compliance-items \
       --resource-ids i-02573cafcfEXAMPLE \
       --resource-types ManagedInstance \
       --filters Key=DocumentName,Values=AWS-RunPowerShellScript Key=Status,Values=NON_COMPLIANT,Type=NotEqual Key=Id,Values=cee20ae7-6388-488e-8be1-a88ccEXAMPLE Key=Severity,Values=UNSPECIFIED
   ```

------
#### [ Windows ]

   ```
   aws ssm list-compliance-items ^
       --resource-ids i-02573cafcfEXAMPLE ^
       --resource-types ManagedInstance ^
       --filters Key=DocumentName,Values=AWS-RunPowerShellScript Key=Status,Values=NON_COMPLIANT,Type=NotEqual Key=Id,Values=cee20ae7-6388-488e-8be1-a88ccEXAMPLE Key=Severity,Values=UNSPECIFIED
   ```

------

------
#### [ Linux & macOS ]

   ```
   aws ssm list-resource-compliance-summaries \
       --filters Key=OverallSeverity,Values=UNSPECIFIED
   ```

------
#### [ Windows ]

   ```
   aws ssm list-resource-compliance-summaries ^
       --filters Key=OverallSeverity,Values=UNSPECIFIED
   ```

------

------
#### [ Linux & macOS ]

   ```
   aws ssm list-resource-compliance-summaries \
       --filters Key=OverallSeverity,Values=UNSPECIFIED Key=ComplianceType,Values=Association Key=InstanceId,Values=i-02573cafcfEXAMPLE
   ```

------
#### [ Windows ]

   ```
   aws ssm list-resource-compliance-summaries ^
       --filters Key=OverallSeverity,Values=UNSPECIFIED Key=ComplianceType,Values=Association Key=InstanceId,Values=i-02573cafcfEXAMPLE
   ```

------

1. Run the following command to view a summary of compliance statuses. Use filters to drill down into specific compliance data.

   ```
   aws ssm list-resource-compliance-summaries --filters one_or_more_filters
   ```

   The following examples show you how to use this command with filters.

------
#### [ Linux & macOS ]

   ```
   aws ssm list-resource-compliance-summaries \
       --filters Key=ExecutionType,Values=Command
   ```

------
#### [ Windows ]

   ```
   aws ssm list-resource-compliance-summaries ^
       --filters Key=ExecutionType,Values=Command
   ```

------

------
#### [ Linux & macOS ]

   ```
   aws ssm list-resource-compliance-summaries \
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=OverallSeverity,Values=CRITICAL
   ```

------
#### [ Windows ]

   ```
   aws ssm list-resource-compliance-summaries ^
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=OverallSeverity,Values=CRITICAL
   ```

------

1. Run the following command to view a summary count of compliant and non-compliant resources for a compliance type. Use filters to drill down into specific compliance data.

   ```
   aws ssm list-compliance-summaries --filters one_or_more_filters
   ```

   The following examples show you how to use this command with filters.

------
#### [ Linux & macOS ]

   ```
   aws ssm list-compliance-summaries \
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=PatchGroup,Values=TestGroup
   ```

------
#### [ Windows ]

   ```
   aws ssm list-compliance-summaries ^
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=PatchGroup,Values=TestGroup
   ```

------

------
#### [ Linux & macOS ]

   ```
   aws ssm list-compliance-summaries \
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=ExecutionId,Values=4adf0526-6aed-4694-97a5-14522EXAMPLE
   ```

------
#### [ Windows ]

   ```
   aws ssm list-compliance-summaries ^
       --filters Key=AWS:InstanceInformation.PlatformType,Values=Windows Key=ExecutionId,Values=4adf0526-6aed-4694-97a5-14522EXAMPLE
   ```

------

# AWS Systems Manager Distributor

Distributor, a tool in AWS Systems Manager, helps you package and publish software to AWS Systems Manager managed nodes. You can package and publish your own software, or use Distributor to find and publish AWS-provided agent software packages, such as **AmazonCloudWatchAgent**, or third-party packages such as **Trend Micro**. Publishing a package advertises specific versions of the package's document to managed nodes that you identify using node IDs, AWS account IDs, tags, or an AWS Region. To get started with Distributor, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/distributor). In the navigation pane, choose **Distributor**.

After you create a package in Distributor, you can install the package in one of the following ways:
+ One time by using [AWS Systems Manager Run Command](run-command.md)
+ On a schedule by using [AWS Systems Manager State Manager](systems-manager-state.md)
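For example, a one-time installation with Run Command uses the `AWS-ConfigureAWSPackage` document. The following AWS CLI sketch installs the AWS-published **AmazonCloudWatchAgent** package; the instance ID is a placeholder:

```
aws ssm send-command \
    --document-name "AWS-ConfigureAWSPackage" \
    --instance-ids instance_ID \
    --parameters '{"action":["Install"],"name":["AmazonCloudWatchAgent"]}'
```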

**Important**  
Packages distributed by third-party sellers are not managed by AWS and are published by the vendor of the package. We encourage you to conduct additional due diligence to ensure compliance with your internal security controls. Security is a shared responsibility between AWS and you. This is described as the shared responsibility model. To learn more, see the [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/).

## How can Distributor benefit my organization?


Distributor offers these benefits:
+  **One package, many platforms** 

  When you create a package in Distributor, the system creates an AWS Systems Manager document (SSM document). You can attach .zip files to this document. When you run Distributor, the system processes the instructions in the SSM document and installs the software package in the .zip file on the specified targets. Distributor supports multiple operating systems, including Windows, Ubuntu Server, Debian Server, and Red Hat Enterprise Linux. For more information about supported platforms, see [Supported package platforms and architectures](#what-is-a-package-platforms).
+  **Control package access across groups of managed instances** 

  You can use Run Command or State Manager to control which of your managed nodes get a package and which version of that package. Run Command and State Manager are tools in AWS Systems Manager. Managed nodes can be grouped by instance or device IDs, AWS account numbers, tags, or AWS Regions. You can use State Manager associations to deliver different versions of a package to different groups of instances.
+  **Many AWS agent packages included and ready to use** 

  Distributor includes many AWS agent packages that are ready for you to deploy to managed nodes. Look for packages in the Distributor `Packages` list page that are published by `Amazon`. Examples include `AmazonCloudWatchAgent` and `AWSPVDriver`.
+  **Automate deployment** 

  To keep your environment current, use State Manager to schedule packages for automatic deployment on target managed nodes when those machines are first launched.

## Who should use Distributor?

+ Any AWS customer who wants to create new or deploy existing software packages, including AWS published packages, to multiple Systems Manager managed nodes at one time.
+ Software developers who create software packages.
+ Administrators who are responsible for keeping Systems Manager managed nodes current with the most up-to-date software packages.

## What are the features of Distributor?

+  **Deployment of packages to both Windows and Linux instances** 

  With Distributor, you can deploy software packages to Amazon Elastic Compute Cloud (Amazon EC2) instances and AWS IoT Greengrass core devices for Linux and Windows Server. For a list of supported instance operating system types, see [Supported package platforms and architectures](#what-is-a-package-platforms).
**Note**  
Distributor isn't supported on the macOS operating system.
+  **Deploy packages one time, or on an automated schedule** 

  You can choose to deploy packages one time, on a regular schedule, or whenever the default package version is changed to a different version. 
+  **Completely reinstall packages, or perform in-place updates** 

  To install a new package version, you can completely uninstall the current version and install a new one in its place, or only update the current version with new and updated components, according to an *update script* that you provide. Your package application is unavailable during a reinstallation, but can remain available during an in-place update. In-place updates are especially useful for security monitoring applications or other scenarios where you need to avoid application downtime.
+  **Console, CLI, PowerShell, and SDK access to Distributor capabilities** 

  You can work with Distributor by using the Systems Manager console, AWS Command Line Interface (AWS CLI), AWS Tools for PowerShell, or the AWS SDK of your choice.
+  **IAM access control** 

  By using AWS Identity and Access Management (IAM) policies, you can control which members of your organization can create, update, deploy, or delete packages or package versions. For example, you might want to give an administrator permissions to deploy packages, but not to change packages or create new package versions.
+  **Logging and auditing capability support** 

  You can audit and log Distributor user actions in your AWS account through integration with other AWS services. For more information, see [Auditing and logging Distributor activity](distributor-logging-auditing.md).

## What is a package in Distributor?


A *package* is a collection of installable software or assets that includes the following.
+ A .zip file of software per target operating system platform. Each .zip file must include the following.
  + An **install** and an **uninstall** script. Windows Server-based managed nodes require PowerShell scripts (scripts named `install.ps1` and `uninstall.ps1`). Linux-based managed nodes require shell scripts (scripts named `install.sh` and `uninstall.sh`). AWS Systems Manager SSM Agent reads and carries out the instructions in the **install** and **uninstall** scripts.
  + An executable file. SSM Agent must find this executable to install the package on target managed nodes.
+ A JSON-formatted manifest file that describes the package contents. The manifest isn't included in the .zip file, but it's stored in the same Amazon Simple Storage Service (Amazon S3) bucket as the .zip files that form the package. The manifest identifies the package version and maps the .zip files in the package to target managed node attributes, such as operating system version or architecture. For information about how to create the manifest, see [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).

When you choose **Simple** package creation in the Distributor console, Distributor generates the installation and uninstallation scripts, file hashes, and the JSON package manifest for you, based on the software executable file name and target platforms and architectures.
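As a sketch, a minimal manifest for a package that supports one architecture on two platforms might look like the following. The file names and hash values are placeholders; see [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest) for the authoritative schema:

```
{
    "schemaVersion": "2.0",
    "version": "1.0.0",
    "packages": {
        "amazon": {
            "_any": {
                "x86_64": {
                    "file": "ExamplePackage-linux-x86_64.zip"
                }
            }
        },
        "windows": {
            "_any": {
                "x86_64": {
                    "file": "ExamplePackage-windows-x86_64.zip"
                }
            }
        }
    },
    "files": {
        "ExamplePackage-linux-x86_64.zip": {
            "checksums": {
                "sha256": "EXAMPLE_sha256_hash_of_zip_file"
            }
        },
        "ExamplePackage-windows-x86_64.zip": {
            "checksums": {
                "sha256": "EXAMPLE_sha256_hash_of_zip_file"
            }
        }
    }
}
```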

### Supported package platforms and architectures


You can use Distributor to publish packages to the following Systems Manager managed node platforms. A version value must match the exact release version of the operating system Amazon Machine Image (AMI) that you're targeting. For more information about determining this version, see step 4 of [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).

**Note**  
Systems Manager doesn't support all of the following operating systems for AWS IoT Greengrass core devices. For more information, see [Setting up AWS IoT Greengrass core devices](https://docs.aws.amazon.com/greengrass/v2/developerguide/setting-up.html) in the *AWS IoT Greengrass Version 2 Developer Guide*.


| Platform | Code value in manifest file | Supported architectures | 
| --- | --- | --- | 
|  AlmaLinux  |   `almalinux`   |  x86_64, ARM64  | 
|  Amazon Linux 2 and Amazon Linux 2023  |   `amazon`   |  x86_64 or x86; ARM64 (Amazon Linux 2 and AL2023, A1 instance types)  | 
|  Debian Server  |   `debian`   |  x86_64 or x86  | 
|  openSUSE  |   `opensuse`   |  x86_64  | 
|  openSUSE Leap  |   `opensuseleap`   |  x86_64  | 
|  Oracle Linux  |   `oracle`   |  x86_64  | 
|  Red Hat Enterprise Linux (RHEL)  |   `redhat`   |  x86_64; ARM64 (RHEL 7.6 and later, A1 instance types)  | 
|  Rocky Linux  |   `rocky`   |  x86_64, ARM64  | 
|  SUSE Linux Enterprise Server (SLES)  |   `suse`   |  x86_64  | 
|  Ubuntu Server  |   `ubuntu`   |  x86_64 or x86; ARM64 (Ubuntu Server 16 and later, A1 instance types)  | 
|  Windows Server  |   `windows`   |  x86_64  | 

**Topics**
+ [How can Distributor benefit my organization?](#distributor-benefits)
+ [Who should use Distributor?](#distributor-who)
+ [What are the features of Distributor?](#distributor-features)
+ [What is a package in Distributor?](#what-is-a-package)
+ [Setting up Distributor](distributor-getting-started.md)
+ [Working with Distributor packages](distributor-working-with.md)
+ [Auditing and logging Distributor activity](distributor-logging-auditing.md)
+ [Troubleshooting AWS Systems Manager Distributor](distributor-troubleshooting.md)

# Setting up Distributor


Before you use Distributor, a tool in AWS Systems Manager, to create, manage, and deploy software packages, follow these steps.

## Complete Distributor prerequisites


Before you use Distributor, a tool in AWS Systems Manager, be sure your environment meets the following requirements.


**Distributor prerequisites**  

| Requirement | Description | 
| --- | --- | 
|  SSM Agent  |  AWS Systems Manager SSM Agent version 2.3.274.0 or later must be installed on the managed nodes on which you want to deploy or from which you want to remove packages. To install or update SSM Agent, see [Working with SSM Agent](ssm-agent.md).  | 
|  AWS CLI  |  (Optional) To use the AWS Command Line Interface (AWS CLI) instead of the Systems Manager console to create and manage packages, install the newest release of the AWS CLI on your local computer. For more information about how to install or upgrade the CLI, see [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) in the *AWS Command Line Interface User Guide*.  | 
|  AWS Tools for PowerShell  |  (Optional) To use the Tools for PowerShell instead of the Systems Manager console to create and manage packages, install the newest release of Tools for PowerShell on your local computer. For more information about how to install or upgrade the Tools for PowerShell, see [Setting up the AWS Tools for Windows PowerShell or AWS Tools for PowerShell Core](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html) in the *AWS Tools for PowerShell User Guide*.  | 

**Note**  
Systems Manager doesn't support distributing packages to Oracle Linux managed nodes by using Distributor.

## Verify or create an IAM instance profile with Distributor permissions


By default, AWS Systems Manager doesn't have permission to perform actions on your instances. You must grant access by using an AWS Identity and Access Management (IAM) instance profile. An instance profile is a container that passes IAM role information to an Amazon Elastic Compute Cloud (Amazon EC2) instance at launch. This requirement applies to permissions for all Systems Manager tools, not just Distributor.

**Note**  
When you configure your edge devices to run AWS IoT Greengrass Core software and SSM Agent, you specify an IAM service role that enables Systems Manager to perform actions on them. You don't need to configure managed edge devices with an instance profile. 

If you already use other Systems Manager tools, such as Run Command and State Manager, an instance profile with the required permissions for Distributor is already attached to your instances. The simplest way to ensure that you have permissions to perform Distributor tasks is to attach the **AmazonSSMManagedInstanceCore** policy to your instance profile. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).

## Control user access to packages


Using AWS Identity and Access Management (IAM) policies, you can control who can create, deploy, and manage packages. You also control which Run Command and State Manager API operations they can perform on managed nodes. Like Distributor, Run Command and State Manager are tools in AWS Systems Manager.

**ARN Format**  
User-defined packages are associated with document Amazon Resource Names (ARNs) and have the following format.

```
arn:aws:ssm:region:account-id:document/document-name
```

The following is an example.

```
arn:aws:ssm:us-west-1:123456789012:document/ExampleDocumentName
```

You can use a pair of AWS supplied default IAM policies, one for end users and one for administrators, to grant permissions for Distributor activities. Or you can create custom IAM policies appropriate for your permissions requirements.

For more information about using variables in IAM policies, see [IAM Policy Elements: Variables](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html). 

For information about how to create policies and attach them to users or groups, see [Creating IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) and [Adding and Removing IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.
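For illustration, a custom policy that limits a user to deploying one specific package to instances might look like the following sketch. The Region, account ID, and document name are placeholders taken from the ARN example above, not values from a real account, and you should scope the statement to your own requirements.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:SendCommand",
      "Resource": [
        "arn:aws:ssm:us-west-1:123456789012:document/ExampleDocumentName",
        "arn:aws:ec2:us-west-1:123456789012:instance/*"
      ]
    }
  ]
}
```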

## Create or choose an Amazon S3 bucket to store Distributor packages


When you create a package by using the **Simple** workflow in the AWS Systems Manager console, you choose an existing Amazon Simple Storage Service (Amazon S3) bucket to which Distributor uploads your software. Distributor is a tool in AWS Systems Manager. In the **Advanced** workflow, you must upload .zip files of your software or assets to an Amazon S3 bucket before you begin. Whether you create a package by using the **Simple** or **Advanced** workflows in the console, or by using the API, you must have an Amazon S3 bucket before you start creating your package. As part of the package creation process, Distributor copies your installable software and assets from this bucket to an internal Systems Manager store. Because the assets are copied to an internal store, you can delete or repurpose your Amazon S3 bucket when package creation is finished.

For more information about how to create a bucket, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service Getting Started Guide*. For more information about how to run an AWS CLI command to create a bucket, see [https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) in the *AWS CLI Command Reference*.
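As a quick sketch of the CLI approach, the following command creates a bucket to hold your package .zip files and manifest. The bucket name is illustrative; bucket names must be globally unique, and the command requires configured AWS credentials.

```shell
# Create an S3 bucket to stage Distributor package assets
# (bucket name and Region are placeholders).
aws s3 mb s3://amzn-s3-demo-distributor-packages --region us-west-2
```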

# Working with Distributor packages


You can use the AWS Systems Manager console, AWS command line tools (AWS CLI and AWS Tools for PowerShell), and AWS SDKs to add, manage, or deploy packages in Distributor. Distributor is a tool in AWS Systems Manager. Before you add a package to Distributor:
+ Create and zip installable assets.
+ (Optional) Create a JSON manifest file for the package. This isn't required to use the **Simple** package creation process in the Distributor console. Simple package creation generates a JSON manifest file for you.

  You can use the AWS Systems Manager console or a text or JSON editor to create the manifest file.
+ Have an Amazon Simple Storage Service (Amazon S3) bucket ready to store your installable assets or software. If you're using the **Advanced** package creation process, upload your assets to the Amazon S3 bucket before you begin.
**Note**  
You can delete or repurpose this bucket after you finish creating your package because Distributor moves the package contents to an internal Systems Manager bucket as part of the package creation process.

AWS published packages are already packaged and ready for deployment. To deploy an AWS-published package to managed nodes, see [Install or update Distributor packages](distributor-working-with-packages-deploy.md).

You can share Distributor packages between AWS accounts. When you use a package shared from another account in AWS CLI commands, specify the package Amazon Resource Name (ARN) instead of the package name.
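For example, to install a package shared from another account, you might reference its ARN in the `name` parameter of the `AWS-ConfigureAWSPackage` document, as in the following sketch. The instance ID, Region, account ID, and package name are illustrative placeholders.

```shell
# Install a shared package by ARN rather than by name
# (all identifiers shown are placeholders).
aws ssm send-command \
    --document-name "AWS-ConfigureAWSPackage" \
    --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
    --parameters '{"action":["Install"],"name":["arn:aws:ssm:us-west-1:123456789012:document/ExamplePackage"]}'
```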

**Topics**
+ [View packages in Distributor](distributor-view-packages.md)
+ [Create a package in Distributor](distributor-working-with-packages-create.md)
+ [Edit Distributor package permissions in the console](distributor-working-with-packages-ep.md)
+ [Edit Distributor package tags in the console](distributor-working-with-packages-tags.md)
+ [Add a version to a Distributor package](distributor-working-with-packages-version.md)
+ [Install or update Distributor packages](distributor-working-with-packages-deploy.md)
+ [Uninstall a Distributor package](distributor-working-with-packages-uninstall.md)
+ [Delete a Distributor package](distributor-working-with-packages-dpkg.md)

# View packages in Distributor

To view packages that are available for installation, you can use the AWS Systems Manager console or your preferred AWS command line tool. Distributor is a tool in AWS Systems Manager. To access Distributor, open the AWS Systems Manager console and choose **Distributor** in the left navigation pane. You will see all of the packages available to you.

The following section describes how you can view Distributor packages using your preferred command line tool.

## View packages using the command line


This section contains information about how you can use your preferred command line tool to view Distributor packages using the provided commands.

------
#### [ Linux & macOS ]

**To view packages using the AWS CLI on Linux**
+ To view all packages, excluding shared packages, run the following command.

  ```
  aws ssm list-documents \
      --filters Key=DocumentType,Values=Package
  ```
+ To view all packages owned by Amazon, run the following command.

  ```
  aws ssm list-documents \
      --filters Key=DocumentType,Values=Package Key=Owner,Values=Amazon
  ```
+ To view all packages owned by third parties, run the following command.

  ```
  aws ssm list-documents \
      --filters Key=DocumentType,Values=Package Key=Owner,Values=ThirdParty
  ```

------
#### [ Windows ]

**To view packages using the AWS CLI on Windows**
+ To view all packages, excluding shared packages, run the following command.

  ```
  aws ssm list-documents ^
      --filters Key=DocumentType,Values=Package
  ```
+ To view all packages owned by Amazon, run the following command.

  ```
  aws ssm list-documents ^
      --filters Key=DocumentType,Values=Package Key=Owner,Values=Amazon
  ```
+ To view all packages owned by third parties, run the following command.

  ```
  aws ssm list-documents ^
      --filters Key=DocumentType,Values=Package Key=Owner,Values=ThirdParty
  ```

------
#### [ PowerShell ]

**To view packages using the Tools for PowerShell**
+ To view all packages, excluding shared packages, run the following command.

  ```
  $filter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
  $filter.Key = "DocumentType"
  $filter.Values = "Package"
  
  Get-SSMDocumentList `
      -Filters @($filter)
  ```
+ To view all packages owned by Amazon, run the following command.

  ```
  $typeFilter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
  $typeFilter.Key = "DocumentType"
  $typeFilter.Values = "Package"
  
  $ownerFilter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
  $ownerFilter.Key = "Owner"
  $ownerFilter.Values = "Amazon"
  
  Get-SSMDocumentList `
      -Filters @($typeFilter,$ownerFilter)
  ```
+ To view all packages owned by third parties, run the following command.

  ```
  $typeFilter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
  $typeFilter.Key = "DocumentType"
  $typeFilter.Values = "Package"
  
  $ownerFilter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
  $ownerFilter.Key = "Owner"
  $ownerFilter.Values = "ThirdParty"
  
  Get-SSMDocumentList `
      -Filters @($typeFilter,$ownerFilter)
  ```

------

# Create a package in Distributor

To create a package, prepare your installable software or assets, one file per operating system platform. At least one file is required to create a package.

Different platforms might sometimes use the same file, but all files that you attach to your package must be listed in the `Files` section of the manifest. If you're creating a package by using the simple workflow in the console, the manifest is generated for you. The maximum number of files that you can attach to a single document is 20. The maximum size of each file is 1 GB. For more information about supported platforms, see [Supported package platforms and architectures](distributor.md#what-is-a-package-platforms).

When you create a package, the system creates a new [SSM document](documents.md). This document allows you to deploy the package to managed nodes.

For demonstration purposes only, an example package, [ExamplePackage.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/ExamplePackage.zip), is available for you to download from our website. The example package includes a completed JSON manifest and three .zip files containing installers for PowerShell v7.0.0. The installation and uninstallation scripts don't contain valid commands. Although you must zip each software installable and scripts into a .zip file to create a package in the **Advanced** workflow, you don't zip installable assets in the **Simple** workflow.

**Topics**
+ [Create a package using the Simple workflow](#distributor-working-with-packages-create-simple)
+ [Create a package using the Advanced workflow](#distributor-working-with-packages-create-adv)

## Create a package using the Simple workflow


This section describes how to create a package in Distributor by choosing the **Simple** package creation workflow in the Distributor console. Distributor is a tool in AWS Systems Manager. To create a package, prepare your installable assets, one file per operating system platform. At least one file is required to create a package. The **Simple** package creation process generates installation and uninstallation scripts, file hashes, and a JSON-formatted manifest for you. The **Simple** workflow handles the process of uploading and zipping your installable files, and creating a new package and associated [SSM document](documents.md). For more information about supported platforms, see [Supported package platforms and architectures](distributor.md#what-is-a-package-platforms).

When you use the Simple method to create a package, Distributor creates `install` and `uninstall` scripts for you. However, when you create a package for an in-place update, you must provide your own `update` script content on the **Update script** tab. When you add input commands for an `update` script, Distributor includes this script in the .zip package it creates for you, along with the `install` and `uninstall` scripts.

**Note**  
Use the `In-place` update option to add new or updated files to an existing package installation without taking the associated application offline.

**To create a package using the Simple workflow**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose **Create package**, and then choose **Simple**.

1. On the **Create package** page, enter a name for your package. Package names can contain letters, numbers, periods, dashes, and underscores. The name should be generic enough to apply to all versions of the package attachments, but specific enough to identify the purpose of the package.

1. (Optional) For **Version name**, enter a version name. Version names can be a maximum of 512 characters, and can't contain special characters.

1. For **Location**, choose a bucket by using the bucket name and prefix or by using the bucket URL.

1. For **Upload software**, choose **Add software**, and then navigate to installable software files with `.rpm`, `.msi`, or `.deb` extensions. If the file name contains spaces, the upload fails. You can upload more than one software file in a single action.

1. For **Target platform**, verify that the target operating system platform shown for each installable file is correct. If the operating system shown isn't correct, choose the correct operating system from the dropdown list.

   For the **Simple** package creation workflow, because you upload each installable file only once, extra steps are required to instruct Distributor to target a single file at multiple operating systems. For example, if you upload an installable software file named `Logtool_v1.1.1.rpm`, you must change some defaults in the **Simple** workflow to target the same software on supported versions of both Amazon Linux and Ubuntu Server operating systems. When targeting multiple platforms, do one of the following.
   + Use the **Advanced** workflow instead, zip each installable file into a .zip file before you begin, and manually author the manifest so that one installable file can be targeted at multiple operating system platforms or versions. For more information, see [Create a package using the Advanced workflow](#distributor-working-with-packages-create-adv).
   + Manually edit the manifest file in the **Simple** workflow so that your .zip file is targeted at multiple operating system platforms or versions. For more information about how to do this, see the end of step 4 in [Step 2: Create the JSON package manifest](#packages-manifest).

1. For **Platform version**, verify that the operating system platform version shown is either **_any**, a major release version followed by a wildcard (`7.*`), or the exact operating system release version to which you want your software to apply. For more information about specifying an operating system platform version, see step 4 in [Step 2: Create the JSON package manifest](#packages-manifest).

1. For **Architecture**, choose the correct processor architecture for each installable file from the dropdown list. For more information about supported processor architectures, see [Supported package platforms and architectures](distributor.md#what-is-a-package-platforms).

1. (Optional) Expand **Scripts**, and review the scripts that Distributor generates for your installable software.

1. (Optional) To provide an update script for use with in-place updates, expand **Scripts**, choose the **Update script** tab, and enter your update script commands.

   Systems Manager doesn't generate update scripts on your behalf.

1. To add more installable software files, choose **Add software**. Otherwise, go to the next step.

1. (Optional) Expand **Manifest**, and review the JSON package manifest that Distributor generates for your installable software. If you changed any information about your software since you began this procedure, such as platform version or target platform, choose **Generate manifest** to show the updated package manifest.

   You can edit the manifest manually if you want to target a software installable at more than one operating system, as described in step 8. For more information about editing the manifest, see [Step 2: Create the JSON package manifest](#packages-manifest).

1. Choose **Create package**.

Wait for Distributor to finish uploading your software and creating your package. Distributor shows upload status for each installable file. Depending on the number and size of packages you're adding, this can take a few minutes. Distributor automatically redirects you to the **Package details** page for the new package, but you can choose to open this page yourself after the software is uploaded. The **Package details** page doesn't show all information about your package until Distributor finishes the package creation process. To stop the upload and package creation process, choose **Cancel**.

If Distributor can't upload any of the software installable files, it displays an **Upload failed** message. To retry the upload, choose **Retry upload**. For more information about how to troubleshoot package creation failures, see [Troubleshooting AWS Systems Manager Distributor](distributor-troubleshooting.md).

## Create a package using the Advanced workflow


In this section, learn about how advanced users can create a package in Distributor after uploading installable assets zipped with installation and uninstallation scripts, and a JSON manifest file, to an Amazon S3 bucket.

To create a package, prepare your .zip files of installable assets, one .zip file per operating system platform. At least one .zip file is required to create a package. Next, create a JSON manifest. The manifest includes pointers to your package code files. When you have your required code files added to a folder or directory, and the manifest is populated with correct values, upload your package to an S3 bucket.

An example package, [ExamplePackage.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/ExamplePackage.zip), is available for you to download from our website. The example package includes a completed JSON manifest and three .zip files.

**Topics**
+ [Step 1: Create the ZIP files](#packages-zip)
+ [Step 2: Create the JSON package manifest](#packages-manifest)
+ [Step 3: Upload the package and manifest to an Amazon S3 bucket](#packages-upload-s3)
+ [Step 4: Add a package to Distributor](#distributor-working-with-packages-add)

### Step 1: Create the ZIP files


The foundation of your package is at least one .zip file of software or installable assets. A package includes one .zip file per operating system that you want to support, unless one .zip file can be installed on multiple operating systems. For example, supported versions of Red Hat Enterprise Linux and Amazon Linux instances can typically run the same .RPM executable files, so you need to attach only one .zip file to your package to support both operating systems.

**Required files**  
The following items are required in each .zip file:
+ An **install** and an **uninstall** script. Windows Server-based managed nodes require PowerShell scripts (scripts named `install.ps1` and `uninstall.ps1`). Linux-based managed nodes require shell scripts (scripts named `install.sh` and `uninstall.sh`). SSM Agent runs the instructions in the **install** and **uninstall** scripts.

  For example, your installation scripts might run an installer (such as .rpm or .msi), they might copy files, or they might set configurations.
+ An executable file, installer packages (.rpm, .deb, .msi, etc.), other scripts, or configuration files.

**Optional files**  
The following item is optional in each .zip file:
+ An **update** script. Providing an update script makes it possible for you to use the `In-place update` option to install a package. When you want to add new or updated files to an existing package installation, the `In-place update` option doesn't take the package application offline while the update is performed. Windows Server-based managed nodes require a PowerShell script (script named `update.ps1`). Linux-based managed nodes require a shell script (script named `update.sh`). SSM Agent runs the instructions in the **update** script.

For more information about installing or updating packages, see [Install or update Distributor packages](distributor-working-with-packages-deploy.md).

For examples of .zip files, including sample **install** and **uninstall** scripts, download the example package, [ExamplePackage.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/ExamplePackage.zip).
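The required layout can be sketched as follows. The package name, script bodies, and file names are placeholders, not a real installer; `python3 -m zipfile` is used here for portability, and any zip utility works equally well. Note that the scripts (and your installer file) must sit at the root of the .zip.

```shell
# Illustrative package layout for one platform; names are placeholders.
mkdir -p logtool-pkg
cat > logtool-pkg/install.sh <<'EOF'
#!/bin/bash
# Real install commands (for example, rpm -U logtool-1.1.1.rpm) go here.
echo "install"
EOF
cat > logtool-pkg/uninstall.sh <<'EOF'
#!/bin/bash
echo "uninstall"
EOF
# Zip the scripts (plus your installer) so they sit at the archive root.
(cd logtool-pkg && python3 -m zipfile -c ../logtool-amazon-x86_64.zip install.sh uninstall.sh)
```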

### Step 2: Create the JSON package manifest


After you prepare and zip your installable files, create a JSON manifest. The following is a template. The parts of the manifest template are described in the procedure in this section. You can use a JSON editor to create this manifest in a separate file. Alternatively, you can author the manifest in the AWS Systems Manager console when you create a package.

```
{
  "schemaVersion": "2.0",
  "version": "your-version",
  "publisher": "optional-publisher-name",
  "packages": {
    "platform": {
      "platform-version": {
        "architecture": {
          "file": ".zip-file-name-1.zip"
        }
      }
    },
    "another-platform": {
      "platform-version": {
        "architecture": {
          "file": ".zip-file-name-2.zip"
        }
      }
    },
    "another-platform": {
      "platform-version": {
        "architecture": {
          "file": ".zip-file-name-3.zip"
        }
      }
    }
  },
  "files": {
    ".zip-file-name-1.zip": {
      "checksums": {
        "sha256": "checksum"
      }
    },
    ".zip-file-name-2.zip": {
      "checksums": {
        "sha256": "checksum"
      }
    }
  }
}
```
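Each entry in the `files` section must carry the exact `sha256` hash of its .zip file, or installation fails. The following quick local check demonstrates the command with an empty stand-in file (the hash of empty input is well known); substitute your real package .zip.

```shell
# Compute the checksum to paste into the manifest's "files" section.
printf '' > example.zip
sha256sum example.zip | cut -d' ' -f1
# → e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 (empty input)
```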

**To create a JSON package manifest**

1. Add the schema version to your manifest. In this release, the schema version is always `2.0`.

   ```
   { "schemaVersion": "2.0",
   ```

1. Add a user-defined package version to your manifest. This is also the value of **Version name** that you specify when you add your package to Distributor. It becomes part of the AWS Systems Manager document that Distributor creates when you add your package. You also provide this value as an input in the `AWS-ConfigureAWSPackage` document to install a version of the package other than the latest. A `version` value can contain letters, numbers, underscores, hyphens, and periods, and be a maximum of 128 characters in length. We recommend that you use a human-readable package version to make it easier for you and other administrators to specify exact package versions when you deploy. The following is an example.

   ```
   "version": "1.0.1",
   ```

1. (Optional) Add a publisher name. The following is an example.

   ```
   "publisher": "MyOrganization",
   ```

1. Add packages. The `"packages"` section describes the platforms, release versions, and architectures supported by the .zip files in your package. For more information, see [Supported package platforms and architectures](distributor.md#what-is-a-package-platforms).

   The *platform-version* can be the wildcard value, `_any`. Use it to indicate that a .zip file supports any release of the platform. You can also specify a major release version followed by a wildcard so that all minor versions are supported, for example, `7.*`. If you choose to specify a *platform-version* value for a specific operating system version, be sure that it matches the exact release version of the operating system AMI that you're targeting. The following are suggested resources for getting the correct value of the operating system.
   + On a Windows Server-based managed node, the release version is available as Windows Management Instrumentation (WMI) data. You can run the following command from a command prompt to get version information, then parse the results for `version`.

     ```
     wmic OS get /format:list
     ```
   + On a Linux-based managed node, get the version by running the following command, and look for the value of `VERSION_ID`.

     ```
     cat /etc/os-release
     ```

     If that doesn't return the results that you need, run the following command to get LSB release information from the `/etc/lsb-release` file, and look for the value of `DISTRIB_RELEASE`.

     ```
     lsb_release -a
     ```

     If these methods fail, you can usually find the release based on the distribution. For example, on Debian Server, you can scan the `/etc/debian_version` file, or on Red Hat Enterprise Linux, the `/etc/redhat-release` file. You can also run the following command to display operating system details.

     ```
     hostnamectl
     ```

   ```
   "packages": {
       "platform": {
         "platform-version": {
           "architecture": {
             "file": ".zip-file-name-1.zip"
           }
         }
       },
       "another-platform": {
         "platform-version": {
           "architecture": {
             "file": ".zip-file-name-2.zip"
           }
         }
       },
       "another-platform": {
         "platform-version": {
           "architecture": {
             "file": ".zip-file-name-3.zip"
           }
         }
       }
     }
   ```

   The following is an example. In this example, the operating system platform is `amazon`, the supported release version is `2016.09`, the architecture is `x86_64`, and the .zip file that supports this platform is `test.zip`.

   ```
   {
       "amazon": {
           "2016.09": {
               "x86_64": {
                   "file": "test.zip"
               }
           }
       }
   },
   ```

   You can add the `_any` wildcard value to indicate that the package supports all versions of the parent element. For example, to indicate that the package is supported on any release version of Amazon Linux, your package statement should be similar to the following. You can use the `_any` wildcard at the version or architecture levels to support all versions of a platform, or all architectures in a version, or all versions and all architectures of a platform.

   ```
   {
       "amazon": {
           "_any": {
               "x86_64": {
                   "file": "test.zip"
               }
           }
       }
   },
   ```

   The following example adds `_any` to show that the first package, `data1.zip`, is supported for all architectures of Amazon Linux 2016.09. The second package, `data2.zip`, is supported for all releases of Amazon Linux, but only for managed nodes with `x86_64` architecture. Both the `2023.8` and `_any` versions are entries under `amazon`. There is one platform (Amazon Linux), but different supported versions, architectures, and associated .zip files.

   ```
   {
       "amazon": {
           "2023.8": {
               "_any": {
                   "file": "data1.zip"
               }
           },
           "_any": {
               "x86_64": {
                   "file": "data2.zip"
               }
           }
       }
   }
   ```

   You can refer to a .zip file more than once in the `"packages"` section of the manifest, if the .zip file supports more than one platform. For example, if you have a .zip file that supports both Red Hat Enterprise Linux 8.x versions and Amazon Linux, you have two entries in the `"packages"` section that point to the same .zip file, as shown in the following example.

   ```
   {
       "amazon": {
        "2023.8.20250715": {
               "x86_64": {
                   "file": "test.zip"
               }
           }
       },
       "redhat": {
           "8.*": {
               "x86_64": {
                   "file": "test.zip"
               }
           }
       }
   },
   ```

1. Add the list of .zip files that are part of this package from step 4. Each file entry requires the file name and its `sha256` hash value checksum. Checksum values in the manifest must match the `sha256` hash of the zipped assets, or the package installation fails.

   To get the exact checksum from your installables, you can run the following commands. On Linux, run `shasum -a 256 file-name.zip` or `openssl dgst -sha256 file-name.zip`. On Windows, run the `Get-FileHash -Path path-to-.zip-file` cmdlet in [PowerShell](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/get-filehash?view=powershell-6).

   The `"files"` section of the manifest includes one reference to each of the .zip files in your package.

   ```
   "files": {
           "test-agent-x86.deb.zip": {
               "checksums": {
                   "sha256": "EXAMPLE2706223c7616ca9fb28863a233b38e5a23a8c326bb4ae241dcEXAMPLE"
               }
           },
           "test-agent-x86_64.deb.zip": {
               "checksums": {
                   "sha256": "EXAMPLE572a745844618c491045f25ee6aae8a66307ea9bff0e9d1052EXAMPLE"
               }
           },
           "test-agent-x86_64.nano.zip": {
               "checksums": {
                   "sha256": "EXAMPLE63ccb86e830b63dfef46995af6b32b3c52ce72241b5e80c995EXAMPLE"
               }
           },
           "test-agent-rhel8-x86.nano.zip": {
               "checksums": {
                   "sha256": "EXAMPLE13df60aa3219bf117638167e5bae0a55467e947a363fff0a51EXAMPLE"
               }
           },
           "test-agent-x86.msi.zip": {
               "checksums": {
                   "sha256": "EXAMPLE12a4abb10315aa6b8a7384cc9b5ca8ad8e9ced8ef1bf0e5478EXAMPLE"
               }
           },
           "test-agent-x86_64.msi.zip": {
               "checksums": {
                   "sha256": "EXAMPLE63ccb86e830b63dfef46995af6b32b3c52ce72241b5e80c995EXAMPLE"
               }
           },
           "test-agent-rhel8-x86.rpm.zip": {
               "checksums": {
                   "sha256": "EXAMPLE13df60aa3219bf117638167e5bae0a55467e947a363fff0a51EXAMPLE"
               }
           }
       }
   ```

1. After you add your package information, save and close the manifest file.
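The checksum step above can be sketched end to end. This minimal example assumes a Linux shell with `sha256sum` from GNU coreutils (equivalent to the `shasum -a 256` command mentioned earlier); the file `example-asset.zip` is a placeholder created only for demonstration.

```
# Create a placeholder asset; in practice this is one of your real .zip files.
printf 'example payload' > example-asset.zip

# Compute the sha256 digest; the first whitespace-separated field is the
# 64-character hex value that belongs in the manifest's "files" section.
checksum=$(sha256sum example-asset.zip | awk '{print $1}')
echo "sha256: $checksum"
```

If the value in the manifest doesn't match the digest of the uploaded asset, package installation fails, so recompute the checksum whenever you rebuild a .zip file.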

The following is an example of a completed manifest. In this example, you have a .zip file, `NewPackage_LINUX.zip`, that supports more than one platform, but is referenced in the `"files"` section only once.

```
{
    "schemaVersion": "2.0",
    "version": "1.7.1",
    "publisher": "Amazon Web Services",
    "packages": {
        "windows": {
            "_any": {
                "x86_64": {
                    "file": "NewPackage_WINDOWS.zip"
                }
            }
        },
        "amazon": {
            "_any": {
                "x86_64": {
                    "file": "NewPackage_LINUX.zip"
                }
            }
        },
        "ubuntu": {
            "_any": {
                "x86_64": {
                    "file": "NewPackage_LINUX.zip"
                }
            }
        }
    },
    "files": {
        "NewPackage_WINDOWS.zip": {
            "checksums": {
                "sha256": "EXAMPLEc2c706013cf8c68163459678f7f6daa9489cd3f91d52799331EXAMPLE"
            }
        },
        "NewPackage_LINUX.zip": {
            "checksums": {
                "sha256": "EXAMPLE2b8b9ed71e86f39f5946e837df0d38aacdd38955b4b18ffa6fEXAMPLE"
            }
        }
    }
}
```

#### Package example


An example package, [ExamplePackage.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/ExamplePackage.zip), is available for you to download from our website. The example package includes a completed JSON manifest and three .zip files.
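Before moving on to the upload step, you can quickly list every .zip asset that a manifest references, to confirm that each one exists on disk. This is a grep-based sketch, not an official validation tool, and the manifest shown is a minimal placeholder.

```
# Write a minimal placeholder manifest; in practice, use your real manifest.json.
cat > manifest.json <<'EOF'
{
    "schemaVersion": "2.0",
    "version": "1.0.0",
    "packages": {
        "amazon": { "_any": { "x86_64": { "file": "pkg_linux.zip" } } },
        "windows": { "_any": { "x86_64": { "file": "pkg_windows.zip" } } }
    },
    "files": {
        "pkg_linux.zip": { "checksums": { "sha256": "EXAMPLE" } },
        "pkg_windows.zip": { "checksums": { "sha256": "EXAMPLE" } }
    }
}
EOF

# List each unique .zip file that the "packages" section references.
grep -o '"file": "[^"]*"' manifest.json | cut -d'"' -f4 | sort -u
```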

### Step 3: Upload the package and manifest to an Amazon S3 bucket


Prepare your package by copying or moving all .zip files into a folder or directory. A valid package requires the manifest that you created in [Step 2: Create the JSON package manifest](#packages-manifest) and all .zip files identified in the manifest file list.

**To upload the package and manifest to Amazon S3**

1. Copy or move all .zip archive files that you specified in the manifest to a folder or directory. Don't zip the folder or directory that you move your .zip archive files and manifest file into.

1. Create a bucket or choose an existing bucket. For more information, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service Getting Started Guide*. For more information about how to run an AWS CLI command to create a bucket, see [https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) in the *AWS CLI Command Reference*.

1. Upload the folder or directory to the bucket. For more information, see [Add an Object to a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/PuttingAnObjectInABucket.html) in the *Amazon Simple Storage Service Getting Started Guide*. If you plan to paste your JSON manifest into the AWS Systems Manager console, don't upload the manifest. For more information about how to run an AWS CLI command to upload files to a bucket, see [https://docs.aws.amazon.com/cli/latest/reference/s3/mv.html](https://docs.aws.amazon.com/cli/latest/reference/s3/mv.html) in the *AWS CLI Command Reference*.

1. On the bucket's home page, choose the folder or directory that you uploaded. If you uploaded your files to a subfolder in a bucket, be sure to note the subfolder (also known as a *prefix*). You need the prefix to add your package to Distributor.
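The upload steps can be consolidated into two AWS CLI calls. The following dry-run sketch only echoes the commands so you can review them before running them against your account; the bucket name `amzn-s3-demo-bucket` and the `ExamplePackage` folder are placeholders.

```
bucket="amzn-s3-demo-bucket"   # placeholder bucket name
folder="ExamplePackage"        # local folder holding the .zip files and manifest

# Echoed rather than executed; remove the echo to run against your account.
echo "aws s3 mb s3://$bucket"
echo "aws s3 cp $folder/ s3://$bucket/$folder/ --recursive"
```

If you plan to paste your manifest into the console instead of extracting it from the bucket, exclude `manifest.json` from the copy.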

### Step 4: Add a package to Distributor

You can use the AWS Systems Manager console, AWS command line tools (AWS CLI and AWS Tools for PowerShell), or AWS SDKs to add a new package to Distributor. When you add a package, you're adding a new [SSM document](documents.md). The document allows you to deploy the package to managed nodes.

**Topics**
+ [Add a package using the console](#create-pkg-console)
+ [Add a package using the AWS CLI](#create-pkg-cli)

#### Add a package using the console


You can use the AWS Systems Manager console to create a package. Have ready the name of the bucket to which you uploaded your package in [Step 3: Upload the package and manifest to an Amazon S3 bucket](#packages-upload-s3).

**To add a package to Distributor (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose **Create package**, and then choose **Advanced**.

1. On the **Create package** page, enter a name for your package. Package names can contain letters, numbers, periods, dashes, and underscores. The name should be generic enough to apply to all versions of the package attachments, but specific enough to identify the purpose of the package.

1. For **Version name**, enter the exact value of the `version` entry in your manifest file.

1. For **S3 bucket name**, choose the name of the bucket to which you uploaded your .zip files and manifest in [Step 3: Upload the package and manifest to an Amazon S3 bucket](#packages-upload-s3).

1. For **S3 key prefix**, enter the subfolder of the bucket where your .zip files and manifest are stored.

1. For **Manifest**, choose **Extract from package** to use a manifest that you have uploaded to the Amazon S3 bucket with your .zip files.

   (Optional) If you didn't upload your JSON manifest to the S3 bucket where you stored your .zip files, choose **New manifest**. You can author or paste the entire manifest in the JSON editor field. For more information about how to create the JSON manifest, see [Step 2: Create the JSON package manifest](#packages-manifest).

1. When you're finished with the manifest, choose **Create package**.

1. Wait for Distributor to create your package from your .zip files and manifest. Depending on the number and size of packages you are adding, this can take a few minutes. Distributor automatically redirects you to the **Package details** page for the new package, but you can choose to open this page yourself after the software is uploaded. The **Package details** page doesn't show all information about your package until Distributor finishes the package creation process. To stop the upload and package creation process, choose **Cancel**.

#### Add a package using the AWS CLI


You can use the AWS CLI to create a package. Have the URL ready from the bucket to which you uploaded your package in [Step 3: Upload the package and manifest to an Amazon S3 bucket](#packages-upload-s3).

**To add a package to Amazon S3 using the AWS CLI**

1. To use the AWS CLI to create a package, run the following command, replacing *package-name* with the name of your package and *path-to-manifest-file* with the file path of your JSON manifest file. *amzn-s3-demo-bucket* is the URL of the Amazon S3 bucket where the entire package is stored. When you run the **create-document** command in Distributor, you specify the `Package` value for `--document-type`.

   If you didn't add your manifest file to the Amazon S3 bucket, the `--content` parameter value is the file path to the JSON manifest file.

   ```
   aws ssm create-document \
       --name "package-name" \
       --content file://path-to-manifest-file \
       --attachments Key="SourceUrl",Values="amzn-s3-demo-bucket" \
       --version-name version-value-from-manifest \
       --document-type Package
   ```

   The following is an example.

   ```
   aws ssm create-document \
       --name "ExamplePackage" \
       --content file://path-to-manifest-file \
       --attachments Key="SourceUrl",Values="https://s3.amazonaws.com/amzn-s3-demo-bucket/ExamplePackage" \
       --version-name 1.0.1 \
       --document-type Package
   ```

1. Verify that your package was added and show the package manifest by running the following command, replacing *package-name* with the name of your package. To get a specific version of the document (not the same as the version of a package), you can add the `--document-version` parameter.

   ```
   aws ssm get-document \
       --name "package-name"
   ```

For information about other options you can use with the **create-document** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/create-document.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-document.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*. For information about other options you can use with the **get-document** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/get-document.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-document.html).

# Edit Distributor package permissions in the console

After you add a package to Distributor, a tool in AWS Systems Manager, you can edit the package's permissions in the Systems Manager console. You can add other AWS accounts to a package's permissions. Packages can be shared with other accounts in the same AWS Region only. Cross-Region sharing isn't supported. By default, packages are set to **Private**, meaning only those with access to the package creator's AWS account can view package information and update or delete the package. If **Private** permissions are acceptable, you can skip this procedure.

**Note**  
You can update the permissions of packages that are shared with 20 or fewer accounts. 

**To edit package permissions in the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the **Packages** page, choose the package for which you want to edit permissions.

1. On the **Package details** tab, choose **Edit permissions** to change permissions.

1. For **Edit permissions**, choose **Shared with specific accounts**.

1. Under **Shared with specific accounts**, add AWS account numbers, one at a time. When you're finished, choose **Save**.

# Edit Distributor package tags in the console

After you have added a package to Distributor, a tool in AWS Systems Manager, you can edit the package's tags in the Systems Manager console. These tags are applied to the package, and aren't connected to tags on the managed nodes where you deploy the package. Tags are case-sensitive key-value pairs that can help you group and filter your packages by criteria that are relevant to your organization. If you don't want to add tags, you're ready to install your package or add a new version.

**To edit package tags in the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the **Packages** page, choose the package for which you want to edit tags.

1. On the **Package details** tab, in **Tags**, choose **Edit**.

1. For **Add tags**, enter a tag key, or a tag key and value pair, and then choose **Add**. Repeat if you want to add more tags. To delete tags, choose **X** on the tag at the bottom of the window.

1. When you're finished adding tags to your package, choose **Save**.

# Add a version to a Distributor package

To add a package version, [create a package](distributor-working-with-packages-create.md), and then use Distributor, a tool in AWS Systems Manager, to add the version by adding an entry to the SSM document that already exists for older versions. To save time, update the manifest for an older version of the package, change the value of the `version` entry (for example, from `Test_1.0` to `Test_2.0`), and save it as the manifest for the new version. The simple **Add version** workflow in the Distributor console updates the manifest file for you.

A new package version can:
+ Replace at least one of the installable files attached to the current version.
+ Add new installable files to support additional platforms.
+ Delete files to discontinue support for specific platforms.

A newer version can use the same Amazon Simple Storage Service (Amazon S3) bucket, but the installable file names at the end of its URLs must be different. You can use the Systems Manager console or the AWS Command Line Interface (AWS CLI) to add the new version. Uploading an installable file with the same name as an existing installable file in the Amazon S3 bucket overwrites the existing file. No installable files are copied over from the older version to the new version; you must upload installable files from the older version to include them in the new version. After Distributor finishes creating your new package version, you can delete or repurpose the Amazon S3 bucket, because Distributor copies your software to an internal Systems Manager bucket as part of the versioning process.

**Note**  
Each package is held to a maximum of 25 versions. You can delete versions that are no longer required.

**Topics**
+ [Adding a package version using the console](#add-pkg-version)
+ [Adding a package version using the AWS CLI](#add-pkg-version-cli)

## Adding a package version using the console


Before you perform these steps, follow the instructions in [Create a package in Distributor](distributor-working-with-packages-create.md) to create a new package for the version. Then, use the Systems Manager console to add a new package version to Distributor.

### Adding a package version using the Simple workflow


To add a package version by using the **Simple** workflow, prepare updated installable files or add installable files to support more platforms and architectures. Then, use Distributor to upload new and updated installable files and add a package version. The simplified **Add version** workflow in the Distributor console updates the manifest file and associated SSM document for you.

**To add a package version using the Simple workflow**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose the package to which you want to add another version, and then choose **Add version**.

1. On the **Add version** page, choose **Simple**.

1. For **Version name**, enter a version name. The version name for the new version must be different from older version names. Version names can be a maximum of 512 characters, and can't contain special characters.

1. For **S3 bucket name**, choose an existing S3 bucket from the list. This can be the same bucket that you used to store installable files for older versions, but the installable file names must be different to avoid overwriting existing installable files in the bucket.

1. For **S3 key prefix**, enter the subfolder of the bucket where your installable assets are stored.

1. For **Upload software**, navigate to the installable software files that you want to attach to the new version. Installable files from existing versions aren't automatically copied over to a new version; you must upload any installable files from older versions of the package if you want any of the same installable files to be part of the new version. You can upload more than one software file in a single action.

1. For **Target platform**, verify that the target operating system platform shown for each installable file is correct. If the operating system shown isn't correct, choose the correct operating system from the dropdown list.

   In the **Simple** versioning workflow, because you upload each installable file only once, extra steps are required to target a single file at multiple operating systems. For example, if you upload an installable software file named `Logtool_v1.1.1.rpm`, you must change some defaults in the **Simple** workflow to instruct Distributor to target the same software at both Amazon Linux and Ubuntu Server operating systems. You can do one of the following to work around this limitation.
   + Use the **Advanced** versioning workflow instead, zip each installable file into a .zip file before you begin, and manually author the manifest so that one installable file can be targeted at multiple operating system platforms or versions. For more information, see [Adding a package version using the Advanced workflow](#add-pkg-version-adv).
   + Manually edit the manifest file in the **Simple** workflow so that your .zip file is targeted at multiple operating system platforms or versions. For more information about how to do this, see the end of step 4 in [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).

1. For **Platform version**, verify that the operating system platform version shown is either `_any`, a major release version followed by a wildcard (`8.*`), or the exact operating system release version to which you want your software to apply. For more information about specifying a platform version, see step 4 in [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).

1. For **Architecture**, choose the correct processor architecture for each installable file from the drop-down list. For more information about supported architectures, see [Supported package platforms and architectures](distributor.md#what-is-a-package-platforms).

1. (Optional) Expand **Scripts**, and review the installation and uninstallation scripts that Distributor generates for your installable software.

1. To add more installable software files to the new version, choose **Add software**. Otherwise, go to the next step.

1. (Optional) Expand **Manifest**, and review the JSON package manifest that Distributor generates for your installable software. If you changed any information about your installable software since you began this procedure, such as platform version or target platform, choose **Generate manifest** to show the updated package manifest.

   You can edit the manifest manually if you want to target a software installable at more than one operating system, as described in step 9. For more information about editing the manifest, see [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).

1. When you finish adding software and reviewing the target platform, version, and architecture data, choose **Add version**.

1. Wait for Distributor to finish uploading your software and creating the new package version. Distributor shows upload status for each installable file. Depending on the number and size of packages you are adding, this can take a few minutes. Distributor automatically redirects you to the **Package details** page for the package, but you can choose to open this page yourself after the software is uploaded. The **Package details** page doesn't show all information about your package until Distributor finishes creating the new package version. To stop the upload and package version creation, choose **Stop upload**.

1. If Distributor can't upload any of the software installable files, it displays an **Upload failed** message. To retry the upload, choose **Retry upload**. For more information about how to troubleshoot package version creation failures, see [Troubleshooting AWS Systems Manager Distributor](distributor-troubleshooting.md).

1. When Distributor is finished creating the new package version, on the package's **Details** page, on the **Versions** tab, view the new version in the list of available package versions. Set a default version of the package by choosing a version, and then choosing **Set default version**.

   If you don't set a default version, the newest package version is the default version.

### Adding a package version using the Advanced workflow


To add a package version, [create a package](distributor-working-with-packages-create.md), and then use Distributor to add the version by adding an entry to the SSM document that exists for older versions. To save time, update the manifest for an older version of the package, change the value of the `version` entry (for example, from `Test_1.0` to `Test_2.0`), and save it as the manifest for the new version. You must have an updated manifest to add a new package version by using the **Advanced** workflow.

**To add a package version using the Advanced workflow**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose the package to which you want to add another version, and then choose **Add version**.

1. For **Version name**, enter the exact value that is in the `version` entry of your manifest file.

1. For **S3 bucket name**, choose an existing S3 bucket from the list. This can be the same bucket that you used to store installable files for older versions, but the installable file names must be different to avoid overwriting existing installable files in the bucket.

1. For **S3 key prefix**, enter the subfolder of the bucket where your installable assets are stored.

1. For **Manifest**, choose **Extract from package** to use a manifest that you uploaded to the S3 bucket with your .zip files.

   (Optional) If you didn't upload your revised JSON manifest to the Amazon S3 bucket where you stored your .zip files, choose **New manifest**. You can author or paste the entire manifest in the JSON editor field. For more information about how to create the JSON manifest, see [Step 2: Create the JSON package manifest](distributor-working-with-packages-create.md#packages-manifest).

1. When you're finished with the manifest, choose **Add package version**.

1. On the package's **Details** page, on the **Versions** tab, view the new version in the list of available package versions. Set a default version of the package by choosing a version, and then choosing **Set default version**.

   If you don't set a default version, the newest package version is the default version.

## Adding a package version using the AWS CLI


You can use the AWS CLI to add a new package version to Distributor. Before you run these commands, you must create a new package version and upload it to Amazon S3, as described at the start of this topic.

**To add a package version using the AWS CLI**

1. Run the following command to edit the AWS Systems Manager document with an entry for a new package version. Replace *document-name* with the name of your document. Replace *S3-bucket-URL-to-manifest-file* with the URL of the JSON manifest that you uploaded in [Step 3: Upload the package and manifest to an Amazon S3 bucket](distributor-working-with-packages-create.md#packages-upload-s3). Replace *amzn-s3-demo-bucket* with the URL of the Amazon S3 bucket where the entire package is stored. Replace *version-name-from-updated-manifest* with the value of `version` in the manifest. Set the `--document-version` parameter to `$LATEST` to make the document associated with this package version the latest version of the document.

   ```
   aws ssm update-document \
       --name "document-name" \
       --content "S3-bucket-URL-to-manifest-file" \
       --attachments Key="SourceUrl",Values="amzn-s3-demo-bucket" \
       --version-name version-name-from-updated-manifest \
       --document-version $LATEST
   ```

   The following is an example.

   ```
   aws ssm update-document \
       --name ExamplePackage \
       --content "https://s3.amazonaws.com/amzn-s3-demo-bucket/ExamplePackage/manifest.json" \
       --attachments Key="SourceUrl",Values="https://s3.amazonaws.com/amzn-s3-demo-bucket/ExamplePackage" \
       --version-name 1.1.1 \
       --document-version $LATEST
   ```

1. Run the following command to verify that your package was updated and show the package manifest. Replace *package-name* with the name of your package, and optionally, *document-version* with the version number of the document (not the same as the package version) that you updated. If this package version is associated with the latest version of the document, you can specify `$LATEST` for the value of the optional `--document-version` parameter.

   ```
   aws ssm get-document \
       --name "package-name" \
       --document-version "document-version"
   ```

For information about other options you can use with the **update-document** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/update-document.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-document.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.

# Install or update Distributor packages

You can deploy packages to your AWS Systems Manager managed nodes by using Distributor, a tool in AWS Systems Manager. To deploy the packages, use either the AWS Management Console or the AWS Command Line Interface (AWS CLI). You can deploy one version of one package per command. You can install new packages or update existing installations in place, and you can deploy a specific version or always deploy the latest version of a package. We recommend using State Manager, a tool in AWS Systems Manager, to install packages. Using State Manager helps ensure that your managed nodes always run the most up-to-date version of your package.

**Important**  
Packages that you install using Distributor should be uninstalled only by using Distributor. Otherwise, Systems Manager might still register the application as `INSTALLED`, which can lead to other unintended results.


| Preference | AWS Systems Manager action | More info | 
| --- | --- | --- | 
|  Install or update a package immediately.  |  Run Command  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-deploy.html)  | 
|  Install or update a package on a schedule, so that the installation always includes the default version.  |  State Manager  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-deploy.html)  | 
|  Automatically install a package on new managed nodes that have a specific tag or set of tags. For example, installing the Amazon CloudWatch agent on new instances.  |  State Manager  |  One way to do this is to apply tags to new managed nodes, and then specify the tags as targets in your State Manager association. State Manager automatically installs the package in an association on managed nodes that have matching tags. See [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).  | 
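For the tag-based State Manager approach in the last table row, the association uses the `AWS-ConfigureAWSPackage` document with the same `action` and `name` parameters as the Run Command procedure later in this topic. This dry-run sketch only assembles and echoes the command; the package name and the `tag:Environment` target are placeholders.

```
package="ExamplePackage"   # placeholder Distributor package name

# AWS-ConfigureAWSPackage takes "action" and "name" parameters.
params='{"action":["Install"],"name":["'"$package"'"]}'

# Echoed rather than executed; remove the echo to create the association.
echo "aws ssm create-association --name AWS-ConfigureAWSPackage --targets Key=tag:Environment,Values=Test --parameters '$params'"
```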

**Topics**
+ [Installing or updating a package one time using the console](#distributor-deploy-pkg-console)
+ [Scheduling a package installation or update using the console](#distributor-deploy-sm-pkg-console)
+ [Installing a package one time using the AWS CLI](#distributor-deploy-pkg-cli)
+ [Updating a package one time using the AWS CLI](#distributor-update-pkg-cli)
+ [Scheduling a package installation using the AWS CLI](#distributor-smdeploy-pkg-cli)
+ [Scheduling a package update using the AWS CLI](#distributor-smupdate-pkg-cli)

## Installing or updating a package one time using the console


You can use the AWS Systems Manager console to install or update a package one time. When you configure a one-time installation, Distributor uses [AWS Systems Manager Run Command](run-command.md), a tool in AWS Systems Manager, to perform the installation.

**To install or update a package one time using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose the package that you want to install.

1. Choose **Install one time**.

   This command opens Run Command with the command document `AWS-ConfigureAWSPackage` and your Distributor package already selected.

1. For **Document version**, select the version of the `AWS-ConfigureAWSPackage` document that you want to run.

1. For **Action**, choose **Install**.

1. For **Installation type**, choose one of the following: 
   + **Uninstall and reinstall**: The package is completely uninstalled, and then reinstalled. The application is unavailable until the reinstallation is complete.
   + **In-place update**: Only new or changed files are added to the existing installation according to instructions you provide in an `update` script. The application remains available throughout the update process. This option isn't supported for AWS published packages except the `AWSEC2Launch-Agent` package.

1. For **Name**, verify that the name of the package you selected is entered.

1. (Optional) For **Version**, enter the version name value of the package. If you leave this field blank, Run Command installs the default version that you selected in Distributor.

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or devices manually, or by specifying a resource group.
**Note**  
If you don't see a managed node in the list, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md).

1. For **Other parameters**:
   + For **Comment**, enter information about this command.
   + For **Timeout (seconds)**, specify the number of seconds for the system to wait before failing the overall command execution. 

1. For **Rate Control**:
   + For **Concurrency**, specify either a number or a percentage of targets on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags or resource groups and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other targets after it fails on either a number or a percentage of managed nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors. 

1. (Optional) For **Output options**, to save the command output to a file, select the **Write command output to an S3 bucket** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. When you're ready to install the package, choose **Run**.

1. The **Command status** area reports the progress of the execution. If the command is still in progress, choose the refresh icon in the top-right corner of the console until the **Overall status** or **Detailed status** column shows **Success** or **Failed**.

1. In the **Targets and outputs** area, choose the button next to a managed node name, and then choose **View output**.

   The command output page shows the results of your command execution. 

1. (Optional) If you chose to write command output to an Amazon S3 bucket, choose **Amazon S3** to view the output log data.

## Scheduling a package installation or update using the console


You can use the AWS Systems Manager console to schedule the installation or update of a package. When you schedule package installation or update, Distributor uses [AWS Systems Manager State Manager](systems-manager-state.md) to install or update.

**To schedule a package installation using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the Distributor home page, choose the package that you want to install or update.

1. For **Package**, choose **Install on a schedule**.

   Choosing this option opens State Manager, where a new association is created for you.

1. For **Name**, enter a name (for example, **Deploy-test-agent-package**). This is optional, but recommended. Spaces aren't allowed in the name.

1. In the **Document** list, the document name `AWS-ConfigureAWSPackage` is already selected. 

1. For **Action**, verify that **Install** is selected.

1. For **Installation type**, choose one of the following: 
   + **Uninstall and reinstall**: The package is completely uninstalled, and then reinstalled. The application is unavailable until the reinstallation is complete.
   + **In-place update**: Only new or changed files are added to the existing installation according to instructions you provide in an `update` script. The application remains available throughout the update process.

1. For **Name**, verify that the name of your package is entered.

1. For **Version**, if you want to install a package version other than the latest published version, enter the version identifier.

1. For **Targets**, choose **Selecting all managed instances in this account**, **Specifying tags**, or **Manually Selecting Instance**. If you target resources by using tags, enter a tag key and a tag value in the fields provided.
**Note**  
You can choose managed AWS IoT Greengrass core devices by choosing either **Selecting all managed instances in this account** or **Manually Selecting Instance**.

1. For **Specify schedule**, choose **On Schedule** to run the association on a regular schedule, or **No Schedule** to run the association once. If you choose **On Schedule**, use the controls provided to create a `cron` or rate schedule for the association. For more information about these options, see [Working with associations in Systems Manager](state-manager-associations.md).

1. Choose **Create Association**.

1. On the **Association** page, choose the button next to the association you created, and then choose **Apply association now**.

   State Manager creates and immediately runs the association on the specified targets. For more information about the results of running associations, see [Working with associations in Systems Manager](state-manager-associations.md) in this guide.

For more information about working with the options in **Advanced options**, **Rate control**, and **Output options**, see [Working with associations in Systems Manager](state-manager-associations.md). 

## Installing a package one time using the AWS CLI


You can run **send-command** in the AWS CLI to install a Distributor package one time. If the package is already installed, the application will be taken offline while the package is uninstalled and the new version installed in its place.

**To install a package one time using the AWS CLI**
+ Run the following command in the AWS CLI.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
      --instance-ids "instance-IDs" \
      --parameters '{"action":["Install"],"installationType":["Uninstall and reinstall"],"name":["package-name (in same account) or package-ARN (shared from different account)"]}'
  ```
**Note**  
The default behavior for `installationType` is `Uninstall and reinstall`. You can omit `"installationType":["Uninstall and reinstall"]` from this command when you're installing a complete package.

  The following is an example.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
      --instance-ids "i-00000000000000" \
      --parameters '{"action":["Install"],"installationType":["Uninstall and reinstall"],"name":["ExamplePackage"]}'
  ```

For information about other options you can use with the **send-command** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.
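
Because `send-command` runs asynchronously, it returns before the installation finishes. The following sketch shows one way to check the result afterward; the command ID shown is a hypothetical placeholder that you would replace with the `CommandId` value from the `send-command` response.

```shell
# Capture the command ID from the send-command response (the value
# shown here is a hypothetical placeholder).
command_id="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"

# One-shot status check; repeat it until Status reports Success or Failed.
aws ssm list-command-invocations \
    --command-id "$command_id" \
    --details \
    --query "CommandInvocations[].{Instance:InstanceId,Status:Status}" \
    --output table
```

You can also append `--query "Command.CommandId" --output text` to the original `send-command` call to capture the ID directly into a shell variable.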

## Updating a package one time using the AWS CLI


You can run **send-command** in the AWS CLI to update a Distributor package without taking the associated application offline. Only new or updated files in the package are replaced.

**To update a package one time using the AWS CLI**
+ Run the following command in the AWS CLI.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
      --instance-ids "instance-IDs" \
      --parameters '{"action":["Install"],"installationType":["In-place update"],"name":["package-name (in same account) or package-ARN (shared from different account)"]}'
  ```
**Note**  
When you add new or changed files, you must include `"installationType":["In-place update"]` in the command.

  The following is an example.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
      --instance-ids "i-02573cafcfEXAMPLE" \
      --parameters '{"action":["Install"],"installationType":["In-place update"],"name":["ExamplePackage"]}'
  ```

For information about other options you can use with the **send-command** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.

## Scheduling a package installation using the AWS CLI


You can run **create-association** in the AWS CLI to install a Distributor package on a schedule. The value of `--name`, the document name, is always `AWS-ConfigureAWSPackage`. The following command uses the key `InstanceIds` to specify target managed nodes. If the package is already installed, the application will be taken offline while the package is uninstalled and the new version installed in its place.

```
aws ssm create-association \
    --name "AWS-ConfigureAWSPackage" \
    --parameters '{"action":["Install"],"installationType":["Uninstall and reinstall"],"name":["package-name (in same account) or package-ARN (shared from different account)"]}' \
    --targets [{\"Key\":\"InstanceIds\",\"Values\":[\"instance-ID1\",\"instance-ID2\"]}]
```

**Note**  
The default behavior for `installationType` is `Uninstall and reinstall`. You can omit `"installationType":["Uninstall and reinstall"]` from this command when you're installing a complete package.

The following is an example.

```
aws ssm create-association \
    --name "AWS-ConfigureAWSPackage" \
    --parameters '{"action":["Install"],"installationType":["Uninstall and reinstall"],"name":["Test-ConfigureAWSPackage"]}' \
    --targets [{\"Key\":\"InstanceIds\",\"Values\":[\"i-02573cafcfEXAMPLE\",\"i-0471e04240EXAMPLE\"]}]
```

For information about other options you can use with the **create-association** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.
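
After the association is created, you can confirm that it applied successfully. The following is a sketch, assuming a hypothetical association ID taken from the `create-association` output.

```shell
# Hypothetical association ID returned by create-association.
association_id="b85ccafe-9c02-4f9e-b86d-EXAMPLE22222"

# List the association's executions and their status.
aws ssm describe-association-executions \
    --association-id "$association_id" \
    --query "AssociationExecutions[].{Executed:CreatedTime,Status:Status}" \
    --output table
```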

## Scheduling a package update using the AWS CLI


You can run **create-association** in the AWS CLI to update a Distributor package on a schedule without taking the associated application offline. Only new or updated files in the package are replaced. The value of `--name`, the document name, is always `AWS-ConfigureAWSPackage`. The following command uses the key `InstanceIds` to specify target instances.

```
aws ssm create-association \
    --name "AWS-ConfigureAWSPackage" \
    --parameters '{"action":["Install"],"installationType":["In-place update"],"name":["package-name (in same account) or package-ARN (shared from different account)"]}' \
    --targets [{\"Key\":\"InstanceIds\",\"Values\":[\"instance-ID1\",\"instance-ID2\"]}]
```

**Note**  
When you add new or changed files, you must include `"installationType":["In-place update"]` in the command.

The following is an example.

```
aws ssm create-association \
    --name "AWS-ConfigureAWSPackage" \
    --parameters '{"action":["Install"],"installationType":["In-place update"],"name":["Test-ConfigureAWSPackage"]}' \
    --targets [{\"Key\":\"InstanceIds\",\"Values\":[\"i-02573cafcfEXAMPLE\",\"i-0471e04240EXAMPLE\"]}]
```

For information about other options you can use with the **create-association** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.

# Uninstall a Distributor package

You can use the AWS Management Console or the AWS Command Line Interface (AWS CLI) to uninstall Distributor packages from your AWS Systems Manager managed nodes by using Run Command. Distributor and Run Command are tools in AWS Systems Manager. In this release, you can uninstall one version of one package per command. You can uninstall a specific version or the default version.

**Important**  
Packages that you install using Distributor should be uninstalled only by using Distributor. Otherwise, Systems Manager might continue to report the package as `INSTALLED`, which can lead to other unintended results.

**Topics**
+ [Uninstalling a package using the console](#distributor-pkg-uninstall-console)
+ [Uninstalling a package using the AWS CLI](#distributor-pkg-uninstall-cli)

## Uninstalling a package using the console


You can use Run Command in the Systems Manager console to uninstall a package one time. Distributor uses [AWS Systems Manager Run Command](run-command.md) to uninstall packages.

**To uninstall a package using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. On the Run Command home page, choose **Run command**.

1. Choose the `AWS-ConfigureAWSPackage` command document.

1. For **Action**, choose **Uninstall**.

1. For **Name**, enter the name of the package that you want to uninstall.

1. For **Targets**, choose how you want to target your managed nodes. You can specify a tag key and values that are shared by the targets. You can also specify targets by choosing attributes, such as an ID, platform, and SSM Agent version.

1. You can use the advanced options to add comments about the operation, change **Concurrency** and **Error threshold** values in **Rate control**, specify output options, or configure Amazon Simple Notification Service (Amazon SNS) notifications. For more information, see [Running Commands from the Console](https://docs.aws.amazon.com/systems-manager/latest/userguide/rc-console.html) in this guide.

1. When you're ready to uninstall the package, choose **Run**, and then choose **View results**.

1. In the commands list, choose the `AWS-ConfigureAWSPackage` command that you ran. If the command is still in progress, choose the refresh icon in the top-right corner of the console.

1. When the **Status** column shows **Success** or **Failed**, choose the **Output** tab.

1. Choose **View output**. The command output page shows the results of your command execution.

## Uninstalling a package using the AWS CLI


You can use the AWS CLI to uninstall a Distributor package from managed nodes by using Run Command.

**To uninstall a package using the AWS CLI**
+ Run the following command in the AWS CLI.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
      --instance-ids "instance-IDs" \
      --parameters '{"action":["Uninstall"],"name":["package-name (in same account) or package-ARN (shared from different account)"]}'
  ```

  The following is an example.

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureAWSPackage" \
      --instance-ids "i-02573cafcfEXAMPLE" \
      --parameters '{"action":["Uninstall"],"name":["Test-ConfigureAWSPackage"]}'
  ```

For information about other options you can use with the **send-command** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.
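
If Systems Manager Inventory is configured to collect `AWS:Application` data on the node, you can confirm that the package is no longer reported as installed. The following is a sketch with hypothetical instance and package names.

```shell
# Hypothetical instance ID and package name.
instance_id="i-02573cafcfEXAMPLE"
package_name="Test-ConfigureAWSPackage"

# An empty result means the application is no longer reported as installed.
aws ssm list-inventory-entries \
    --instance-id "$instance_id" \
    --type-name "AWS:Application" \
    --query "Entries[?Name=='$package_name']"
```

Note that inventory is collected on a schedule, so the uninstalled package might still appear until the next collection cycle runs.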

# Delete a Distributor package

This section describes how to delete a package or a single package version from Distributor.

## Deleting a package using the console


You can use the AWS Systems Manager console to delete a package or a package version from Distributor, a tool in AWS Systems Manager. Deleting a package deletes all versions of a package from Distributor.

**To delete a package using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the **Distributor** home page, choose the package that you want to delete.

1. On the package's details page, choose **Delete package**.

1. When you're prompted to confirm the deletion, choose **Delete package**.

## Deleting a package version using the console


You can use the Systems Manager console to delete a package version from Distributor.

**To delete a package version using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Distributor**.

1. On the **Distributor** home page, choose the package that you want to delete a version of.

1. On the versions page for the package, choose the version to delete and choose **Delete version**.

1. When you're prompted to confirm the deletion, choose **Delete package version**.

## Deleting a package using the command line


You can use your preferred command line tool to delete a package from Distributor.

------
#### [ Linux & macOS ]

**To delete a package using the AWS CLI**

1. Run the following command to list documents for specific packages. In the results of this command, look for the package that you want to delete.

   ```
   aws ssm list-documents \
       --filters Key=Name,Values=package-name
   ```

1. Run the following command to delete a package. Replace *package-name* with the package name.

   ```
   aws ssm delete-document \
       --name "package-name"
   ```

1. Run the **list-documents** command again to verify that the package was deleted. The package you deleted shouldn't be included in the list.

   ```
   aws ssm list-documents \
       --filters Key=Name,Values=package-name
   ```

------
#### [ Windows ]

**To delete a package using the AWS CLI**

1. Run the following command to list documents for specific packages. In the results of this command, look for the package that you want to delete.

   ```
   aws ssm list-documents ^
       --filters Key=Name,Values=package-name
   ```

1. Run the following command to delete a package. Replace *package-name* with the package name.

   ```
   aws ssm delete-document ^
       --name "package-name"
   ```

1. Run the **list-documents** command again to verify that the package was deleted. The package you deleted shouldn't be included in the list.

   ```
   aws ssm list-documents ^
       --filters Key=Name,Values=package-name
   ```

------
#### [ PowerShell ]

**To delete a package using Tools for PowerShell**

1. Run the following command to list documents for specific packages. In the results of this command, look for the package that you want to delete.

   ```
   $filter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
   $filter.Key = "Name"
   $filter.Values = "package-name"
   
   Get-SSMDocumentList `
       -Filters @($filter)
   ```

1. Run the following command to delete a package. Replace *package-name* with the package name.

   ```
   Remove-SSMDocument `
       -Name "package-name"
   ```

1. Run the **Get-SSMDocumentList** command again to verify that the package was deleted. The package you deleted shouldn't be included in the list.

   ```
   $filter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
   $filter.Key = "Name"
   $filter.Values = "package-name"
   
   Get-SSMDocumentList `
       -Filters @($filter)
   ```

------

## Deleting a package version using the command line


You can use your preferred command line tool to delete a package version from Distributor.

------
#### [ Linux & macOS ]

**To delete a package version using the AWS CLI**

1. Run the following command to list the versions of your package. In the results of this command, look for the package version that you want to delete.

   ```
   aws ssm list-document-versions \
       --name "package-name"
   ```

1. Run the following command to delete a package version. Replace *package-name* with the package name and *version* with the version number.

   ```
   aws ssm delete-document \
       --name "package-name" \
       --document-version version
   ```

1. Run the **list-document-versions** command to verify that the version of the package was deleted. The package version that you deleted shouldn't be found.

   ```
   aws ssm list-document-versions \
       --name "package-name"
   ```

------
#### [ Windows ]

**To delete a package version using the AWS CLI**

1. Run the following command to list the versions of your package. In the results of this command, look for the package version that you want to delete.

   ```
   aws ssm list-document-versions ^
       --name "package-name"
   ```

1. Run the following command to delete a package version. Replace *package-name* with the package name and *version* with the version number.

   ```
   aws ssm delete-document ^
       --name "package-name" ^
       --document-version version
   ```

1. Run the **list-document-versions** command to verify that the version of the package was deleted. The package version that you deleted shouldn't be found.

   ```
   aws ssm list-document-versions ^
       --name "package-name"
   ```

------
#### [ PowerShell ]

**To delete a package version using Tools for PowerShell**

1. Run the following command to list the versions of your package. In the results of this command, look for the package version that you want to delete.

   ```
   Get-SSMDocumentVersionList `
       -Name "package-name"
   ```

1. Run the following command to delete a package version. Replace *package-name* with the package name and *version* with the version number.

   ```
   Remove-SSMDocument `
       -Name "package-name" `
       -DocumentVersion version
   ```

1. Run the **Get-SSMDocumentVersionList** command to verify that the version of the package was deleted. The package version that you deleted shouldn't be found.

   ```
   Get-SSMDocumentVersionList `
       -Name "package-name"
   ```

------

For information about other options you can use with the **list-documents** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/list-documents.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-documents.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*. For information about other options you can use with the **delete-document** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/delete-document.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/delete-document.html).

# Auditing and logging Distributor activity


You can use AWS CloudTrail to audit activity related to Distributor, a tool in AWS Systems Manager. For more information about auditing and logging options for Systems Manager, see [Logging and monitoring in AWS Systems Manager](monitoring.md).

## Audit Distributor activity using CloudTrail


CloudTrail captures API calls made in the AWS Systems Manager console, the AWS Command Line Interface (AWS CLI), and the Systems Manager SDK. The information can be viewed in the CloudTrail console or stored in an Amazon Simple Storage Service (Amazon S3) bucket. One bucket is used for all CloudTrail logs for your account.

Logs of Run Command and State Manager actions show document creation, package installation, and package uninstallation activity. Run Command and State Manager are tools in AWS Systems Manager. For more information about viewing and using CloudTrail logs of Systems Manager activity, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md).

# Troubleshooting AWS Systems Manager Distributor

The following information can help you troubleshoot problems that might occur when you use Distributor, a tool in AWS Systems Manager.

**Topics**
+ [Wrong package with the same name is installed](#distributor-tshoot-1)
+ [Error: Failed to retrieve manifest: Could not find latest version of package](#distributor-tshoot-2)
+ [Error: Failed to retrieve manifest: Validation exception](#distributor-tshoot-3)
+ [Package isn't supported (package is missing install action)](#distributor-tshoot-4)
+ [Error: Failed to download manifest: Document with name does not exist](#distributor-tshoot-5)
+ [Upload failed.](#distributor-tshoot-6)
+ [Error: Failed to find platform: no manifest found for platform: oracle, version 8.9, architecture x86\_64](#distributor-tshoot-7)

## Wrong package with the same name is installed


**Problem:** You tried to install a package, but Distributor installed a different package with the same name.

**Cause:** During installation, Systems Manager matches AWS published packages before user-defined packages. If your package has the same name as an AWS published package, the AWS package is installed instead of yours.

**Solution:** To avoid this problem, give your package a name that doesn't match the name of any AWS published package.

## Error: Failed to retrieve manifest: Could not find latest version of package


**Problem:** You received an error like the following.

```
Failed to retrieve manifest: ResourceNotFoundException: Could not find the latest version of package 
arn:aws:ssm:::package/package-name status code: 400, request id: guid
```

**Cause:** You're using a version of SSM Agent with Distributor that is earlier than version 2.3.274.0.

**Solution:** Update the version of SSM Agent to version 2.3.274.0 or later. For more information, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample) or [Walkthrough: Automatically update SSM Agent with the AWS CLI](state-manager-update-ssm-agent-cli.md).

## Error: Failed to retrieve manifest: Validation exception


**Problem:** You received an error like the following.

```
Failed to retrieve manifest: ValidationException: 1 validation error detected: Value 'documentArn'
at 'packageName' failed to satisfy constraint: Member must satisfy regular expression pattern:
arn:aws:ssm:region-id:account-id:package/package-name
```

**Cause:** You're using a version of SSM Agent with Distributor that is earlier than version 2.3.274.0.

**Solution:** Update the version of SSM Agent to version 2.3.274.0 or later. For more information, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample) or [Walkthrough: Automatically update SSM Agent with the AWS CLI](state-manager-update-ssm-agent-cli.md).

## Package isn't supported (package is missing install action)


**Problem:** You received an error like the following.

```
Package is not supported (package is missing install action)
```

**Cause:** The package directory structure is incorrect.

**Solution:** Don't zip the parent directory that contains the software and required scripts. Instead, create the `.zip` file from the contents of that directory so that the install script sits at the root of the archive. To verify that the `.zip` file was created correctly, extract it for the target platform and review the directory structure. For example, after extraction the install script should be located at `/ExamplePackage_targetPlatform/install.sh`.
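
To illustrate, the following sketch builds an archive with the install script at the root. The directory and file names are hypothetical, and Python's `zipfile` module stands in for the `zip` utility so the example is self-contained; running `zip -r ../ExamplePackage_amazonlinux.zip .` from inside the directory achieves the same result.

```shell
# Hypothetical package layout for one target platform.
mkdir -p ExamplePackage_amazonlinux
printf '#!/bin/bash\necho installing\n' > ExamplePackage_amazonlinux/install.sh
printf '#!/bin/bash\necho uninstalling\n' > ExamplePackage_amazonlinux/uninstall.sh

# Correct: archive the directory CONTENTS so install.sh sits at the root.
(cd ExamplePackage_amazonlinux && \
    python3 -m zipfile -c ../ExamplePackage_amazonlinux.zip install.sh uninstall.sh)

# Verify: install.sh should be listed at the archive root, not under a
# parent folder such as ExamplePackage_amazonlinux/install.sh.
python3 -m zipfile -l ExamplePackage_amazonlinux.zip
```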

## Error: Failed to download manifest: Document with name does not exist


**Problem:** You received an error like the following. 

```
Failed to download manifest - failed to retrieve package document description: InvalidDocument: Document with name filename does not exist.
```

**Cause 1:** Distributor can't find the package by the package name when sharing a Distributor package from another account.

**Solution 1:** When sharing a package from another account, use the full Amazon Resource Name (ARN) for the package and not just its name.

**Cause 2:** When using a VPC, you haven't provided your IAM instance profile with access to the AWS managed S3 bucket that contains the document `AWS-ConfigureAWSPackage` for the AWS Region you are targeting.

**Solution 2:** Ensure that your IAM instance profile provides SSM Agent with access to the AWS managed S3 bucket that contains the document `AWS-ConfigureAWSPackage` for the AWS Region you are targeting, as explained in [SSM Agent communications with AWS managed S3 buckets](ssm-agent-technical-details.md#ssm-agent-minimum-s3-permissions).
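
As an illustration only, an instance profile policy statement granting this access might look like the following. The bucket names shown are assumptions for example purposes (here for `us-east-1`); always take the authoritative bucket list for your Region from the linked topic.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::aws-ssm-us-east-1/*",
                "arn:aws:s3:::aws-ssm-distributor-file-us-east-1/*"
            ]
        }
    ]
}
```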

## Upload failed.


**Problem:** You received an error like the following. 

```
Upload failed. At least one of your files was not successfully uploaded to your S3 bucket.
```

**Cause:** The name of your software package includes a space. For example, `Hello World.msi` would fail to upload.

**Solution:** Rename the package file so that the name doesn't contain spaces (for example, `HelloWorld.msi`), and then upload it again.

## Error: Failed to find platform: no manifest found for platform: oracle, version 8.9, architecture x86\_64


**Problem:** You received an error like the following.

```
Failed to find platform: no manifest found for platform: oracle, version 8.9, architecture x86_64
```

**Cause:** The JSON package manifest doesn't include a definition for that platform, in this case Oracle Linux.

**Solution:** Download the package that you want to distribute from the [Trend Micro Deep Security Software](https://help.deepsecurity.trendmicro.com/software.html) site. Create an `.rpm` software package by following [Create a package using the Simple workflow](distributor-working-with-packages-create.md#distributor-working-with-packages-create-simple). Set the following values for the package, and then upload the software package using Distributor:

```
Platform version: _any
Target platform: oracle
Architecture: x86_64
```

# AWS Systems Manager Fleet Manager

Fleet Manager, a tool in AWS Systems Manager, is a unified user interface (UI) experience that helps you remotely manage your nodes running on AWS or on premises. With Fleet Manager, you can view the health and performance status of your entire server fleet from one console. You can also gather data from individual nodes to perform common troubleshooting and management tasks from the console. This includes connecting to Windows instances using the Remote Desktop Protocol (RDP), viewing folder and file contents, managing the Windows registry, managing operating system users, and more.

To get started with Fleet Manager, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/fleet-manager). In the navigation pane, choose **Fleet Manager**.

## Who should use Fleet Manager?


Any AWS customer who wants a centralized way to manage their node fleet should use Fleet Manager.

## How can Fleet Manager benefit my organization?


Fleet Manager offers these benefits:
+ Perform a variety of common systems administration tasks without having to manually connect to your managed nodes.
+ Manage nodes running on multiple platforms from a single unified console.
+ Manage nodes running different operating systems from a single unified console.
+ Improve the efficiency of your systems administration.

## What are the features of Fleet Manager?


Key features of Fleet Manager include the following:
+ **Access the Red Hat Knowledgebase Portal**

  Access binaries, knowledge-shares, and discussion forums on the Red Hat Knowledgebase Portal through your Red Hat Enterprise Linux (RHEL) instances.
+ **Managed node status** 

  View which managed instances are `running` and which are `stopped`. For more information about stopped instances, see [Stop and start your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html) in the *Amazon EC2 User Guide*. For AWS IoT Greengrass core devices, you can view which are `online`, `offline`, or show a status of `Connection lost`.
**Note**  
If you stopped your managed instance before July 12, 2021, it won't display the `stopped` marker. To show the marker, start and stop the instance.
+ **View instance information**

  View information about the folder and file data stored on the volumes attached to your managed instances, performance data about your instances in real-time, and log data stored on your instances.
+ **View edge device information**

  View the AWS IoT Greengrass Thing name for the device, SSM Agent ping status and version, and more.
+ **Manage accounts and registry**

  Manage operating system (OS) user accounts on your instances and registry on your Windows instances.
+ **Control access to features**

  Control access to Fleet Manager features using AWS Identity and Access Management (IAM) policies. With these policies, you can control which individual users or groups in your organization can use various Fleet Manager features, and which managed nodes they can manage.

**Topics**
+ [Who should use Fleet Manager?](#fleet-who)
+ [How can Fleet Manager benefit my organization?](#fleet-benefits)
+ [What are the features of Fleet Manager?](#fleet-features)
+ [Setting up Fleet Manager](setting-up-fleet-manager.md)
+ [Working with managed nodes](fleet-manager-managed-nodes.md)
+ [Managing EC2 instances automatically with Default Host Management Configuration](fleet-manager-default-host-management-configuration.md)
+ [Connecting to a Windows Server managed instance using Remote Desktop](fleet-manager-remote-desktop-connections.md)
+ [Managing Amazon EBS volumes on managed instances](fleet-manager-manage-amazon-ebs-volumes.md)
+ [Accessing the Red Hat Knowledge base portal](fleet-manager-red-hat-knowledge-base-access.md)
+ [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md)

# Setting up Fleet Manager


Before users in your AWS account can use Fleet Manager, a tool in AWS Systems Manager, to monitor and manage your managed nodes, they must be granted the necessary permissions. In addition, any Amazon Elastic Compute Cloud (Amazon EC2) instances; AWS IoT Greengrass core devices; and on-premises servers, edge devices, and virtual machines (VMs) to be monitored and managed using Fleet Manager must be Systems Manager *managed nodes*. A managed node is any machine configured for use with Systems Manager in [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environments.

This means your nodes must meet certain prerequisites and be configured with the AWS Systems Manager Agent (SSM Agent).

Depending on the machine type, refer to one of the following topics to ensure your machines meet the requirements for managed nodes.
+ Amazon EC2 instances: [Managing EC2 instances with Systems Manager](systems-manager-setting-up-ec2.md)
**Tip**  
You can also use Quick Setup, a tool in AWS Systems Manager, to help you quickly configure your Amazon EC2 instances as managed instances in an individual account. If your business or organization uses AWS Organizations, you can also configure instances across multiple organizational units (OUs) and AWS Regions. For more information about using Quick Setup to configure managed instances, see [Set up Amazon EC2 host management using Quick Setup](quick-setup-host-management.md).
+ On-premises and other server types in the cloud: [Managing nodes in hybrid and multicloud environments with Systems Manager](systems-manager-hybrid-multicloud.md)
+ AWS IoT Greengrass (edge) devices: [Managing edge devices with Systems Manager](systems-manager-setting-up-edge-devices.md)

**Topics**
+ [

# Controlling access to Fleet Manager
](configuring-fleet-manager-permissions.md)

# Controlling access to Fleet Manager


To use Fleet Manager, a tool in AWS Systems Manager, your AWS Identity and Access Management (IAM) user or role must have the required permissions. You can create an IAM policy that provides access to all Fleet Manager features, or modify your policy to grant access to the features you choose. You then grant these permissions to users, or identities, in your account.

**Task 1: Create IAM policies to define access permissions**  
Follow the method described in the following topic in the *IAM User Guide* to create an IAM policy that provides identities (users, roles, or user groups) with access to Fleet Manager:  
+ [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html)
You can use one of the sample policies we provide below, or modify them according to the permissions you want to grant. We provide sample policies for full Fleet Manager access and read-only access. 

**Task 2: Attach the IAM policies to users to grant permissions**  
After you have created the IAM policy or policies that define access permissions to Fleet Manager, use one of the following procedures in the *IAM User Guide* to grant these permissions to identities in your account:  
+ [Adding IAM identity permissions (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console)
+ [Adding IAM identity permissions (AWS CLI)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policy-cli)
+ [Adding IAM identity permissions (AWS API)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policy-api)
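
As a sketch of how Tasks 1 and 2 can be scripted, the following Python example composes a trimmed read-only policy document using only the standard library. The statement list is a small subset of the read-only sample policy later in this topic, not a complete policy, and the commented-out boto3 calls and names (`FleetManagerReadOnly`, `example-user`) are illustrative placeholders:

```python
import json

def fleet_manager_readonly_policy() -> str:
    """Return a trimmed, read-only IAM policy document as a JSON string.

    This is a subset of the full read-only sample policy shown later in
    this topic; it grants only a few describe/list actions.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "EC2",
                "Effect": "Allow",
                "Action": ["ec2:DescribeInstances", "ec2:DescribeTags"],
                "Resource": "*",
            },
            {
                "Sid": "General",
                "Effect": "Allow",
                "Action": [
                    "ssm:DescribeInstanceProperties",
                    "ssm:ListAssociations",
                ],
                "Resource": "*",
            },
        ],
    }
    return json.dumps(policy, indent=4)

document = fleet_manager_readonly_policy()

# With boto3 and appropriate credentials, the policy could then be
# created and attached (policy and user names are hypothetical):
# iam = boto3.client("iam")
# resp = iam.create_policy(PolicyName="FleetManagerReadOnly",
#                          PolicyDocument=document)
# iam.attach_user_policy(UserName="example-user",
#                        PolicyArn=resp["Policy"]["Arn"])
```

The same document string can equally be pasted into the IAM console's JSON policy editor, as described in the console procedure above.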

**Topics**
+ [Sample policy for Fleet Manager administrator access](#admin-policy-sample)
+ [Sample policy for Fleet Manager read-only access](#read-only-policy-sample)

## Sample policy for Fleet Manager administrator access


The following policy provides permissions to all Fleet Manager features. This means a user can create and delete local users and groups, modify group membership for any local group, and modify Windows Server registry keys or values. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EC2",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags",
                "ec2:DeleteTags",
                "ec2:DescribeInstances",
                "ec2:DescribeTags"
            ],
            "Resource": "*"
        },
        {
            "Sid": "General",
            "Effect": "Allow",
            "Action": [
                "ssm:AddTagsToResource",
                "ssm:DescribeInstanceAssociationsStatus",
                "ssm:DescribeInstancePatches",
                "ssm:DescribeInstancePatchStates",
                "ssm:DescribeInstanceProperties",
                "ssm:GetCommandInvocation",
                "ssm:GetServiceSetting",
                "ssm:GetInventorySchema",
                "ssm:ListComplianceItems",
                "ssm:ListInventoryEntries",
                "ssm:ListTagsForResource",
                "ssm:ListCommandInvocations",
                "ssm:ListAssociations",
                "ssm:RemoveTagsFromResource"
            ],
            "Resource": "*"
        },
        {
            "Sid": "DefaultHostManagement",
            "Effect": "Allow",
            "Action": [
                "ssm:ResetServiceSetting",
                "ssm:UpdateServiceSetting"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::111122223333:role/service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": [
                        "ssm.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Sid": "SendCommand",
            "Effect": "Allow",
            "Action": [
                "ssm:GetDocument",
                "ssm:SendCommand",
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:111122223333:instance/*",
                "arn:aws:ssm:*:111122223333:managed-instance/*",
                "arn:aws:ssm:*:111122223333:document/SSM-SessionManagerRunShell",
                "arn:aws:ssm:*:*:document/AWS-PasswordReset",
                "arn:aws:ssm:*:*:document/AWSFleetManager-AddUsersToGroups",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CopyFileSystemItem",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CreateDirectory",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CreateGroup",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CreateUser",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CreateUserInteractive",
                "arn:aws:ssm:*:*:document/AWSFleetManager-CreateWindowsRegistryKey",
                "arn:aws:ssm:*:*:document/AWSFleetManager-DeleteFileSystemItem",
                "arn:aws:ssm:*:*:document/AWSFleetManager-DeleteGroup",
                "arn:aws:ssm:*:*:document/AWSFleetManager-DeleteUser",
                "arn:aws:ssm:*:*:document/AWSFleetManager-DeleteWindowsRegistryKey",
                "arn:aws:ssm:*:*:document/AWSFleetManager-DeleteWindowsRegistryValue",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetDiskInformation",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetFileContent",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetFileSystemContent",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetGroups",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetPerformanceCounters",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetProcessDetails",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetUsers",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetWindowsEvents",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetWindowsRegistryContent",
                "arn:aws:ssm:*:*:document/AWSFleetManager-MountVolume",
                "arn:aws:ssm:*:*:document/AWSFleetManager-MoveFileSystemItem",
                "arn:aws:ssm:*:*:document/AWSFleetManager-RemoveUsersFromGroups",
                "arn:aws:ssm:*:*:document/AWSFleetManager-RenameFileSystemItem",
                "arn:aws:ssm:*:*:document/AWSFleetManager-SetWindowsRegistryValue",
                "arn:aws:ssm:*:*:document/AWSFleetManager-StartProcess",
                "arn:aws:ssm:*:*:document/AWSFleetManager-TerminateProcess"
            ]
        },
        {
            "Sid": "TerminateSession",
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "${aws:userid}"
                    ]
                }
            }
        }
    ]
}
```

------

## Sample policy for Fleet Manager read-only access


The following policy provides permissions to read-only Fleet Manager features. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EC2",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeTags"
            ],
            "Resource": "*"
        },
        {
            "Sid": "General",
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeInstanceAssociationsStatus",
                "ssm:DescribeInstancePatches",
                "ssm:DescribeInstancePatchStates",
                "ssm:DescribeInstanceProperties",
                "ssm:GetCommandInvocation",
                "ssm:GetServiceSetting",
                "ssm:GetInventorySchema",
                "ssm:ListComplianceItems",
                "ssm:ListInventoryEntries",
                "ssm:ListTagsForResource",
                "ssm:ListCommandInvocations",
                "ssm:ListAssociations"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SendCommand",
            "Effect": "Allow",
            "Action": [
                "ssm:GetDocument",
                "ssm:SendCommand",
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:111122223333:instance/*",
                "arn:aws:ssm:*:111122223333:managed-instance/*",
                "arn:aws:ssm:*:111122223333:document/SSM-SessionManagerRunShell",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetDiskInformation",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetFileContent",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetFileSystemContent",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetGroups",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetPerformanceCounters",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetProcessDetails",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetUsers",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetWindowsEvents",
                "arn:aws:ssm:*:*:document/AWSFleetManager-GetWindowsRegistryContent"
            ]
        },
        {
            "Sid": "TerminateSession",
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "${aws:userid}"
                    ]
                }
            }
        }
    ]
}
```

------

# Working with managed nodes


A *managed node* is any machine configured for AWS Systems Manager. You can configure the following machine types as managed nodes: 
+ Amazon Elastic Compute Cloud (Amazon EC2) instances
+ Servers on your own premises (on-premises servers)
+ AWS IoT Greengrass core devices
+ AWS IoT and non-AWS edge devices
+ Virtual machines (VMs), including VMs in other cloud environments

In the Systems Manager console, any machine whose ID is prefixed with `mi-` has been configured as a managed node using a [*hybrid activation*](activations.md). Edge devices display their AWS IoT thing name.

**Note**  
The only supported feature for macOS instances is viewing the file system.

**About Systems Manager instance tiers**  
AWS Systems Manager offers a standard-instances tier and an advanced-instances tier. Both support managed nodes in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. The standard-instances tier allows you to register a maximum of 1,000 machines per AWS account per AWS Region. If you need to register more than 1,000 machines in a single account and Region, then use the advanced-instances tier. You can create as many managed nodes as you like in the advanced-instances tier. All managed nodes configured for Systems Manager are priced on a pay-per-use basis. For more information about enabling the advanced instances tier, see [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md). For more information about pricing, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).

Note the following additional information about the standard-instances tier and advanced-instances tier:
+ Advanced instances also allow you to connect to your non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment by using AWS Systems Manager Session Manager. Session Manager provides interactive shell access to your instances. For more information, see [AWS Systems Manager Session Manager](session-manager.md).
+ The standard-instances quota also applies to EC2 instances that use a Systems Manager on-premises activation (which isn't a common scenario).
+ To patch applications released by Microsoft on virtual machines (VMs) and on-premises instances, activate the advanced-instances tier. There is a charge to use the advanced-instances tier. There is no additional charge to patch applications released by Microsoft on Amazon Elastic Compute Cloud (Amazon EC2) instances. For more information, see [Patching applications released by Microsoft on Windows Server](patch-manager-patching-windows-applications.md).

**Display managed nodes**  
If you don't see your managed nodes listed in the console, then do the following:

1. Verify that the console is open in the AWS Region where you created your managed nodes. You can switch Regions by using the list in the top right corner of the console.

1. Verify that the setup steps for your managed nodes meet Systems Manager requirements. For information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).

1. For non-EC2 machines, verify that you completed the hybrid activation process. For more information, see [Managing nodes in hybrid and multicloud environments with Systems Manager](systems-manager-hybrid-multicloud.md).

Note the following additional information:
+ The Fleet Manager console does not display Amazon EC2 nodes that have been terminated.
+ Systems Manager requires accurate time references in order to perform operations on your machines. If the date and time aren't set correctly on your managed nodes, the machines might not match the signature date of your API requests. For more information, see [Use cases and best practices](systems-manager-best-practices.md).
+ When you create or edit tags, the system can take up to one hour to display changes in the table filter.
+ After the status of a managed node has been `Connection Lost` for at least 30 days, the node might no longer be listed in the Fleet Manager console. To restore it to the list, the issue that caused the lost connection must be resolved. For troubleshooting tips, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md).

**Verify Systems Manager support on a managed node**  
AWS Config provides AWS Managed Rules, which are predefined, customizable rules that AWS Config uses to evaluate whether your AWS resource configurations comply with common best practices. AWS Config Managed Rules include the [ec2-instance-managed-by-systems-manager](https://docs.aws.amazon.com/config/latest/developerguide/ec2-instance-managed-by-systems-manager.html) rule. This rule checks whether the Amazon EC2 instances in your account are managed by Systems Manager. For more information, see [AWS Config Managed Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html). 

**Increase security posture on managed nodes**  
For information about increasing your security posture against unauthorized root-level commands on your managed nodes, see [Restricting access to root-level commands through SSM Agent](ssm-agent-restrict-root-level-commands.md).

**Deregister managed nodes**  
You can deregister managed nodes at any time. For example, if you're managing multiple nodes with the same AWS Identity and Access Management (IAM) role and you notice any kind of malicious behavior, you can deregister any number of machines at any point. (To re-register the same machine, you must use a different hybrid Activation Code and Activation ID than the ones previously used to register it.) For information about deregistering managed nodes, see [Deregistering managed nodes in a hybrid and multicloud environment](fleet-manager-deregister-hybrid-nodes.md).

**Topics**
+ [Configuring instance tiers](fleet-manager-configure-instance-tiers.md)
+ [Resetting passwords on managed nodes](fleet-manager-reset-password.md)
+ [Deregistering managed nodes in a hybrid and multicloud environment](fleet-manager-deregister-hybrid-nodes.md)
+ [Working with OS file systems using Fleet Manager](fleet-manager-file-system-management.md)
+ [Monitoring managed node performance](fleet-manager-monitoring-node-performance.md)
+ [Working with processes](fleet-manager-manage-processes.md)
+ [Viewing logs on managed nodes](fleet-manager-view-node-logs.md)
+ [Managing OS user accounts and groups on managed nodes using Fleet Manager](fleet-manager-manage-os-user-accounts.md)
+ [Managing the Windows registry on managed nodes](fleet-manager-manage-windows-registry.md)

# Configuring instance tiers


This topic describes the scenarios in which you must activate the advanced-instances tier.

AWS Systems Manager offers a standard-instances tier and an advanced-instances tier for non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. 

You can register up to 1,000 standard [hybrid-activated nodes](activations.md) per account per AWS Region at no additional cost. However, registering more than 1,000 hybrid nodes requires that you activate the advanced-instances tier. There is a charge to use the advanced-instances tier. For more information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).

Even with fewer than 1,000 registered hybrid-activated nodes, two other scenarios require the advanced-instances tier: 
+ You want to use Session Manager to connect to non-EC2 nodes.
+ You want to patch applications (not operating systems) released by Microsoft on non-EC2 nodes.
**Note**  
There is no charge to patch applications released by Microsoft on Amazon EC2 instances.

## Advanced-instances tier detailed scenarios


The following information provides details on the three scenarios for which you must activate the advanced-instances tier.

Scenario 1: You want to register more than 1,000 hybrid-activated nodes  
Using the standard-instances tier, you can register a maximum of 1,000 non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment per AWS Region in a specific account without additional charge. If you need to register more than 1,000 non-EC2 nodes in a Region, you must use the advanced-instances tier. You can then activate as many machines for your hybrid and multicloud environment as you want. Charges for the advanced-instances tier are based on the number of advanced nodes activated as Systems Manager managed nodes and the hours those nodes are running.  
All Systems Manager managed nodes that use the activation process described in [Create a hybrid activation to register nodes with Systems Manager](hybrid-activation-managed-nodes.md) are then subject to charge if you exceed 1,000 on-premises nodes in a Region in a specific account.  
You can also activate existing Amazon Elastic Compute Cloud (Amazon EC2) instances using Systems Manager hybrid activations and work with them as non-EC2 instances, such as for testing. These also qualify as hybrid nodes. This isn't a common scenario.

Scenario 2: Patching Microsoft-released applications on hybrid-activated nodes  
The advanced-instances tier is also required if you want to patch Microsoft-released applications on non-EC2 nodes in a hybrid and multicloud environment. If you activate the advanced-instances tier to patch Microsoft applications on non-EC2 nodes, charges are then incurred for all on-premises nodes, even if you have fewer than 1,000.  
There is no additional charge to patch applications released by Microsoft on Amazon Elastic Compute Cloud (Amazon EC2) instances. For more information, see [Patching applications released by Microsoft on Windows Server](patch-manager-patching-windows-applications.md).

Scenario 3: Connecting to hybrid-activated nodes using Session Manager  
Session Manager provides interactive shell access to your instances. To connect to hybrid-activated managed nodes using Session Manager, you must activate the advanced-instances tier. Charges are then incurred for all hybrid-activated nodes, even if you have fewer than 1,000.

**Summary: When do I need the advanced-instances tier?**  
Use the following table to review when you must use the advanced-instances tier, and for which scenarios additional charges apply.



| Scenario | Advanced-instances tier required? | Additional charges apply? | 
| --- | --- | --- | 
|  The number of hybrid-activated nodes in my Region in a specific account is more than 1,000.  | Yes | Yes | 
|  I want to use Patch Manager to patch Microsoft-released applications on any number of hybrid-activated nodes, even less than 1,000.  | Yes | Yes | 
|  I want to use Session Manager to connect to any number of hybrid-activated nodes, even less than 1,000.  | Yes | Yes | 
|  None of the above applies: I have 1,000 or fewer hybrid-activated nodes, and I don't patch Microsoft-released applications on them or connect to them using Session Manager.  | No | No | 
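
The decision captured in this table can be summarized as a small predicate. The following Python sketch (the function name and parameters are illustrative, not part of any AWS API) returns whether the advanced-instances tier is required for a given account and Region:

```python
def advanced_tier_required(hybrid_node_count: int,
                           patch_microsoft_apps: bool,
                           use_session_manager: bool) -> bool:
    """Return True if any of the three advanced-tier scenarios applies.

    hybrid_node_count    - number of hybrid-activated nodes in the
                           account and Region
    patch_microsoft_apps - whether you patch Microsoft-released
                           applications on hybrid-activated nodes
    use_session_manager  - whether you connect to hybrid-activated
                           nodes using Session Manager
    """
    return (hybrid_node_count > 1000
            or patch_microsoft_apps
            or use_session_manager)
```

In every case where this returns `True`, additional charges apply; when it returns `False`, the standard-instances tier suffices at no additional cost.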

**Topics**
+ [Advanced-instances tier detailed scenarios](#systems-manager-managed-instances-tier-scenarios)
+ [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md)
+ [Reverting from the advanced-instances tier to the standard-instances tier](fleet-manager-revert-to-standard-tier.md)

# Turning on the advanced-instances tier


AWS Systems Manager offers a standard-instances tier and an advanced-instances tier for non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. The standard-instances tier lets you register a maximum of 1,000 hybrid-activated machines per AWS account per AWS Region. The advanced-instances tier is also required to use Patch Manager to patch Microsoft-released applications on non-EC2 nodes, and to connect to non-EC2 nodes using Session Manager. For more information about these scenarios, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md).

This section describes how to configure your hybrid and multicloud environment to use the advanced-instances tier.

**Before you begin**  
Review pricing details for advanced instances. Advanced instances are priced on a pay-per-use basis. For more information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/). 

## Configuring permissions to turn on the advanced-instances tier


Verify that you have permission in AWS Identity and Access Management (IAM) to change your environment from the standard-instances tier to the advanced-instances tier. You must either have the `AdministratorAccess` IAM policy attached to your user, group, or role, or you must have permission to change the Systems Manager activation-tier service setting. The activation-tier setting uses the following API operations: 
+ [GetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetServiceSetting.html)
+ [UpdateServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateServiceSetting.html)
+ [ResetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ResetServiceSetting.html)
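
All three operations identify the service setting by its ARN. As a sketch, the setting ID for the activation-tier setting can be composed from your Region and account ID as follows (the Region and account values shown are placeholders):

```python
def activation_tier_setting_id(region: str, account_id: str) -> str:
    """Build the service-setting ARN that is passed as the SettingId
    parameter to GetServiceSetting, UpdateServiceSetting, and
    ResetServiceSetting."""
    return (f"arn:aws:ssm:{region}:{account_id}:"
            "servicesetting/ssm/managed-instance/activation-tier")

setting_id = activation_tier_setting_id("us-east-1", "111122223333")
```

The same ARN appears as the `Resource` value in the inline policy in the following procedure.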

Use the following procedure to add an inline IAM policy to a user account. This policy allows a user to view the current managed-instance tier setting. This policy also allows the user to change or reset the current setting in the specified AWS account and AWS Region.

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Users**.

1. In the list, choose the name of the user to embed a policy in.

1. Choose the **Permissions** tab.

1. On the right side of the page, under **Permission policies**, choose **Add inline policy**. 

1. Choose the **JSON** tab.

1. Replace the default content with the following:

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssm:GetServiceSetting"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "ssm:ResetServiceSetting",
                   "ssm:UpdateServiceSetting"
               ],
               "Resource": "arn:aws:ssm:us-east-1:111122223333:servicesetting/ssm/managed-instance/activation-tier"
           }
       ]
   }
   ```

------

1. Choose **Review policy**.

1. On the **Review policy** page, for **Name**, enter a name for the inline policy. For example: **Managed-Instances-Tier**.

1. Choose **Create policy**.

Administrators can specify read-only permission by assigning the following inline policy to the user.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetServiceSetting"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "ssm:ResetServiceSetting",
                "ssm:UpdateServiceSetting"
            ],
            "Resource": "*"
        }
    ]
}
```

------

For more information about creating and editing IAM policies, see [Creating IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

## Turning on the advanced-instances tier (console)


The following procedure shows you how to use the Systems Manager console to change *all* non-EC2 nodes that were added using managed-instance activation, in the specified AWS account and AWS Region, to use the advanced-instances tier.

**Before you begin**  
Verify that the console is open in the AWS Region where you created your managed instances. You can switch Regions by using the list in the top right corner of the console. 

Verify that you have completed the setup requirements for your Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. For information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).

**Important**  
The following procedure describes how to change an account-level setting. This change results in charges being billed to your account.

**To turn on the advanced-instances tier (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Settings**, and then choose **Change instance tier settings**.

1. Review the information in the dialog box about changing account settings.

1. If you approve, choose the option to accept, and then choose **Change setting**.

The system can take several minutes to complete the process of moving all instances from the standard-instances tier to the advanced-instances tier.

**Note**  
For information about changing back to the standard-instances tier, see [Reverting from the advanced-instances tier to the standard-instances tier](fleet-manager-revert-to-standard-tier.md).

## Turning on the advanced-instances tier (AWS CLI)


The following procedure shows you how to use the AWS Command Line Interface to change *all* on-premises servers and VMs that were added using managed-instance activation, in the specified AWS account and AWS Region, to use the advanced-instances tier.

**Important**  
The following procedure describes how to change an account-level setting. This change results in charges being billed to your account.

**To turn on the advanced-instances tier using the AWS CLI**

1. Open the AWS CLI and run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-service-setting \
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier \
       --setting-value advanced
   ```

------
#### [ Windows ]

   ```
   aws ssm update-service-setting ^
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier ^
       --setting-value advanced
   ```

------

   There is no output if the command succeeds.

1. Run the following command to view the current service settings for managed nodes in the current AWS account and AWS Region.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-service-setting \
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier
   ```

------
#### [ Windows ]

   ```
   aws ssm get-service-setting ^
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier
   ```

------

   The command returns information like the following.

   ```
   {
       "ServiceSetting": {
           "SettingId": "/ssm/managed-instance/activation-tier",
           "SettingValue": "advanced",
           "LastModifiedDate": 1555603376.138,
           "LastModifiedUser": "arn:aws:sts::123456789012:assumed-role/Administrator/User_1",
           "ARN": "arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/managed-instance/activation-tier",
           "Status": "PendingUpdate"
       }
   }
   ```
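If you script tier changes across multiple accounts or Regions, building the setting ARN programmatically helps avoid copy-paste errors. A minimal Python sketch (the helper function name is ours, not part of any AWS SDK):

```python
def activation_tier_setting_arn(region: str, account_id: str) -> str:
    """Build the ARN for the /ssm/managed-instance/activation-tier service setting."""
    return (
        f"arn:aws:ssm:{region}:{account_id}:"
        "servicesetting/ssm/managed-instance/activation-tier"
    )

# Example: the setting ARN for account 123456789012 in us-east-2
print(activation_tier_setting_arn("us-east-2", "123456789012"))
```

With the AWS SDK for Python (Boto3), the equivalent of `update-service-setting` would be `boto3.client("ssm").update_service_setting(SettingId=arn, SettingValue="advanced")`; running it requires valid credentials and the `ssm:UpdateServiceSetting` permission.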

## Turning on the advanced-instances tier (PowerShell)


The following procedure shows you how to use the AWS Tools for Windows PowerShell to change *all* on-premises servers and VMs that were added using managed-instance activation, in the specified AWS account and AWS Region, to use the advanced-instances tier.

**Important**  
The following procedure describes how to change an account-level setting. This change results in charges being billed to your account.

**To turn on the advanced-instances tier using PowerShell**

1. Open AWS Tools for Windows PowerShell and run the following command. Replace each *example resource placeholder* with your own information.

   ```
   Update-SSMServiceSetting `
       -SettingId "arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier" `
       -SettingValue "advanced"
   ```

   There is no output if the command succeeds.

1. Run the following command to view the current service settings for managed nodes in the current AWS account and AWS Region.

   ```
   Get-SSMServiceSetting `
       -SettingId "arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier"
   ```

   The command returns information like the following.

   ```
   ARN:arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/managed-instance/activation-tier
   LastModifiedDate : 4/18/2019 4:02:56 PM
   LastModifiedUser : arn:aws:sts::123456789012:assumed-role/Administrator/User_1
   SettingId        : /ssm/managed-instance/activation-tier
   SettingValue     : advanced
   Status           : PendingUpdate
   ```

The system can take several minutes to complete the process of moving all nodes from the standard-instances tier to the advanced-instances tier.

**Note**  
For information about changing back to the standard-instances tier, see [Reverting from the advanced-instances tier to the standard-instances tier](fleet-manager-revert-to-standard-tier.md).

# Reverting from the advanced-instances tier to the standard-instances tier


This section describes how to change hybrid-activated nodes running in the advanced-instances tier back to the standard-instances tier. This configuration applies to all hybrid-activated nodes in an AWS account and a single AWS Region.

**Before you begin**  
Review the following important details.

**Note**  
+ You can't revert to the standard-instances tier if you're running more than 1,000 hybrid-activated nodes in the account and Region. You must first deregister nodes until you have 1,000 or fewer. This also applies to Amazon Elastic Compute Cloud (Amazon EC2) instances that use a Systems Manager hybrid activation (which isn't a common scenario). For more information, see [Deregistering managed nodes in a hybrid and multicloud environment](fleet-manager-deregister-hybrid-nodes.md).
+ After you revert, you won't be able to use Session Manager, a tool in AWS Systems Manager, to interactively access your hybrid-activated nodes.
+ After you revert, you won't be able to use Patch Manager, a tool in AWS Systems Manager, to patch applications released by Microsoft on hybrid-activated nodes.
+ The process of reverting all hybrid-activated nodes to the standard-instances tier can take 30 minutes or more to complete.


## Reverting to the standard-instances tier (console)


The following procedure shows you how to use the Systems Manager console to change all hybrid-activated nodes in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment to use the standard-instances tier in the specified AWS account and AWS Region.

**To revert to the standard-instances tier (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the **Account settings** dropdown and choose **Instance tier settings**.

1. Choose **Change account setting**.

1. Review the information in the pop-up about changing account settings, and then if you approve, choose the option to accept and continue.

## Reverting to the standard-instances tier (AWS CLI)


The following procedure shows you how to use the AWS Command Line Interface to change all hybrid-activated nodes in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment to use the standard-instances tier in the specified AWS account and AWS Region.

**To revert to the standard-instances tier using the AWS CLI**

1. Open the AWS CLI and run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-service-setting \
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier \
       --setting-value standard
   ```

------
#### [ Windows ]

   ```
   aws ssm update-service-setting ^
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier ^
       --setting-value standard
   ```

------

   There is no output if the command succeeds.

1. After approximately 30 minutes, run the following command to view the settings for managed instances in the current AWS account and AWS Region.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-service-setting \
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier
   ```

------
#### [ Windows ]

   ```
   aws ssm get-service-setting ^
       --setting-id arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier
   ```

------

   The command returns information like the following.

   ```
   {
       "ServiceSetting": {
           "SettingId": "/ssm/managed-instance/activation-tier",
           "SettingValue": "standard",
           "LastModifiedDate": 1555603376.138,
           "LastModifiedUser": "System",
           "ARN": "arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/managed-instance/activation-tier",
           "Status": "Default"
       }
   }
   ```

   The status changes to *Default* after the request has been approved.

## Reverting to the standard-instances tier (PowerShell)


The following procedure shows you how to use AWS Tools for Windows PowerShell to change all hybrid-activated nodes in your hybrid and multicloud environment to use the standard-instances tier in the specified AWS account and AWS Region.

**To revert to the standard-instances tier using PowerShell**

1. Open AWS Tools for Windows PowerShell and run the following command.

   ```
   Update-SSMServiceSetting `
       -SettingId "arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier" `
       -SettingValue "standard"
   ```

   There is no output if the command succeeds.

1. After approximately 30 minutes, run the following command to view the settings for managed instances in the current AWS account and AWS Region.

   ```
   Get-SSMServiceSetting `
       -SettingId "arn:aws:ssm:region:aws-account-id:servicesetting/ssm/managed-instance/activation-tier"
   ```

   The command returns information like the following.

   ```
   ARN: arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/managed-instance/activation-tier
   LastModifiedDate : 4/18/2019 4:02:56 PM
   LastModifiedUser : System
   SettingId        : /ssm/managed-instance/activation-tier
   SettingValue     : standard
   Status           : Default
   ```

   The status changes to *Default* after the request has been approved.

# Resetting passwords on managed nodes


You can reset the password for any user on a managed node. This includes Amazon Elastic Compute Cloud (Amazon EC2) instances; AWS IoT Greengrass core devices; and on-premises servers, edge devices, and virtual machines (VMs) that are managed by AWS Systems Manager. The password reset functionality is built on Session Manager, a tool in AWS Systems Manager. You can use this functionality to connect to managed nodes without opening inbound ports, maintaining bastion hosts, or managing SSH keys. 

Password reset is useful when a user has forgotten a password, or when you want to quickly update a password without making an RDP or SSH connection to a managed node. 

**Prerequisites**  
Before you can reset the password on a managed node, the following requirements must be met:
+ The managed node on which you want to change a password must be a Systems Manager managed node, and SSM Agent version 2.3.668.0 or later must be installed on it. For information about installing or updating SSM Agent, see [Working with SSM Agent](ssm-agent.md).
+ The password reset functionality uses the Session Manager configuration that is set up for your account to connect to the managed node. Therefore, the prerequisites for using Session Manager must have been completed for your account in the current AWS Region. For more information, see [Setting up Session Manager](session-manager-getting-started.md).
**Note**  
Session Manager support for on-premises nodes is provided for the advanced-instances tier only. For more information, see [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md).
+ The AWS user who is changing the password must have the `ssm:SendCommand` permission for the managed node. For more information, see [Restricting Run Command access based on tags](run-command-setting-up.md#tag-based-access).

**Restricting access**  
You can limit a user's ability to reset passwords to specific managed nodes. This is done by using identity-based policies for the Session Manager `ssm:StartSession` operation with the `AWS-PasswordReset` SSM document. For more information, see [Control user session access to instances](session-manager-getting-started-restrict-access.md).
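For example, an identity-based policy can allow `ssm:StartSession` only when the session targets specific instances with the `AWS-PasswordReset` document. A sketch with placeholder Region, account, and instance IDs:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:StartSession",
            "Resource": [
                "arn:aws:ec2:us-east-2:123456789012:instance/i-1234567890abcdef0",
                "arn:aws:ssm:*:*:document/AWS-PasswordReset"
            ]
        }
    ]
}
```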

**Encrypting data**  
To use the password reset option for managed nodes, turn on AWS Key Management Service (AWS KMS) encryption of Session Manager session data. For more information, see [Turn on KMS key encryption of session data (console)](session-preferences-enable-encryption.md).

## Reset a password on a managed node


You can reset a password on a Systems Manager managed node using the Systems Manager **Fleet Manager** console or the AWS Command Line Interface (AWS CLI).

**To change the password on a managed node (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the node that needs a new password.

1. Choose **Instance actions, Reset password**.

1. For **User name**, enter the name of the user whose password you're changing. This can be any user that has an account on the node.

1. Choose **Submit**.

1. Follow the prompts in the **Enter new password** command window to specify the new password.
**Note**  
If the version of SSM Agent on the managed node doesn't support password resets, you're prompted to install a supported version using Run Command, a tool in AWS Systems Manager.

**To reset the password on a managed node (AWS CLI)**

1. To reset the password for a user on a managed node, run the following command. Replace each *example resource placeholder* with your own information.
**Note**  
To use the AWS CLI to reset a password, the Session Manager plugin must be installed on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

------
#### [ Linux & macOS ]

   ```
   aws ssm start-session \
       --target instance-id \
       --document-name "AWS-PasswordReset" \
       --parameters '{"username": ["user-name"]}'
   ```

------
#### [ Windows ]

   ```
   aws ssm start-session ^
       --target instance-id ^
       --document-name "AWS-PasswordReset" ^
       --parameters username="user-name"
   ```

------

1. Follow the prompts in the **Enter new password** command window to specify the new password.

## Troubleshoot password resets on managed nodes


Many password reset issues can be resolved by ensuring that you have completed the [password reset prerequisites](#pw-reset-prereqs). For other problems, use the following information to help you troubleshoot password reset issues.

**Topics**
+ [Managed node not available](#password-reset-troubleshooting-instances)
+ [SSM Agent not up-to-date (console)](#password-reset-troubleshooting-ssmagent-console)
+ [Password reset options aren't provided (AWS CLI)](#password-reset-troubleshooting-ssmagent-cli)
+ [No authorization to run `ssm:SendCommand`](#password-reset-troubleshooting-sendcommand)
+ [Session Manager error message](#password-reset-troubleshooting-session-manager)

### Managed node not available


**Problem**: You want to reset the password for a managed node on the **Managed instances** console page, but the node isn't in the list.
+ **Solution**: The managed node you want to connect to might not be configured for Systems Manager. To use an EC2 instance with Systems Manager, an AWS Identity and Access Management (IAM) instance profile that gives Systems Manager permission to perform actions on your instances must be attached to the instance. For information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). 

  To use a non-EC2 machine with Systems Manager, create an IAM service role that gives Systems Manager permission to perform actions on your managed nodes. For more information, see [Create the IAM service role required for Systems Manager in hybrid and multicloud environments](hybrid-multicloud-service-role.md). (Session Manager support for on-premises servers and VMs is provided for the advanced-instances tier only. For more information, see [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md).)

### SSM Agent not up-to-date (console)


**Problem**: A message reports that the version of SSM Agent doesn't support password reset functionality.
+ **Solution**: Version 2.3.668.0 or later of SSM Agent is required to perform password resets. In the console, you can update the agent on the managed node by choosing **Update SSM Agent**. 

  An updated version of SSM Agent is released whenever new tools are added to Systems Manager or updates are made to existing tools. Failing to use the latest version of the agent can prevent your managed node from using various Systems Manager tools and features. For that reason, we recommend that you automate the process of keeping SSM Agent up to date on your machines. For information, see [Automating updates to SSM Agent](ssm-agent-automatic-updates.md). Subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub to get notifications about SSM Agent updates.

### Password reset options aren't provided (AWS CLI)


**Problem**: You connect successfully to a managed node using the AWS CLI [start-session](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) command. You specified the SSM document `AWS-PasswordReset` and provided a valid user name, but prompts to change the password aren't displayed.
+ **Solution**: The version of SSM Agent on the managed node isn't up-to-date. Version 2.3.668.0 or later is required to perform password resets. 

  An updated version of SSM Agent is released whenever new tools are added to Systems Manager or updates are made to existing tools. Failing to use the latest version of the agent can prevent your managed node from using various Systems Manager tools and features. For that reason, we recommend that you automate the process of keeping SSM Agent up to date on your machines. For information, see [Automating updates to SSM Agent](ssm-agent-automatic-updates.md). Subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub to get notifications about SSM Agent updates.

### No authorization to run `ssm:SendCommand`


**Problem**: You attempt to connect to a managed node to change the password but receive an error message saying that you aren't authorized to run `ssm:SendCommand` on the managed node.
+ **Solution**: Your IAM policy must include permission to run the `ssm:SendCommand` command. For information, see [Restricting Run Command access based on tags](run-command-setting-up.md#tag-based-access).

### Session Manager error message


**Problem**: You receive an error message related to Session Manager.
+ **Solution**: Password reset support requires that Session Manager is configured correctly. For information, see [Setting up Session Manager](session-manager-getting-started.md) and [Troubleshooting Session Manager](session-manager-troubleshooting.md).

# Deregistering managed nodes in a hybrid and multicloud environment


If you no longer want to manage an on-premises server, edge device, or virtual machine (VM) by using AWS Systems Manager, you can deregister it. Deregistering a hybrid-activated node removes it from the list of managed nodes in Systems Manager. AWS Systems Manager Agent (SSM Agent) running on the node can no longer refresh its authorization token because the node is no longer registered. SSM Agent hibernates and reduces its ping frequency to Systems Manager in the cloud to once per hour. Systems Manager stores the command history for a deregistered managed node for 30 days.

**Note**  
You can reregister an on-premises server, edge device, or VM using the same activation code and ID as long as you haven't reached the registration limit for that activation. You can verify the limit in the console by choosing **Node tools**, and then choosing **Hybrid activations**. If the value of **Registered instances** is less than the **Registration limit**, you can reregister a machine using the same activation code and ID. Otherwise, you must use a different activation code and ID.

The following procedure describes how to deregister a hybrid-activated node by using the Systems Manager console. For information about how to do this by using the AWS Command Line Interface, see [deregister-managed-instance](https://docs.aws.amazon.com/cli/latest/reference/ssm/deregister-managed-instance.html).
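For reference, the AWS CLI call looks like the following; the node ID shown is a placeholder (hybrid-activated node IDs begin with `mi-`):

```
aws ssm deregister-managed-instance \
    --instance-id "mi-0123456789abcdef0"
```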

For related information, see the following topics:
+ [Deregister and reregister a managed node (Linux)](hybrid-multicloud-ssm-agent-install-linux.md#systems-manager-install-managed-linux-deregister-reregister) (Linux)
+ [Deregister and reregister a managed node (Windows Server)](hybrid-multicloud-ssm-agent-install-windows.md#systems-manager-install-managed-win-deregister-reregister) (Windows Server)

**To deregister a hybrid-activated node (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the checkbox next to the managed node that you want to deregister.

1. Choose **Node actions, Tools, Deregister this managed node**.

1. Review the information in the **Deregister this managed node** dialog box. If you approve, choose **Deregister**.

# Working with OS file systems using Fleet Manager

You can use Fleet Manager, a tool in AWS Systems Manager, to work with the file system on your managed nodes. Using Fleet Manager, you can view information about the directory and file data stored on the volumes attached to your managed nodes. For example, you can view the name, size, extension, owner, and permissions for your directories and files. Up to 10,000 lines of file data can be previewed as text from the Fleet Manager console. You can also use this feature to `tail` files. When using `tail` to view file data, the last 10 lines of the file are displayed initially. As new lines of data are written to the file, the view is updated in real time. As a result, you can review log data from the console, which can improve the efficiency of your troubleshooting and systems administration. Additionally, you can create directories and copy, cut, paste, rename, or delete files and directories.

We recommend creating regular backups, or taking snapshots of the Amazon Elastic Block Store (Amazon EBS) volumes attached to your managed nodes. When copying, or cutting and pasting files, existing files and directories in the destination path with the same name as the new files or directories are replaced. Serious problems can occur if you replace or modify system files and directories. AWS doesn't guarantee that these problems can be solved. Modify system files at your own risk. You're responsible for all file and directory changes, and ensuring you have backups. Deleting or replacing files and directories can't be undone.
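For example, before replacing files you might snapshot an attached EBS volume with the AWS CLI; the volume ID here is a placeholder:

```
aws ec2 create-snapshot \
    --volume-id "vol-0123456789abcdef0" \
    --description "Backup before Fleet Manager file operations"
```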

**Note**  
Fleet Manager uses Session Manager, a tool in AWS Systems Manager, to view text previews and `tail` files. For Amazon Elastic Compute Cloud (Amazon EC2) instances, the instance profile attached to your managed instances must provide permissions for Session Manager to use this feature. For more information about adding Session Manager permissions to an instance profile, see [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md).

**Topics**
+ [Viewing the OS file system using Fleet Manager](fleet-manager-viewing-file-system.md)
+ [Previewing OS files using Fleet Manager](fleet-manager-preview-os-files.md)
+ [Tailing OS files using Fleet Manager](fleet-manager-tailing-os-files.md)
+ [Copying, cutting, and pasting OS files or directories using Fleet Manager](fleet-manager-move-files-or-directories.md)
+ [Renaming OS files and directories using Fleet Manager](fleet-manager-renaming-files-and-directories.md)
+ [Deleting OS files and directories using Fleet Manager](fleet-manager-deleting-files-and-directories.md)
+ [Creating OS directories using Fleet Manager](fleet-manager-creating-directories.md)
+ [Cutting, copying, and pasting OS directories using Fleet Manager](fleet-manager-managing-directories.md)

# Viewing the OS file system using Fleet Manager

You can use Fleet Manager to view the OS file system on a Systems Manager managed node. 

**To view the OS file system using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the file system you want to view.

1. Choose **Tools, File system**.

# Previewing OS files using Fleet Manager

You can use Fleet Manager to preview text files on an OS.

**To view text previews of files using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the files you want to preview.

1. Choose **Tools, File system**.

1. Select the **File name** of the directory that contains the file you want to preview.

1. Choose the button next to the file whose content you want to preview.

1. Choose **Actions, Preview as text**.

# Tailing OS files using Fleet Manager

You can use Fleet Manager to tail a file on a managed node.

**To tail OS files with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the files you want to tail.

1. Choose **Tools, File system**.

1. Select the **File name** of the directory that contains the file you want to tail.

1. Choose the button next to the file whose content you want to tail.

1. Choose **Actions, Tail file**.

# Copying, cutting, and pasting OS files or directories using Fleet Manager

You can use Fleet Manager to copy, cut, and paste OS files on a managed node.

**To copy or cut and paste files or directories using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the files you want to copy, or cut and paste.

1. Choose **Tools, File system**.

1. To copy or cut a file, select the **File name** of the directory that contains the file you want to copy or cut. To copy or cut a directory, choose the button next to the directory that you want to copy or cut, and then proceed to step 7.

1. Choose the button next to the file you want to copy or cut.

1. In the **Actions** menu, choose **Copy** or **Cut**.

1. In the **File system** view, choose the button next to the directory you want to paste the file in.

1. In the **Actions** menu, choose **Paste**.

# Renaming OS files and directories using Fleet Manager

You can use Fleet Manager to rename files and directories on a managed node in your account.

**To rename files or directories with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the files or directories you want to rename.

1. Choose **Tools, File system**.

1. To rename a file, select the **File name** of the directory that contains the file you want to rename. To rename a directory, choose the button next to the directory that you want to rename, and then proceed to step 7.

1. Choose the button next to the file you want to rename.

1. Choose **Actions, Rename**.

1. For **File name**, enter the new name for the file and select **Rename**.

# Deleting OS files and directories using Fleet Manager

You can use Fleet Manager to delete files and directories on a managed node in your account.

**To delete files or directories using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the files or directories you want to delete.

1. Choose **Tools, File system**.

1. To delete a file, select the **File name** of the directory that contains the file you want to delete. To delete a directory, choose the button next to the directory that you want to delete and then proceed to step 7.

1. Choose the button next to the file you want to delete.

1. Choose **Actions, Delete**.

# Creating OS directories using Fleet Manager

You can use Fleet Manager to create directories on a managed node in your account.

**To create a directory using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node you want to create a directory in.

1. Choose **Tools, File system**.

1. Select the **File name** of the directory where you want to create a new directory.

1. Select **Create directory**.

1. For **Directory name**, enter the name for the new directory, and then select **Create directory**.

# Cutting, copying, and pasting OS directories using Fleet Manager

You can use Fleet Manager to cut, copy, and paste directories on a managed node in your account.

**To copy or cut and paste directories with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node with the files you want to copy, or cut and paste.

1. Choose **Tools, File system**.

1. Choose the button next to the directory that you want to copy or cut and then proceed to step 8.

1. In the **Actions** menu, choose **Copy** or **Cut**.

1. In the **File system** view, choose the button next to the directory you want to paste the file in.

1. In the **Actions** menu, choose **Paste**.

# Monitoring managed node performance

You can use Fleet Manager, a tool in AWS Systems Manager, to view performance data about your managed nodes in real time. The performance data is retrieved from performance counters.

The following performance counters are available in Fleet Manager:
+ CPU utilization
+ Disk input/output (I/O) utilization
+ Network traffic
+ Memory usage

**Note**  
Fleet Manager uses Session Manager, a tool in AWS Systems Manager, to retrieve performance data. For Amazon Elastic Compute Cloud (Amazon EC2) instances, the instance profile attached to your managed instances must provide permissions for Session Manager to use this feature. For more information about adding Session Manager permissions to an instance profile, see [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md).

**To view performance data with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node whose performance you want to monitor.

1. Choose **View details**.

1. Choose **Tools, Performance counters**.

# Working with processes


You can use Fleet Manager, a tool in AWS Systems Manager, to work with processes on your managed nodes. Using Fleet Manager, you can view information about processes. For example, you can see the CPU utilization and memory usage of processes in addition to their handles and threads. With Fleet Manager, you can start and terminate processes from the console.

**Note**  
Fleet Manager uses Session Manager, a tool in AWS Systems Manager, to retrieve process data. For Amazon Elastic Compute Cloud (Amazon EC2) instances, the instance profile attached to your managed instances must provide permissions for Session Manager to use this feature. For more information about adding Session Manager permissions to an instance profile, see [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md).

**Topics**
+ [Viewing details about OS processes using Fleet Manager](fleet-manager-view-process-details.md)
+ [Starting an OS process on a managed node using Fleet Manager](fleet-manager-start-process.md)
+ [Terminating an OS process using Fleet Manager](fleet-manager-terminate-process.md)

# Viewing details about OS processes using Fleet Manager

You can use Fleet Manager to view details about processes on your managed nodes.

**To view details about processes with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the node whose processes you want to view.

1. Choose **Tools, Processes**.

# Starting an OS process on a managed node using Fleet Manager

You can use Fleet Manager to start a process on a managed node.

**To start a process with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node you want to start a process on.

1. Choose **Tools, Processes**.

1. Select **Start new process**.

1. For **Process name or full path**, enter the name of the process or the full path to the executable.

1. (Optional) For **Working directory**, enter the directory path where you want the process to run.

# Terminating an OS process using Fleet Manager
You can use Fleet Manager to terminate a process running on a managed node.

**To terminate an OS process using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Select the link of the managed node you want to terminate a process on.

1. Choose **Tools, Processes**.

1. Choose the button next to the process you want to terminate.

1. Choose **Actions, Terminate process** or **Actions, Terminate process tree**. 
**Note**  
Terminating a process tree also terminates all processes and applications using that process.

# Viewing logs on managed nodes

You can use Fleet Manager, a tool in AWS Systems Manager, to view log data stored on your managed nodes. For Windows managed nodes, you can view Windows event logs and copy their details from the console. To help you search events, filter Windows event logs by **Event level**, **Event ID**, **Event source**, and **Time created**. You can also view other log data using the procedure to view the file system. For more information about viewing the file system with Fleet Manager, see [Working with OS file systems using Fleet Manager](fleet-manager-file-system-management.md).

**To view Windows event logs with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node whose event logs you want to view.

1. Choose **View details**.

1. Choose **Tools, Windows event logs**.

1. Choose the **Log name** that contains the events you want to view.

1. Choose the button next to the **Log name** you want to view, and then select **View events**.

1. Choose the button next to the event you want to view, and then select **View event details**.

1. (Optional) Select **Copy as JSON** to copy the event details to your clipboard.

# Managing OS user accounts and groups on managed nodes using Fleet Manager

You can use Fleet Manager, a tool in AWS Systems Manager, to manage operating system (OS) user accounts and groups on your managed nodes. For example, you can create and delete users and groups. Additionally, you can view details like group membership, user roles, and status.

**Important**  
Fleet Manager uses Run Command and Session Manager, tools in AWS Systems Manager, for various user management operations. As a result, a user could grant permissions to an operating system user account that they would otherwise be unable to. This is because AWS Systems Manager Agent (SSM Agent) runs on Amazon Elastic Compute Cloud (Amazon EC2) instances using root permissions (Linux) or SYSTEM permissions (Windows Server). For more information about restricting access to root-level commands through SSM Agent, see [Restricting access to root-level commands through SSM Agent](ssm-agent-restrict-root-level-commands.md). To restrict access to this feature, we recommend creating AWS Identity and Access Management (IAM) policies for your users that only allow access to the actions you define. For more information about creating IAM policies for Fleet Manager, see [Controlling access to Fleet Manager](configuring-fleet-manager-permissions.md).

**Topics**
+ [Creating an OS user or group using Fleet Manager](manage-os-user-accounts-create.md)
+ [Updating user or group membership using Fleet Manager](manage-os-user-accounts-update.md)
+ [Deleting an OS user or group using Fleet Manager](manage-os-user-accounts-delete.md)

# Creating an OS user or group using Fleet Manager


**Note**  
Fleet Manager uses Session Manager to set passwords for new users. For Amazon EC2 instances, the instance profile attached to your managed instances must provide permissions for Session Manager to use this feature. For more information about adding Session Manager permissions to an instance profile, see [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md).

Instead of logging on directly to a server to create a user account or group, you can use the Fleet Manager console to perform the same tasks.

**To create an OS user account using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to create a new user on.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Users** tab, and then choose **Create user**.

1. Enter a value for the **Name** of the new user.

1. (Recommended) Select the check box next to **Set password**. You will be prompted to provide a password for the new user at the end of the procedure.

1. Select **Create user**. If you selected the check box to create a password for the new user, you will be prompted to enter a value for the password and select **Done**. If the password you specify doesn't meet the requirements specified by your managed node's local or domain policies, an error is returned.

**To create an OS group using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to create a group in.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Groups** tab, and then choose **Create group**.

1. Enter a value for the **Name** of the new group.

1. (Optional) Enter a value for the **Description** of the new group.

1. (Optional) Select users to add to the **Group members** for the new group.

1. Select **Create group**.

# Updating user or group membership using Fleet Manager


Instead of logging on directly to a server to update a user account or group, you can use the Fleet Manager console to perform the same tasks.

**To add an OS user account to a new group using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node where the user account exists that you want to update.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Users** tab.

1. Choose the button next to the user you want to update.

1. Choose **Actions, Add user to group**.

1. Choose the group you want to add the user to under **Add to group**.

1. Select **Add user to group**.

**To edit an OS group's membership using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node where the group exists that you want to update.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Groups** tab.

1. Choose the button next to the group you want to update.

1. Choose **Actions, Modify group**.

1. Choose the users you want to add or remove under **Group members**.

1. Select **Modify group**.

# Deleting an OS user or group using Fleet Manager


Instead of logging on directly to a server to delete a user account or group, you can use the Fleet Manager console to perform the same tasks.

**To delete an OS user account using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node where the user account exists that you want to delete.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Users** tab.

1. Choose the button next to the user you want to delete.

1. Choose **Actions, Delete local user**.

**To delete an OS group using Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node where the group exists that you want to delete.

1. Choose **View details**.

1. Choose **Tools, Users and groups**.

1. Choose the **Groups** tab.

1. Choose the button next to the group you want to delete.

1. Choose **Actions, Delete local group**.

# Managing the Windows registry on managed nodes

You can use Fleet Manager, a tool in AWS Systems Manager, to manage the registry on your Windows Server managed nodes. From the Fleet Manager console you can create, copy, update, and delete registry entries and values.

**Important**  
We recommend creating a backup of the registry, or taking a snapshot of the root Amazon Elastic Block Store (Amazon EBS) volume attached to your managed node, before you modify the registry. Serious problems can occur if you modify the registry incorrectly. These problems might require you to reinstall the operating system, or restore the root volume of your node from a snapshot. AWS doesn't guarantee that these problems can be solved. Modify the registry at your own risk. You're responsible for all registry changes, and ensuring you have backups.
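One lightweight way to back up a key before changing it is the Windows `reg export` command, run on the node itself (for example, through Run Command). The helper below is a hypothetical sketch that only builds the command string; the key and output paths are placeholders.

```shell
# Hypothetical helper: build the Windows command that exports a registry key
# to a .reg file before you modify it; /y overwrites an existing export file.
reg_export_cmd() {
  printf 'reg export "%s" "%s" /y\n' "$1" "$2"
}

# Example with placeholder paths (single quotes keep the backslashes intact):
reg_export_cmd 'HKLM\SOFTWARE\MyApp' 'C:\Backup\MyApp.reg'
```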

## Create a Windows registry key or entry


**To create a Windows registry key with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to create a registry key on.

1. Choose **View details**.

1. Choose **Tools, Windows registry**.

1. Choose the hive you want to create a new registry key in by selecting the **Registry name**.

1. Choose **Create, Create registry key**.

1. Choose the button next to the registry key you want to create a new key in.

1. Choose **Create registry key**.

1. Enter a value for the **Name** of the new registry key, and select **Submit**.

**To create a Windows registry entry with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to create a registry entry on.

1. Choose **View details**.

1. Choose **Tools, Windows registry**.

1. Choose the hive, and then the registry key, in which you want to create a new registry entry by selecting the **Registry name**.

1. Choose **Create, Create registry entry**.

1. Enter a value for the **Name** of the new registry entry.

1. Choose the **Type** of value you want to create for the registry entry. For more information about registry value types, see [Registry value types](https://docs.microsoft.com/en-us/windows/win32/sysinfo/registry-value-types).

1. Enter a value for the **Value** of the new registry entry.

## Update a Windows registry entry


**To update a Windows registry entry with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to update a registry entry on.

1. Choose **View details**.

1. Choose **Tools, Windows registry**.

1. Choose the hive, and then the registry key, that you want to update by selecting the **Registry name**.

1. Choose the button next to the registry entry you want to update.

1. Choose **Actions, Update registry entry**.

1. Enter the new value for the **Value** of the registry entry.

1. Choose **Update**.

## Delete a Windows registry entry or key


**To delete a Windows registry key with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to delete a registry key on.

1. Choose **View details**.

1. Choose **Tools, Windows registry**.

1. Choose the hive, and then the parent of the registry key you want to delete, by selecting the **Registry name**.

1. Choose the button next to the registry key you want to delete.

1. Choose **Actions, Delete registry key**.

**To delete a Windows registry entry with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed node you want to delete a registry entry on.

1. Choose **View details**.

1. Choose **Tools, Windows registry**.

1. Choose the hive, and then the registry key containing the entry you want to delete, by selecting the **Registry name**.

1. Choose the button next to the registry entry you want to delete.

1. Choose **Actions, Delete registry entry**.

# Managing EC2 instances automatically with Default Host Management Configuration


The Default Host Management Configuration setting allows AWS Systems Manager to manage your Amazon EC2 instances automatically as *managed instances*. A managed instance is an EC2 instance that is configured for use with Systems Manager. 

The benefits of managing your instances with Systems Manager include the following:
+ Connect to your EC2 instances securely using Session Manager.
+ Perform automated patch scans using Patch Manager.
+ View detailed information about your instances using Systems Manager Inventory.
+ Track and manage instances using Fleet Manager.
+ Keep SSM Agent up to date automatically.

*Fleet Manager, Inventory, Patch Manager, and Session Manager are tools in Systems Manager.*

Using Default Host Management Configuration, you can manage EC2 instances without having to manually create an AWS Identity and Access Management (IAM) instance profile. Instead, Default Host Management Configuration creates and applies a default IAM role to ensure that Systems Manager has permissions to manage all instances in the AWS account and AWS Region where it's activated. 

If the permissions provided aren't sufficient for your use case, you can also add policies to the default IAM role created by the Default Host Management Configuration. Alternatively, if you don't need permissions for all of the capabilities provided by the default IAM role, you can create your own custom role and policies. Any changes made to the IAM role you choose for Default Host Management Configuration apply to all managed Amazon EC2 instances in the Region and account.

For more information about the policy used by Default Host Management Configuration, see [AWS managed policy: AmazonSSMManagedEC2InstanceDefaultPolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonSSMManagedEC2InstanceDefaultPolicy).

**Implement least privilege access**  
The procedures in this topic are intended to be performed only by administrators. Therefore, we recommend implementing *least privilege access* in order to prevent non-administrative users from configuring or modifying the Default Host Management Configuration. To view example policies that restrict access to the Default Host Management Configuration, see [Least privilege policy examples for Default Host Management Configuration](#least-privilege-examples) later in this topic. 

**Important**  
Registration information for instances registered using Default Host Management Configuration is stored locally in the `/var/lib/amazon/ssm` or `C:\ProgramData\Amazon` directories. Removing these directories or their files will prevent the instance from acquiring the necessary credentials to connect to Systems Manager using Default Host Management Configuration. In these cases, you must use an IAM instance profile to provide the required permissions to your instance, or recreate the instance.
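On a Linux node you can quickly confirm that the registration directory is still in place. This is a minimal sketch; the function name is ours, and the path is the Linux location from the note above.

```shell
# Hedged sketch: check whether a DHMC registration directory still exists.
registration_dir_present() {
  [ -d "$1" ]
}

# On a Linux managed node (path per the note above):
registration_dir_present /var/lib/amazon/ssm \
  || echo "registration files missing: attach an instance profile or recreate the instance"
```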

**Topics**
+ [Prerequisites](#dhmc-prerequisites)
+ [Activating the Default Host Management Configuration setting](#dhmc-activate)
+ [Deactivating the Default Host Management Configuration setting](#dhmc-deactivate)
+ [Least privilege policy examples for Default Host Management Configuration](#least-privilege-examples)

## Prerequisites


To use Default Host Management Configuration in the AWS Region and AWS account where you activate the setting, the following requirements must be met.
+ An instance to be managed must use Instance Metadata Service Version 2 (IMDSv2).

  Default Host Management Configuration doesn't support Instance Metadata Service Version 1. For information about transitioning to IMDSv2, see [Transition to using Instance Metadata Service Version 2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-metadata-transition-to-version-2.html) in the *Amazon EC2 User Guide*.
+ SSM Agent version 3.2.582.0 or later must be installed on the instance to be managed.

  For information about checking the version of SSM Agent installed on your instance, see [Checking the SSM Agent version number](ssm-agent-get-version.md).

  For information about updating SSM Agent, see [Automatically updating SSM Agent](ssm-agent-automatic-updates.md#ssm-agent-automatic-updates-console).
+ You, as the administrator performing the tasks in this topic, must have permissions for the [GetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetServiceSetting.html), [ResetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ResetServiceSetting.html), and [UpdateServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateServiceSetting.html) API operations. Additionally, you must have the `iam:PassRole` permission for the `AWSSystemsManagerDefaultEC2InstanceManagementRole` IAM role. The following is an example policy providing these permissions. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

  ```
  {
      "Version":"2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "ssm:GetServiceSetting",
                  "ssm:ResetServiceSetting",
                  "ssm:UpdateServiceSetting"
              ],
              "Resource": "arn:aws:ssm:us-east-1:111122223333:servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
          },
          {
              "Effect": "Allow",
              "Action": [
                  "iam:PassRole"
              ],
              "Resource": "arn:aws:iam::111122223333:role/service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
              "Condition": {
                  "StringEquals": {
                      "iam:PassedToService": [
                          "ssm.amazonaws.com"
                      ]
                  }
              }
          }
      ]
  }
  ```

------
+ If an IAM instance profile is already attached to an EC2 instance that is to be managed using Systems Manager, you must remove any permissions from it that allow the `ssm:UpdateInstanceInformation` operation. SSM Agent attempts to use instance profile permissions before using the Default Host Management Configuration permissions. If you allow the `ssm:UpdateInstanceInformation` operation in your own IAM instance profile, the instance will not use the Default Host Management Configuration permissions.
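Two of these prerequisites can be pre-checked locally. The sketch below is illustrative only (the helper names are ours, not part of any AWS tooling): it compares an agent version string against the 3.2.582.0 minimum using `sort -V` from GNU coreutils, and scans a saved policy document for a literal `ssm:UpdateInstanceInformation` action. The literal match would miss wildcard grants such as `ssm:*`, so treat it as a first pass.

```shell
# True if the given SSM Agent version meets the DHMC minimum (3.2.582.0).
# Relies on version sort (sort -V) from GNU coreutils.
meets_dhmc_minimum() {
  min="3.2.582.0"
  [ "$(printf '%s\n%s\n' "$min" "$1" | sort -V | head -n1)" = "$min" ]
}

# True if a locally saved IAM policy document names the action that makes
# an instance profile take precedence over DHMC. Literal match only:
# wildcards such as "ssm:*" also grant the action but are not detected.
allows_update_instance_info() {
  grep -q '"ssm:UpdateInstanceInformation"' "$1"
}

meets_dhmc_minimum "3.3.1.0" && echo "agent version OK"

cat > /tmp/sample-policy.json <<'EOF'
{ "Statement": [ { "Effect": "Allow",
    "Action": [ "ssm:UpdateInstanceInformation" ], "Resource": "*" } ] }
EOF
allows_update_instance_info /tmp/sample-policy.json \
  && echo "instance profile permissions would take precedence over DHMC"
```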

## Activating the Default Host Management Configuration setting


You can activate Default Host Management Configuration from the Fleet Manager console, or by using the AWS Command Line Interface or AWS Tools for Windows PowerShell.

You must turn on Default Host Management Configuration separately in each Region where you want your Amazon EC2 instances managed by this setting.

After you turn on Default Host Management Configuration, it might take up to 30 minutes for your instances to begin using the credentials of the role you choose in step 5 of the following procedure.

**To activate Default Host Management Configuration (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Account management, Configure Default Host Management Configuration**.

1. Turn on **Enable Default Host Management Configuration**.

1. Choose the AWS Identity and Access Management (IAM) role used to enable Systems Manager tools for your instances. We recommend using the default role provided by Default Host Management Configuration. It contains the minimum set of permissions necessary to manage your Amazon EC2 instances using Systems Manager. If you prefer to use a custom role, the role's trust policy must allow Systems Manager as a trusted entity. 

1. Choose **Configure** to complete setup. 

**To activate Default Host Management Configuration (command line)**

1. Create a JSON file on your local machine containing the following trust relationship policy.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",
       "Statement":[
           {
               "Sid":"",
               "Effect":"Allow",
               "Principal":{
                   "Service":"ssm.amazonaws.com"
               },
               "Action":"sts:AssumeRole"
           }
       ]
   }
   ```

------

1. Open the AWS CLI or Tools for Windows PowerShell and run one of the following commands, depending on the operating system type of your local machine, to create a service role in your account. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws iam create-role \
   --role-name AWSSystemsManagerDefaultEC2InstanceManagementRole \
   --path /service-role/ \
   --assume-role-policy-document file://trust-policy.json
   ```

------
#### [ Windows ]

   ```
   aws iam create-role ^
   --role-name AWSSystemsManagerDefaultEC2InstanceManagementRole ^
   --path /service-role/ ^
   --assume-role-policy-document file://trust-policy.json
   ```

------
#### [ PowerShell ]

   ```
   New-IAMRole `
   -RoleName "AWSSystemsManagerDefaultEC2InstanceManagementRole" `
   -Path "/service-role/" `
   -AssumeRolePolicyDocument "file://trust-policy.json"
   ```

------

1. Run the following command to attach the `AmazonSSMManagedEC2InstanceDefaultPolicy` managed policy to your newly created role. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws iam attach-role-policy \
   --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedEC2InstanceDefaultPolicy \
   --role-name AWSSystemsManagerDefaultEC2InstanceManagementRole
   ```

------
#### [ Windows ]

   ```
   aws iam attach-role-policy ^
   --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedEC2InstanceDefaultPolicy ^
   --role-name AWSSystemsManagerDefaultEC2InstanceManagementRole
   ```

------
#### [ PowerShell ]

   ```
   Register-IAMRolePolicy `
   -PolicyArn "arn:aws:iam::aws:policy/AmazonSSMManagedEC2InstanceDefaultPolicy" `
   -RoleName "AWSSystemsManagerDefaultEC2InstanceManagementRole"
   ```

------

1. Open the AWS CLI or Tools for Windows PowerShell and run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-service-setting \
   --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role \
   --setting-value service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole
   ```

------
#### [ Windows ]

   ```
   aws ssm update-service-setting ^
   --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role ^
   --setting-value service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole
   ```

------
#### [ PowerShell ]

   ```
   Update-SSMServiceSetting `
   -SettingId "arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role" `
   -SettingValue "service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole"
   ```

------

   There is no output if the command succeeds.

1. Run the following command to view the current service settings for Default Host Management Configuration in the current AWS account and AWS Region.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-service-setting \
   --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role
   ```

------
#### [ Windows ]

   ```
   aws ssm get-service-setting ^
   --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMServiceSetting `
   -SettingId "arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
   ```

------

   The command returns information like the following.

   ```
   {
       "ServiceSetting": {
           "SettingId": "/ssm/managed-instance/default-ec2-instance-management-role",
           "SettingValue": "service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
           "LastModifiedDate": "2022-11-28T08:21:03.576000-08:00",
           "LastModifiedUser": "System",
       "ARN": "arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/managed-instance/default-ec2-instance-management-role",
           "Status": "Custom"
       }
   }
   ```
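The `--setting-id` ARN used throughout the preceding commands varies only by Region and account ID, and the `Status` field in the output shows whether the setting is in effect (`Custom` after a role is set, `Default` when unmodified or reset). The helpers below are an illustrative sketch, not AWS CLI features; the `sed` extraction assumes the pretty-printed layout shown above.

```shell
# Build the DHMC service setting ARN for a Region and account ID; the
# resource path after the account ID is fixed.
dhmc_setting_id() {
  printf 'arn:aws:ssm:%s:%s:servicesetting/ssm/managed-instance/default-ec2-instance-management-role\n' "$1" "$2"
}

# Extract the Status field from saved get-service-setting output. Assumes
# pretty-printed JSON; prefer a JSON parser such as jq for robustness.
dhmc_status() {
  sed -n 's/.*"Status": "\([^"]*\)".*/\1/p' "$1"
}

# Example (111122223333 is a documentation placeholder account ID):
dhmc_setting_id us-east-1 111122223333
# With credentials configured, you could then run:
# aws ssm get-service-setting --setting-id "$(dhmc_setting_id us-east-1 111122223333)"
```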

## Deactivating the Default Host Management Configuration setting


You can deactivate Default Host Management Configuration from the Fleet Manager console, or by using the AWS Command Line Interface or AWS Tools for Windows PowerShell.

You must turn off the Default Host Management Configuration setting one by one in each Region where you no longer want your Amazon EC2 instances managed by this configuration. Deactivating it in one Region doesn't deactivate it in all Regions.

If you deactivate Default Host Management Configuration, and you have not attached an instance profile to your Amazon EC2 instances that allows access to Systems Manager, they will no longer be managed by Systems Manager. 

**To deactivate Default Host Management Configuration (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Account management, Default Host Management Configuration**.

1. Turn off **Enable Default Host Management Configuration**.

1. Choose **Configure** to deactivate Default Host Management Configuration.

**To deactivate Default Host Management Configuration (command line)**
+ Open the AWS CLI or Tools for Windows PowerShell and run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

  ```
  aws ssm reset-service-setting \
  --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role
  ```

------
#### [ Windows ]

  ```
  aws ssm reset-service-setting ^
  --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role
  ```

------
#### [ PowerShell ]

  ```
  Reset-SSMServiceSetting `
  -SettingId "arn:aws:ssm:region:account-id:servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
  ```

------
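Because deactivation applies per Region, you might want to script the reset across every Region you use. The following sketch only prints the reset command for each Region in a placeholder list; remove the `echo` to run the commands for real (this requires configured AWS CLI credentials). The account ID and Region list are placeholders.

```shell
# Dry-run sketch: print the reset-service-setting command for each Region.
# Remove `echo` to actually deactivate Default Host Management Configuration.
account_id="123456789012"                         # placeholder account ID

for region in us-east-1 us-east-2 eu-west-1; do   # placeholder Region list
    echo aws ssm reset-service-setting \
        --region "$region" \
        --setting-id "arn:aws:ssm:${region}:${account_id}:servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
done
```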

## Least privilege policy examples for Default Host Management Configuration


The following sample policies demonstrate how to prevent members of your organization from making changes to the Default Host Management Configuration setting in your account.

### Service control policy for AWS Organizations


The following policy demonstrates how to prevent non-administrative members of your organization in AWS Organizations from updating your Default Host Management Configuration setting. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [{
            "Effect": "Deny",
            "Action": [
                "ssm:UpdateServiceSetting",
                "ssm:ResetServiceSetting"
            ],
            "Resource": "arn:aws:ssm:*:*:servicesetting/ssm/managed-instance/default-ec2-instance-management-role",
            "Condition": {
                "StringNotEqualsIgnoreCase": {
                    "aws:PrincipalTag/job-function": [
                        "administrator"
                    ]
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::*:role/service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "ssm.amazonaws.com"
                },
                "StringNotEqualsIgnoreCase": {
                    "aws:PrincipalTag/job-function": [
                        "administrator"
                    ]
                }
            }
        },
        {
            "Effect": "Deny",
            "Resource": "arn:aws:iam::*:role/service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:DeleteRole"
            ],
            "Condition": {
                "StringNotEqualsIgnoreCase": {
                    "aws:PrincipalTag/job-function": [
                        "administrator"
                    ]
                }
            }
        }
    ]
}
```

------

### Policy for IAM principals


The following policy demonstrates how to prevent IAM groups, roles, and users in your account from updating your Default Host Management Configuration setting. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "ssm:UpdateServiceSetting",
                "ssm:ResetServiceSetting"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:servicesetting/ssm/managed-instance/default-ec2-instance-management-role"
        },
        {
            "Effect": "Deny",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:DeleteRole",
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::111122223333:role/service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole"
        }
    ]
}
```

------

# Connecting to a Windows Server managed instance using Remote Desktop

You can use Fleet Manager, a tool in AWS Systems Manager, to connect to your Windows Server Amazon Elastic Compute Cloud (Amazon EC2) instances using the Remote Desktop Protocol (RDP). Fleet Manager Remote Desktop, which is powered by [Amazon DCV](https://docs.aws.amazon.com/dcv/latest/adminguide/what-is-dcv.html), provides you with secure connectivity to your Windows Server instances directly from the Systems Manager console. You can have up to four simultaneous connections in a single browser window.

The Fleet Manager Remote Desktop API is named AWS Systems Manager GUI Connect. For information about using the Systems Manager GUI Connect API, see the *[AWS Systems Manager GUI Connect API Reference](https://docs.aws.amazon.com/ssm-guiconnect/latest/APIReference)*.

Currently, you can only use Remote Desktop with instances that are running Windows Server 2012 RTM or higher. Remote Desktop supports only English language inputs. 

Fleet Manager Remote Desktop is a console-only service and doesn't support command-line connections to your managed instances. To connect to a Windows Server managed instance through a shell, you can use Session Manager, another tool in AWS Systems Manager. For more information, see [AWS Systems Manager Session Manager](session-manager.md).

**Note**  
The duration of an RDP connection is not determined by the duration of your AWS Identity and Access Management (IAM) credentials. Instead, the connection persists until the maximum connection duration or idle time limit is met, whichever comes first. For more information, see [Remote connection duration and concurrency](#rdp-duration-concurrency).

For information about configuring AWS Identity and Access Management (IAM) permissions to allow your instances to interact with Systems Manager, see [Configure instance permissions for Systems Manager](setup-instance-permissions.md).

**Topics**
+ [Setting up your environment](#rdp-prerequisites)
+ [Configuring IAM permissions for Remote Desktop](#rdp-iam-policy-examples)
+ [Authenticating Remote Desktop connections](#rdp-authentication)
+ [Remote connection duration and concurrency](#rdp-duration-concurrency)
+ [Systems Manager GUI Connect handling of AWS IAM Identity Center attributes](#iam-identity-center-attribute-handling)
+ [Connect to a managed node using Remote Desktop](#rdp-connect-to-node)
+ [Viewing information about current and completed connections](#list-connections)

## Setting up your environment


Before using Remote Desktop, verify that your environment meets the following requirements:
+ **Managed node configuration**

  Make sure that your Amazon EC2 instances are configured as [managed nodes](fleet-manager-managed-nodes.md) in Systems Manager.
+ **SSM Agent minimum version**

  Verify that nodes are running SSM Agent version 3.0.222.0 or higher. For information about how to check which agent version is running on a node, see [Checking the SSM Agent version number](ssm-agent-get-version.md). For information about installing or updating SSM Agent, see [Working with SSM Agent](ssm-agent.md).
+ **RDP port configuration**

  To accept remote connections, the Remote Desktop Services service on your Windows Server nodes must use the default RDP port, 3389. This is the default configuration on Amazon Machine Images (AMIs) provided by AWS. You aren't required to open any inbound ports to use Remote Desktop.
+ **PSReadLine module version for keyboard functionality**

  To ensure that your keyboard functions properly in PowerShell, verify that nodes running Windows Server 2022 have PSReadLine module version 2.2.2 or higher installed. If they are running an older version, you can install the required version using the following commands.

  ```
  Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
  ```

  After the NuGet package provider is installed, run the following command.

  ```
  Install-Module `
   -Name PSReadLine `
   -Repository PSGallery `
   -MinimumVersion 2.2.2 -Force
  ```
+ **Session Manager configuration**

  Before you can use Remote Desktop, you must complete the prerequisites for Session Manager setup. When you connect to an instance using Remote Desktop, any session preferences defined for your AWS account and AWS Region are applied. For more information, see [Setting up Session Manager](session-manager-getting-started.md).
**Note**  
If you log Session Manager activity using Amazon Simple Storage Service (Amazon S3), then your Remote Desktop connections will generate the following error in `bucket_name/Port/stderr`. This error is expected behavior and can be safely ignored.  

  ```
  Setting up data channel with id SESSION_ID failed: failed to create websocket for datachannel with error: CreateDataChannel failed with no output or error: createDataChannel request failed: unexpected response from the service <BadRequest>
  <ClientErrorMessage>Session is already terminated</ClientErrorMessage>
  </BadRequest>
  ```

## Configuring IAM permissions for Remote Desktop


In addition to the required IAM permissions for Systems Manager and Session Manager, the user or role you use must be granted permissions to initiate connections.

**Permissions for initiating connections**  
To make RDP connections to EC2 instances in the console, the following permissions are required:
+ `ssm-guiconnect:CancelConnection`
+ `ssm-guiconnect:GetConnection`
+ `ssm-guiconnect:StartConnection`

**Permissions for listing connections**  
To view lists of connections in the console, the following permission is required:

`ssm-guiconnect:ListConnections`

The following are example IAM policies that you can attach to a user or role to allow different types of interaction with Remote Desktop. Replace each *example resource placeholder* with your own information.

### Standard policy for connecting to EC2 instances


------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "EC2",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:GetPasswordData"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SSM",
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeInstanceProperties",
                "ssm:GetCommandInvocation",
                "ssm:GetInventorySchema"
            ],
            "Resource": "*"
        },
        {
            "Sid": "TerminateSession",
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "${aws:userid}"
                    ]
                }
            }
        },
        {
            "Sid": "SSMStartSession",
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:111122223333:instance/*",
                "arn:aws:ssm:*:111122223333:managed-instance/*",
                "arn:aws:ssm:*::document/AWS-StartPortForwardingSession"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "SSMMessages",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:session/*"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "GuiConnect",
            "Effect": "Allow",
            "Action": [
                "ssm-guiconnect:CancelConnection",
                "ssm-guiconnect:GetConnection",
                "ssm-guiconnect:StartConnection",
                "ssm-guiconnect:ListConnections"
            ],
            "Resource": "*"
        }
    ]
}
```

------

### Policy for connecting to EC2 instances with specific tags


**Note**  
In the following IAM policy, the `SSMStartSession` section requires an Amazon Resource Name (ARN) for the `ssm:StartSession` action. As shown, the ARN you specify does *not* require an AWS account ID. If you specify an account ID, Fleet Manager returns an `AccessDeniedException`.  
The `AccessTaggedInstances` section, which is located lower in the example policy, also requires ARNs for `ssm:StartSession`. For those ARNs, you do specify AWS account IDs.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "EC2",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:GetPasswordData"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SSM",
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeInstanceProperties",
                "ssm:GetCommandInvocation",
                "ssm:GetInventorySchema"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SSMStartSession",
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ssm:*::document/AWS-StartPortForwardingSession"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "AccessTaggedInstances",
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:111122223333:instance/*",
                "arn:aws:ssm:*:111122223333:managed-instance/*"
            ],
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/tag key": [
                        "tag value"
                    ]
                }
            }
        },
        {
            "Sid": "SSMMessages",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:session/*"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "GuiConnect",
            "Effect": "Allow",
            "Action": [
                "ssm-guiconnect:CancelConnection",
                "ssm-guiconnect:GetConnection",
                "ssm-guiconnect:StartConnection",
                "ssm-guiconnect:ListConnections"
            ],
            "Resource": "*"
        }
    ]
}
```

------

### Policy for AWS IAM Identity Center users to connect to EC2 instances


------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "SSO",
            "Effect": "Allow",
            "Action": [
                "sso:ListDirectoryAssociations*",
                "identitystore:DescribeUser"
            ],
            "Resource": "*"
        },
        {
            "Sid": "EC2",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:GetPasswordData"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SSM",
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeInstanceProperties",
                "ssm:GetCommandInvocation",
                "ssm:GetInventorySchema"
            ],
            "Resource": "*"
        },
        {
            "Sid": "TerminateSession",
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "${aws:userName}"
                    ]
                }
            }
        },
        {
            "Sid": "SSMStartSession",
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:*:instance/*",
                "arn:aws:ssm:*:*:managed-instance/*",
                "arn:aws:ssm:*:*:document/AWS-StartPortForwardingSession"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "SSMSendCommand",
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:*:*:instance/*",
                "arn:aws:ssm:*:*:managed-instance/*",
                "arn:aws:ssm:*:*:document/AWSSSO-CreateSSOUser"
            ]
        },
        {
            "Sid": "SSMMessages",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:session/*"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "ssm-guiconnect.amazonaws.com"
                }
            }
        },
        {
            "Sid": "GuiConnect",
            "Effect": "Allow",
            "Action": [
                "ssm-guiconnect:CancelConnection",
                "ssm-guiconnect:GetConnection",
                "ssm-guiconnect:StartConnection",
                "ssm-guiconnect:ListConnections"
            ],
            "Resource": "*"
        }
    ]
}
```

------

## Authenticating Remote Desktop connections


When establishing a remote connection, you can authenticate using Windows credentials or the Amazon EC2 key pair (`.pem` file) that is associated with the instance. For information about using key pairs, see [Amazon EC2 key pairs and Windows instances](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.
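If you prefer to retrieve the Windows Administrator password yourself rather than pasting the key into the console, the AWS CLI's `get-password-data` command can fetch the encrypted password and decrypt it locally with your `.pem` file. In this dry-run sketch, `echo` prints the command instead of running it, and the instance ID and key path are placeholders.

```shell
# Dry run: print the command that retrieves and locally decrypts the Windows
# Administrator password. Remove `echo` to run it for real (requires
# configured AWS CLI credentials). Instance ID and key path are placeholders.
echo aws ec2 get-password-data \
    --instance-id i-1234567890abcdef0 \
    --priv-launch-key /path/to/my-key-pair.pem
```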

Alternatively, if you're authenticated to the AWS Management Console using AWS IAM Identity Center, you can connect to your instances without providing additional credentials. For an example of a policy to allow remote connection authentication using IAM Identity Center, see [Configuring IAM permissions for Remote Desktop](#rdp-iam-policy-examples). 

Remote Desktop connections using IAM Identity Center authentication are available in all AWS Regions where IAM Identity Center is supported.

**Before you begin**  
Note the following conditions for using IAM Identity Center authentication before you begin connecting using Remote Desktop.
+ Remote Desktop supports IAM Identity Center authentication for nodes in the same AWS Region where you enabled IAM Identity Center.
+ Remote Desktop supports IAM Identity Center user names of up to 16 characters. 
+ Remote Desktop supports IAM Identity Center user names consisting of alphanumeric characters and the following special characters: `.` `-` `_`
**Important**  
Connections won't succeed for IAM Identity Center user names that contain the following characters: `+` `=` `,`   
IAM Identity Center supports these characters in user names, but Fleet Manager RDP connections do not.  
In addition, if an IAM Identity Center user name contains one or more `@` symbols, Fleet Manager disregards the first `@` symbol and all characters that follow it, whether or not the `@` introduces the domain portion of an email address. For example, for the IAM Identity Center user name `diego_ramirez@example.com`, the `@example.com` portion is ignored and the user name for Fleet Manager becomes `diego_ramirez`. For `diego_r@mirez@example.com`, Fleet Manager disregards `@mirez@example.com`, and the user name for Fleet Manager becomes `diego_r`.
+ When a connection is authenticated using IAM Identity Center, Remote Desktop creates a local Windows user in the instance’s Local Administrators group. This user persists after the remote connection has ended. 
+ Remote Desktop does not allow IAM Identity Center authentication for nodes that are Microsoft Active Directory domain controllers.
+ Although Remote Desktop allows you to use IAM Identity Center authentication for nodes *joined* to an Active Directory domain, we don't recommend doing so. This authentication method grants administrative permissions to users, which might override more restrictive permissions granted by the domain.
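The user-name rules above can be summarized in code. The following helper is hypothetical (it isn't part of any AWS tooling) and simply mirrors the documented behavior: everything from the first `@` onward is dropped, the result must be 1 to 16 characters, and only alphanumerics plus `.`, `-`, and `_` are accepted.

```shell
# Hypothetical helper mirroring the documented Fleet Manager handling of
# IAM Identity Center user names. Prints the effective user name, or
# returns non-zero if the name would not be accepted.
fleet_manager_username() {
    local name="${1%%@*}"              # drop the first @ and everything after it
    if [ -z "$name" ] || [ "${#name}" -gt 16 ]; then
        return 1                       # empty or longer than 16 characters
    fi
    case "$name" in
        *[!A-Za-z0-9._-]*) return 1 ;; # disallowed character present
    esac
    printf '%s\n' "$name"
}

fleet_manager_username "diego_ramirez@example.com"   # -> diego_ramirez
fleet_manager_username "diego_r@mirez@example.com"   # -> diego_r
```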

## Remote connection duration and concurrency


The following conditions apply to active Remote Desktop connections:
+ **Connection duration**

  By default, a Remote Desktop connection is disconnected after 60 minutes. To keep the session open, choose **Renew session** before the connection ends to reset the duration timer.
+ **Connection timeout**

  A Remote Desktop connection disconnects after it has been idle for more than 10 minutes.
+ **Connection persistence**

  After you connect to a Windows Server instance using Remote Desktop, the connection persists until the maximum connection duration (60 minutes) or the idle timeout limit (10 minutes) is reached. Connection duration is not determined by the duration of your AWS Identity and Access Management (IAM) credentials; the connection persists after your IAM credentials expire if neither limit has been reached. For this reason, terminate your connection by leaving the browser page when your IAM credentials expire.
+ **Concurrent connections**

  By default, you can have a maximum of 5 active Remote Desktop connections at one time for the same AWS account and AWS Region. To request a service quota increase of up to 50 concurrent connections, see [Requesting a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html) in the *Service Quotas User Guide*.
**Note**  
The standard license for Windows Server allows for two concurrent RDP connections. To support more connections, you must purchase additional Client Access Licenses (CALs) from Microsoft or Microsoft Remote Desktop Services licenses from AWS. For more information on supplemental licensing, see the following topics:  
[Client Access Licenses and Management Licenses](https://www.microsoft.com/en-us/licensing/product-licensing/client-access-license) on the Microsoft website
[Use License Manager user-based subscriptions for supported software products](https://docs.aws.amazon.com/license-manager/latest/userguide/user-based-subscriptions.html) in the *License Manager User Guide*
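A minimal sketch of the disconnect rules described above: a connection ends when it reaches the 60-minute maximum duration or exceeds the 10-minute idle limit, whichever comes first. The sample times below are illustrative.

```shell
# Sketch of the documented Remote Desktop disconnect rules (times in minutes).
max_duration=60    # maximum connection duration
idle_limit=10      # idle timeout

connection_age=45  # minutes since the connection started (sample value)
idle_time=12       # minutes since the last activity (sample value)

if [ "$connection_age" -ge "$max_duration" ]; then
    echo "disconnected: maximum duration reached"
elif [ "$idle_time" -gt "$idle_limit" ]; then
    echo "disconnected: idle timeout"
else
    echo "connected"
fi
# With the sample values above, this prints: disconnected: idle timeout
```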

## Systems Manager GUI Connect handling of AWS IAM Identity Center attributes


Systems Manager GUI Connect is the API that supports Fleet Manager connections to EC2 instances using RDP. The following IAM Identity Center user data is retained after a connection is closed:
+ `username`

Systems Manager GUI Connect encrypts this identity attribute at rest using an AWS managed key by default. Customer managed keys are not supported for encrypting this attribute in Systems Manager GUI Connect. If you delete a user in your IAM Identity Center instance, Systems Manager GUI Connect continues to retain the `username` attribute associated with that user for 7 years, after which it is deleted. This data is retained to support auditing events, such as listing Systems Manager GUI Connect connection history. The data can't be deleted manually.

## Connect to a managed node using Remote Desktop


**Browser copy/paste support for text**  
Using the Google Chrome and Microsoft Edge browsers, you can copy and paste text from a managed node to your local machine, and from your local machine to a managed node that you are connected to.

Using the Mozilla Firefox browser, you can copy and paste text from a managed node to your local machine only. Copying from your local machine to the managed node is not supported.

**To connect to a managed node using Fleet Manager Remote Desktop**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the node that you want to connect to. You can select either the check box or the node name.

1. On the **Node actions** menu, choose **Connect with Remote Desktop**.

1. Choose your preferred **Authentication type**. If you choose **User credentials**, enter the user name and password for a Windows user account on the node that you're connecting to. If you choose **Key pair**, you can provide authentication using one of the following methods:

   1. Choose **Browse local machine** if you want to select the PEM key associated with your instance from your local file system.

      - or -

   1. Choose **Paste key pair content** if you want to copy the contents of the PEM file and paste them in to the provided field.

1. Choose **Connect**.

1. To choose your preferred display resolution, in the **Actions** menu, choose **Resolutions**, and then select from the following:
   + **Adapt Automatically**
   + **1920 x 1080**
   + **1400 x 900**
   + **1366 x 768**
   + **800 x 600**

   The **Adapt Automatically** option sets the resolution based on your detected screen size.

## Viewing information about current and completed connections


You can use the Fleet Manager section of the Systems Manager console to view information about RDP connections that have been made in your account. Using a set of filters, you can limit the list of connections displayed to a time range, a specific instance, the user who made the connections, and connections of a specific status. The console also provides tabs that show information about all currently active connections, and all past connections.

**To view information about current and completed connections**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose **Account management, Connect with Remote Desktop**.

1. Choose one of the following tabs:
   + **Active connections**
   + **Connection history**

1. To further narrow the list of connection results displayed, specify one or more filters in the search (![\[\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/search-icon.png)) box. You can also enter a free-text search term.

# Managing Amazon EBS volumes on managed instances

[Amazon Elastic Block Store](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) (Amazon EBS) provides block-level storage volumes for use with Amazon Elastic Compute Cloud (Amazon EC2) instances. EBS volumes behave like raw, unformatted block devices. You can mount these volumes as devices on your instances.

You can use Fleet Manager, a tool in AWS Systems Manager, to manage Amazon EBS volumes on your managed instances. For example, you can initialize an EBS volume, format a partition, and mount the volume to make it available for use.

**Note**  
Fleet Manager currently supports Amazon EBS volume management for Windows Server instances only.

## View EBS volume details


**To view details for an EBS volume with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed instance for which you want to view EBS volume details.

1. Choose **View details**.

1. Choose **Tools, EBS volumes**.

1. To view details for an EBS volume, choose its ID in the **Volume ID** column.

## Initialize and format an EBS volume


**To initialize and format an EBS volume with Fleet Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the button next to the managed instance for which you want to initialize, format, and mount an EBS volume. You can only initialize an EBS volume if its disk is empty.

1. Choose **View details**.

1. In the **Tools** menu, choose **EBS volumes**.

1. Choose the button next to the EBS volume you want to initialize and format.

1. Choose **Initialize and format**.

1. In **Partition style**, choose the partition style you want to use for the EBS volume.

1. (Optional) Choose a **Drive letter** for the partition.

1. (Optional) Enter a **Partition name** to identify the partition.

1. Choose the **File system** to use to organize files and data stored in the partition.

1. Choose **Confirm** to make the EBS volume available for use. You can't change the partition configuration from the AWS Management Console after confirming. However, you can use SSH or RDP to log in to the instance and change the partition configuration.

# Accessing the Red Hat Knowledge base portal


You can use Fleet Manager, a tool in AWS Systems Manager, to access the Knowledge base portal if you are a Red Hat customer. You are considered a Red Hat customer if you run Red Hat Enterprise Linux (RHEL) instances or use RHEL services on AWS. The Knowledge base portal includes binaries, knowledge-share resources, and discussion forums for community support that are available only to licensed Red Hat customers.

In addition to the required AWS Identity and Access Management (IAM) permissions for Systems Manager and Fleet Manager, the user or role you use to access the console must allow the `rhelkb:GetRhelURL` action to access the Knowledge base portal.

**To access the Red Hat Knowledge base portal**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. Choose the RHEL instance you want to use to connect to the Red Hat Knowledge base portal.

1. Choose **Account management**, **Access Red Hat Knowledgebase** to open the Red Hat Knowledge base page.

If you use RHEL on AWS to run fully supported RHEL workloads, you can also access the Red Hat Knowledge base through Red Hat's website by using your AWS credentials.

# Troubleshooting managed node availability


For several AWS Systems Manager tools like Run Command, Distributor, and Session Manager, you can choose to manually select the managed nodes on which you want to run an operation. In cases like these, after you specify that you want to choose nodes manually, the system displays a list of managed nodes where you can run the operation.

This topic provides information to help you diagnose why a managed node *that you have confirmed is running* isn't included in your lists of managed nodes in Systems Manager. 

In order for a node to be managed by Systems Manager and available in lists of managed nodes, it must meet the following requirements:
+ SSM Agent must be installed and running on the node with a supported operating system.
**Note**  
Some AWS managed Amazon Machine Images (AMIs) are configured to launch instances with [SSM Agent](ssm-agent.md) preinstalled. (You can also configure a custom AMI to preinstall SSM Agent.) For more information, see [Find AMIs with the SSM Agent preinstalled](ami-preinstalled-agent.md).
+ For Amazon Elastic Compute Cloud (Amazon EC2) instances, you must attach an AWS Identity and Access Management (IAM) instance profile to the instance. The instance profile enables the instance to communicate with the Systems Manager service. If you don't assign an instance profile to the instance, you register it using a [hybrid activation](activations.md), which is not a common scenario.
+ SSM Agent must be able to connect to a Systems Manager endpoint in order to register itself with the service. Thereafter, the managed node must be available to the service, which is confirmed by the service sending a signal every five minutes to check the instance's health. 
+ After the status of a managed node has been `Connection Lost` for at least 30 days, the node might no longer be listed in the Fleet Manager console. To restore it to the list, the issue that caused the lost connection must be resolved.
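
As a quick first check, you can ask the Systems Manager service whether it has a registration record for the node at all. The following is a sketch using the AWS CLI; the instance ID is a placeholder.

```shell
# Returns a record with PingStatus "Online" for a healthy managed node;
# an empty list means the node hasn't registered with the service.
aws ssm describe-instance-information \
    --filters "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
    --query "InstanceInformationList[].{Id:InstanceId,Ping:PingStatus,Agent:AgentVersion}" \
    --output table
```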

After you verify that a managed node is running, you can use the following command to check whether SSM Agent successfully registered with the Systems Manager service. This command doesn't return results until a successful registration has taken place.

------
#### [ Linux & macOS ]

```
aws ssm describe-instance-associations-status \
    --instance-id instance-id
```

------
#### [ Windows ]

```
aws ssm describe-instance-associations-status ^
    --instance-id instance-id
```

------
#### [ PowerShell ]

```
Get-SSMInstanceAssociationsStatus `
    -InstanceId instance-id
```

------

If registration was successful and the managed node is now available for Systems Manager operations, the command returns results similar to the following.

```
{
    "InstanceAssociationStatusInfos": [
        {
            "AssociationId": "fa262de1-6150-4a90-8f53-d7eb5EXAMPLE",
            "Name": "AWS-GatherSoftwareInventory",
            "DocumentVersion": "1",
            "AssociationVersion": "1",
            "InstanceId": "i-02573cafcfEXAMPLE",
            "Status": "Pending",
            "DetailedStatus": "Associated"
        },
        {
            "AssociationId": "f9ec7a0f-6104-4273-8975-82e34EXAMPLE",
            "Name": "AWS-RunPatchBaseline",
            "DocumentVersion": "1",
            "AssociationVersion": "1",
            "InstanceId": "i-02573cafcfEXAMPLE",
            "Status": "Queued",
            "AssociationName": "SystemAssociationForScanningPatches"
        }
    ]
}
```

If registration hasn't completed yet or was unsuccessful, the command returns results similar to the following:

```
{
    "InstanceAssociationStatusInfos": []
}
```
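
When registration has succeeded, you can narrow the output to just the association names and statuses with a JMESPath `--query` expression, as in the following sketch (the instance ID is a placeholder).

```shell
# Summarize association status for the instance as a two-column table.
aws ssm describe-instance-associations-status \
    --instance-id i-02573cafcfEXAMPLE \
    --query "InstanceAssociationStatusInfos[].{Name:Name,Status:Status}" \
    --output table
```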

If the command doesn't return results after about five minutes, use the following information to help you troubleshoot problems with your managed nodes.

**Topics**
+ [Solution 1: Verify that SSM Agent is installed and running on the managed node](#instances-missing-solution-1)
+ [Solution 2: Verify that an IAM instance profile has been specified for the instance (EC2 instances only)](#instances-missing-solution-2)
+ [Solution 3: Verify service endpoint connectivity](#instances-missing-solution-3)
+ [Solution 4: Verify target operating system support](#instances-missing-solution-4)
+ [Solution 5: Verify you're working in the same AWS Region as the Amazon EC2 instance](#instances-missing-solution-5)
+ [Solution 6: Verify the proxy configuration you applied to SSM Agent on your managed node](#instances-missing-solution-6)
+ [Solution 7: Install a TLS certificate on managed instances](#hybrid-tls-certificate)
+ [Troubleshooting managed node availability using `ssm-cli`](troubleshooting-managed-nodes-using-ssm-cli.md)

## Solution 1: Verify that SSM Agent is installed and running on the managed node


Make sure the latest version of SSM Agent is installed and running on the managed node.

To determine whether SSM Agent is installed and running on a managed node, see [Checking SSM Agent status and starting the agent](ssm-agent-status-and-restart.md).

To install or reinstall SSM Agent on a managed node, see the following topics:
+ [Manually installing and uninstalling SSM Agent on EC2 instances for Linux](manually-install-ssm-agent-linux.md)
+ [How to install the SSM Agent on hybrid Linux nodes](hybrid-multicloud-ssm-agent-install-linux.md)
+ [Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server](manually-install-ssm-agent-windows.md)
+ [How to install the SSM Agent on hybrid Windows nodes ](hybrid-multicloud-ssm-agent-install-windows.md)
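
On Linux distributions that use systemd, for example, you can check and start the agent as follows (on Ubuntu installations that use Snap, the service is named `snap.amazon-ssm-agent.amazon-ssm-agent` instead).

```shell
# Check whether the agent service is running.
sudo systemctl status amazon-ssm-agent

# Enable the service at boot and start it now if it isn't running.
sudo systemctl enable --now amazon-ssm-agent
```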

## Solution 2: Verify that an IAM instance profile has been specified for the instance (EC2 instances only)


For Amazon Elastic Compute Cloud (Amazon EC2) instances, verify that the instance is configured with an AWS Identity and Access Management (IAM) instance profile that allows the instance to communicate with the Systems Manager API. Also verify that your user has an IAM trust policy that allows your user to communicate with the Systems Manager API.

**Note**  
On-premises servers, edge devices, and virtual machines (VMs) use an IAM service role instead of an instance profile. For more information, see [Create the IAM service role required for Systems Manager in hybrid and multicloud environments](hybrid-multicloud-service-role.md).

**To determine whether an instance profile with the necessary permissions is attached to an EC2 instance**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Instances**.

1. Choose the instance to check for an instance profile.

1. On the **Description** tab in the bottom pane, locate **IAM role** and choose the name of the role.

1. On the role **Summary** page for the instance profile, on the **Permissions** tab, ensure that `AmazonSSMManagedInstanceCore` is listed under **Permissions policies**.

   If a custom policy is used instead, ensure that it provides the same permissions as `AmazonSSMManagedInstanceCore`.

   [Open `AmazonSSMManagedInstanceCore` in the console](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore$jsonEditor)

   For information about other policies that can be attached to an instance profile for Systems Manager, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).
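
You can perform the same check from the AWS CLI, as in the following sketch; the instance ID and role name are placeholders.

```shell
# Find the instance profile (and its role) attached to the instance.
aws ec2 describe-iam-instance-profile-associations \
    --filters "Name=instance-id,Values=i-02573cafcfEXAMPLE"

# List the managed policies attached to that role, and confirm that
# AmazonSSMManagedInstanceCore (or an equivalent policy) is present.
aws iam list-attached-role-policies --role-name MyInstanceRole
```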

## Solution 3: Verify service endpoint connectivity


Verify that the instance has connectivity to the Systems Manager service endpoints. This connectivity is provided by creating and configuring VPC endpoints for Systems Manager, or by allowing HTTPS (port 443) outbound traffic to the service endpoints.

For Amazon EC2 instances, the Systems Manager service endpoint for the AWS Region is used to register the instance if your virtual private cloud (VPC) configuration allows outbound traffic. However, if the VPC configuration the instance was launched in does not allow outbound traffic and you can't change this configuration to allow connectivity to the public service endpoints, you must configure interface endpoints for your VPC instead.

For more information, see [Improve the security of EC2 instances by using VPC endpoints for Systems Manager](setup-create-vpc.md).
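
From the instance itself, you can sketch a basic reachability test against the required endpoints with netcat. Replace `us-east-2` with your Region; the `nc` utility must be installed.

```shell
REGION=us-east-2
for ep in ssm ec2messages ssmmessages; do
  # -z: connect without sending data; -w5: five-second timeout.
  if nc -zw5 "${ep}.${REGION}.amazonaws.com" 443; then
    echo "${ep}: reachable"
  else
    echo "${ep}: NOT reachable"
  fi
done
```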

## Solution 4: Verify target operating system support


Verify that the operation you have chosen can be run on the type of managed node you expect to see listed. Some Systems Manager operations can target only Windows instances or only Linux instances. For example, the Systems Manager (SSM) documents `AWS-InstallPowerShellModule` and `AWS-ConfigureCloudWatch` can be run only on Windows instances. In the **Run a command** page, if you choose either of these documents and select **Choose instances manually**, only your Windows instances are listed and available for selection.

## Solution 5: Verify you're working in the same AWS Region as the Amazon EC2 instance


Amazon EC2 instances are created and available in specific AWS Regions, such as the US East (Ohio) Region (us-east-2) or Europe (Ireland) Region (eu-west-1). Ensure that you're working in the same AWS Region as the Amazon EC2 instance that you want to work with. For more information, see [Choosing a Region](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/getting-started.html#select-region) in *Getting Started with the AWS Management Console*.

## Solution 6: Verify the proxy configuration you applied to SSM Agent on your managed node


Verify that the proxy configuration you applied to SSM Agent on your managed node is correct. If the proxy configuration is incorrect, the node can't connect to the required service endpoints, or Systems Manager might identify the operating system of the managed node incorrectly. For more information, see [Configuring SSM Agent to use a proxy on Linux nodes](configure-proxy-ssm-agent.md) and [Configure SSM Agent to use a proxy for Windows Server instances](configure-proxy-ssm-agent-windows.md).
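
On Linux nodes where SSM Agent runs under systemd, you can inspect the proxy settings the service actually sees, as in the following sketch.

```shell
# Show any proxy environment variables configured for the agent service.
systemctl show amazon-ssm-agent --property=Environment

# Systemd override files where proxy settings are commonly placed.
ls /etc/systemd/system/amazon-ssm-agent.service.d/ 2>/dev/null
```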

## Solution 7: Install a TLS certificate on managed instances


A Transport Layer Security (TLS) certificate must be installed on each managed instance you use with AWS Systems Manager. AWS services use these certificates to encrypt calls to other AWS services.

A TLS certificate is already installed by default on each Amazon EC2 instance created from any Amazon Machine Image (AMI). Most modern operating systems include the required TLS certificate from Amazon Trust Services CAs in their trust store.

To verify whether the required certificate is installed on your instance, run the following command based on the operating system of your instance. Be sure to replace the *region* portion of the URL with the AWS Region where your managed instance is located.

------
#### [ Linux & macOS ]

```
curl -L https://ssm.region.amazonaws.com
```

------
#### [ Windows ]

```
Invoke-WebRequest -Uri https://ssm.region.amazonaws.com
```

------

The command should return an `UnknownOperationException` error. If you receive an SSL/TLS error message instead, the required certificate might not be installed.

If you find the required Amazon Trust Services CA certificates aren't installed on your base operating systems, on instances created from AMIs that aren't supplied by Amazon, or on your own on-premises servers and VMs, you must install and allow a certificate from [Amazon Trust Services](https://www.amazontrust.com/repository/), or use AWS Certificate Manager (ACM) to create and manage certificates for a supported integrated service.

Each of your managed instances must have one of the following TLS certificates installed.
+ Amazon Root CA 1
+ Starfield Services Root Certificate Authority - G2
+ Starfield Class 2 Certificate Authority
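
One way to confirm that the local trust store can validate the Systems Manager endpoint's certificate chain is with `openssl s_client`, as in the following sketch; replace the Region as appropriate.

```shell
# "Verify return code: 0 (ok)" indicates the certificate chain
# validated against the local trust store.
echo | openssl s_client -connect ssm.us-east-2.amazonaws.com:443 2>/dev/null \
    | grep "Verify return code"
```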

For information about using ACM, see the *[AWS Certificate Manager User Guide](https://docs.aws.amazon.com/acm/latest/userguide/)*.

If certificates in your computing environment are managed by a Group Policy Object (GPO), then you might need to configure Group Policy to include one of these certificates.

For more information about the Amazon Root and Starfield certificates, see the blog post [How to Prepare for AWS’s Move to Its Own Certificate Authority](https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/).

# Troubleshooting managed node availability using `ssm-cli`


The `ssm-cli` is a standalone command line tool included in the SSM Agent installation. When you install SSM Agent 3.1.501.0 or later on a machine, you can run `ssm-cli` commands on that machine. The output of those commands helps you determine whether the machine meets the minimum requirements for an Amazon EC2 instance or non-EC2 machine to be managed by AWS Systems Manager, and therefore added to lists of managed nodes in Systems Manager. (SSM Agent version 3.1.501.0 was released in November 2021.)

**Minimum requirements**  
For an Amazon EC2 instance or non-EC2 machine to be managed by AWS Systems Manager, and available in lists of managed nodes, it must meet three primary requirements:
+ SSM Agent must be installed and running on a machine with a [supported operating system](operating-systems-and-machine-types.md#prereqs-operating-systems).

  Some AWS managed Amazon Machine Images (AMIs) for EC2 are configured to launch instances with [SSM Agent](ssm-agent.md) preinstalled. (You can also configure a custom AMI to preinstall SSM Agent.) For more information, see [Find AMIs with the SSM Agent preinstalled](ami-preinstalled-agent.md).
+ An AWS Identity and Access Management (IAM) instance profile (for EC2 instances) or IAM service role (for non-EC2 machines) that supplies the required permissions to communicate with the Systems Manager service must be attached to the machine.
+ SSM Agent must be able to connect to a Systems Manager endpoint to register itself with the service. Thereafter, the managed node must be available to the service, which is confirmed by the service sending a signal every five minutes to check the managed node's health.

**Preconfigured commands in `ssm-cli`**  
The `ssm-cli` tool includes preconfigured commands that gather the information needed to diagnose why a machine that you have confirmed is running isn't included in your lists of managed nodes in Systems Manager. These commands run when you specify the `get-diagnostics` option.

On the machine, run the following command to use `ssm-cli` to help you troubleshoot managed node availability. 

------
#### [ Linux & macOS ]

```
ssm-cli get-diagnostics --output table
```

------
#### [ Windows ]

On Windows Server machines, you must navigate to the `C:\Program Files\Amazon\SSM` directory before running the command.

```
ssm-cli.exe get-diagnostics --output table
```

------
#### [ PowerShell ]

On Windows Server machines, you must navigate to the `C:\Program Files\Amazon\SSM` directory before running the command.

```
.\ssm-cli.exe get-diagnostics --output table
```

------

The command returns output as a table similar to the following. 

**Note**  
Connectivity checks to the `ssmmessages`, `s3`, `kms`, `logs`, and `monitoring` endpoints are for additional optional features such as Session Manager that can log to Amazon Simple Storage Service (Amazon S3) or Amazon CloudWatch Logs, and use AWS Key Management Service (AWS KMS) encryption.

------
#### [ Linux & macOS ]

```
[root@instance]# ssm-cli get-diagnostics --output table
┌───────────────────────────────────────┬─────────┬───────────────────────────────────────────────────────────────────────┐
│ Check                                 │ Status  │ Note                                                                  │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ EC2 IMDS                              │ Success │ IMDS is accessible and has instance id i-0123456789abcdefa in Region  │
│                                       │         │ us-east-2                                                             │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Hybrid instance registration          │ Skipped │ Instance does not have hybrid registration                            │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to ssm endpoint          │ Success │ ssm.us-east-2.amazonaws.com is reachable                              │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to ec2messages endpoint  │ Success │ ec2messages.us-east-2.amazonaws.com is reachable                      │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to ssmmessages endpoint  │ Success │ ssmmessages.us-east-2.amazonaws.com is reachable                      │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to s3 endpoint           │ Success │ s3.us-east-2.amazonaws.com is reachable                               │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to kms endpoint          │ Success │ kms.us-east-2.amazonaws.com is reachable                              │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to logs endpoint         │ Success │ logs.us-east-2.amazonaws.com is reachable                             │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Connectivity to monitoring endpoint   │ Success │ monitoring.us-east-2.amazonaws.com is reachable                       │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ AWS Credentials                       │ Success │ Credentials are for                                                   │
│                                       │         │ arn:aws:sts::123456789012:assumed-role/Fullaccess/i-0123456789abcdefa │
│                                       │         │ and will expire at 2021-08-17 18:47:49 +0000 UTC                      │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Agent service                         │ Success │ Agent service is running and is running as expected user              │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ Proxy configuration                   │ Skipped │ No proxy configuration detected                                       │
├───────────────────────────────────────┼─────────┼───────────────────────────────────────────────────────────────────────┤
│ SSM Agent version                     │ Success │ SSM Agent version is 3.0.1209.0, latest available agent version is    │
│                                       │         │ 3.1.192.0                                                             │
└───────────────────────────────────────┴─────────┴───────────────────────────────────────────────────────────────────────┘
```

------
#### [ Windows Server and PowerShell ]

```
PS C:\Program Files\Amazon\SSM> .\ssm-cli.exe get-diagnostics --output table      
┌───────────────────────────────────────┬─────────┬─────────────────────────────────────────────────────────────────────┐
│ Check                                 │ Status  │ Note                                                                │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ EC2 IMDS                              │ Success │ IMDS is accessible and has instance id i-0123456789EXAMPLE in       │
│                                       │         │ Region us-east-2                                                    │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Hybrid instance registration          │ Skipped │ Instance does not have hybrid registration                          │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to ssm endpoint          │ Success │ ssm.us-east-2.amazonaws.com is reachable                            │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to ec2messages endpoint  │ Success │ ec2messages.us-east-2.amazonaws.com is reachable                    │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to ssmmessages endpoint  │ Success │ ssmmessages.us-east-2.amazonaws.com is reachable                    │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to s3 endpoint           │ Success │ s3.us-east-2.amazonaws.com is reachable                             │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to kms endpoint          │ Success │ kms.us-east-2.amazonaws.com is reachable                            │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to logs endpoint         │ Success │ logs.us-east-2.amazonaws.com is reachable                           │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Connectivity to monitoring endpoint   │ Success │ monitoring.us-east-2.amazonaws.com is reachable                     │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ AWS Credentials                       │ Success │ Credentials are for                                                 │
│                                       │         │  arn:aws:sts::123456789012:assumed-role/SSM-Role/i-123abc45EXAMPLE  │
│                                       │         │  and will expire at 2021-09-02 13:24:42 +0000 UTC                   │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Agent service                         │ Success │ Agent service is running and is running as expected user            │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Proxy configuration                   │ Skipped │ No proxy configuration detected                                     │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ Windows sysprep image state           │ Success │ Windows image state value is at desired value IMAGE_STATE_COMPLETE  │
├───────────────────────────────────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
│ SSM Agent version                     │ Success │ SSM Agent version is 3.2.815.0, latest agent version in us-east-2   │
│                                       │         │ is 3.2.985.0                                                        │
└───────────────────────────────────────┴─────────┴─────────────────────────────────────────────────────────────────────┘
```

------

The following table provides additional details for each of the checks performed by `ssm-cli`.


**`ssm-cli` diagnostic checks**  

| Check | Details | 
| --- | --- | 
| Amazon EC2 instance metadata service | Indicates whether the managed node is able to reach the instance metadata service. A failed test indicates a connectivity issue to http://169.254.169.254, which can be caused by local route tables, proxy settings, or operating system (OS) firewall configurations. | 
| Hybrid instance registration | Indicates whether SSM Agent is registered using a hybrid activation. | 
| Connectivity to ssm endpoint | Indicates whether the node is able to reach the service endpoints for Systems Manager on TCP port 443. A failed test indicates connectivity issues to https://ssm.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity issues can be caused by the VPC configuration including security groups, network access control lists, route tables, or OS firewalls and proxies. | 
| Connectivity to ec2messages endpoint | Indicates whether the node is able to reach the service endpoints for Systems Manager on TCP port 443. A failed test indicates connectivity issues to https://ec2messages.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity issues can be caused by the VPC configuration including security groups, network access control lists, route tables, or OS firewalls and proxies. | 
| Connectivity to ssmmessages endpoint | Indicates whether the node is able to reach the service endpoints for Systems Manager on TCP port 443. A failed test indicates connectivity issues to https://ssmmessages.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity issues can be caused by the VPC configuration including security groups, network access control lists, route tables, or OS firewalls and proxies. | 
| Connectivity to s3 endpoint | Indicates whether the node is able to reach the service endpoint for Amazon Simple Storage Service on TCP port 443. A failed test indicates connectivity issues to https://s3.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity to this endpoint is not required for a node to appear in your managed nodes list. | 
| Connectivity to kms endpoint |  Indicates whether the node is able to reach the service endpoint for AWS Key Management Service on TCP port 443. A failed test indicates connectivity issues to `https://kms.region.amazonaws.com` depending on the AWS Region where the node is located. Connectivity to this endpoint is not required for a node to appear in your managed nodes list.  | 
| Connectivity to logs endpoint | Indicates whether the node is able to reach the service endpoint for Amazon CloudWatch Logs on TCP port 443. A failed test indicates connectivity issues to https://logs.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity to this endpoint is not required for a node to appear in your managed nodes list. | 
| Connectivity to monitoring endpoint | Indicates whether the node is able to reach the service endpoint for Amazon CloudWatch on TCP port 443. A failed test indicates connectivity issues to https://monitoring.region.amazonaws.com depending on the AWS Region where the node is located. Connectivity to this endpoint is not required for a node to appear in your managed nodes list. | 
| AWS Credentials | Indicates whether SSM Agent has the required credentials based on the IAM instance profile (for EC2 instances) or IAM service role (for non-EC2 machines) attached to the machine. A failed test indicates that no IAM instance profile or IAM service role is attached to the machine, or it does not contain the required permissions for Systems Manager. | 
| Agent service | Indicates whether SSM Agent service is running, and whether the service is running as root for Linux or macOS, or SYSTEM for Windows Server. A failed test indicates SSM Agent service is not running or is not running as root or SYSTEM. | 
| Proxy configuration | Indicates whether SSM Agent is configured to use a proxy. | 
| Sysprep image state (Windows only) | Indicates the state of Sysprep on the node. SSM Agent doesn't start on the node if the Sysprep state is a value other than `IMAGE_STATE_COMPLETE`. | 
| SSM Agent version | Indicates whether the latest available version of SSM Agent is installed. | 

# AWS Systems Manager Hybrid Activations

To configure non-EC2 machines for use with AWS Systems Manager in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment, you create a *hybrid activation*. Non-EC2 machine types supported as managed nodes include the following:
+ Servers on your own premises (on-premises servers)
+ AWS IoT Greengrass core devices
+ AWS IoT and non-AWS edge devices
+ Virtual machines (VMs), including VMs in other cloud environments

When you run the [create-activation](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-activation.html) command to start a hybrid activation process, you receive an activation code and ID in the command response. You then include the activation code and ID with the command to install SSM Agent on the machine, as described in step 3 of [Install SSM Agent on hybrid Linux nodes](hybrid-multicloud-ssm-agent-install-linux.md) and step 4 of [Install SSM Agent on hybrid Windows Server nodes](hybrid-multicloud-ssm-agent-install-windows.md).
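
A minimal activation request might look like the following sketch. The instance name, description, role name, and Region are examples, and the IAM service role must already exist.

```shell
# Returns an ActivationId and ActivationCode to supply when you
# install and register SSM Agent on the non-EC2 machine.
aws ssm create-activation \
    --default-instance-name "OnPremWebServer" \
    --description "Activation for datacenter web servers" \
    --iam-role "SSMServiceRole" \
    --registration-limit 10 \
    --region us-east-2
```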

This activation process applies to all non-EC2 machine types *except* AWS IoT Greengrass core devices. For information about configuring AWS IoT Greengrass core devices for Systems Manager, see [Managing edge devices with Systems Manager](systems-manager-setting-up-edge-devices.md).

**Note**  
Support isn't currently provided for non-EC2 macOS machines.

**About Systems Manager instance tiers**  
AWS Systems Manager offers a standard-instances tier and an advanced-instances tier. Both support managed nodes in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. The standard-instances tier allows you to register a maximum of 1,000 machines per AWS account per AWS Region. If you need to register more than 1,000 machines in a single account and Region, then use the advanced-instances tier. You can create as many managed nodes as you like in the advanced-instances tier. All managed nodes configured for Systems Manager are priced on a pay-per-use basis. For more information about enabling the advanced instances tier, see [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md). For more information about pricing, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).

Note the following additional information about the standard-instances tier and advanced-instances tier:
+ Advanced instances also allow you to connect to your non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment by using AWS Systems Manager Session Manager. Session Manager provides interactive shell access to your instances. For more information, see [AWS Systems Manager Session Manager](session-manager.md).
+ The standard-instances quota also applies to EC2 instances that use a Systems Manager on-premises activation (which isn't a common scenario).
+ To patch applications released by Microsoft on on-premises virtual machines (VMs), activate the advanced-instances tier. There is a charge to use the advanced-instances tier. There is no additional charge to patch applications released by Microsoft on Amazon Elastic Compute Cloud (Amazon EC2) instances. For more information, see [Patching applications released by Microsoft on Windows Server](patch-manager-patching-windows-applications.md).

# AWS Systems Manager Inventory

AWS Systems Manager Inventory provides visibility into your AWS computing environment. You can use Inventory to collect *metadata* from your managed nodes. You can store this metadata in a central Amazon Simple Storage Service (Amazon S3) bucket, and then use built-in tools to query the data and quickly determine which nodes are running the software and configurations required by your software policy, and which nodes need to be updated. You can configure Inventory on all of your managed nodes by using a one-click procedure. You can also configure and view inventory data from multiple AWS Regions and AWS accounts by using Amazon Athena. To get started with Inventory, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/inventory). In the navigation pane, choose **Inventory**.

If the pre-configured metadata types collected by Systems Manager Inventory don't meet your needs, then you can create custom inventory. Custom inventory is simply a JSON file with information that you provide and add to the managed node in a specific directory. When Systems Manager Inventory collects data, it captures this custom inventory data. For example, if you run a large data center, you can specify the rack location of each of your servers as custom inventory. You can then view the rack space data when you view other inventory data.
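The rack-location scenario above can be sketched as a custom inventory file. The schema (a `SchemaVersion`, a `TypeName` that begins with `Custom:`, and a `Content` array) follows the custom inventory format; on a real Linux node the file would go in the SSM Agent's custom inventory directory, but a temporary directory is used here for illustration, and the rack values are placeholders.

```python
import json
import os
import tempfile

# Sketch: a custom inventory file recording a server's rack location.
# On a real Linux managed node this file would live under the SSM Agent's
# custom inventory directory; a temp directory stands in for it here.
rack_info = {
    "SchemaVersion": "1.0",
    "TypeName": "Custom:RackInfo",   # custom types must start with "Custom:"
    "Content": [
        {"RackLocation": "Bay B/Row C/Rack D/Shelf E"}
    ],
}

custom_dir = tempfile.mkdtemp()
path = os.path.join(custom_dir, "CustomRackInfo.json")
with open(path, "w") as f:
    json.dump(rack_info, f, indent=2)

# Inventory captures the file's contents on its next collection cycle.
with open(path) as f:
    loaded = json.load(f)
print(loaded["TypeName"])  # Custom:RackInfo
```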

**Important**  
Systems Manager Inventory collects *only* metadata from your managed nodes. Inventory doesn't access proprietary information or data.

The following table describes the types of data you can collect with Systems Manager Inventory. The table also describes different offerings for targeting nodes and the collection intervals you can specify.



| Configuration | Details | 
| --- | --- | 
|  Metadata types  |  You can configure Inventory to collect several types of metadata from your managed nodes. To view a list of all metadata collected by Inventory, see [Metadata collected by Inventory](inventory-schema.md).   | 
|  Nodes to target  |  You can choose to inventory all managed nodes in your AWS account, individually select nodes, or target groups of nodes by using tags. For more information about collecting inventory data from all of your managed nodes, see [Inventory all managed nodes in your AWS account](inventory-collection.md#inventory-management-inventory-all).  | 
|  When to collect information  |  You can specify a collection interval in terms of minutes, hours, and days. The shortest collection interval is every 30 minutes.   | 

**Note**  
Depending on the amount of data collected, the system can take several minutes to report the data to the output you specified. After the information is collected, the data is sent over a secure HTTPS channel to a plain-text AWS store that is accessible only from your AWS account. 

You can view the data in the Systems Manager console on the **Inventory** page, which includes several predefined cards to help you query the data.

![\[Systems Manager Inventory cards in the Systems Manager console.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-cards.png)


**Note**  
Inventory cards automatically filter out Amazon EC2 managed instances with a state of *Terminated* and *Stopped*. For on-premises and AWS IoT Greengrass core device managed nodes, Inventory cards automatically filter out nodes with a state of *Terminated*. 

If you create a resource data sync to synchronize and store all of your data in a single Amazon S3 bucket, then you can drill down into the data on the **Inventory Detailed View** page. For more information, see [Querying inventory data from multiple Regions and accounts](systems-manager-inventory-query.md).

**EventBridge support**  
This Systems Manager tool is supported as an *event* type in Amazon EventBridge rules. For information, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md).

**Topics**
+ [Learn more about Systems Manager Inventory](inventory-about.md)
+ [Setting up Systems Manager Inventory](systems-manager-inventory-setting-up.md)
+ [Configuring inventory collection](inventory-collection.md)
+ [Querying inventory data from multiple Regions and accounts](systems-manager-inventory-query.md)
+ [Querying an inventory collection by using filters](inventory-query-filters.md)
+ [Aggregating inventory data](inventory-aggregate.md)
+ [Working with custom inventory](inventory-custom.md)
+ [Viewing inventory history and change tracking](inventory-history.md)
+ [Stopping data collection and deleting inventory data](systems-manager-inventory-delete.md)
+ [Assigning custom inventory metadata to a managed node](inventory-custom-metadata.md)
+ [Using the AWS CLI to configure inventory data collection](inventory-collection-cli.md)
+ [Walkthrough: Using resource data sync to aggregate inventory data](inventory-resource-data-sync.md)
+ [Troubleshooting problems with Systems Manager Inventory](syman-inventory-troubleshooting.md)

# Learn more about Systems Manager Inventory

When you configure AWS Systems Manager Inventory, you specify the type of metadata to collect, the managed nodes from where the metadata should be collected, and a schedule for metadata collection. These configurations are saved with your AWS account as an AWS Systems Manager State Manager association. An association is simply a configuration.

**Note**  
Inventory only collects metadata. It doesn't collect any personal or proprietary data.

**Topics**
+ [Metadata collected by Inventory](inventory-schema.md)
+ [Working with file and Windows registry inventory](inventory-file-and-registry.md)

# Metadata collected by Inventory


The following sample shows the complete list of metadata collected by each AWS Systems Manager Inventory plugin.

```
{
    "typeName": "AWS:InstanceInformation",
    "version": "1.0",
    "attributes":[
      { "name": "AgentType",                              "dataType" : "STRING"},
      { "name": "AgentVersion",                           "dataType" : "STRING"},
      { "name": "ComputerName",                           "dataType" : "STRING"},
      { "name": "InstanceId",                             "dataType" : "STRING"},
      { "name": "IpAddress",                              "dataType" : "STRING"},
      { "name": "PlatformName",                           "dataType" : "STRING"},
      { "name": "PlatformType",                           "dataType" : "STRING"},
      { "name": "PlatformVersion",                        "dataType" : "STRING"},
      { "name": "ResourceType",                           "dataType" : "STRING"},
      { "name": "AgentStatus",                            "dataType" : "STRING"},
      { "name": "InstanceStatus",                         "dataType" : "STRING"}    
    ]
  },
  {
    "typeName" : "AWS:Application",
    "version": "1.1",
    "attributes":[
      { "name": "Name",               "dataType": "STRING"},
      { "name": "ApplicationType",    "dataType": "STRING"},
      { "name": "Publisher",          "dataType": "STRING"},
      { "name": "Version",            "dataType": "STRING"},
      { "name": "Release",            "dataType": "STRING"},
      { "name": "Epoch",              "dataType": "STRING"},
      { "name": "InstalledTime",      "dataType": "STRING"},
      { "name": "Architecture",       "dataType": "STRING"},
      { "name": "URL",                "dataType": "STRING"},
      { "name": "Summary",            "dataType": "STRING"},
      { "name": "PackageId",          "dataType": "STRING"}
    ]
  },
  {
    "typeName" : "AWS:File",
    "version": "1.0",
    "attributes":[
      { "name": "Name",               "dataType": "STRING"},
      { "name": "Size",               "dataType": "STRING"},
      { "name": "Description",        "dataType": "STRING"},
      { "name": "FileVersion",        "dataType": "STRING"},
      { "name": "InstalledDate",      "dataType": "STRING"},
      { "name": "ModificationTime",   "dataType": "STRING"},
      { "name": "LastAccessTime",     "dataType": "STRING"},
      { "name": "ProductName",        "dataType": "STRING"},
      { "name": "InstalledDir",       "dataType": "STRING"},
      { "name": "ProductLanguage",    "dataType": "STRING"},
      { "name": "CompanyName",        "dataType": "STRING"},
      { "name": "ProductVersion",     "dataType": "STRING"}
    ]
  },
  {
    "typeName": "AWS:AWSComponent",
    "version": "1.0",
    "attributes":[
      { "name": "Name",               "dataType": "STRING"},
      { "name": "ApplicationType",    "dataType": "STRING"},
      { "name": "Publisher",          "dataType": "STRING"},
      { "name": "Version",            "dataType": "STRING"},
      { "name": "InstalledTime",      "dataType": "STRING"},
      { "name": "Architecture",       "dataType": "STRING"},
      { "name": "URL",                "dataType": "STRING"}
    ]
  },
  {
    "typeName": "AWS:WindowsUpdate",
    "version":"1.0",
    "attributes":[
      { "name": "HotFixId",           "dataType": "STRING"},
      { "name": "Description",        "dataType": "STRING"},
      { "name": "InstalledTime",      "dataType": "STRING"},
      { "name": "InstalledBy",        "dataType": "STRING"}
    ]
  },
  {
    "typeName": "AWS:Network",
    "version":"1.0",
    "attributes":[
      { "name": "Name",               "dataType": "STRING"},
      { "name": "SubnetMask",         "dataType": "STRING"},
      { "name": "Gateway",            "dataType": "STRING"},
      { "name": "DHCPServer",         "dataType": "STRING"},
      { "name": "DNSServer",          "dataType": "STRING"},
      { "name": "MacAddress",         "dataType": "STRING"},
      { "name": "IPV4",               "dataType": "STRING"},
      { "name": "IPV6",               "dataType": "STRING"}
    ]
  },
  {
    "typeName": "AWS:PatchSummary",
    "version":"1.0",
    "attributes":[
      { "name": "PatchGroup",                       "dataType": "STRING"},
      { "name": "BaselineId",                       "dataType": "STRING"},
      { "name": "SnapshotId",                       "dataType": "STRING"},
      { "name": "OwnerInformation",                 "dataType": "STRING"},
      { "name": "InstalledCount",                   "dataType": "NUMBER"},
      { "name": "InstalledPendingRebootCount",      "dataType": "NUMBER"},
      { "name": "InstalledOtherCount",              "dataType": "NUMBER"},
      { "name": "InstalledRejectedCount",           "dataType": "NUMBER"},
      { "name": "NotApplicableCount",               "dataType": "NUMBER"},
      { "name": "UnreportedNotApplicableCount",     "dataType": "NUMBER"},
      { "name": "MissingCount",                     "dataType": "NUMBER"},
      { "name": "FailedCount",                      "dataType": "NUMBER"},
      { "name": "OperationType",                    "dataType": "STRING"},
      { "name": "OperationStartTime",               "dataType": "STRING"},
      { "name": "OperationEndTime",                 "dataType": "STRING"},
      { "name": "InstallOverrideList",              "dataType": "STRING"},
      { "name": "RebootOption",                     "dataType": "STRING"},
      { "name": "LastNoRebootInstallOperationTime", "dataType": "STRING"},
      { "name": "ExecutionId",                      "dataType": "STRING",                 "isOptional": "true"},
      { "name": "NonCompliantSeverity",             "dataType": "STRING",                 "isOptional": "true"},
      { "name": "SecurityNonCompliantCount",        "dataType": "NUMBER",                 "isOptional": "true"},
      { "name": "CriticalNonCompliantCount",        "dataType": "NUMBER",                 "isOptional": "true"},
      { "name": "OtherNonCompliantCount",           "dataType": "NUMBER",                 "isOptional": "true"}
    ]
  },
  {
    "typeName": "AWS:PatchCompliance",
    "version":"1.0",
    "attributes":[
      { "name": "Title",                        "dataType": "STRING"},
      { "name": "KBId",                         "dataType": "STRING"},
      { "name": "Classification",               "dataType": "STRING"},
      { "name": "Severity",                     "dataType": "STRING"},
      { "name": "State",                        "dataType": "STRING"},
      { "name": "InstalledTime",                "dataType": "STRING"}
    ]
  },
  {
    "typeName": "AWS:ComplianceItem",
    "version":"1.0",
    "attributes":[
      { "name": "ComplianceType",               "dataType": "STRING",                 "isContext": "true"},
      { "name": "ExecutionId",                  "dataType": "STRING",                 "isContext": "true"},
      { "name": "ExecutionType",                "dataType": "STRING",                 "isContext": "true"},
      { "name": "ExecutionTime",                "dataType": "STRING",                 "isContext": "true"},
      { "name": "Id",                           "dataType": "STRING"},
      { "name": "Title",                        "dataType": "STRING"},
      { "name": "Status",                       "dataType": "STRING"},
      { "name": "Severity",                     "dataType": "STRING"},
      { "name": "DocumentName",                 "dataType": "STRING"},
      { "name": "DocumentVersion",              "dataType": "STRING"},
      { "name": "Classification",               "dataType": "STRING"},
      { "name": "PatchBaselineId",              "dataType": "STRING"},
      { "name": "PatchSeverity",                "dataType": "STRING"},
      { "name": "PatchState",                   "dataType": "STRING"},
      { "name": "PatchGroup",                   "dataType": "STRING"},
      { "name": "InstalledTime",                "dataType": "STRING"},
      { "name": "InstallOverrideList",          "dataType": "STRING",                 "isOptional": "true"},
      { "name": "DetailedText",                 "dataType": "STRING",                 "isOptional": "true"},
      { "name": "DetailedLink",                 "dataType": "STRING",                 "isOptional": "true"},
      { "name": "CVEIds",                       "dataType": "STRING",                 "isOptional": "true"}
    ]
  },
  {
    "typeName": "AWS:ComplianceSummary",
    "version":"1.0",
    "attributes":[
      { "name": "ComplianceType",                 "dataType": "STRING"},
      { "name": "PatchGroup",                     "dataType": "STRING"},
      { "name": "PatchBaselineId",                "dataType": "STRING"},
      { "name": "Status",                         "dataType": "STRING"},
      { "name": "OverallSeverity",                "dataType": "STRING"},
      { "name": "ExecutionId",                    "dataType": "STRING"},
      { "name": "ExecutionType",                  "dataType": "STRING"},
      { "name": "ExecutionTime",                  "dataType": "STRING"},
      { "name": "CompliantCriticalCount",         "dataType": "NUMBER"},
      { "name": "CompliantHighCount",             "dataType": "NUMBER"},
      { "name": "CompliantMediumCount",           "dataType": "NUMBER"},
      { "name": "CompliantLowCount",              "dataType": "NUMBER"},
      { "name": "CompliantInformationalCount",    "dataType": "NUMBER"},
      { "name": "CompliantUnspecifiedCount",      "dataType": "NUMBER"},
      { "name": "NonCompliantCriticalCount",      "dataType": "NUMBER"},
      { "name": "NonCompliantHighCount",          "dataType": "NUMBER"},
      { "name": "NonCompliantMediumCount",        "dataType": "NUMBER"},
      { "name": "NonCompliantLowCount",           "dataType": "NUMBER"},
      { "name": "NonCompliantInformationalCount", "dataType": "NUMBER"},
      { "name": "NonCompliantUnspecifiedCount",   "dataType": "NUMBER"}
    ]
  },
  {
    "typeName": "AWS:InstanceDetailedInformation",
    "version":"1.0",
    "attributes":[
      { "name": "CPUModel",                     "dataType": "STRING"},
      { "name": "CPUCores",                     "dataType": "NUMBER"},
      { "name": "CPUs",                         "dataType": "NUMBER"},
      { "name": "CPUSpeedMHz",                  "dataType": "NUMBER"},
      { "name": "CPUSockets",                   "dataType": "NUMBER"},
      { "name": "CPUHyperThreadEnabled",        "dataType": "STRING"},
      { "name": "OSServicePack",                "dataType": "STRING"}
    ]
   },
   {
     "typeName": "AWS:Service",
     "version":"1.0",
     "attributes":[
       { "name": "Name",                         "dataType": "STRING"},
       { "name": "DisplayName",                  "dataType": "STRING"},
       { "name": "ServiceType",                  "dataType": "STRING"},
       { "name": "Status",                       "dataType": "STRING"},
       { "name": "DependentServices",            "dataType": "STRING"},
       { "name": "ServicesDependedOn",           "dataType": "STRING"},
       { "name": "StartType",                    "dataType": "STRING"}
     ]
    },
    {
      "typeName": "AWS:WindowsRegistry",
      "version":"1.0",
      "attributes":[
        { "name": "KeyPath",                         "dataType": "STRING"},
        { "name": "ValueName",                       "dataType": "STRING"},
        { "name": "ValueType",                       "dataType": "STRING"},
        { "name": "Value",                           "dataType": "STRING"}
      ]
    },
    {
      "typeName": "AWS:WindowsRole",
      "version":"1.0",
      "attributes":[
        { "name": "Name",                         "dataType": "STRING"},
        { "name": "DisplayName",                  "dataType": "STRING"},
        { "name": "Path",                         "dataType": "STRING"},
        { "name": "FeatureType",                  "dataType": "STRING"},
        { "name": "DependsOn",                    "dataType": "STRING"},
        { "name": "Description",                  "dataType": "STRING"},
        { "name": "Installed",                    "dataType": "STRING"},
        { "name": "InstalledState",               "dataType": "STRING"},
        { "name": "SubFeatures",                  "dataType": "STRING"},
        { "name": "ServerComponentDescriptor",    "dataType": "STRING"},
        { "name": "Parent",                       "dataType": "STRING"}
      ]
    },
    {
      "typeName": "AWS:Tag",
      "version":"1.0",
      "attributes":[
        { "name": "Key",                     "dataType": "STRING"},
        { "name": "Value",                   "dataType": "STRING"}
      ]
    },
    {
      "typeName": "AWS:ResourceGroup",
      "version":"1.0",
      "attributes":[
        { "name": "Name",                   "dataType": "STRING"},
        { "name": "Arn",                    "dataType": "STRING"}
      ]
    },
    {
      "typeName": "AWS:BillingInfo",
      "version": "1.0",
      "attributes": [
        { "name": "BillingProductId",       "dataType": "STRING"}
      ]
    }
```

**Note**  
For `"typeName": "AWS:InstanceInformation"`, `InstanceStatus` can be one of the following: Active, ConnectionLost, Stopped, Terminated.
With the release of version 2.5, RPM Package Manager replaced the Serial attribute with Epoch. The Epoch attribute is a monotonically increasing integer like Serial. When you inventory by using the `AWS:Application` type, a larger value for Epoch means a newer version. If Epoch values are the same or empty, then use the value of the Version or Release attribute to determine the newer version. 
Some metadata isn't available from Linux instances. Specifically, for `"typeName": "AWS:Network"`, the following metadata types aren't yet supported for Linux instances. They *are* supported for Windows.  
`{ "name": "SubnetMask", "dataType": "STRING"}`  
`{ "name": "DHCPServer", "dataType": "STRING"}`  
`{ "name": "DNSServer", "dataType": "STRING"}`  
`{ "name": "Gateway", "dataType": "STRING"}`
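The Epoch rule described in the note above can be sketched as follows. This is a simplified illustration of the comparison order (Epoch first, then Version and Release), not the full RPM version-comparison algorithm, and the package records are made-up examples.

```python
# Sketch of the Epoch rule: a larger Epoch means a newer version; when
# Epochs are equal or empty, fall back to Version and then Release.
# Simplified illustration only -- not the real rpmvercmp algorithm.
def newer_package(a: dict, b: dict) -> dict:
    def epoch(pkg: dict) -> int:
        return int(pkg.get("Epoch") or "0")  # empty Epoch treated as 0

    if epoch(a) != epoch(b):
        return a if epoch(a) > epoch(b) else b
    # Naive lexicographic fallback on Version, then Release.
    key = lambda pkg: (pkg.get("Version", ""), pkg.get("Release", ""))
    return a if key(a) >= key(b) else b

old = {"Name": "openssl", "Epoch": "",  "Version": "1.0.2", "Release": "1"}
new = {"Name": "openssl", "Epoch": "1", "Version": "1.0.1", "Release": "1"}
winner = newer_package(old, new)
print(winner["Version"])  # Epoch 1 beats an empty Epoch despite the lower Version
```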

# Working with file and Windows registry inventory


AWS Systems Manager Inventory allows you to search and inventory files on Windows Server, Linux, and macOS operating systems. You can also search and inventory the Windows Registry.

**Files**: You can collect metadata information about files, including file names, the time files were created, the time files were last modified and accessed, and file sizes, to name a few. To start collecting file inventory, you specify a file path where you want to perform the inventory, one or more patterns that define the types of files you want to inventory, and whether the path should be traversed recursively. Systems Manager inventories all file metadata for files in the specified path that match the pattern. File inventory uses the following parameter input.

```
{
"Path": string,
"Pattern": array[string],
"Recursive": true,
"DirScanLimit" : number // Optional
}
```
+ **Path**: The directory path where you want to inventory files. For Windows, you can use environment variables such as `%PROGRAMFILES%` as long as the variable maps to a single directory path. For example, `%PATH%` maps to multiple directory paths, so using it causes Inventory to throw an error. 
+ **Pattern**: An array of patterns to identify files.
+ **Recursive**: A Boolean value indicating whether Inventory should recursively traverse the directories.
+ **DirScanLimit**: An optional value specifying how many directories to scan. Use this parameter to minimize performance impact on your managed nodes. Inventory scans a maximum of 5,000 directories.

**Note**  
Inventory collects metadata for a maximum of 500 files across all specified paths.

Here are some examples of how to specify the parameters when performing an inventory of files.
+ On Linux and macOS, collect metadata of .sh files in the `/home/ec2-user` directory, excluding all subdirectories.

  ```
  [{"Path":"/home/ec2-user","Pattern":["*.sh"],"Recursive":false}]
  ```
+ On Windows, collect metadata of all ".exe" files in the Program Files folder, including subdirectories recursively.

  ```
  [{"Path":"C:\Program Files","Pattern":["*.exe"],"Recursive":true}]
  ```
+ On Windows, collect metadata of specific log patterns.

  ```
  [{"Path":"C:\ProgramData\Amazon","Pattern":["*amazon*.log"],"Recursive":true}]
  ```
+ Limit the directory count when performing recursive collection.

  ```
  [{"Path":"C:\Users","Pattern":["*.ps1"],"Recursive":true, "DirScanLimit": 1000}]
  ```
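The parameter semantics shown in the examples above can be sketched in a few lines. This is an illustration of how `Path`, `Pattern`, `Recursive`, and `DirScanLimit` interact, not the agent's actual implementation; the sample files are created in a temporary directory.

```python
import fnmatch
import os
import tempfile

# Sketch of file-inventory semantics: walk Path (recursively if Recursive
# is true), match file names against Pattern, and stop after DirScanLimit
# directories. Illustrative only -- not the SSM Agent's implementation.
def scan(path, patterns, recursive=True, dir_scan_limit=5000):
    matched, dirs_seen = [], 0
    for dirpath, dirnames, filenames in os.walk(path):
        dirs_seen += 1
        if dirs_seen > dir_scan_limit:
            break  # respect the directory scan limit
        for name in filenames:
            if any(fnmatch.fnmatch(name, p) for p in patterns):
                matched.append(os.path.join(dirpath, name))
        if not recursive:
            break  # only the top-level directory

    return matched

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
for rel in ("run.sh", "notes.txt", os.path.join("sub", "deploy.sh")):
    open(os.path.join(root, rel), "w").close()

print(len(scan(root, ["*.sh"], recursive=True)))   # 2
print(len(scan(root, ["*.sh"], recursive=False)))  # 1
```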

**Windows Registry**: You can collect Windows Registry keys and values. You can choose a key path and collect all keys and values recursively. You can also collect a specific registry key and its value for a specific path. Inventory collects the key path, name, type, and the value.

```
{
"Path": string, 
"Recursive": true,
"ValueNames": array[string] // optional
}
```
+ **Path**: The path to the Registry key.
+ **Recursive**: A Boolean value indicating whether Inventory should recursively traverse Registry paths.
+ **ValueNames**: An array of value names for performing inventory of Registry keys. If you use this parameter, Systems Manager inventories only the specified value names for the specified path.

**Note**  
Inventory collects a maximum of 250 Registry key values for all specified paths.

Here are some examples of how to specify the parameters when performing an inventory of the Windows Registry.
+ Collect all keys and values recursively for a specific path.

  ```
  [{"Path":"HKEY_LOCAL_MACHINE\SOFTWARE\Amazon","Recursive": true}]
  ```
+ Collect all keys and values for a specific path (recursive search turned off).

  ```
  [{"Path":"HKEY_LOCAL_MACHINE\SOFTWARE\Intel\PSIS\PSIS_DECODER", "Recursive": false}]
  ```
+ Collect a specific key by using the `ValueNames` option.

  ```
  [{"Path":"HKEY_LOCAL_MACHINE\SOFTWARE\Amazon\MachineImage","ValueNames":["AMIName"]}]
  ```
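The registry parameter semantics above can be simulated over a plain dict (no Windows APIs), to show how `Recursive` and `ValueNames` narrow what Inventory collects. The key paths and values are made-up examples.

```python
# Sketch of Windows Registry inventory semantics, simulated over a dict
# mapping key paths to their value names. Recursive matches everything
# under Path; ValueNames, when given, restricts collection to those names.
# Illustrative only -- key paths and values below are hypothetical.
registry = {
    r"HKEY_LOCAL_MACHINE\SOFTWARE\Amazon\MachineImage": {
        "AMIName": "example-ami",
        "AMIVersion": "2023.3",
    },
    r"HKEY_LOCAL_MACHINE\SOFTWARE\Amazon\SSM": {"Version": "3.3.0.0"},
}

def collect(path, recursive=True, value_names=None):
    items = []
    for key_path, values in registry.items():
        in_scope = key_path.startswith(path) if recursive else key_path == path
        if not in_scope:
            continue
        for name, value in values.items():
            if value_names is None or name in value_names:
                items.append({"KeyPath": key_path, "ValueName": name, "Value": value})
    return items

print(len(collect(r"HKEY_LOCAL_MACHINE\SOFTWARE\Amazon")))  # 3 values in scope
print(collect(r"HKEY_LOCAL_MACHINE\SOFTWARE\Amazon\MachineImage",
              value_names=["AMIName"])[0]["Value"])
```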

# Setting up Systems Manager Inventory

Before you use AWS Systems Manager Inventory to collect metadata about the applications, services, AWS components, and more running on your managed nodes, we recommend that you configure resource data sync to centralize the storage of your inventory data in a single Amazon Simple Storage Service (Amazon S3) bucket. We also recommend that you configure Amazon EventBridge monitoring of inventory events. These processes make it easier to view and manage inventory data and collection.

**Topics**
+ [Creating a resource data sync for Inventory](inventory-create-resource-data-sync.md)
+ [Using EventBridge to monitor Inventory events](systems-manager-inventory-setting-up-eventbridge.md)

# Creating a resource data sync for Inventory


This topic describes how to set up and configure resource data sync for AWS Systems Manager Inventory. For information about resource data sync for Systems Manager Explorer, see [Setting up Systems Manager Explorer to display data from multiple accounts and Regions](Explorer-resource-data-sync.md).

## About resource data sync


You can use Systems Manager resource data sync to send inventory data collected from all of your managed nodes to a single Amazon Simple Storage Service (Amazon S3) bucket. Resource data sync then automatically updates the centralized data when new inventory data is collected. With all inventory data stored in a target Amazon S3 bucket, you can use services like Amazon Athena and Amazon QuickSight to query and analyze the aggregated data.

For example, say that you've configured inventory to collect data about the operating system (OS) and applications running on a fleet of 150 managed nodes. Some of these nodes are located in an on-premises data center, and others are running in Amazon Elastic Compute Cloud (Amazon EC2) across multiple AWS Regions. If you have *not* configured resource data sync, you either need to manually gather the collected inventory data for each managed node, or you have to create scripts to gather this information. You would then need to port the data into an application so that you can run queries and analyze it.

With resource data sync, you perform a one-time operation that synchronizes all inventory data from all of your managed nodes. After the sync is successfully created, Systems Manager creates a baseline of all inventory data and saves it in the target Amazon S3 bucket. When new inventory data is collected, Systems Manager automatically updates the data in the Amazon S3 bucket. You can then quickly and cost-effectively port the data to Amazon Athena and Amazon QuickSight.

Diagram 1 shows how resource data sync aggregates inventory data from Amazon EC2 and other machine types in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment to a target Amazon S3 bucket. This diagram also shows how resource data sync works with multiple AWS accounts and AWS Regions.

**Diagram 1: Resource data sync with multiple AWS accounts and AWS Regions**

![\[Systems Manager resource data sync architecture\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-resource-data-sync-updated.png)


If you delete a managed node, resource data sync preserves the inventory file for the deleted node. For running nodes, however, resource data sync automatically overwrites old inventory files when new files are created and written to the Amazon S3 bucket. If you want to track inventory changes over time, you can use the AWS Config service to track the `SSM:ManagedInstanceInventory` resource type. For more information, see [Getting Started with AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/getting-started.html).

Use the procedures in this section to create a resource data sync for Inventory by using the Amazon S3 and AWS Systems Manager consoles. You can also use AWS CloudFormation to create or delete a resource data sync. To use CloudFormation, add the [AWS::SSM::ResourceDataSync](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-resourcedatasync.html) resource to your CloudFormation template. For information, see one of the following documentation resources:
+ [AWS CloudFormation resource for resource data sync in AWS Systems Manager](https://aws.amazon.com/blogs/mt/aws-cloudformation-resource-for-resource-data-sync-in-aws-systems-manager/) (blog)
+ [Working with AWS CloudFormation Templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html) in the *AWS CloudFormation User Guide*

**Note**  
You can use AWS Key Management Service (AWS KMS) to encrypt inventory data in the Amazon S3 bucket. For an example of how to create an encrypted sync by using the AWS Command Line Interface (AWS CLI) and how to work with the centralized data in Amazon Athena and Amazon QuickSight, see [Walkthrough: Using resource data sync to aggregate inventory data](inventory-resource-data-sync.md). 

## Before you begin


Before you create a resource data sync, use the following procedure to create a central Amazon S3 bucket to store aggregated inventory data. The procedure describes how to assign a bucket policy that allows Systems Manager to write inventory data to the bucket from multiple accounts. If you already have an Amazon S3 bucket that you want to use to aggregate inventory data for resource data sync, then you must configure the bucket to use the policy in the following procedure.

**Note**  
Systems Manager Inventory can't add data to a specified Amazon S3 bucket if that bucket is configured to use Object Lock. Verify that the Amazon S3 bucket you create or choose for resource data sync isn't configured to use Amazon S3 Object Lock. For more information, see [How Amazon S3 Object Lock works](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html) in the *Amazon Simple Storage Service User Guide*.

**To create and configure an Amazon S3 bucket for resource data sync**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Create a bucket to store your aggregated Inventory data. For more information, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service User Guide*. Make a note of the bucket name and the AWS Region where you created it.

1. Choose the **Permissions** tab, and then choose **Bucket Policy**.

1. Copy and paste the following bucket policy into the policy editor. Replace *amzn-s3-demo-bucket* with the name of the S3 bucket you created. Replace each example account ID with a valid AWS account ID number. 

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "SSMBucketPermissionsCheck",
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "s3:GetBucketAcl",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
           },
           {
               "Sid": "SSMBucketDelivery",
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "s3:PutObject",
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket/*/accountid=111122223333/*",
                   "arn:aws:s3:::amzn-s3-demo-bucket/*/accountid=444455556666/*",
                   "arn:aws:s3:::amzn-s3-demo-bucket/*/accountid=123456789012/*",
                   "arn:aws:s3:::amzn-s3-demo-bucket/*/accountid=777788889999/*"
               ],
               "Condition": {
                   "StringEquals": {
                       "s3:x-amz-acl": "bucket-owner-full-control",
                       "aws:SourceAccount": "111122223333"
                   },
                   "ArnLike": {
                       "aws:SourceArn": "arn:aws:ssm:*:111122223333:resource-data-sync/*"
                   }
               }
           }
       ]
   }
   ```

------

1. Save your changes.
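If you aggregate data from many source accounts, maintaining the per-account `Resource` entries by hand is error prone. The following sketch generates the `SSMBucketDelivery` resource list from a list of account IDs. The helper is hypothetical (not part of Systems Manager), and for brevity it omits the `aws:SourceAccount` and `aws:SourceArn` confused-deputy conditions shown in the policy above, which you should keep in a real policy:

```python
import json

def build_sync_bucket_policy(bucket, account_ids):
    """Build a resource data sync bucket policy for a list of source accounts.

    Hypothetical helper: generates one s3:PutObject resource ARN per source
    account, matching the accountid=<ID> key prefixes that the sync writes.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "SSMBucketPermissionsCheck",
                "Effect": "Allow",
                "Principal": {"Service": "ssm.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Sid": "SSMBucketDelivery",
                "Effect": "Allow",
                "Principal": {"Service": "ssm.amazonaws.com"},
                "Action": "s3:PutObject",
                # One object ARN per source account.
                "Resource": [
                    f"arn:aws:s3:::{bucket}/*/accountid={account}/*"
                    for account in account_ids
                ],
                "Condition": {
                    "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
                },
            },
        ],
    }

policy = build_sync_bucket_policy(
    "amzn-s3-demo-bucket", ["111122223333", "444455556666"]
)
print(json.dumps(policy, indent=4))
```

You could paste the printed JSON into the policy editor in step 4, after adding back the `aws:SourceAccount` and `aws:SourceArn` conditions for your accounts.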

## Create a resource data sync for Inventory


Use the following procedure to create a resource data sync for Systems Manager Inventory by using the Systems Manager console. For information about how to create a resource data sync by using the AWS CLI, see [Using the AWS CLI to configure inventory data collection](inventory-collection-cli.md).

**To create a resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. In the **Account management** menu, choose **Resource data sync**.

1. Choose **Create resource data sync**.

1. In the **Sync name** field, enter a name for the sync configuration.

1. In the **Bucket name** field, enter the name of the Amazon S3 bucket you created using the **To create and configure an Amazon S3 bucket for resource data sync** procedure.

1. (Optional) In the **Bucket prefix** field, enter the name of an Amazon S3 bucket prefix (subdirectory).

1. In the **Bucket region** field, choose **This region** if the Amazon S3 bucket you created is located in the current AWS Region. If the bucket is located in a different AWS Region, choose **Another region**, and enter the name of the Region.
**Note**  
If the sync and the target Amazon S3 bucket are located in different AWS Regions, you might be subject to data transfer pricing. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

1. (Optional) In the **KMS Key ARN** field, type or paste a KMS Key ARN to encrypt inventory data in Amazon S3.

1. Choose **Create**.

To synchronize inventory data from multiple AWS Regions, you must create a resource data sync in *each* Region. Repeat this procedure in each AWS Region where you want to collect inventory data and send it to the central Amazon S3 bucket. When you create the sync in each Region, specify the central Amazon S3 bucket in the **Bucket name** field. Then use the **Bucket region** option to choose the Region where you created the central Amazon S3 bucket, as shown in the following screenshot. The next time the association runs to collect inventory data, Systems Manager stores the data in the central Amazon S3 bucket. 

![\[Systems Manager resource data sync from multiple AWS Regions\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-rds-multiple-regions.png)
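The per-Region fan-out described above can also be scripted. The following sketch only builds one sync request per source Region, all pointing at the same central bucket; the bucket name and Region lists are hypothetical, and each request would be passed to `create-resource-data-sync` (AWS CLI or SDK) *in its own source Region*:

```python
# Build one create-resource-data-sync request per source Region. All requests
# target the same central bucket, and each must be issued in its source Region,
# for example: aws ssm create-resource-data-sync --region <source Region> ...
central_bucket = "amzn-s3-demo-bucket"   # hypothetical central bucket name
central_bucket_region = "us-east-2"      # Region where the bucket was created
source_regions = ["us-east-1", "us-west-2", "eu-west-1"]  # hypothetical sources

requests = [
    {
        "Region": region,  # Region in which to issue the call
        "SyncName": f"inventory-sync-{region}",
        "S3Destination": {
            "BucketName": central_bucket,
            "SyncFormat": "JsonSerDe",
            # The destination Region is the bucket's Region, not the source Region.
            "Region": central_bucket_region,
        },
    }
    for region in source_regions
]

for request in requests:
    print(request["Region"], "->", request["S3Destination"]["BucketName"])
```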


## Creating an inventory resource data sync for accounts defined in AWS Organizations


You can synchronize inventory data from AWS accounts defined in AWS Organizations to a central Amazon S3 bucket. After you complete the following procedures, inventory data is synchronized to *individual* Amazon S3 key prefixes in the central bucket. Each key prefix represents a different AWS account ID.

**Before you begin**  
Before you begin, verify that you set up and configured AWS accounts in AWS Organizations. For more information, see [Getting started with AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/rgs_getting-started.html) in the *AWS Organizations User Guide*.

Also, be aware that you must create the organization-based resource data sync for each AWS Region and AWS account defined in AWS Organizations. 

### Creating a central Amazon S3 bucket


Use the following procedure to create a central Amazon S3 bucket to store aggregated inventory data. The procedure describes how to assign a bucket policy that allows Systems Manager to write inventory data to the bucket from your AWS Organizations account ID. If you already have an Amazon S3 bucket that you want to use to aggregate inventory data for resource data sync, then you must configure the bucket to use the policy in the following procedure.

**To create and configure an Amazon S3 bucket for resource data sync for multiple accounts defined in AWS Organizations**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Create a bucket to store your aggregated inventory data. For more information, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service User Guide*. Make a note of the bucket name and the AWS Region where you created it.

1. Choose the **Permissions** tab, and then choose **Bucket Policy**.

1. Copy and paste the following bucket policy into the policy editor. Replace *amzn-s3-demo-bucket* and *organization-id* with the name of the Amazon S3 bucket you created and a valid AWS Organizations organization ID.

   Optionally, replace *bucket-prefix* with the name of an Amazon S3 prefix (subdirectory). If you didn't create a prefix, remove *bucket-prefix*/ from the ARN in the following policy. 

------
#### [ JSON ]

****  

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "SSMBucketPermissionsCheck",
         "Effect": "Allow",
         "Principal": {
           "Service": "ssm.amazonaws.com"
         },
         "Action": "s3:GetBucketAcl",
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
       },
       {
         "Sid": "SSMBucketDelivery",
         "Effect": "Allow",
         "Principal": {
           "Service": "ssm.amazonaws.com"
         },
         "Action": "s3:PutObject",
         "Resource": [
           "arn:aws:s3:::amzn-s3-demo-bucket/bucket-prefix/*/accountid=*/*"
         ],
         "Condition": {
           "StringEquals": {
             "s3:x-amz-acl": "bucket-owner-full-control",
             "aws:SourceOrgID": "organization-id"
           }
         }
       },
       {
         "Sid": "SSMBucketDeliveryTagging",
         "Effect": "Allow",
         "Principal": {
           "Service": "ssm.amazonaws.com"
         },
         "Action": "s3:PutObjectTagging",
         "Resource": [
           "arn:aws:s3:::amzn-s3-demo-bucket/bucket-prefix/*/accountid=*/*"
         ]
       }
     ]
   }
   ```

------

### Create an inventory resource data sync for accounts defined in AWS Organizations


The following procedure describes how to use the AWS CLI to create a resource data sync for accounts that are defined in AWS Organizations. You must use the AWS CLI to perform this task. You must also perform this procedure for each AWS Region and AWS account defined in AWS Organizations.

**To create a resource data sync for an account defined in AWS Organizations (AWS CLI)**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to verify that you don't already have an *AWS Organizations-based* resource data sync. You can have multiple standard resource data syncs alongside an Organizations-based sync, but you can have only one Organizations-based resource data sync.

   ```
   aws ssm list-resource-data-sync
   ```

   If the command returns an existing Organizations-based resource data sync, you must delete it before you can create a new one.

1. Run the following command to create a resource data sync for an account defined in AWS Organizations. For *amzn-s3-demo-bucket*, specify the name of the Amazon S3 bucket you created earlier in this topic. If you created a prefix (subdirectory) for your bucket, specify it for *prefix-name*. For *region*, specify the AWS Region where the central Amazon S3 bucket is located (for example, us-east-2). 

   ```
   aws ssm create-resource-data-sync --sync-name sync-name --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=prefix-name,SyncFormat=JsonSerDe,Region=region,DestinationDataSharing={DestinationDataSharingType=Organization}"
   ```

1. Repeat Steps 2 and 3 for every AWS Region and AWS account where you want to synchronize data to the central Amazon S3 bucket.
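The check in step 2 can also be done programmatically by filtering the `list-resource-data-sync` output for the `DestinationDataSharing` marker. In the following sketch, the field names follow the `ListResourceDataSync` API response shape, and the sample values are hypothetical:

```python
import json

# Hypothetical sample of: aws ssm list-resource-data-sync
# (field names per the ListResourceDataSync API response)
sample_response = json.loads("""
{
  "ResourceDataSyncItems": [
    {
      "SyncName": "standard-sync",
      "S3Destination": {"BucketName": "amzn-s3-demo-bucket", "SyncFormat": "JsonSerDe"}
    },
    {
      "SyncName": "org-sync",
      "S3Destination": {
        "BucketName": "amzn-s3-demo-bucket",
        "SyncFormat": "JsonSerDe",
        "DestinationDataSharing": {"DestinationDataSharingType": "Organization"}
      }
    }
  ]
}
""")

def organization_syncs(response):
    """Return the names of Organizations-based resource data syncs."""
    return [
        item["SyncName"]
        for item in response.get("ResourceDataSyncItems", [])
        if item.get("S3Destination", {})
               .get("DestinationDataSharing", {})
               .get("DestinationDataSharingType") == "Organization"
    ]

print(organization_syncs(sample_response))
```

If the returned list is non-empty, delete the listed sync before creating a new Organizations-based sync.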

### Managing resource data syncs


Each AWS account can have a maximum of five resource data syncs per AWS Region. You can use the AWS Systems Manager Fleet Manager console to manage your resource data syncs.

**To view resource data syncs**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. In the **Account management** dropdown, choose **Resource data syncs**.

1. Select a resource data sync from the table, and then choose **View details** to view information about your resource data sync.

**To delete a resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. In the **Account management** dropdown, choose **Resource data syncs**.

1. Select a resource data sync from the table, and then choose **Delete**.

# Using EventBridge to monitor Inventory events


You can configure a rule in Amazon EventBridge to create an event in response to AWS Systems Manager Inventory resource state changes. EventBridge supports events for the following Inventory state changes. All events are sent on a best-effort basis.

**Custom inventory type deleted for a specific instance**: If a rule is configured to monitor for this event, EventBridge creates an event when a custom inventory type on a specific managed node is deleted. EventBridge sends one event per node per custom inventory type. Here is a sample event.

```
{
    "timestampMillis": 1610042981103,
    "source": "SSM",
    "account": "123456789012",
    "type": "INVENTORY_RESOURCE_STATE_CHANGE",
    "startTime": "Jan 7, 2021 6:09:41 PM",
    "resources": [
        {
            "arn": "arn:aws:ssm:us-east-1:123456789012:managed-instance/i-12345678"
        }
    ],
    "body": {
        "action-status": "succeeded",
        "action": "delete",
        "resource-type": "managed-instance",
        "resource-id": "i-12345678",
        "action-reason": "",
        "type-name": "Custom:MyCustomInventoryType"
    }
}
```

**Custom inventory type deleted event for all instances**: If a rule is configured to monitor for this event, EventBridge creates an event when a custom inventory type is deleted for all managed nodes. Here is a sample event.

```
{
    "timestampMillis": 1610042904712,
    "source": "SSM",
    "account": "123456789012",
    "type": "INVENTORY_RESOURCE_STATE_CHANGE",
    "startTime": "Jan 7, 2021 6:08:24 PM",
    "resources": [
        
    ],
    "body": {
        "action-status": "succeeded",
        "action": "delete-summary",
        "resource-type": "managed-instance",
        "resource-id": "",
        "action-reason": "The delete for type name Custom:SomeCustomInventoryType was completed. The deletion summary is: {\"totalCount\":1,\"remainingCount\":0,\"summaryItems\":[{\"version\":\"1.1\",\"count\":1,\"remainingCount\":0}]}",
        "type-name": "Custom:MyCustomInventoryType"
    }
}
```

**[PutInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutInventory.html) call with old schema version event**: If a rule is configured to monitor for this event, EventBridge creates an event when a `PutInventory` call is made that uses a schema version lower than the current schema version. This event applies to all inventory types. Here is a sample event.

```
{
    "timestampMillis": 1610042629548,
    "source": "SSM",
    "account": "123456789012",
    "type": "INVENTORY_RESOURCE_STATE_CHANGE",
    "startTime": "Jan 7, 2021 6:03:49 PM",
    "resources": [
        {
            "arn": "arn:aws:ssm:us-east-1:123456789012:managed-instance/i-12345678"
        }
    ],
    "body": {
        "action-status": "failed",
        "action": "put",
        "resource-type": "managed-instance",
        "resource-id": "i-01f017c1b2efbe2bc",
        "action-reason": "The inventory item with type name Custom:MyCustomInventoryType was sent with a disabled schema verison 1.0. You must send a version greater than 1.0",
        "type-name": "Custom:MyCustomInventoryType"
    }
}
```
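All three event types share `"type": "INVENTORY_RESOURCE_STATE_CHANGE"` and are distinguished by the `action` and `action-status` fields in `body`. A rule target, such as a Lambda function, could route them with a check like the following sketch (the category names are hypothetical):

```python
import json

def classify_inventory_event(event):
    """Map an Inventory resource state change event to a coarse category."""
    if event.get("type") != "INVENTORY_RESOURCE_STATE_CHANGE":
        return "ignored"
    body = event.get("body", {})
    action, status = body.get("action"), body.get("action-status")
    if action == "delete" and status == "succeeded":
        return "custom-type-deleted-on-node"       # first sample event
    if action == "delete-summary" and status == "succeeded":
        return "custom-type-deleted-all-nodes"     # second sample event
    if action == "put" and status == "failed":
        return "put-with-old-schema-version"       # third sample event
    return "other"

sample = json.loads(
    '{"type": "INVENTORY_RESOURCE_STATE_CHANGE",'
    ' "body": {"action": "put", "action-status": "failed"}}'
)
print(classify_inventory_event(sample))  # put-with-old-schema-version
```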

For information about how to configure EventBridge to monitor for these events, see [Configuring EventBridge for Systems Manager events](monitoring-systems-manager-events.md).

# Configuring inventory collection


This section describes how to configure AWS Systems Manager Inventory collection on one or more managed nodes by using the Systems Manager console. For an example of how to configure inventory collection by using the AWS Command Line Interface (AWS CLI), see [Using the AWS CLI to configure inventory data collection](inventory-collection-cli.md).

When you configure inventory collection, you start by creating an AWS Systems Manager State Manager association. Systems Manager collects the inventory data when the association runs. If you don't create the association first and attempt to invoke the `aws:softwareInventory` plugin by using, for example, AWS Systems Manager Run Command, the system returns the following error: `The aws:softwareInventory plugin can only be invoked via ssm-associate.`

**Note**  
Be aware of the following behavior if you create multiple inventory associations for a managed node:  
+ Each node can be assigned an inventory association that targets *all* nodes (`--targets "Key=InstanceIds,Values=*"`).
+ Each node can also be assigned a specific association that uses either tag key-value pairs or an AWS resource group.
+ If a node is assigned multiple inventory associations, the status shows *Skipped* for the association that hasn't run yet. The association that ran most recently displays the actual status of the inventory association.
+ If a node is assigned multiple inventory associations that each use a tag key-value pair, those inventory associations fail to run on the node because of the tag conflict. The association still runs on nodes that don't have the tag key-value conflict. 

**Before You Begin**  
Before you configure inventory collection, complete the following tasks.
+ Update AWS Systems Manager SSM Agent on the nodes you want to inventory. By running the latest version of SSM Agent, you ensure that you can collect metadata for all supported inventory types. For information about how to update SSM Agent by using State Manager, see [Walkthrough: Automatically update SSM Agent with the AWS CLI](state-manager-update-ssm-agent-cli.md).
+ Verify that you have completed the setup requirements for your Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. For information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).
+ For Microsoft Windows Server nodes, verify that your managed node is configured with Windows PowerShell 3.0 (or later). SSM Agent uses the `ConvertTo-Json` cmdlet in PowerShell to convert Windows update inventory data to the required format.
+ (Optional) Create a resource data sync to centrally store inventory data in an Amazon S3 bucket. Resource data sync then automatically updates the centralized data when new inventory data is collected. For more information, see [Walkthrough: Using resource data sync to aggregate inventory data](inventory-resource-data-sync.md).
+ (Optional) Create a JSON file to collect custom inventory. For more information, see [Working with custom inventory](inventory-custom.md).

## Inventory all managed nodes in your AWS account


You can inventory all managed nodes in your AWS account by creating a global inventory association. A global inventory association performs the following actions:
+ Automatically applies the global inventory configuration (association) to all existing managed nodes in your AWS account. Managed nodes that already have an inventory association are skipped when the global inventory association is applied and runs. When a node is skipped, the detailed status message states `Overridden By Explicit Inventory Association`. Those nodes are skipped by the global association, but they will still report inventory when they run their assigned inventory association.
+ Automatically adds new nodes created in your AWS account to the global inventory association.

**Note**  
If a managed node is configured for the global inventory association, and you assign a specific association to that node, then Systems Manager Inventory deprioritizes the global association and applies the specific association.
Global inventory associations are available in SSM Agent version 2.0.790.0 or later. For information about how to update SSM Agent on your nodes, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample).

### Configuring inventory collection with one click (console)


Use the following procedure to configure Systems Manager Inventory for all managed nodes in your AWS account and in a single AWS Region. 

**To configure all of your managed nodes in the current Region for Systems Manager inventory**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Inventory**.

1. In the **Managed instances with inventory enabled** card, choose **Click here to enable inventory on all instances**.  
![\[Enabling Systems Manager Inventory on all managed nodes.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-one-click-1.png)

   If successful, the console displays the following message.  
![\[Enabling Systems Manager Inventory on all managed nodes.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-one-click-2.png)

   Depending on the number of managed nodes in your account, it can take several minutes for the global inventory association to be applied. Wait a few minutes and then refresh the page. Verify that the graphic changes to reflect that inventory is configured on all of your managed nodes.

### Configuring collection by using the console


This section includes information about how to configure Systems Manager Inventory to collect metadata from your managed nodes by using the Systems Manager console. You can quickly collect metadata from all nodes in a specific AWS account (and any future nodes that might be created in that account) or you can selectively collect inventory data by using tags or node IDs.

**Note**  
Before completing this procedure, check whether a global inventory association already exists. If one does, it is applied automatically to each new instance you launch, and the new instance is inventoried.

**To configure inventory collection**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Inventory**.

1. Choose **Setup Inventory**.

1. In the **Targets** section, identify the nodes where you want to run this operation by choosing one of the following.
   + **Selecting all managed instances in this account** - This option selects all managed nodes for which there is no existing inventory association. If you choose this option, nodes that already had inventory associations are skipped during inventory collection, and shown with a status of **Skipped** in inventory results. For more information, see [Inventory all managed nodes in your AWS account](#inventory-management-inventory-all). 
   + **Specifying a tag** - Use this option to specify a single tag to identify nodes in your account from which you want to collect inventory. If you use a tag, any nodes created in the future with the same tag will also report inventory. If there is an existing inventory association with all nodes, using a tag to select specific nodes as a target for a different inventory overrides node membership in the **All managed instances** target group. Managed nodes with the specified tag are skipped on future inventory collection from **All managed instances**.
   + **Manually selecting instances** - Use this option to choose specific managed nodes in your account. Explicitly choosing specific nodes by using this option overrides inventory associations on the **All managed instances** target. The node is skipped on future inventory collection from **All managed instances**.
**Note**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. In the **Schedule** section, choose how often you want the system to collect inventory metadata from your nodes.

1. In the **Parameters** section, use the lists to turn on or turn off different types of inventory collection. For more information about collecting File and Windows Registry inventory, see [Working with file and Windows registry inventory](inventory-file-and-registry.md).

1. In the **Advanced** section, choose **Sync inventory execution logs to an Amazon S3 bucket** if you want to store the association execution status in an Amazon S3 bucket.

1. Choose **Setup Inventory**. Systems Manager creates a State Manager association and immediately runs Inventory on the nodes.

1. In the navigation pane, choose **State Manager**. Verify that a new association was created that uses the `AWS-GatherSoftwareInventory` document. The association schedule uses a rate expression. Also, verify that the **Status** field shows **Success**. If you chose the option to **Sync inventory execution logs to an Amazon S3 bucket**, then you can view the log data in Amazon S3 after a few minutes. If you want to view inventory data for a specific node, then choose **Managed Instances** in the navigation pane. 

1. Choose a node, and then choose **View details**.

1. On the node details page, choose **Inventory**. Use the **Inventory type** lists to filter the inventory.

# Querying inventory data from multiple Regions and accounts


AWS Systems Manager Inventory integrates with Amazon Athena to help you query inventory data from multiple AWS Regions and AWS accounts. Athena integration uses resource data sync so that you can view inventory data from all of your managed nodes on the **Detailed View** page in the AWS Systems Manager console.

**Important**  
This feature uses AWS Glue to crawl the data in your Amazon Simple Storage Service (Amazon S3) bucket, and Amazon Athena to query the data. Depending on how much data is crawled and queried, you can be charged for using these services. With AWS Glue, you pay an hourly rate, billed by the second, for crawlers (discovering data) and ETL jobs (processing and loading data). With Athena, you're charged based on the amount of data scanned by each query. We encourage you to view the pricing guidelines for these services before you use Amazon Athena integration with Systems Manager Inventory. For more information, see [Amazon Athena pricing](https://aws.amazon.com/athena/pricing/) and [AWS Glue pricing](https://aws.amazon.com/glue/pricing/).

You can view inventory data on the **Detailed View** page in all AWS Regions where Amazon Athena is available. For a list of supported Regions, see [Amazon Athena Service Endpoints](https://docs.aws.amazon.com/general/latest/gr/athena.html#athena_region) in the *Amazon Web Services General Reference*.

**Before you begin**  
Athena integration uses resource data sync. You must set up and configure resource data sync to use this feature. For more information, see [Walkthrough: Using resource data sync to aggregate inventory data](inventory-resource-data-sync.md).

Also, be aware that the **Detailed View** page displays inventory data for the *owner* of the central Amazon S3 bucket used by resource data sync. If you aren't the owner of the central Amazon S3 bucket, then you won't see inventory data on the **Detailed View** page.

## Configuring access


Before you can query and view data from multiple accounts and Regions on the **Detailed View** page in the Systems Manager console, you must configure your IAM entity with permission to view the data.

If the inventory data is stored in an Amazon S3 bucket that uses AWS Key Management Service (AWS KMS) encryption, you must also configure your IAM entity and the `Amazon-GlueServiceRoleForSSM` service role for AWS KMS encryption. 

**Topics**
+ [Configuring your IAM entity to access the Detailed View page](#systems-manager-inventory-query-iam-user)
+ [(Optional) Configure permissions for viewing AWS KMS encrypted data](#systems-manager-inventory-query-kms)

### Configuring your IAM entity to access the Detailed View page


The following are the minimum permissions required to view inventory data on the **Detailed View** page:
+ The `AWSQuicksightAthenaAccess` managed policy
+ The following `iam:PassRole` and additional required permissions block

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGlue",
            "Effect": "Allow",
            "Action": [
                "glue:GetCrawler",
                "glue:GetCrawlers",
                "glue:GetTables",
                "glue:StartCrawler",
                "glue:CreateCrawler"
            ],
            "Resource": "*"
        },
        {
            "Sid": "iamPassRole",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": [
                "arn:aws:iam::111122223333:role/SSMInventoryGlueRole",
                "arn:aws:iam::111122223333:role/SSMInventoryServiceRole"
            ],
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "glue.amazonaws.com"
                }
            }
        },
        {
            "Sid": "iamRoleCreation",
            "Effect": "Allow",
            "Action": [
                "iam:CreateRole",
                "iam:AttachRolePolicy"
            ],
            "Resource": "arn:aws:iam::111122223333:role/*"
        },
        {
            "Sid": "iamPolicyCreation",
            "Effect": "Allow",
            "Action": "iam:CreatePolicy",
            "Resource": "arn:aws:iam::111122223333:policy/*"
        }
    ]
}
```

------

(Optional) If the Amazon S3 bucket used to store inventory data is encrypted by using AWS KMS, you must also add the following block to the policy.

```
{
    "Effect": "Allow",
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": [
        "arn:aws:kms:Region:account_ID:key/key_ID"
    ]
}
```

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

### (Optional) Configure permissions for viewing AWS KMS encrypted data


If the Amazon S3 bucket used to store inventory data is encrypted by using the AWS Key Management Service (AWS KMS), you must configure your IAM entity and the **Amazon-GlueServiceRoleForSSM** role with `kms:Decrypt` permissions for the AWS KMS key. 

**Before you begin**  
To provide the `kms:Decrypt` permissions for the AWS KMS key, add the following policy block to your IAM entity:

```
{
    "Effect": "Allow",
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": [
        "arn:aws:kms:Region:account_ID:key/key_ID"
    ]
}
```

If you haven't done so already, add this policy block with `kms:Decrypt` permissions for the AWS KMS key to your IAM entity before continuing.

Use the following procedure to configure the **Amazon-GlueServiceRoleForSSM** role with `kms:Decrypt` permissions for the AWS KMS key. 

**To configure the Amazon-GlueServiceRoleForSSM role with kms:Decrypt permissions**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. Use the search field to find the **Amazon-GlueServiceRoleForSSM** role, and then choose the role name. The **Summary** page opens.

1. Choose **Add inline policy**. The **Create policy** page opens.

1. Choose the **JSON** tab.

1. Delete the existing JSON text in the editor, and then copy and paste the following policy into the JSON editor. 

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": [
               "arn:aws:kms:us-east-1:111122223333:key/key_ID"
               ]
           }
       ]
   }
   ```

------

1. Choose **Review policy**.

1. On the **Review Policy** page, enter a name in the **Name** field.

1. Choose **Create policy**.

## Querying data on the inventory detailed view page


Use the following procedure to view inventory data from multiple AWS Regions and AWS accounts on the Systems Manager Inventory **Detailed View** page.

**Important**  
The Inventory **Detailed View** page is only available in AWS Regions that offer Amazon Athena. If the following tabs aren't displayed on the Systems Manager Inventory page, it means Athena isn't available in the Region and you can't use the **Detailed View** to query data.  

![\[Displaying Inventory Dashboard | Detailed View | Settings tabs\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-detailed-view-for-error.png)


**To view inventory data from multiple Regions and accounts in the AWS Systems Manager console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Inventory**.

1. Choose the **Detailed View** tab.  
![\[Accessing the AWS Systems Manager Inventory Detailed View page\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-detailed-view.png)

1. Choose the resource data sync for which you want to query data.  
![\[Displaying inventory data in the AWS Systems Manager console\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-display-data.png)

1. In the **Inventory Type** list, choose the type of inventory data that you want to query, and then press Enter.  
![\[Choosing an inventory type in the AWS Systems Manager console\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-type.png)

1. To filter the data, choose the Filter bar, and then choose a filter option.  
![\[Filtering inventory data in the AWS Systems Manager console\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-filter.png)

You can use the **Export to CSV** button to view the current query set in a spreadsheet application such as Microsoft Excel. You can also use the **Query History** and **Run Advanced Queries** buttons to view history details and interact with your data in Amazon Athena.

### Editing the AWS Glue crawler schedule


AWS Glue crawls the inventory data in the central Amazon S3 bucket twice daily by default. If you frequently change the types of data collected on your nodes, you might want to crawl the data more frequently, as described in the following procedure.

**Important**  
AWS Glue charges your AWS account based on an hourly rate, billed by the second, for crawlers (discovering data) and ETL jobs (processing and loading data). Before you change the crawler schedule, view the [AWS Glue pricing](https://aws.amazon.com/glue/pricing/) page.

**To change the inventory data crawler schedule**

1. Open the AWS Glue console at [https://console.aws.amazon.com/glue/](https://console.aws.amazon.com/glue/).

1. In the navigation pane, choose **Crawlers**.

1. In the crawlers list, choose the option next to the Systems Manager Inventory data crawler. The crawler name uses the following format:

   `AWSSystemsManager-s3-bucket-name-Region-account_ID`

1. Choose **Action**, and then choose **Edit crawler**.

1. In the navigation pane, choose **Schedule**.

1. In the **Cron expression** field, specify a new schedule by using a cron format. For more information about the cron format, see [Time-Based Schedules for Jobs and Crawlers](https://docs.aws.amazon.com/glue/latest/dg/monitor-data-warehouse-schedule.html) in the *AWS Glue Developer Guide*.
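   For example, assuming the six-field `cron(Minutes Hours Day-of-month Month Day-of-week Year)` format that AWS Glue uses for time-based schedules, the following expression runs the crawler every four hours, on the hour:

   ```
   cron(0 */4 * * ? *)
   ```

   Note that the day-of-month and day-of-week fields can't both be `*`; one of them must be `?`.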

**Important**  
You can pause the crawler to stop incurring charges from AWS Glue. If you pause the crawler, or if you change the frequency so that the data is crawled less often, then the Inventory **Detailed View** might display data that isn't current.

# Querying an inventory collection by using filters


After you collect inventory data, you can use the filter capabilities in AWS Systems Manager to query a list of managed nodes that meet certain filter criteria. 

**To query nodes based on inventory filters**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Inventory**.

1. In the **Filter by resource groups, tags or inventory types** section, choose the filter box. A list of predefined filters is displayed.

1. Choose an attribute to filter on. For example, choose `AWS:Application`. If prompted, choose a secondary attribute to filter. For example, choose `AWS:Application.Name`. 

1. Choose a delimiter from the list. For example, choose **Begin with**. A text box is displayed in the filter.

1. Enter a value in the text box. For example, enter *Amazon* (SSM Agent is named *Amazon SSM Agent*). 

1. Press Enter. The system returns a list of managed nodes that include an application name that begins with the word *Amazon*.

**Note**  
You can combine multiple filters to refine your search.

# Aggregating inventory data


After you configure your managed nodes for AWS Systems Manager Inventory, you can view aggregated counts of inventory data. For example, say you configured dozens or hundreds of managed nodes to collect the `AWS:Application` inventory type. By using the information in this section, you can see an exact count of how many nodes are configured to collect this data.

You can also see specific inventory details by aggregating on a data type. For example, the `AWS:InstanceInformation` inventory type collects operating system platform information with the `Platform` data type. By aggregating data on the `Platform` data type, you can quickly see how many nodes are running Windows Server, how many are running Linux, and how many are running macOS. 

The procedures in this section describe how to view aggregated counts of inventory data by using the AWS Command Line Interface (AWS CLI). You can also view pre-configured aggregated counts in the AWS Systems Manager console on the **Inventory** page. These pre-configured dashboards are called *Inventory Insights* and they offer one-click remediation of your inventory configuration issues.

Note the following important details about aggregation counts of inventory data:
+ If you terminate a managed node that is configured to collect inventory data, Systems Manager retains the inventory data for 30 days and then deletes it. For running nodes, the system deletes inventory data that is older than 30 days. If you need to store inventory data longer than 30 days, you can use AWS Config to record history or periodically query and upload the data to an Amazon Simple Storage Service (Amazon S3) bucket.
+ If a node was previously configured to report a specific inventory data type, for example `AWS:Network`, and later you change the configuration to stop collecting that type, aggregation counts still show `AWS:Network` data until the node has been terminated and 30 days have passed.

For information about how to quickly configure and collect inventory data from all nodes in a specific AWS account (and any future nodes that might be created in that account), see [Inventory all managed nodes in your AWS account](inventory-collection.md#inventory-management-inventory-all).

**Topics**
+ [Aggregating inventory data to see counts of nodes that collect specific types of data](#inventory-aggregate-type)
+ [Aggregating inventory data with groups to see which nodes are and aren't configured to collect an inventory type](#inventory-aggregate-groups)

## Aggregating inventory data to see counts of nodes that collect specific types of data

You can use the AWS Systems Manager [GetInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetInventory.html) API operation to view aggregated counts of nodes that collect one or more inventory types and data types. For example, the `AWS:InstanceInformation` inventory type allows you to view an aggregate of operating systems by using the GetInventory API operation with the `AWS:InstanceInformation.PlatformType` data type. Here is an example AWS CLI command and output.

```
aws ssm get-inventory --aggregators "Expression=AWS:InstanceInformation.PlatformType"
```

The system returns information like the following.

```
{
   "Entities":[
      {
         "Data":{
            "AWS:InstanceInformation":{
               "Content":[
                  {
                     "Count":"7",
                     "PlatformType":"windows"
                  },
                  {
                     "Count":"5",
                     "PlatformType":"linux"
                  }
               ]
            }
         }
      }
   ]
}
```
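If you process this output in a script, note that `Count` is returned as a string rather than a number. The following Python sketch, using the response shape from the example above, tallies the per-platform totals:

```
# Sample response shape from `aws ssm get-inventory` with a
# PlatformType aggregator; values mirror the example output above.
response = {
    "Entities": [
        {
            "Data": {
                "AWS:InstanceInformation": {
                    "Content": [
                        {"Count": "7", "PlatformType": "windows"},
                        {"Count": "5", "PlatformType": "linux"},
                    ]
                }
            }
        }
    ]
}

def platform_counts(resp):
    """Tally node counts per platform from an aggregated response.

    Count is returned as a string and must be converted to an int.
    """
    counts = {}
    for entity in resp.get("Entities", []):
        for type_data in entity.get("Data", {}).values():
            for item in type_data.get("Content", []):
                platform = item.get("PlatformType", "unknown")
                counts[platform] = counts.get(platform, 0) + int(item["Count"])
    return counts

print(platform_counts(response))  # → {'windows': 7, 'linux': 5}
```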

**Getting started**  
Determine the inventory types and data types for which you want to view counts. You can view a list of inventory types and data types that support aggregation by running the following command in the AWS CLI.

```
aws ssm get-inventory-schema --aggregator
```

The command returns a JSON list of inventory types and data types that support aggregation. The **TypeName** field shows supported inventory types, and the **Name** field shows each data type. For example, in the following list, the `AWS:Application` inventory type includes data types for `Name` and `Version`.

```
{
    "Schemas": [
        {
            "TypeName": "AWS:Application",
            "Version": "1.1",
            "DisplayName": "Application",
            "Attributes": [
                {
                    "DataType": "STRING",
                    "Name": "Name"
                },
                {
                    "DataType": "STRING",
                    "Name": "Version"
                }
            ]
        },
        {
            "TypeName": "AWS:InstanceInformation",
            "Version": "1.0",
            "DisplayName": "Platform",
            "Attributes": [
                {
                    "DataType": "STRING",
                    "Name": "PlatformName"
                },
                {
                    "DataType": "STRING",
                    "Name": "PlatformType"
                },
                {
                    "DataType": "STRING",
                    "Name": "PlatformVersion"
                }
            ]
        },
        {
            "TypeName": "AWS:ResourceGroup",
            "Version": "1.0",
            "DisplayName": "ResourceGroup",
            "Attributes": [
                {
                    "DataType": "STRING",
                    "Name": "Name"
                }
            ]
        },
        {
            "TypeName": "AWS:Service",
            "Version": "1.0",
            "DisplayName": "Service",
            "Attributes": [
                {
                    "DataType": "STRING",
                    "Name": "Name"
                },
                {
                    "DataType": "STRING",
                    "Name": "DisplayName"
                },
                {
                    "DataType": "STRING",
                    "Name": "ServiceType"
                },
                {
                    "DataType": "STRING",
                    "Name": "Status"
                },
                {
                    "DataType": "STRING",
                    "Name": "StartType"
                }
            ]
        },
        {
            "TypeName": "AWS:WindowsRole",
            "Version": "1.0",
            "DisplayName": "WindowsRole",
            "Attributes": [
                {
                    "DataType": "STRING",
                    "Name": "Name"
                },
                {
                    "DataType": "STRING",
                    "Name": "DisplayName"
                },
                {
                    "DataType": "STRING",
                    "Name": "FeatureType"
                },
                {
                    "DataType": "STRING",
                    "Name": "Installed"
                }
            ]
        }
    ]
}
```
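If you script against this output, you can derive every valid aggregator expression by joining each **TypeName** with each attribute **Name**. The following Python sketch uses a trimmed copy of the schema response above:

```
# A trimmed copy of the get-inventory-schema response shown above.
schemas = {
    "Schemas": [
        {
            "TypeName": "AWS:Application",
            "Attributes": [
                {"DataType": "STRING", "Name": "Name"},
                {"DataType": "STRING", "Name": "Version"},
            ],
        },
        {
            "TypeName": "AWS:ResourceGroup",
            "Attributes": [{"DataType": "STRING", "Name": "Name"}],
        },
    ]
}

def aggregator_expressions(resp):
    """Join each TypeName with each attribute Name to form the
    InventoryType.DataType expressions that get-inventory accepts."""
    return [
        f'{schema["TypeName"]}.{attr["Name"]}'
        for schema in resp["Schemas"]
        for attr in schema["Attributes"]
    ]

print(aggregator_expressions(schemas))
# → ['AWS:Application.Name', 'AWS:Application.Version', 'AWS:ResourceGroup.Name']
```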

You can aggregate data for any of the listed inventory types by creating a command that uses the following syntax.

```
aws ssm get-inventory --aggregators "Expression=InventoryType.DataType"
```

Here are some examples.

**Example 1**

This example aggregates a count of the Windows roles used by your nodes.

```
aws ssm get-inventory --aggregators "Expression=AWS:WindowsRole.Name"
```

**Example 2**

This example aggregates a count of the applications installed on your nodes.

```
aws ssm get-inventory --aggregators "Expression=AWS:Application.Name"
```

**Combining multiple aggregators**  
You can also combine multiple inventory types and data types in one command to help you better understand the data. Here are some examples.

**Example 1**

This example aggregates a count of the operating system types used by your nodes. It also returns the specific name of the operating systems.

```
aws ssm get-inventory --aggregators '[{"Expression": "AWS:InstanceInformation.PlatformType", "Aggregators":[{"Expression": "AWS:InstanceInformation.PlatformName"}]}]'
```

**Example 2**

This example aggregates a count of the applications running on your nodes and the specific version of each application.

```
aws ssm get-inventory --aggregators '[{"Expression": "AWS:Application.Name", "Aggregators":[{"Expression": "AWS:Application.Version"}]}]'
```

If you prefer, you can create an aggregation expression with one or more inventory types and data types in a JSON file and call the file from the AWS CLI. The JSON in the file must use the following syntax.

```
[
       {
           "Expression": "string",
           "Aggregators": [
               {
                  "Expression": "string"
               }
           ]
       }
]
```

You must save the file with the .json file extension. 

Here is an example that uses multiple inventory types and data types.

```
[
       {
           "Expression": "AWS:Application.Name",
           "Aggregators": [
               {
                   "Expression": "AWS:Application.Version",
                   "Aggregators": [
                     {
                     "Expression": "AWS:InstanceInformation.PlatformType"
                     }
                   ]
               }
           ]
       }
]
```
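Writing this nesting by hand gets error-prone beyond a couple of levels. As a sketch (a hypothetical helper, not part of the AWS CLI or SDKs), the following Python builds the same nested structure from a flat list of expressions and prints JSON you can save to a file:

```
import json

def nest_aggregators(expressions):
    """Build the nested Aggregators structure from a flat list of
    expressions, ordered outermost to innermost."""
    node = None
    for expr in reversed(expressions):
        agg = {"Expression": expr}
        if node is not None:
            agg["Aggregators"] = [node]
        node = agg
    return [] if node is None else [node]

aggregators = nest_aggregators([
    "AWS:Application.Name",
    "AWS:Application.Version",
    "AWS:InstanceInformation.PlatformType",
])
print(json.dumps(aggregators, indent=4))  # same shape as the example file above
```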

Use the following command to call the file from the AWS CLI. 

```
aws ssm get-inventory --aggregators file://file_name.json
```

The command returns information like the following.

```
{"Entities": 
 [
   {"Data": 
     {"AWS:Application": 
       {"Content": 
         [
           {"Count": "3", 
            "PlatformType": "linux", 
            "Version": "2.6.5", 
            "Name": "audit-libs"}, 
           {"Count": "2", 
            "PlatformType": "windows", 
            "Version": "2.6.5", 
            "Name": "audit-libs"}, 
           {"Count": "4", 
            "PlatformType": "windows", 
            "Version": "6.2.8", 
            "Name": "microsoft office"}, 
           {"Count": "2", 
            "PlatformType": "windows", 
            "Version": "2.6.5", 
            "Name": "chrome"}, 
           {"Count": "1", 
            "PlatformType": "linux", 
            "Version": "2.6.5", 
            "Name": "chrome"}, 
           {"Count": "2", 
            "PlatformType": "linux", 
            "Version": "6.3", 
            "Name": "authconfig"}
         ]
       }
     }, 
    "ResourceType": "ManagedInstance"}
 ]
}
```

## Aggregating inventory data with groups to see which nodes are and aren't configured to collect an inventory type

Groups in Systems Manager Inventory allow you to quickly see a count of which managed nodes are and aren’t configured to collect one or more inventory types. With groups, you specify one or more inventory types and a filter that uses the `exists` operator.

For example, say that you have four managed nodes configured to collect the following inventory types:
+ Node 1: `AWS:Application`
+ Node 2: `AWS:File`
+ Node 3: `AWS:Application`, `AWS:File`
+ Node 4: `AWS:Network`

You can run the following command from the AWS CLI to see how many nodes are configured to collect both the `AWS:Application` and `AWS:File` inventory types. The response also returns a count of how many nodes aren't configured to collect both of these inventory types.

```
aws ssm get-inventory --aggregators 'Groups=[{Name=ApplicationAndFile,Filters=[{Key=TypeName,Values=[AWS:Application],Type=Exists},{Key=TypeName,Values=[AWS:File],Type=Exists}]}]'
```

The command response shows that only one managed node is configured to collect both the `AWS:Application` and `AWS:File` inventory types.

```
{
   "Entities":[
      {
         "Data":{
            "ApplicationAndFile":{
               "Content":[
                  {
                     "notMatchingCount":"3"
                  },
                  {
                     "matchingCount":"1"
                  }
               ]
            }
         }
      }
   ]
}
```

**Note**  
Groups don't return data type counts. Also, you can't drill down into the results to see the IDs of nodes that are or aren't configured to collect the inventory type.
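The counting that the `Exists` group performs can be modeled in a few lines of Python, using the four example nodes above (a sketch of the logic only, not a call to the service):

```
nodes = {
    "node-1": {"AWS:Application"},
    "node-2": {"AWS:File"},
    "node-3": {"AWS:Application", "AWS:File"},
    "node-4": {"AWS:Network"},
}
# A node matches only if it reports every inventory type named in the
# group's filters (AND across filters; each filter uses Type=Exists).
required = {"AWS:Application", "AWS:File"}

matching = sum(1 for types in nodes.values() if required <= types)
not_matching = len(nodes) - matching
print({"matchingCount": matching, "notMatchingCount": not_matching})
# → {'matchingCount': 1, 'notMatchingCount': 3}
```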

If you prefer, you can create an aggregation expression with one or more inventory types in a JSON file and call the file from the AWS CLI. The JSON in the file must use the following syntax:

```
{
   "Aggregators":[
      {
         "Groups":[
            {
               "Name":"Name",
               "Filters":[
                  {
                     "Key":"TypeName",
                     "Values":[
                        "Inventory_type"
                     ],
                     "Type":"Exists"
                  },
                  {
                     "Key":"TypeName",
                     "Values":[
                        "Inventory_type"
                     ],
                     "Type":"Exists"
                  }
               ]
            }
         ]
      }
   ]
}
```

You must save the file with the .json file extension. 

Use the following command to call the file from the AWS CLI. 

```
aws ssm get-inventory --cli-input-json file://file_name.json
```

**Additional examples**  
The following examples show you how to aggregate inventory data to see which managed nodes are and aren't configured to collect the specified inventory types. These examples use the AWS CLI. Each example includes a full command with filters that you can run from the command line and a sample input.json file if you prefer to enter the information in a file.

**Example 1**

This example aggregates a count of nodes that are and aren't configured to collect either the `AWS:Application` or the `AWS:File` inventory types.

Run the following command from the AWS CLI.

```
aws ssm get-inventory --aggregators 'Groups=[{Name=ApplicationORFile,Filters=[{Key=TypeName,Values=[AWS:Application, AWS:File],Type=Exists}]}]'
```

If you prefer to use a file, copy and paste the following sample into a file and save it as input.json.

```
{
   "Aggregators":[
      {
         "Groups":[
            {
               "Name":"ApplicationORFile",
               "Filters":[
                  {
                     "Key":"TypeName",
                     "Values":[
                        "AWS:Application",
                        "AWS:File"
                     ],
                     "Type":"Exists"
                  }
               ]
            }
         ]
      }
   ]
}
```

Run the following command from the AWS CLI.

```
aws ssm get-inventory --cli-input-json file://input.json
```

The command returns information like the following.

```
{
   "Entities":[
      {
         "Data":{
            "ApplicationORFile":{
               "Content":[
                  {
                     "notMatchingCount":"1"
                  },
                  {
                     "matchingCount":"3"
                  }
               ]
            }
         }
      }
   ]
}
```

**Example 2**

This example aggregates a count of nodes that are and aren't configured to collect the `AWS:Application`, `AWS:File`, and `AWS:Network` inventory types.

Run the following command from the AWS CLI.

```
aws ssm get-inventory --aggregators 'Groups=[{Name=Application,Filters=[{Key=TypeName,Values=[AWS:Application],Type=Exists}]}, {Name=File,Filters=[{Key=TypeName,Values=[AWS:File],Type=Exists}]}, {Name=Network,Filters=[{Key=TypeName,Values=[AWS:Network],Type=Exists}]}]'
```

If you prefer to use a file, copy and paste the following sample into a file and save it as input.json.

```
{
   "Aggregators":[
      {
         "Groups":[
            {
               "Name":"Application",
               "Filters":[
                  {
                     "Key":"TypeName",
                     "Values":[
                        "AWS:Application"
                     ],
                     "Type":"Exists"
                  }
               ]
            },
            {
               "Name":"File",
               "Filters":[
                  {
                     "Key":"TypeName",
                     "Values":[
                        "AWS:File"
                     ],
                     "Type":"Exists"
                  }
               ]
            },
            {
               "Name":"Network",
               "Filters":[
                  {
                     "Key":"TypeName",
                     "Values":[
                        "AWS:Network"
                     ],
                     "Type":"Exists"
                  }
               ]
            }
         ]
      }
   ]
}
```

Run the following command from the AWS CLI.

```
aws ssm get-inventory --cli-input-json file://input.json
```

The command returns information like the following.

```
{
   "Entities":[
      {
         "Data":{
            "Application":{
               "Content":[
                  {
                     "notMatchingCount":"2"
                  },
                  {
                     "matchingCount":"2"
                  }
               ]
            },
            "File":{
               "Content":[
                  {
                     "notMatchingCount":"2"
                  },
                  {
                     "matchingCount":"2"
                  }
               ]
            },
            "Network":{
               "Content":[
                  {
                     "notMatchingCount":"3"
                  },
                  {
                     "matchingCount":"1"
                  }
               ]
            }
         }
      }
   ]
}
```

# Working with custom inventory


You can assign any metadata you want to your nodes by creating AWS Systems Manager Inventory *custom inventory*. For example, let's say you manage a large number of servers in racks in your data center, and these servers have been configured as Systems Manager managed nodes. Currently, you store information about server rack location in a spreadsheet. With custom inventory, you can specify the rack location of each node as metadata on the node. When you collect inventory by using Systems Manager, the metadata is collected with other inventory metadata. You can then port all inventory metadata to a central Amazon S3 bucket by using [resource data sync](inventory-resource-data-sync.md) and query the data.

**Note**  
Systems Manager supports a maximum of 20 custom inventory types per AWS account.

To assign custom inventory to a node, you can use the Systems Manager [PutInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutInventory.html) API operation, as described in [Assigning custom inventory metadata to a managed node](inventory-custom-metadata.md), or you can create a custom inventory JSON file and upload it to the node. This section describes how to create the JSON file.

The following example JSON file with custom inventory specifies rack information about an on-premises server. This example specifies one type of custom inventory data (`"TypeName": "Custom:RackInformation"`), with multiple entries under `Content` that describe the data.

```
{
    "SchemaVersion": "1.0",
    "TypeName": "Custom:RackInformation",
    "Content": {
        "Location": "US-EAST-02.CMH.RACK1",
        "InstalledTime": "2016-01-01T01:01:01Z",
        "vendor": "DELL",
        "Zone" : "BJS12",
        "TimeZone": "UTC-8"
      }
 }
```

You can also specify distinct entries in the `Content` section, as shown in the following example.

```
{
"SchemaVersion": "1.0",
"TypeName": "Custom:PuppetModuleInfo",
    "Content": [{
        "Name": "puppetlabs/aws",
        "Version": "1.0"
      },
      {
        "Name": "puppetlabs/dsc",
        "Version": "2.0"
      }
    ]
}
```

The JSON schema for custom inventory requires the `SchemaVersion`, `TypeName`, and `Content` sections, but the information within those sections is user-defined, as the following template shows.

```
{
    "SchemaVersion": "user_defined",
    "TypeName": "Custom:user_defined",
    "Content": {
        "user_defined_attribute1": "user_defined_value1",
        "user_defined_attribute2": "user_defined_value2",
        "user_defined_attribute3": "user_defined_value3",
        "user_defined_attribute4": "user_defined_value4"
      }
 }
```

The value of `TypeName` is limited to 100 characters. Also, the `TypeName` value must begin with the capitalized word `Custom`. For example, `Custom:PuppetModuleInfo`. Therefore, the following examples would result in an exception: `CUSTOM:PuppetModuleInfo`, `custom:PuppetModuleInfo`. 

The `Content` section includes attributes and *data*. These items aren't case-sensitive. However, if you define an attribute (for example, "`Vendor`": "DELL"), then you must reference that attribute consistently in your custom inventory files. If you specify "`Vendor`": "DELL" (using a capital "V" in `Vendor`) in one file, and then you specify "`vendor`": "DELL" (using a lowercase "v" in `vendor`) in another file, the system returns an error.

**Note**  
You must save the file with a `.json` extension and the inventory you define must consist only of string values.
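Before you copy a custom inventory file to a node, you might want to lint it against these constraints. The following Python sketch is a hypothetical helper, not an AWS tool, and checks only the rules described in this section:

```
import json

def validate_custom_inventory(path):
    """Check a custom inventory file against the documented constraints:
    required sections, the Custom: TypeName prefix and 100-character
    limit, and string-only attribute values."""
    with open(path) as f:
        doc = json.load(f)
    errors = []
    for key in ("SchemaVersion", "TypeName", "Content"):
        if key not in doc:
            errors.append(f"missing required section: {key}")
    type_name = doc.get("TypeName", "")
    if not type_name.startswith("Custom:"):
        errors.append("TypeName must begin with 'Custom:' (case-sensitive)")
    if len(type_name) > 100:
        errors.append("TypeName exceeds 100 characters")
    # Content may be a single object or a list of objects.
    content = doc.get("Content")
    entries = content if isinstance(content, list) else [content] if content else []
    for entry in entries:
        for attr, value in (entry or {}).items():
            if not isinstance(value, str):
                errors.append(f"attribute '{attr}' has a non-string value")
    return errors
```

An empty list means the file passes these checks; each string in the result describes one violation.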

After you create the file, you must save it on the node. The following table shows the location where custom inventory JSON files must be stored on the node.



| Operating system | Path | 
| --- | --- | 
|  Linux  |  `/var/lib/amazon/ssm/node-id/inventory/custom`  | 
|  macOS  |  `/opt/aws/ssm/data/node-id/inventory/custom`  | 
|  Windows Server  |  `%SystemDrive%\ProgramData\Amazon\SSM\InstanceData\node-id\inventory\custom`  | 

For an example of how to use custom inventory, see [Get Disk Utilization of Your Fleet Using EC2 Systems Manager Custom Inventory Types](https://aws.amazon.com/blogs/mt/get-disk-utilization-of-your-fleet-using-ec2-systems-manager-custom-inventory-types/).

## Deleting custom inventory


You can use the [DeleteInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DeleteInventory.html) API operation to delete a custom inventory type and the data associated with that type. To delete all data for an inventory type, call the delete-inventory command from the AWS Command Line Interface (AWS CLI). To delete the custom inventory type itself, call delete-inventory with the `SchemaDeleteOption` parameter.

**Note**  
An inventory type is also called an inventory schema.

The `SchemaDeleteOption` parameter includes the following options:
+ **DeleteSchema**: This option deletes the specified custom type and all data associated with it. You can recreate the schema later, if you want.
+ **DisableSchema**: If you choose this option, the system turns off the current version, deletes all data for it, and ignores all new data if the version is less than or equal to the turned off version. You can allow this inventory type again by calling the [PutInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutInventory.html) action for a version greater than the turned off version.

**To delete or turn off custom inventory by using the AWS CLI**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to use the `dry-run` option to see which data will be deleted from the system. This command doesn't delete any data.

   ```
   aws ssm delete-inventory --type-name "Custom:custom_type_name" --dry-run
   ```

   The system returns information like the following.

   ```
   {
      "DeletionSummary":{
         "RemainingCount":3,
         "SummaryItems":[
            {
               "Count":2,
               "RemainingCount":2,
               "Version":"1.0"
            },
            {
               "Count":1,
               "RemainingCount":1,
               "Version":"2.0"
            }
         ],
         "TotalCount":3
      },
      "TypeName":"Custom:custom_type_name"
   }
   ```

   For information about how to understand the delete inventory summary, see [Understanding the delete inventory summary](#delete-custom-inventory-summary).

1. Run the following command to delete all data for a custom inventory type.

   ```
   aws ssm delete-inventory --type-name "Custom:custom_type_name"
   ```
**Note**  
The output of this command doesn't show the deletion progress. For this reason, `TotalCount` and `RemainingCount` are always the same because the system hasn't deleted anything yet. You can use the describe-inventory-deletions command to show the deletion progress, as described later in this topic.

   The system returns information like the following.

   ```
   {
      "DeletionId":"system_generated_deletion_ID",
      "DeletionSummary":{
         "RemainingCount":3,
         "SummaryItems":[
            {
               "Count":2,
               "RemainingCount":2,
               "Version":"1.0"
            },
            {
               "Count":1,
               "RemainingCount":1,
               "Version":"2.0"
            }
         ],
         "TotalCount":3
      },
      "TypeName":"custom_type_name"
   }
   ```

   The system deletes all data for the specified custom inventory type from the Systems Manager Inventory service. 

1. Run the following command. The command performs the following actions for the current version of the inventory type: turns off the current version, deletes all data for it, and ignores all new data if the version is less than or equal to the turned off version. 

   ```
   aws ssm delete-inventory --type-name "Custom:custom_type_name" --schema-delete-option "DisableSchema"
   ```

   The system returns information like the following.

   ```
   {
      "DeletionId":"system_generated_deletion_ID",
      "DeletionSummary":{
         "RemainingCount":3,
         "SummaryItems":[
            {
               "Count":2,
               "RemainingCount":2,
               "Version":"1.0"
            },
            {
               "Count":1,
               "RemainingCount":1,
               "Version":"2.0"
            }
         ],
         "TotalCount":3
      },
      "TypeName":"Custom:custom_type_name"
   }
   ```

   You can view a turned off inventory type by using the following command.

   ```
   aws ssm get-inventory-schema --type-name Custom:custom_type_name
   ```

1. Run the following command to delete an inventory type.

   ```
   aws ssm delete-inventory --type-name "Custom:custom_type_name" --schema-delete-option "DeleteSchema"
   ```

   The system deletes the schema and all inventory data for the specified custom type.

   The system returns information like the following.

   ```
   {
      "DeletionId":"system_generated_deletion_ID",
      "DeletionSummary":{
         "RemainingCount":3,
         "SummaryItems":[
            {
               "Count":2,
               "RemainingCount":2,
               "Version":"1.0"
            },
            {
               "Count":1,
               "RemainingCount":1,
               "Version":"2.0"
            }
         ],
         "TotalCount":3
      },
      "TypeName":"Custom:custom_type_name"
   }
   ```

### Viewing the deletion status


You can check the status of a delete operation by using the `describe-inventory-deletions` AWS CLI command. You can specify a deletion ID to view the status of a specific delete operation. Or, you can omit the deletion ID to view a list of all deletions run in the last 30 days.


1. Run the following command to view the status of a deletion operation. The system returned the deletion ID in the `delete-inventory` summary.

   ```
   aws ssm describe-inventory-deletions --deletion-id system_generated_deletion_ID
   ```

   The system returns the latest status. The delete operation might not be finished yet. The system returns information like the following.

   ```
   {"InventoryDeletions": 
     [
       {"DeletionId": "system_generated_deletion_ID", 
        "DeletionStartTime": 1521744844, 
        "DeletionSummary": 
         {"RemainingCount": 1, 
          "SummaryItems": 
           [
             {"Count": 1, 
              "RemainingCount": 1, 
              "Version": "1.0"}
           ], 
          "TotalCount": 1}, 
        "LastStatus": "InProgress", 
        "LastStatusMessage": "The Delete is in progress", 
        "LastStatusUpdateTime": 1521744844, 
        "TypeName": "Custom:custom_type_name"}
     ]
   }
   ```

   If the delete operation is successful, the `LastStatusMessage` states: Deletion is successful.

   ```
   {"InventoryDeletions": 
     [
       {"DeletionId": "system_generated_deletion_ID", 
        "DeletionStartTime": 1521744844, 
        "DeletionSummary": 
         {"RemainingCount": 0, 
          "SummaryItems": 
           [
             {"Count": 1, 
              "RemainingCount": 0, 
              "Version": "1.0"}
           ], 
          "TotalCount": 1}, 
        "LastStatus": "Complete", 
        "LastStatusMessage": "Deletion is successful", 
        "LastStatusUpdateTime": 1521745253, 
        "TypeName": "Custom:custom_type_name"}
     ]
   }
   ```

1. Run the following command to view a list of all deletions run in the last 30 days.

   ```
   aws ssm describe-inventory-deletions --max-results a number
   ```

   ```
   {"InventoryDeletions": 
     [
       {"DeletionId": "system_generated_deletion_ID", 
        "DeletionStartTime": 1521682552, 
        "DeletionSummary": 
         {"RemainingCount": 0, 
          "SummaryItems": 
           [
             {"Count": 1, 
              "RemainingCount": 0, 
              "Version": "1.0"}
           ], 
          "TotalCount": 1}, 
        "LastStatus": "Complete", 
        "LastStatusMessage": "Deletion is successful", 
        "LastStatusUpdateTime": 1521682852, 
        "TypeName": "Custom:custom_type_name"}, 
       {"DeletionId": "system_generated_deletion_ID", 
        "DeletionStartTime": 1521744844, 
        "DeletionSummary": 
         {"RemainingCount": 0, 
          "SummaryItems": 
           [
             {"Count": 1, 
              "RemainingCount": 0, 
              "Version": "1.0"}
           ], 
          "TotalCount": 1}, 
        "LastStatus": "Complete", 
        "LastStatusMessage": "Deletion is successful", 
        "LastStatusUpdateTime": 1521745253, 
        "TypeName": "Custom:custom_type_name"}, 
       {"DeletionId": "system_generated_deletion_ID", 
        "DeletionStartTime": 1521680145, 
        "DeletionSummary": 
         {"RemainingCount": 0, 
          "SummaryItems": 
           [
             {"Count": 1, 
              "RemainingCount": 0, 
              "Version": "1.0"}
           ], 
          "TotalCount": 1}, 
        "LastStatus": "Complete", 
        "LastStatusMessage": "Deletion is successful", 
        "LastStatusUpdateTime": 1521680471, 
        "TypeName": "Custom:custom_type_name"}
     ], 
    "NextToken": "next-token"
   }
   ```
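When scripting cleanup, you can decide programmatically whether a delete operation has finished by checking the `LastStatus` and `RemainingCount` fields in the `describe-inventory-deletions` response. The following Python sketch illustrates this; the function name and sample values are ours, not part of any AWS SDK.

```python
import json

def deletion_complete(deletion):
    """Return True when an inventory deletion has finished.

    A deletion is done when LastStatus is "Complete" and no
    items remain to be deleted.
    """
    summary = deletion["DeletionSummary"]
    return deletion["LastStatus"] == "Complete" and summary["RemainingCount"] == 0

# Sample response shaped like the describe-inventory-deletions output above.
response = json.loads("""
{"InventoryDeletions": [
   {"DeletionId": "example-id",
    "LastStatus": "Complete",
    "DeletionSummary": {"RemainingCount": 0, "TotalCount": 1,
      "SummaryItems": [{"Count": 1, "RemainingCount": 0, "Version": "1.0"}]},
    "TypeName": "Custom:custom_type_name"}
]}
""")

for d in response["InventoryDeletions"]:
    print(d["DeletionId"], deletion_complete(d))  # example-id True
```

In a polling loop, you would feed the parsed output of each `describe-inventory-deletions` call to a check like this and stop when it returns `True`.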

### Understanding the delete inventory summary


To help you understand the contents of the delete inventory summary, consider the following example. A user assigned Custom:RackSpace inventory to three nodes. Inventory items 1 and 2 use custom type version 1.0 ("SchemaVersion":"1.0"). Inventory item 3 uses custom type version 2.0 ("SchemaVersion":"2.0").

RackSpace custom inventory 1

```
{
   "CaptureTime":"2018-02-19T10:48:55Z",
   "TypeName":"Custom:RackSpace",
   "InstanceId":"i-1234567890",
   "SchemaVersion":"1.0",
   "Content":[
      {
         content of custom type omitted
      }
   ]
}
```

RackSpace custom inventory 2

```
{
   "CaptureTime":"2018-02-19T10:48:55Z",
   "TypeName":"Custom:RackSpace",
   "InstanceId":"i-1234567891",
   "SchemaVersion":"1.0",
   "Content":[
      {
         content of custom type omitted
      }
   ]
}
```

RackSpace custom inventory 3

```
{
   "CaptureTime":"2018-02-19T10:48:55Z",
   "TypeName":"Custom:RackSpace",
   "InstanceId":"i-1234567892",
   "SchemaVersion":"2.0",
   "Content":[
      {
         content of custom type omitted
      }
   ]
}
```

The user runs the following command to preview which data will be deleted.

```
aws ssm delete-inventory --type-name "Custom:RackSpace" --dry-run
```

The system returns information like the following.

```
{
   "DeletionId":"1111-2222-333-444-66666",
   "DeletionSummary":{
      "RemainingCount":3,           
      "TotalCount":3,             
                TotalCount and RemainingCount are the number of items that would be deleted if this was not a dry run. These numbers are the same because the system didn't delete anything.
      "SummaryItems":[
         {
            "Count":2,             The system found two items that use SchemaVersion 1.0. Neither item was deleted.           
            "RemainingCount":2,
            "Version":"1.0"
         },
         {
            "Count":1,             The system found one item that uses SchemaVersion 2.0. This item was not deleted.
            "RemainingCount":1,
            "Version":"2.0"
         }
      ]
   },
   "TypeName":"Custom:RackSpace"
}
```

The user runs the following command to delete the Custom:RackSpace inventory. 

**Note**  
The output of this command doesn't show the deletion progress. For this reason, `TotalCount` and `RemainingCount` are always the same because the system hasn't deleted anything yet. You can use the `describe-inventory-deletions` command to show the deletion progress.

```
aws ssm delete-inventory --type-name "Custom:RackSpace"
```

The system returns information like the following.

```
{
   "DeletionId":"1111-2222-333-444-7777777",
   "DeletionSummary":{
      "RemainingCount":3,           There are three items to delete
      "SummaryItems":[
         {
            "Count":2,              The system found two items that use SchemaVersion 1.0.
            "RemainingCount":2,     
            "Version":"1.0"
         },
         {
            "Count":1,              The system found one item that uses SchemaVersion 2.0.
            "RemainingCount":1,     
            "Version":"2.0"
         }
      ],
      "TotalCount":3                
   },
   "TypeName":"Custom:RackSpace"
}
```

### Viewing inventory delete actions in EventBridge


You can configure Amazon EventBridge to create an event anytime a user deletes custom inventory. EventBridge offers three types of events for custom inventory delete operations:
+ **Delete action for an instance**: Indicates whether the custom inventory for a specific managed node was successfully deleted. 
+ **Delete action summary**: A summary of the delete action.
+ **Warning for turned off custom inventory type**: A warning event if a user called the [PutInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutInventory.html) API operation for a custom inventory type version that was previously turned off.

Here are examples of each event.

**Delete action for an instance**

```
{
   "version":"0",
   "id":"998c9cde-56c0-b38b-707f-0411b3ff9d11",
   "detail-type":"Inventory Resource State Change",
   "source":"aws.ssm",
   "account":"478678815555",
   "time":"2018-05-24T22:24:34Z",
   "region":"us-east-1",
   "resources":[
      "arn:aws:ssm:us-east-1:478678815555:managed-instance/i-0a5feb270fc3f0b97"
   ],
   "detail":{
      "action-status":"succeeded",
      "action":"delete",
      "resource-type":"managed-instance",
      "resource-id":"i-0a5feb270fc3f0b97",
      "action-reason":"",
      "type-name":"Custom:MyInfo"
   }
}
```

**Delete action summary**

```
{
   "version":"0",
   "id":"83898300-f576-5181-7a67-fb3e45e4fad4",
   "detail-type":"Inventory Resource State Change",
   "source":"aws.ssm",
   "account":"478678815555",
   "time":"2018-05-24T22:28:25Z",
   "region":"us-east-1",
   "resources":[

   ],
   "detail":{
      "action-status":"succeeded",
      "action":"delete-summary",
      "resource-type":"managed-instance",
      "resource-id":"",
      "action-reason":"The delete for type name Custom:MyInfo was completed. The deletion summary is: {\"totalCount\":2,\"remainingCount\":0,\"summaryItems\":[{\"version\":\"1.0\",\"count\":2,\"remainingCount\":0}]}",
      "type-name":"Custom:MyInfo"
   }
}
```

**Warning for turned off custom inventory type**

```
{
   "version":"0",
   "id":"49c1855c-9c57-b5d7-8518-b64aeeef5e4a",
   "detail-type":"Inventory Resource State Change",
   "source":"aws.ssm",
   "account":"478678815555",
   "time":"2018-05-24T22:46:58Z",
   "region":"us-east-1",
   "resources":[
      "arn:aws:ssm:us-east-1:478678815555:managed-instance/i-0ee2d86a2cfc371f6"
   ],
   "detail":{
      "action-status":"failed",
      "action":"put",
      "resource-type":"managed-instance",
      "resource-id":"i-0ee2d86a2cfc371f6",
      "action-reason":"The inventory item with type name Custom:MyInfo was sent with a disabled schema version 1.0. You must send a version greater than 1.0",
      "type-name":"Custom:MyInfo"
   }
}
```

Use the following procedure to create an EventBridge rule for custom inventory delete operations. This procedure shows you how to create a rule that sends notifications for custom inventory delete operations to an Amazon SNS topic. Before you begin, verify that you have an Amazon SNS topic, or create a new one. For more information, see [Getting Started](https://docs.aws.amazon.com/sns/latest/dg/GettingStarted.html) in the *Amazon Simple Notification Service Developer Guide*.

**To configure EventBridge for delete inventory operations**

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**.

1. Choose **Create rule**.

1. Enter a name and description for the rule.

   A rule can't have the same name as another rule in the same Region and on the same event bus.

1. For **Event bus**, choose the event bus that you want to associate with this rule. If you want this rule to respond to matching events that come from your own AWS account, select **default**. When an AWS service in your account emits an event, it always goes to your account’s default event bus.

1. For **Rule type**, choose **Rule with an event pattern**.

1. Choose **Next**.

1. For **Event source**, choose **AWS events or EventBridge partner events**.

1. In the **Event pattern** section, choose **Event pattern form**.

1. For **Event source**, choose **AWS services**.

1. For **AWS service**, choose **Systems Manager**.

1. For **Event type**, choose **Inventory**.

1. For **Specific detail type(s)**, choose **Inventory Resource State Change**.

1. Choose **Next**.

1. For **Target types**, choose **AWS service**.

1. For **Select a target**, choose **SNS topic**, and then for **Topic**, choose your topic.

1. In the **Additional settings** section, for **Configure target input**, verify that **Matched event** is selected.

1. Choose **Next**.

1. (Optional) Enter one or more tags for the rule. For more information, see [Tagging Your Amazon EventBridge Resources](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-tagging.html) in the *Amazon EventBridge User Guide*.

1. Choose **Next**.

1. Review the details of the rule and choose **Create rule**.
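The console selections above correspond to an event pattern that matches `Inventory Resource State Change` events emitted by `aws.ssm`. The following Python sketch shows that pattern and a minimal matcher for testing it locally; the `matches` helper is ours for illustration and only handles the simple top-level form used here, not the full EventBridge pattern syntax.

```python
# Event pattern equivalent to the console selections above
# (source: Systems Manager, detail type: Inventory Resource State Change).
event_pattern = {
    "source": ["aws.ssm"],
    "detail-type": ["Inventory Resource State Change"],
}

def matches(pattern, event):
    """Minimal matcher: every pattern key must list the event's value."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

# Sample event shaped like the delete-action example shown earlier.
sample_event = {
    "source": "aws.ssm",
    "detail-type": "Inventory Resource State Change",
    "detail": {"action": "delete", "type-name": "Custom:MyInfo"},
}

print(matches(event_pattern, sample_event))  # True
```

The same pattern, serialized as JSON, is what you would pass to `aws events put-rule --event-pattern` if you created the rule from the CLI instead of the console.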

# Viewing inventory history and change tracking


You can view AWS Systems Manager Inventory history and change tracking for all of your managed nodes by using [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/). AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time. To view inventory history and change tracking, you must turn on the following resources in AWS Config: 
+ SSM:ManagedInstanceInventory
+ SSM:PatchCompliance
+ SSM:AssociationCompliance
+ SSM:FileData

**Note**  
Note the following important details about Inventory history and change tracking:  
If you use AWS Config to track changes in your system, you must configure Systems Manager Inventory to collect `AWS:File` metadata so that you can view file changes in AWS Config (`SSM:FileData`). If you don't, then AWS Config doesn't track file changes on your system.
By turning on SSM:PatchCompliance and SSM:AssociationCompliance, you can view Systems Manager Patch Manager patching and Systems Manager State Manager association compliance history and change tracking. For more information about compliance management for these resources, see [Learn details about Compliance](compliance-about.md). 

The following procedure describes how to turn on inventory history and change-track recording in AWS Config by using the AWS Command Line Interface (AWS CLI). For more information about how to choose and configure these resources in AWS Config, see [Selecting Which Resources AWS Config Records](https://docs.aws.amazon.com/config/latest/developerguide/select-resources.html) in the *AWS Config Developer Guide*. For information about AWS Config pricing, see [Pricing](https://aws.amazon.com/config/pricing/).

**Before you begin**

AWS Config requires AWS Identity and Access Management (IAM) permissions to get configuration details about Systems Manager resources. In the following procedure, you must specify an Amazon Resource Name (ARN) for an IAM role that gives AWS Config permission to access Systems Manager resources. You can attach the `AWS_ConfigRole` managed policy to the IAM role that you assign to AWS Config. For more information about this role, see [AWS managed policy: AWS\_ConfigRole](https://docs.aws.amazon.com/config/latest/developerguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AWS_ConfigRole) in the *AWS Config Developer Guide*. For information about how to create an IAM role and assign the `AWS_ConfigRole` managed policy to that role, see [Creating a role to delegate permissions to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html) in the *IAM User Guide*. 

**To turn on inventory history and change-track recording in AWS Config**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Copy and paste the following JSON sample into a simple text file and save it as recordingGroup.json.

   ```
   {
      "allSupported":false,
      "includeGlobalResourceTypes":false,
      "resourceTypes":[
         "AWS::SSM::AssociationCompliance",
         "AWS::SSM::PatchCompliance",
         "AWS::SSM::ManagedInstanceInventory",
         "AWS::SSM::FileData"
      ]
   }
   ```

1. Run the following command to load the recordingGroup.json file into AWS Config.

   ```
   aws configservice put-configuration-recorder --configuration-recorder name=myRecorder,roleARN=arn:aws:iam::123456789012:role/myConfigRole --recording-group file://recordingGroup.json
   ```

1. Run the following command to start recording inventory history and change tracking.

   ```
   aws configservice start-configuration-recorder --configuration-recorder-name myRecorder
   ```

After you configure history and change tracking, you can drill down into the history for a specific managed node by choosing the **AWS Config** button in the Systems Manager console. You can access the **AWS Config** button from either the **Managed Instances** page or the **Inventory** page. Depending on your monitor size, you might need to scroll to the right side of the page to see the button.

# Stopping data collection and deleting inventory data


If you no longer want to use AWS Systems Manager Inventory to view metadata about your AWS resources, you can stop data collection and delete data that has already been collected. This section includes the following information.

**Topics**
+ [Stopping data collection](#systems-manager-inventory-delete-association)
+ [Deleting an Inventory resource data sync](#systems-manager-inventory-delete-RDS)

## Stopping data collection


When you initially configure Systems Manager to collect inventory data, the system creates a State Manager association that defines the schedule and the resources from which to collect metadata. You can stop data collection by deleting any State Manager associations that use the `AWS-GatherSoftwareInventory` document.

**To delete an Inventory association**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose an association that uses the `AWS-GatherSoftwareInventory` document and then choose **Delete**.

1. Repeat step three for any remaining associations that use the `AWS-GatherSoftwareInventory` document.
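You can also find the inventory associations to delete without the console by filtering the output of `aws ssm list-associations` for the `AWS-GatherSoftwareInventory` document. The following Python sketch shows the filtering logic against a sample response; the function name and sample IDs are ours, and the field names follow the `ListAssociations` API.

```python
def inventory_association_ids(list_associations_response):
    """Return the IDs of associations that use AWS-GatherSoftwareInventory."""
    return [
        assoc["AssociationId"]
        for assoc in list_associations_response.get("Associations", [])
        if assoc.get("Name") == "AWS-GatherSoftwareInventory"
    ]

# Sample shaped like a ListAssociations response.
sample = {
    "Associations": [
        {"Name": "AWS-GatherSoftwareInventory", "AssociationId": "aaaa-1111"},
        {"Name": "AWS-UpdateSSMAgent", "AssociationId": "bbbb-2222"},
    ]
}

# Each returned ID could then be passed to `aws ssm delete-association`.
print(inventory_association_ids(sample))  # ['aaaa-1111']
```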

## Deleting an Inventory resource data sync


If you no longer want to use AWS Systems Manager Inventory to view metadata about your AWS resources, then we also recommend deleting resource data syncs used for inventory data collection.

**To delete an Inventory resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Inventory**.

1. Choose **Resource Data Syncs**.

1. Choose a sync in the list.
**Important**  
Make sure you choose the sync used for Inventory. Systems Manager supports resource data sync for multiple tools. If you choose the wrong sync, you could disrupt data aggregation for Systems Manager Explorer or Systems Manager Compliance.

1. Choose **Delete**.

1. Repeat these steps for any remaining resource data syncs you want to delete.

1. Delete the Amazon Simple Storage Service (Amazon S3) bucket where the data was stored. For information about deleting an Amazon S3 bucket, see [Deleting a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html).

# Assigning custom inventory metadata to a managed node


The following procedure walks you through the process of using the AWS Systems Manager [PutInventory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutInventory.html) API operation to assign custom inventory metadata to a managed node. This example assigns rack location information to a node. For more information about custom inventory, see [Working with custom inventory](inventory-custom.md).

**To assign custom inventory metadata to a node**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to assign rack location information to a node.

   **Linux**

   ```
   aws ssm put-inventory --instance-id "ID" --items '[{"CaptureTime": "2016-08-22T10:01:01Z", "TypeName": "Custom:RackInfo", "Content":[{"RackLocation": "Bay B/Row C/Rack D/Shelf E"}], "SchemaVersion": "1.0"}]'
   ```

   **Windows**

   ```
   aws ssm put-inventory --instance-id "ID" --items "TypeName=Custom:RackInfo,SchemaVersion=1.0,CaptureTime=2021-05-22T10:01:01Z,Content=[{RackLocation='Bay B/Row C/Rack D/Shelf F'}]"
   ```

1. Run the following command to view custom inventory entries for this node.

   ```
   aws ssm list-inventory-entries --instance-id ID --type-name "Custom:RackInfo"
   ```

   The system responds with information like the following.

   ```
   {
       "InstanceId": "ID", 
       "TypeName": "Custom:RackInfo", 
       "Entries": [
           {
               "RackLocation": "Bay B/Row C/Rack D/Shelf E"
           }
       ], 
       "SchemaVersion": "1.0", 
       "CaptureTime": "2016-08-22T10:01:01Z"
   }
   ```

1. Run the following command to view the custom inventory schema.

   ```
   aws ssm get-inventory-schema --type-name Custom:RackInfo
   ```

   The system responds with information like the following.

   ```
   {
       "Schemas": [
           {
               "TypeName": "Custom:RackInfo",
               "Version": "1.0",
               "Attributes": [
                   {
                       "DataType": "STRING",
                       "Name": "RackLocation"
                   }
               ]
           }
       ]
   }
   ```
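The `--items` argument to `put-inventory` shown in this procedure is a JSON array of inventory items, each with the required keys `TypeName`, `SchemaVersion`, `CaptureTime`, and `Content`. If you build the payload in a script rather than typing it inline, a sketch like the following keeps those keys in one place; the helper function name is ours, not part of any AWS SDK.

```python
import json

def rack_info_item(location, capture_time):
    """Build one custom inventory item for the put-inventory --items argument."""
    return {
        "TypeName": "Custom:RackInfo",  # custom types must use the Custom: prefix
        "SchemaVersion": "1.0",
        "CaptureTime": capture_time,    # ISO 8601 UTC, e.g. 2016-08-22T10:01:01Z
        "Content": [{"RackLocation": location}],
    }

items = [rack_info_item("Bay B/Row C/Rack D/Shelf E", "2016-08-22T10:01:01Z")]
payload = json.dumps(items)
print(payload)
```

The resulting string is what you would pass, quoted, as the `--items` value in the `aws ssm put-inventory` command above.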

# Using the AWS CLI to configure inventory data collection


The following procedures walk you through the process of configuring AWS Systems Manager Inventory to collect metadata from your managed nodes. When you configure inventory collection, you start by creating a Systems Manager State Manager association. Systems Manager collects the inventory data when the association is run. If you don't create the association first, and attempt to invoke the `aws:softwareInventory` plugin by using, for example, Systems Manager Run Command, the system returns the following error:

`The aws:softwareInventory plugin can only be invoked via ssm-associate`.

**Note**  
A node can have only one inventory association configured at a time. If you configure a node with two or more inventory associations, the association doesn't run and no inventory data is collected.

## Quickly configure all of your managed nodes for Inventory (CLI)


You can quickly configure all managed nodes in your AWS account and in the current Region to collect inventory data. This is called creating a global inventory association. To create a global inventory association by using the AWS CLI, use the wildcard option for the `instanceIds` value, as shown in the following procedure.

**To configure inventory for all managed nodes in your AWS account and in the current Region (CLI)**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --name AWS-GatherSoftwareInventory \
   --targets Key=InstanceIds,Values=* \
   --schedule-expression "rate(1 day)" \
   --parameters applications=Enabled,awsComponents=Enabled,customInventory=Enabled,instanceDetailedInformation=Enabled,networkConfig=Enabled,services=Enabled,windowsRoles=Enabled,windowsUpdates=Enabled
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --name AWS-GatherSoftwareInventory ^
   --targets Key=InstanceIds,Values=* ^
   --schedule-expression "rate(1 day)" ^
   --parameters applications=Enabled,awsComponents=Enabled,customInventory=Enabled,instanceDetailedInformation=Enabled,networkConfig=Enabled,services=Enabled,windowsRoles=Enabled,windowsUpdates=Enabled
   ```

------

**Note**  
This command doesn't allow Inventory to collect metadata for the Windows Registry or files. To inventory these datatypes, use the next procedure.

## Manually configuring Inventory on your managed nodes (CLI)


Use the following procedure to manually configure AWS Systems Manager Inventory on your managed nodes by using node IDs or tags.

**To manually configure your managed nodes for inventory (CLI)**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to create a State Manager association that runs Systems Manager Inventory on the node. Replace each *example resource placeholder* with your own information. This command configures the service to run every six hours and to collect network configuration, Windows Update, and application metadata from a node.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --name "AWS-GatherSoftwareInventory" \
   --targets "Key=instanceids,Values=an_instance_ID" \
   --schedule-expression "rate(240 minutes)" \
   --output-location "{ \"S3Location\": { \"OutputS3Region\": \"region_ID, for example us-east-2\", \"OutputS3BucketName\": \"amzn-s3-demo-bucket\", \"OutputS3KeyPrefix\": \"Test\" } }" \
   --parameters "networkConfig=Enabled,windowsUpdates=Enabled,applications=Enabled"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --name "AWS-GatherSoftwareInventory" ^
   --targets "Key=instanceids,Values=an_instance_ID" ^
   --schedule-expression "rate(240 minutes)" ^
   --output-location "{ \"S3Location\": { \"OutputS3Region\": \"region_ID, for example us-east-2\", \"OutputS3BucketName\": \"amzn-s3-demo-bucket\", \"OutputS3KeyPrefix\": \"Test\" } }" ^
   --parameters "networkConfig=Enabled,windowsUpdates=Enabled,applications=Enabled"
   ```

------

   The system responds with information like the following.

   ```
   {
       "AssociationDescription": {
           "ScheduleExpression": "rate(240 minutes)",
           "OutputLocation": {
               "S3Location": {
                   "OutputS3KeyPrefix": "Test",
                   "OutputS3BucketName": "Test bucket",
                   "OutputS3Region": "us-east-2"
               }
           },
           "Name": "The name you specified",
           "Parameters": {
               "applications": [
                   "Enabled"
               ],
               "networkConfig": [
                   "Enabled"
               ],
               "windowsUpdates": [
                   "Enabled"
               ]
           },
           "Overview": {
               "Status": "Pending",
               "DetailedStatus": "Creating"
           },
           "AssociationId": "1a2b3c4d5e6f7g-1a2b3c-1a2b3c-1a2b3c-1a2b3c4d5e6f7g",
           "DocumentVersion": "$DEFAULT",
           "LastUpdateAssociationDate": 1480544990.06,
           "Date": 1480544990.06,
           "Targets": [
               {
                   "Values": [
                      "i-02573cafcfEXAMPLE"
                   ],
                   "Key": "InstanceIds"
               }
           ]
       }
   }
   ```

   You can target large groups of nodes by using the `Targets` parameter with EC2 tags. See the following example.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --name "AWS-GatherSoftwareInventory" \
   --targets "Key=tag:Environment,Values=Production" \
   --schedule-expression "rate(240 minutes)" \
   --output-location "{ \"S3Location\": { \"OutputS3Region\": \"us-east-2\", \"OutputS3BucketName\": \"amzn-s3-demo-bucket\", \"OutputS3KeyPrefix\": \"Test\" } }" \
   --parameters "networkConfig=Enabled,windowsUpdates=Enabled,applications=Enabled"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --name "AWS-GatherSoftwareInventory" ^
   --targets "Key=tag:Environment,Values=Production" ^
   --schedule-expression "rate(240 minutes)" ^
   --output-location "{ \"S3Location\": { \"OutputS3Region\": \"us-east-2\", \"OutputS3BucketName\": \"amzn-s3-demo-bucket\", \"OutputS3KeyPrefix\": \"Test\" } }" ^
   --parameters "networkConfig=Enabled,windowsUpdates=Enabled,applications=Enabled"
   ```

------

   You can also inventory files and Windows Registry keys on a Windows Server node by using the `files` and `windowsRegistry` inventory types with expressions. For more information about these inventory types, see [Working with file and Windows registry inventory](inventory-file-and-registry.md).

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --name "AWS-GatherSoftwareInventory" \
   --targets "Key=instanceids,Values=i-0704358e3a3da9eb1" \
   --schedule-expression "rate(240 minutes)" \
   --parameters '{"files":["[{\"Path\": \"C:\\Program Files\", \"Pattern\": [\"*.exe\"], \"Recursive\": true}]"], "windowsRegistry": ["[{\"Path\":\"HKEY_LOCAL_MACHINE\\Software\\Amazon\", \"Recursive\":true}]"]}' \
   --profile dev-pdx
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --name "AWS-GatherSoftwareInventory" ^
   --targets "Key=instanceids,Values=i-0704358e3a3da9eb1" ^
   --schedule-expression "rate(240 minutes)" ^
   --parameters '{"files":["[{\"Path\": \"C:\\Program Files\", \"Pattern\": [\"*.exe\"], \"Recursive\": true}]"], "windowsRegistry": ["[{\"Path\":\"HKEY_LOCAL_MACHINE\\Software\\Amazon\", \"Recursive\":true}]"]}' ^
   --profile dev-pdx
   ```

------

1. Run the following command to view the association status.

   ```
   aws ssm describe-instance-associations-status --instance-id an_instance_ID
   ```

   The system responds with information like the following.

   ```
   {
   "InstanceAssociationStatusInfos": [
            {
               "Status": "Pending",
               "DetailedStatus": "Associated",
               "Name": "reInvent2016PolicyDocumentTest",
               "InstanceId": "i-1a2b3c4d5e6f7g",
               "AssociationId": "1a2b3c4d5e6f7g-1a2b3c-1a2b3c-1a2b3c-1a2b3c4d5e6f7g",
               "DocumentVersion": "1"
           }
   ]
   }
   ```

# Walkthrough: Using resource data sync to aggregate inventory data


The following walkthrough describes how to create a resource data sync configuration for AWS Systems Manager Inventory by using the AWS Command Line Interface (AWS CLI). A resource data sync automatically ports inventory data from all of your managed nodes to a central Amazon Simple Storage Service (Amazon S3) bucket. The sync automatically updates the data in the central Amazon S3 bucket whenever new inventory data is discovered. 

This walkthrough also describes how to use Amazon Athena and Amazon QuickSight to query and analyze the aggregated data. For information about creating a resource data sync by using Systems Manager in the AWS Management Console, see [Walkthrough: Using resource data sync to aggregate inventory data](#inventory-resource-data-sync). For information about querying inventory from multiple AWS Regions and accounts by using Systems Manager in the AWS Management Console, see [Querying inventory data from multiple Regions and accounts](systems-manager-inventory-query.md).

**Note**  
This walkthrough includes information about how to encrypt the sync by using AWS Key Management Service (AWS KMS). Inventory doesn't collect any user-specific, proprietary, or sensitive data so encryption is optional. For more information about AWS KMS, see [AWS Key Management Service Developer Guide](https://docs.aws.amazon.com/kms/latest/developerguide/).

**Before you begin**  
Review or complete the following tasks before you begin the walkthrough in this section:
+ Collect inventory data from your managed nodes. For the purpose of the Amazon Athena and Amazon Quick sections in this walkthrough, we recommend that you collect Application data. For more information about how to collect inventory data, see [Configuring inventory collection](inventory-collection.md) or [Using the AWS CLI to configure inventory data collection](inventory-collection-cli.md).
+ (Optional) If the inventory data is stored in an Amazon Simple Storage Service (Amazon S3) bucket that uses AWS Key Management Service (AWS KMS) encryption, you must also configure your IAM account and the `Amazon-GlueServiceRoleForSSM` service role for AWS KMS encryption. If you don't configure your IAM account and this role, Systems Manager displays `Cannot load Glue tables` when you choose the **Detailed View** tab in the console. For more information, see [(Optional) Configure permissions for viewing AWS KMS encrypted data](systems-manager-inventory-query.md#systems-manager-inventory-query-kms).
+ (Optional) If you want to encrypt the resource data sync by using AWS KMS, then you must either create a new key that includes the following policy, or you must update an existing key and add this policy to it.

------
#### [ JSON ]

****  

  ```
  {
      "Version":"2012-10-17",
      "Id": "ssm-access-policy",
      "Statement": [
          {
              "Sid": "ssm-access-policy-statement",
              "Action": [
                  "kms:GenerateDataKey"
              ],
              "Effect": "Allow",
              "Principal": {
                  "Service": "ssm.amazonaws.com"
              },
              "Resource": "arn:aws:kms:us-east-1:123456789012:key/KMS_key_id",
              "Condition": {
                  "StringLike": {
                      "aws:SourceAccount": "123456789012"
                  },
                  "ArnLike": {
                      "aws:SourceArn": "arn:aws:ssm:*:123456789012:resource-data-sync/*"
                  }
              }
          }
      ]
  }
  ```

------
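If you manage this key policy in code rather than pasting it into the console, you can template it on your account ID and key ARN. The following Python sketch is illustrative (the function name is ours, not part of any AWS SDK); it renders the policy shown above:

```python
import json

def build_sync_key_policy(account_id: str, kms_key_arn: str) -> str:
    """Render the resource data sync key policy for one account and key."""
    policy = {
        "Version": "2012-10-17",
        "Id": "ssm-access-policy",
        "Statement": [
            {
                "Sid": "ssm-access-policy-statement",
                "Action": ["kms:GenerateDataKey"],
                "Effect": "Allow",
                "Principal": {"Service": "ssm.amazonaws.com"},
                "Resource": kms_key_arn,
                "Condition": {
                    # Restrict use of the key to syncs owned by this account.
                    "StringLike": {"aws:SourceAccount": account_id},
                    "ArnLike": {
                        "aws:SourceArn": f"arn:aws:ssm:*:{account_id}:resource-data-sync/*"
                    },
                },
            }
        ],
    }
    return json.dumps(policy, indent=4)

# Render the policy for the sample account used in this walkthrough.
print(build_sync_key_policy(
    "123456789012",
    "arn:aws:kms:us-east-1:123456789012:key/KMS_key_id",
))
```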

**To create a resource data sync for Inventory**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Create a bucket to store your aggregated inventory data. For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon Simple Storage Service User Guide*. Make a note of the bucket name and the AWS Region where you created it.

1. After you create the bucket, choose the **Permissions** tab, and then choose **Bucket Policy**.

1. Copy and paste the following bucket policy into the policy editor. Replace amzn-s3-demo-bucket and *account-id* with the name of the Amazon S3 bucket you created and a valid AWS account ID. When adding multiple accounts, add an additional `aws:SourceAccount` condition string and `aws:SourceArn` ARN for each account. When adding only one account, remove the extra account placeholders from the example. Optionally, replace *bucket-prefix* with the name of an Amazon S3 prefix (subdirectory). If you didn't create a prefix, remove *bucket-prefix/* from the ARN in the policy. 

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",
       "Statement": [
           {
               "Sid": "SSMBucketDelivery",
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "s3:PutObject",
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket/bucket-prefix/*/accountid=111122223333/*"
               ],
               "Condition": {
                   "StringEquals": {
                       "s3:x-amz-acl": "bucket-owner-full-control",
                       "aws:SourceAccount": [
                           "111122223333",
                           "444455556666",
                           "123456789012",
                           "777788889999"
                       ]
                   },
                   "ArnLike": {
                       "aws:SourceArn": [
                           "arn:aws:ssm:*:111122223333:resource-data-sync/*",
                           "arn:aws:ssm:*:444455556666:resource-data-sync/*",
                           "arn:aws:ssm:*:123456789012:resource-data-sync/*",
                           "arn:aws:ssm:*:777788889999:resource-data-sync/*"
                       ]
                   }
               }
           }
       ]
   }
   ```

------
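Because this policy grows by one `Resource` ARN, one `aws:SourceAccount` entry, and one `aws:SourceArn` entry per account, it can be convenient to generate it. The following Python sketch (the helper name is ours, for illustration) builds the same policy shape for any list of account IDs:

```python
import json

def build_sync_bucket_policy(bucket: str, prefix: str, account_ids: list) -> dict:
    """Render the SSMBucketDelivery bucket policy for N source accounts."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "SSMBucketDelivery",
                "Effect": "Allow",
                "Principal": {"Service": "ssm.amazonaws.com"},
                "Action": "s3:PutObject",
                # One delivery path per source account, matching the
                # accountid=... key layout the sync writes.
                "Resource": [
                    f"arn:aws:s3:::{bucket}/{prefix}/*/accountid={acct}/*"
                    for acct in account_ids
                ],
                "Condition": {
                    "StringEquals": {
                        "s3:x-amz-acl": "bucket-owner-full-control",
                        "aws:SourceAccount": account_ids,
                    },
                    "ArnLike": {
                        "aws:SourceArn": [
                            f"arn:aws:ssm:*:{acct}:resource-data-sync/*"
                            for acct in account_ids
                        ]
                    },
                },
            }
        ],
    }

policy = build_sync_bucket_policy(
    "amzn-s3-demo-bucket", "bucket-prefix", ["111122223333", "444455556666"]
)
print(json.dumps(policy, indent=4))
```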

1. (Optional) If you want to encrypt the sync, then you must add the following conditions to the policy listed in the previous step. Add these in the `StringEquals` section.

   ```
   "s3:x-amz-server-side-encryption":"aws:kms",
   "s3:x-amz-server-side-encryption-aws-kms-key-id":"arn:aws:kms:region:account_ID:key/KMS_key_ID"
   ```

   Here is an example:

   ```
   "StringEquals": {
             "s3:x-amz-acl": "bucket-owner-full-control",
             "aws:SourceAccount": "account-id",
             "s3:x-amz-server-side-encryption":"aws:kms",
             "s3:x-amz-server-side-encryption-aws-kms-key-id":"arn:aws:kms:region:account_ID:key/KMS_key_ID"
           }
   ```

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. (Optional) If you want to encrypt the sync, run the following command to verify that the bucket policy is enforcing the AWS KMS key requirement. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws s3 cp ./A_file_in_the_bucket s3://amzn-s3-demo-bucket/prefix/ \
   --sse aws:kms \
   --sse-kms-key-id "arn:aws:kms:region:account_ID:key/KMS_key_id" \
   --region region
   ```

------
#### [ Windows ]

   ```
   aws s3 cp ./A_file_in_the_bucket s3://amzn-s3-demo-bucket/prefix/ ^ 
       --sse aws:kms ^
       --sse-kms-key-id "arn:aws:kms:region:account_ID:key/KMS_key_id" ^
    --region region
   ```

------

1. Run the following command to create a resource data sync configuration with the Amazon S3 bucket you created at the start of this procedure. This command creates a sync from the AWS Region you're logged into.
**Note**  
If the sync and the target Amazon S3 bucket are located in different AWS Regions, you might be subject to data transfer pricing. For more information, see [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

------
#### [ Linux & macOS ]

   ```
   aws ssm create-resource-data-sync \
   --sync-name a_name \
   --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=prefix_name, if_specified,SyncFormat=JsonSerDe,Region=bucket_region"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-resource-data-sync ^
   --sync-name a_name ^
   --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=prefix_name, if_specified,SyncFormat=JsonSerDe,Region=bucket_region"
   ```

------

   You can use the `region` parameter to specify where the sync configuration should be created. In the following example, inventory data from the us-west-1 Region will be synchronized to the Amazon S3 bucket in the us-west-2 Region.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-resource-data-sync \
       --sync-name InventoryDataWest \
       --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=HybridEnv,SyncFormat=JsonSerDe,Region=us-west-2" \
       --region us-west-1
   ```

------
#### [ Windows ]

   ```
   aws ssm create-resource-data-sync ^
   --sync-name InventoryDataWest ^
   --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=HybridEnv,SyncFormat=JsonSerDe,Region=us-west-2" ^
   --region us-west-1
   ```

------

   (Optional) If you want to encrypt the sync by using AWS KMS, run the following command to create the sync. If you encrypt the sync, then the AWS KMS key and the Amazon S3 bucket must be in the same Region.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-resource-data-sync \
   --sync-name sync_name \
   --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=prefix_name, if_specified,SyncFormat=JsonSerDe,AWSKMSKeyARN=arn:aws:kms:region:account_ID:key/KMS_key_ID,Region=bucket_region" \
   --region region
   ```

------
#### [ Windows ]

   ```
   aws ssm create-resource-data-sync ^
   --sync-name sync_name ^
   --s3-destination "BucketName=amzn-s3-demo-bucket,Prefix=prefix_name, if_specified,SyncFormat=JsonSerDe,AWSKMSKeyARN=arn:aws:kms:region:account_ID:key/KMS_key_ID,Region=bucket_region" ^
   --region region
   ```

------

1. Run the following command to view the status of sync configuration. 

   ```
   aws ssm list-resource-data-sync 
   ```

   If you created the sync configuration in a different Region, then you must specify the `region` parameter, as shown in the following example.

   ```
   aws ssm list-resource-data-sync --region us-west-1
   ```

1. After the sync configuration is created successfully, examine the target bucket in Amazon S3. Inventory data should be displayed within a few minutes.
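The sync writes each node's inventory as an object whose key encodes the inventory type and the partition values (`accountid`, `region`, `resourcetype`) that Athena later exposes as columns. The following Python sketch parses such a key; the layout is inferred from the Athena `LOCATION` paths and `PARTITIONED BY` columns used later in this walkthrough, so verify it against the actual objects in your bucket:

```python
def parse_inventory_key(key: str) -> dict:
    """Extract partition values from a resource data sync object key.

    Key layout inferred from the LOCATION/PARTITIONED BY settings used in
    this walkthrough; check it against your bucket's actual objects.
    """
    parts = key.split("/")
    fields = {"inventory_type": parts[1]}  # e.g. "AWS:Application"
    for part in parts[2:]:
        if "=" in part:  # Hive-style partition segments: name=value
            name, value = part.split("=", 1)
            fields[name] = value
    return fields

# Illustrative key, following the pattern bucket-prefix/<type>/accountid=.../
# region=.../resourcetype=.../<node>.json
key = ("bucket-prefix/AWS:Application/accountid=111122223333/"
       "region=us-east-2/resourcetype=ManagedInstanceInventory/i-1a2b3c4d5e6f7g.json")
print(parse_inventory_key(key))
```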

**Working with the Data in Amazon Athena**

The following section describes how to view and query the data in Amazon Athena. Before you begin, we recommend that you learn about Athena. For more information, see [What is Amazon Athena?](https://docs.aws.amazon.com/athena/latest/ug/what-is.html) and [Working with Data](https://docs.aws.amazon.com/athena/latest/ug/work-with-data.html) in the *Amazon Athena User Guide*.

**To view and query the data in Amazon Athena**

1. Open the Athena console at [https://console.aws.amazon.com/athena/](https://console.aws.amazon.com/athena/home).

1. Copy and paste the following statement into the query editor and then choose **Run Query**.

   ```
   CREATE DATABASE ssminventory
   ```

   The system creates a database called ssminventory.

1. Copy and paste the following statement into the query editor and then choose **Run Query**. Replace amzn-s3-demo-bucket and *bucket-prefix* with the name and prefix of the Amazon S3 target.

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_Application (
   Name string,
   ResourceId string,
   ApplicationType string,
   Publisher string,
   Version string,
   InstalledTime string,
   Architecture string,
   URL string,
   Summary string,
   PackageId string
   ) 
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:Application/'
   ```

1. Copy and paste the following statement into the query editor and then choose **Run Query**.

   ```
   MSCK REPAIR TABLE ssminventory.AWS_Application
   ```

   The system partitions the table.
**Note**  
If you create resource data syncs from additional AWS Regions or AWS accounts, then you must run this command again to update the partitions. You might also need to update your Amazon S3 bucket policy.

1. To preview your data, choose the view icon next to the `AWS_Application` table.  
![\[The preview data icon in Amazon Athena.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/sysman-inventory-resource-data-sync-walk.png)

1. Copy and paste the following statement into the query editor and then choose **Run Query**.

   ```
   SELECT a.name, a.version, count(a.version) frequency
   FROM aws_application a
   WHERE a.name = 'aws-cfn-bootstrap'
   GROUP BY a.name, a.version
   ORDER BY frequency DESC
   ```

   The query returns a count of different versions of `aws-cfn-bootstrap`, which is an AWS application present on Amazon Elastic Compute Cloud (Amazon EC2) instances for Linux, macOS, and Windows Server.

1. Individually copy and paste the following statements into the query editor, replace amzn-s3-demo-bucket and *bucket-prefix* with information for Amazon S3, and then choose **Run Query**. These statements set up additional inventory tables in Athena.

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_AWSComponent (
    `ResourceId` string,
     `Name` string,
     `ApplicationType` string,
     `Publisher` string,
     `Version` string,
     `InstalledTime` string,
     `Architecture` string,
     `URL` string
   )
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:AWSComponent/'
   ```

   ```
   MSCK REPAIR TABLE ssminventory.AWS_AWSComponent
   ```

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_WindowsUpdate (
     `ResourceId` string,
     `HotFixId` string,
     `Description` string,
     `InstalledTime` string,
     `InstalledBy` string
   )
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:WindowsUpdate/'
   ```

   ```
   MSCK REPAIR TABLE ssminventory.AWS_WindowsUpdate
   ```

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_InstanceInformation (
     `AgentType` string,
     `AgentVersion` string,
     `ComputerName` string,
     `IamRole` string,
     `InstanceId` string,
     `IpAddress` string,
     `PlatformName` string,
     `PlatformType` string,
     `PlatformVersion` string
   )
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:InstanceInformation/'
   ```

   ```
   MSCK REPAIR TABLE ssminventory.AWS_InstanceInformation
   ```

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_Network (
     `ResourceId` string,
     `Name` string,
     `SubnetMask` string,
     `Gateway` string,
     `DHCPServer` string,
     `DNSServer` string,
     `MacAddress` string,
     `IPV4` string,
     `IPV6` string
   )
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:Network/'
   ```

   ```
   MSCK REPAIR TABLE ssminventory.AWS_Network
   ```

   ```
   CREATE EXTERNAL TABLE IF NOT EXISTS ssminventory.AWS_PatchSummary (
     `ResourceId` string,
     `PatchGroup` string,
     `BaselineId` string,
     `SnapshotId` string,
     `OwnerInformation` string,
     `InstalledCount` int,
     `InstalledOtherCount` int,
     `NotApplicableCount` int,
     `MissingCount` int,
     `FailedCount` int,
     `OperationType` string,
     `OperationStartTime` string,
     `OperationEndTime` string
   )
   PARTITIONED BY (AccountId string, Region string, ResourceType string)
   ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
   WITH SERDEPROPERTIES (
     'serialization.format' = '1'
   ) LOCATION 's3://amzn-s3-demo-bucket/bucket-prefix/AWS:PatchSummary/'
   ```

   ```
   MSCK REPAIR TABLE ssminventory.AWS_PatchSummary
   ```
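If you script the table setup (for example, by submitting each statement through the Athena API), the repeated `MSCK REPAIR TABLE` statements can be generated from the table list. A minimal sketch using the tables created in this walkthrough:

```python
# Tables created earlier in this walkthrough.
tables = [
    "AWS_Application", "AWS_AWSComponent", "AWS_WindowsUpdate",
    "AWS_InstanceInformation", "AWS_Network", "AWS_PatchSummary",
]

# One repair statement per table; run each after creating the table, and
# again whenever new Regions or accounts add partitions.
statements = [f"MSCK REPAIR TABLE ssminventory.{table}" for table in tables]
for statement in statements:
    print(statement)
```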

**Working with the Data in Amazon Quick**

The following section provides an overview with links for building a visualization in Amazon Quick.

**To build a visualization in Amazon Quick**

1. Sign up for [Amazon Quick](https://quicksight.aws/) and then log in to the Quick console.

1. Create a data set from the `AWS_Application` table and any other tables you created. For more information, see [Creating a dataset using Amazon Athena data](https://docs.aws.amazon.com/quicksuite/latest/userguide/create-a-data-set-athena.html) in the *Amazon Quick User Guide*.

1. Join tables. For example, you could join the `instanceid` column from `AWS_InstanceInformation` because it matches the `resourceid` column in other inventory tables. For more information about joining tables, see [Joining data](https://docs.aws.amazon.com/quicksuite/latest/userguide/joining-data.html) in the *Amazon Quick User Guide*.

1. Build a visualization. For more information, see [Analyses and reports: Visualizing data in Amazon Quick](https://docs.aws.amazon.com/quicksuite/latest/userguide/working-with-visuals.html) in the *Amazon Quick User Guide*.

# Troubleshooting problems with Systems Manager Inventory

This topic includes information about how to troubleshoot common errors or problems with AWS Systems Manager Inventory. If you're having trouble viewing your nodes in Systems Manager, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md).

**Topics**
+ [Multiple apply all associations with document '`AWS-GatherSoftwareInventory`' are not supported](#systems-manager-inventory-troubleshooting-multiple)
+ [Inventory execution status never exits pending](#inventory-troubleshooting-pending)
+ [The `AWS-ListWindowsInventory` document fails to run](#inventory-troubleshooting-ListWindowsInventory)
+ [Console doesn't display Inventory Dashboard | Detailed View | Settings tabs](#inventory-troubleshooting-tabs)
+ [UnsupportedAgent](#inventory-troubleshooting-unsupported-agent)
+ [Skipped](#inventory-troubleshooting-skipped)
+ [Failed](#inventory-troubleshooting-failed)
+ [Inventory compliance failed for an Amazon EC2 instance](#inventory-troubleshooting-ec2-compliance)
+ [S3 bucket object contains old data](#systems-manager-inventory-troubleshooting-s3)

## Multiple apply all associations with document '`AWS-GatherSoftwareInventory`' are not supported


An error that `Multiple apply all associations with document 'AWS-GatherSoftwareInventory' are not supported` means that one or more AWS Regions where you're trying to configure an Inventory association *for all nodes* are already configured with an inventory association for all nodes. If necessary, you can delete the existing inventory association for all nodes and then create a new one. To view existing inventory associations, choose **State Manager** in the Systems Manager console and then locate associations that use the `AWS-GatherSoftwareInventory` SSM document. If the existing inventory association for all nodes was created across multiple Regions, and you want to create a new one, you must delete the existing association from each Region where it exists.

## Inventory execution status never exits pending


There are two reasons why inventory collection never exits the `Pending` status:
+ No nodes in the selected AWS Region:

  If you create a global inventory association by using Systems Manager Quick Setup, the status of the inventory association (`AWS-GatherSoftwareInventory` document) shows `Pending` if there are no nodes available in the selected Region.
+ Insufficient permissions:

  An inventory association shows `Pending` if one or more nodes don't have permission to run Systems Manager Inventory. Verify that the AWS Identity and Access Management (IAM) instance profile includes the **AmazonSSMManagedInstanceCore** managed policy. For information about how to add this policy to an instance profile, see [Alternative configuration for EC2 instance permissions](setup-instance-permissions.md#instance-profile-add-permissions).

  At a minimum, the instance profile must have the following IAM permissions.

------
#### [ JSON ]

****  

  ```
  {
      "Version":"2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "ssm:DescribeAssociation",
                  "ssm:ListAssociations",
                  "ssm:ListInstanceAssociations",
                  "ssm:PutInventory",
                  "ssm:PutComplianceItems",
                  "ssm:UpdateAssociationStatus",
                  "ssm:UpdateInstanceAssociationStatus",
                  "ssm:UpdateInstanceInformation",
                  "ssm:GetDocument",
                  "ssm:DescribeDocument"
              ],
              "Resource": "*"
          }
      ]
  }
  ```

------
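When auditing an instance profile from a script, you can diff its policy document against this minimum set of actions. The following Python sketch handles only the simple `Allow`/`Action` statement shape shown above; real policies can be more complex (`NotAction`, explicit denies, and so on), so treat it as a starting point:

```python
# Minimum actions required for inventory, from the policy above.
REQUIRED_ACTIONS = {
    "ssm:DescribeAssociation", "ssm:ListAssociations",
    "ssm:ListInstanceAssociations", "ssm:PutInventory",
    "ssm:PutComplianceItems", "ssm:UpdateAssociationStatus",
    "ssm:UpdateInstanceAssociationStatus", "ssm:UpdateInstanceInformation",
    "ssm:GetDocument", "ssm:DescribeDocument",
}

def missing_inventory_actions(policy_document: dict) -> set:
    """Return required actions not granted by any Allow statement."""
    allowed = set()
    for stmt in policy_document.get("Statement", []):
        if stmt.get("Effect") == "Allow":
            actions = stmt.get("Action", [])
            allowed.update([actions] if isinstance(actions, str) else actions)
    return REQUIRED_ACTIONS - allowed

# Illustrative policy that grants only one of the required actions.
policy = {"Version": "2012-10-17", "Statement": [
    {"Effect": "Allow", "Action": ["ssm:PutInventory"], "Resource": "*"}]}
print(sorted(missing_inventory_actions(policy)))
```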

## The `AWS-ListWindowsInventory` document fails to run


The `AWS-ListWindowsInventory` document is deprecated. Don't use this document to collect inventory. Instead, use one of the processes described in [Configuring inventory collection](inventory-collection.md). 

## Console doesn't display Inventory Dashboard | Detailed View | Settings tabs


The Inventory **Detailed View** page is only available in AWS Regions that offer Amazon Athena. If the following tabs aren't displayed on the Inventory page, Athena isn't available in the Region and you can't use the **Detailed View** to query data.

![\[Displaying Inventory Dashboard | Detailed View | Settings tabs\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/inventory-detailed-view-for-error.png)


## UnsupportedAgent


If the detailed status of an inventory association shows **UnsupportedAgent**, and the **Association status** shows **Failed**, then the version of AWS Systems Manager SSM Agent on the managed node isn't correct. For example, to create a global inventory association (to inventory all nodes in your AWS account), you must use SSM Agent version 2.0.790.0 or later. You can view the agent version running on each of your nodes in the **Agent version** column on the **Managed Instances** page. For information about how to update SSM Agent on your nodes, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample).

## Skipped


If the status of the inventory association for a node shows **Skipped**, this means that a higher-priority inventory association is already running on that node. Systems Manager follows a specific priority order when multiple inventory associations could apply to the same managed node.

### Inventory association priority order


Systems Manager applies inventory associations in the following priority order:

1. **Quick Setup inventory associations** - Associations created using Quick Setup and the unified console. These associations have names that start with `AWS-QuickSetup-SSM-CollectInventory-` and target all managed nodes.

1. **Explicit inventory associations** - Associations that target specific managed nodes using:
   + Instance IDs
   + Tag key-value pairs
   + AWS resource groups

1. **Global inventory associations** - Associations that target all managed nodes (using `--targets "Key=InstanceIds,Values=*"`) but were **not** created through Quick Setup.
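The priority order above reduces to a simple comparison: the association in the highest-priority category runs on the node, and the rest show **Skipped**. The following Python sketch (category names and function name are ours, for illustration) predicts the outcome:

```python
# Lower number = higher priority, per the order documented above.
PRIORITY = {"quick_setup": 0, "explicit": 1, "global": 2}

def association_that_runs(associations: list) -> dict:
    """Return the association that runs; all others show Skipped."""
    return min(associations, key=lambda a: PRIORITY[a["kind"]])

# Scenario 1 below: a Quick Setup association overrides a manual one.
candidates = [
    {"name": "ManualByTag", "kind": "explicit"},
    {"name": "AWS-QuickSetup-SSM-CollectInventory-abcd", "kind": "quick_setup"},
]
print(association_that_runs(candidates)["name"])
# AWS-QuickSetup-SSM-CollectInventory-abcd
```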

### Common scenarios


**Scenario 1: Quick Setup association overrides explicit association**
+ You have a Quick Setup inventory association targeting all instances
+ You create a manual association targeting specific managed nodes by tag
+ Result: The manual association shows `Skipped` with detailed status `OverriddenByExplicitInventoryAssociation`
+ The Quick Setup association continues to collect inventory from all instances

**Scenario 2: Explicit association overrides global association**
+ You have a global inventory association targeting all instances (not created by Quick Setup)
+ You create an association targeting specific instances
+ Result: The global association shows `Skipped` for the specifically targeted instances
+ The explicit association runs on the targeted instances

### Resolution steps


**If you want to use your own inventory association instead of Quick Setup:**

1. **Identify Quick Setup associations**: In the Systems Manager console, go to State Manager and look for associations with names starting with `AWS-QuickSetup-SSM-CollectInventory-`.

1. **Remove Quick Setup configuration**:
   + Go to Quick Setup in the Systems Manager console.
   + Find your inventory collection configuration.
   + Delete the Quick Setup configuration (this removes the associated inventory association).
**Note**  
You don't need to manually delete the association created by Quick Setup.

1. **Verify your association runs**: After removing the Quick Setup configuration, your explicit inventory association should start running successfully.

**If you want to modify existing behavior:**
+ To view all existing inventory associations, choose **State Manager** in the Systems Manager console and locate associations that use the `AWS-GatherSoftwareInventory` SSM document.
+ Remember that each managed node can only have one active inventory association at a time.

**Important**  
Inventory data is still collected from skipped nodes when their assigned (higher-priority) inventory association runs.
Quick Setup inventory associations take precedence over all other types, even those with explicit targeting.
The detailed status message `OverriddenByExplicitInventoryAssociation` appears when any association is overridden by a higher-priority one, regardless of the association type.

## Failed


If the status of the inventory association for a node shows **Failed**, this could mean that the node has multiple inventory associations assigned to it. A node can only have one inventory association assigned at a time. An inventory association uses the `AWS-GatherSoftwareInventory` AWS Systems Manager document (SSM document). You can run the following command by using the AWS Command Line Interface (AWS CLI) to view a list of associations for a node.

```
aws ssm describe-instance-associations-status --instance-id instance-ID
```
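To confirm the conflict from the command output, count how many of the returned associations use the `AWS-GatherSoftwareInventory` document; more than one means the node has multiple inventory associations. A minimal Python sketch (the sample data is illustrative):

```python
# Illustrative response shape from `aws ssm describe-instance-associations-status`.
response = {
    "InstanceAssociationStatusInfos": [
        {"Name": "AWS-GatherSoftwareInventory", "AssociationId": "aaaa", "Status": "Failed"},
        {"Name": "AWS-GatherSoftwareInventory", "AssociationId": "bbbb", "Status": "Skipped"},
        {"Name": "AWS-UpdateSSMAgent", "AssociationId": "cccc", "Status": "Success"},
    ]
}

# A node should have at most one inventory association.
inventory_associations = [
    info for info in response["InstanceAssociationStatusInfos"]
    if info["Name"] == "AWS-GatherSoftwareInventory"
]
print(len(inventory_associations))  # 2 -> conflict: delete all but one
```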

## Inventory compliance failed for an Amazon EC2 instance


Inventory compliance for an Amazon Elastic Compute Cloud (Amazon EC2) instance can fail if you assign multiple inventory associations to the instance. 

 To resolve this issue, delete one or more inventory associations assigned to the instance. For more information, see [Deleting an association](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-state-manager-delete-association.html). 

**Note**  
Be aware of the following behavior if you create multiple inventory associations for a managed node:  
Each node can be assigned an inventory association that targets *all* nodes (`--targets "Key=InstanceIds,Values=*"`).
Each node can also be assigned a specific association that uses either tag key-value pairs or an AWS resource group.
If a node is assigned multiple inventory associations, the status shows *Skipped* for the association that hasn't run. The association that ran most recently displays the actual status of the inventory association. 
If a node is assigned multiple inventory associations and each uses a tag key-value pair, then those inventory associations fail to run on the node because of the tag conflict. The association still runs on nodes that don't have the tag key-value conflict. 

## S3 bucket object contains old data


Data inside the Amazon S3 bucket object is updated only when the inventory association runs successfully and new data is discovered. If the association runs and fails, the Amazon S3 bucket object for the node is still updated, but the data inside the object isn't. In that case, you see old data in the Amazon S3 bucket object.

# AWS Systems Manager Patch Manager

Patch Manager, a tool in AWS Systems Manager, automates the process of patching managed nodes with both security-related updates and other types of updates.

**Note**  
Systems Manager provides support for *patch policies* in Quick Setup, a tool in AWS Systems Manager. Using patch policies is the recommended method for configuring your patching operations. Using a single patch policy configuration, you can define patching for all accounts in all Regions in your organization; for only the accounts and Regions you choose; or for a single account-Region pair. For more information, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).

You can use Patch Manager to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for applications released by Microsoft.) You can use Patch Manager to install Service Packs on Windows nodes and perform minor version upgrades on Linux nodes. You can patch fleets of Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, on-premises servers, and virtual machines (VMs) by operating system type. This includes supported versions of several operating systems, as listed in [Patch Manager prerequisites](patch-manager-prerequisites.md). You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches. To get started with Patch Manager, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/patch-manager). In the navigation pane, choose **Patch Manager**.

AWS doesn't test patches before making them available in Patch Manager. Also, Patch Manager doesn't support upgrading major versions of operating systems, such as Windows Server 2016 to Windows Server 2019, or Red Hat Enterprise Linux (RHEL) 7.0 to RHEL 8.0.

For Linux-based operating system types that report a severity level for patches, Patch Manager uses the severity level reported by the software publisher for the update notice or individual patch. Patch Manager doesn't derive severity levels from third-party sources, such as the [Common Vulnerability Scoring System](https://www.first.org/cvss/) (CVSS), or from metrics released by the [National Vulnerability Database](https://nvd.nist.gov/vuln) (NVD).

## How can Patch Manager benefit my organization?


Patch Manager automates the process of patching managed nodes with both security-related updates and other types of updates. It provides several key benefits:
+ **Centralized patching control** – Using patch policies, you can set up recurring patching operations for all accounts in all Regions in your organization, specific accounts and Regions, or a single account-Region pair.
+ **Flexible patching operations** – You can choose to scan instances to see only a report of missing patches, or scan and automatically install all missing patches.
+ **Comprehensive compliance reporting** – After scanning operations, you can view detailed information about which managed nodes are out of patch compliance and which patches are missing.
+ **Cross-platform support** – Patch Manager supports multiple operating systems including various Linux distributions, macOS, and Windows Server.
+ **Custom patch baselines** – You can define what constitutes patch compliance for your organization through custom patch baselines that specify which patches are approved for installation.
+ **Integration with other AWS services** – Patch Manager integrates with AWS Organizations, AWS Security Hub CSPM, AWS CloudTrail, and AWS Config for enhanced management and security.
+ **Deterministic upgrades** – Support for deterministic upgrades through versioned repositories for operating systems like Amazon Linux 2023.

## Who should use Patch Manager?


Patch Manager is designed for the following:
+ IT administrators who need to maintain patch compliance across their fleet of managed nodes
+ Operations managers who need visibility into patch compliance status across their infrastructure
+ Cloud architects who want to implement automated patching solutions at scale
+ DevOps engineers who need to integrate patching into their operational workflows
+ Organizations with multi-account/multi-Region deployments who need centralized patch management
+ Anyone responsible for maintaining the security posture and operational health of AWS managed nodes, edge devices, on-premises servers, and virtual machines

## What are the main features of Patch Manager?


Patch Manager offers several key features:
+ **Patch policies** – Configure patching operations across multiple AWS accounts and Regions using a single policy through integration with AWS Organizations.
+ **Custom patch baselines** – Define rules for auto-approving patches within days of their release, along with approved and rejected patch lists.
+ **Multiple patching methods** – Choose from patch policies, maintenance windows, or on-demand "Patch now" operations to meet your specific needs.
+ **Compliance reporting** – Generate detailed reports on patch compliance status that can be sent to an Amazon S3 bucket in CSV format.
+ **Cross-platform support** – Patch both operating systems and applications across Windows Server, various Linux distributions, and macOS.
+ **Scheduling flexibility** – Set different schedules for scanning and installing patches using custom CRON or Rate expressions.
+ **Lifecycle hooks** – Run custom scripts before and after patching operations using Systems Manager documents.
+ **Security focus** – By default, Patch Manager focuses on security-related updates rather than installing all available patches.
+ **Rate control** – Configure concurrency and error thresholds for patching operations to minimize operational impact.

## What is compliance in Patch Manager?


The benchmark for what constitutes *patch compliance* for the managed nodes in your Systems Manager fleets is not defined by AWS, by operating system (OS) vendors, or by third parties such as security consulting firms.

Instead, you define what patch compliance means for managed nodes in your organization or account in a *patch baseline*. A patch baseline is a configuration that specifies rules for which patches must be installed on a managed node. A managed node is patch compliant when it is up to date with all the patches that meet the approval criteria that you specify in the patch baseline. 

Note that being *compliant* with a patch baseline doesn't mean that a managed node is necessarily *secure*. Compliant means that the patches defined by the patch baseline that are both *available* and *approved* have been installed on the node. The overall security of a managed node is determined by many factors outside the scope of Patch Manager. For more information, see [Security in AWS Systems Manager](security.md).

Each patch baseline is a configuration for a specific supported operating system (OS) type, such as Red Hat Enterprise Linux (RHEL), macOS, or Windows Server. A patch baseline can define patching rules for all supported versions of an OS or be limited to only those you specify, such as RHEL 7.8 and RHEL 9.3.

In a patch baseline, you could specify that all patches of certain classifications and severity levels are approved for installation. For example, you might include all patches classified as `Security` but exclude other classifications, such as `Bugfix` or `Enhancement`. And you could include all patches with a severity of `Critical` and exclude others, such as `Important` and `Moderate`.

You can also define patches explicitly in a patch baseline by adding their IDs to lists of specific patches to approve or reject, such as `KB2736693` for Windows Server or `dbus.x86_64:1:1.12.28-1.amzn2023.0.1` for Amazon Linux 2023 (AL2023). You can optionally specify a certain number of days to wait for patching after a patch becomes available. For Linux and macOS, you have the option of specifying an external list of patches for compliance (an Install Override list) instead of those defined by the patch baseline rules.
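As a sketch, approval rules like these are expressed in the `ApprovalRules` structure when you create a patch baseline with the AWS CLI or API. The following fragment is illustrative only; it approves `Security`-classified patches of `Critical` severity seven days after release:

```
{
  "PatchRules": [
    {
      "PatchFilterGroup": {
        "PatchFilters": [
          { "Key": "CLASSIFICATION", "Values": ["Security"] },
          { "Key": "SEVERITY", "Values": ["Critical"] }
        ]
      },
      "ApproveAfterDays": 7
    }
  ]
}
```

You would pass a structure like this to the `--approval-rules` parameter of `aws ssm create-patch-baseline`.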

When a patching operation runs, Patch Manager compares the patches currently applied to a managed node to those that should be applied according to the rules set up in the patch baseline or an Install Override list. You can choose for Patch Manager to show you only a report of missing patches (a `Scan` operation), or you can choose for Patch Manager to automatically install all patches it finds are missing from a managed node (a `Scan and install` operation).
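As a minimal sketch, a `Scan` operation can be started directly with Run Command using the `AWS-RunPatchBaseline` document. The instance ID below is a placeholder:

```shell
# Report missing patches without installing them (Scan only).
aws ssm send-command \
    --document-name "AWS-RunPatchBaseline" \
    --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
    --parameters 'Operation=Scan'
```

Changing `Operation=Scan` to `Operation=Install` performs a `Scan and install` operation instead.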

**Note**  
Patch compliance data represents a point-in-time snapshot from the latest successful patching operation. Each compliance report contains a capture time that identifies when the compliance status was calculated. When reviewing compliance data, consider the capture time to determine if the operation was executed as expected.

Patch Manager provides predefined patch baselines that you can use for your patching operations; however, these predefined configurations are provided as examples and not as recommended best practices. We recommend that you create custom patch baselines of your own to exercise greater control over what constitutes patch compliance for your fleet.

For more information about patch baselines, see the following topics:
+ [Predefined and custom patch baselines](patch-manager-predefined-and-custom-patch-baselines.md)
+ [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md)
+ [Viewing AWS predefined patch baselines](patch-manager-view-predefined-patch-baselines.md)
+ [Working with custom patch baselines](patch-manager-manage-patch-baselines.md)
+ [Working with patch compliance reports](patch-manager-compliance-reports.md)

## Primary components


Before you start working with the Patch Manager tool, you should familiarize yourself with some major components and features of the tool's patching operations.

**Patch baselines**  
Patch Manager uses *patch baselines*, which include rules for auto-approving patches within days of their release, in addition to optional lists of approved and rejected patches. When a patching operation runs, Patch Manager compares the patches currently applied to a managed node to those that should be applied according to the rules set up in the patch baseline. You can choose for Patch Manager to show you only a report of missing patches (a `Scan` operation), or you can choose for Patch Manager to automatically install all patches it finds are missing from a managed node (a `Scan and install` operation).

**Patching operation methods**  
Patch Manager currently offers four methods for running `Scan` and `Scan and install` operations:
+ **(Recommended) A patch policy configured in Quick Setup** – Based on integration with AWS Organizations, a single patch policy can define patching schedules and patch baselines for an entire organization, including multiple AWS accounts and all AWS Regions those accounts operate in. A patch policy can also target only some organizational units (OUs) in an organization. You can use a single patch policy to scan and install on different schedules. For more information, see [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md) and [Patch policy configurations in Quick Setup](patch-manager-policies.md).
+ **A Host Management option configured in Quick Setup** – Host Management configurations are also supported by integration with AWS Organizations, making it possible to run a patching operation across an entire organization. However, this option is limited to scanning for missing patches using the current default patch baseline and providing results in compliance reports. This operation method can't install patches. For more information, see [Set up Amazon EC2 host management using Quick Setup](quick-setup-host-management.md).
+ **A maintenance window to run a patch `Scan` or `Install` task** – A maintenance window, which you set up in the Systems Manager tool called Maintenance Windows, can be configured to run different types of tasks on a schedule you define. A Run Command-type task can be used to run `Scan` or `Scan and install` tasks on a set of managed nodes that you choose. Each maintenance window task can target managed nodes in only a single AWS account-AWS Region pair. For more information, see [Tutorial: Create a maintenance window for patching using the console](maintenance-window-tutorial-patching.md).
+ **An on-demand Patch now operation in Patch Manager** – The **Patch now** option lets you bypass schedule setups when you need to patch managed nodes as quickly as possible. Using **Patch now**, you specify whether to run a `Scan` or `Scan and install` operation and which managed nodes to run the operation on. You can also choose to run Systems Manager documents (SSM documents) as lifecycle hooks during the patching operation. Each **Patch now** operation can target managed nodes in only a single AWS account-AWS Region pair. For more information, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md).

**Compliance reporting**  
After a `Scan` operation, you can use the Systems Manager console to view information about which of your managed nodes are out of patch compliance and which patches are missing from each of those nodes. You can also generate patch compliance reports in .csv format that are sent to an Amazon Simple Storage Service (Amazon S3) bucket of your choice. You can generate one-time reports, or generate reports on a regular schedule. For a single managed node, reports include details of all patches for the node. For a report on all managed nodes, only a summary of how many patches are missing is provided. After a report is generated, you can use a tool like Amazon QuickSight to import and analyze the data. For more information, see [Working with patch compliance reports](patch-manager-compliance-reports.md).
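For a quick command-line view of the same data, you can retrieve the patch state of a managed node with the AWS CLI. The instance ID below is a placeholder:

```shell
# Summarize patch compliance for one managed node, including counts
# of installed, missing, and failed patches, and the operation type
# and end time of the most recent patching operation.
aws ssm describe-instance-patch-states \
    --instance-ids "i-02573cafcfEXAMPLE"
```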

**Note**  
A compliance item generated through the use of a patch policy has an execution type of `PatchPolicy`. A compliance item not generated in a patch policy operation has an execution type of `Command`.

**Integrations**  
Patch Manager integrates with the following other AWS services: 
+ **AWS Identity and Access Management (IAM)** – Use IAM to control which users, groups, and roles have access to Patch Manager operations. For more information, see [How AWS Systems Manager works with IAM](security_iam_service-with-iam.md) and [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). 
+ **AWS CloudTrail** – Use CloudTrail to record an auditable history of patching operation events initiated by users, roles, or groups. For more information, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md).
+ **AWS Security Hub CSPM** – Patch compliance data from Patch Manager can be sent to AWS Security Hub CSPM. Security Hub CSPM gives you a comprehensive view of your high-priority security alerts and compliance status. It also monitors the patching status of your fleet. For more information, see [Integrating Patch Manager with AWS Security Hub CSPM](patch-manager-security-hub-integration.md). 
+ **AWS Config** – Set up recording in AWS Config to view Amazon EC2 instance management data in the Patch Manager Dashboard. For more information, see [Viewing patch Dashboard summaries](patch-manager-view-dashboard-summaries.md).

**Topics**
+ [How can Patch Manager benefit my organization?](#how-can-patch-manager-benefit-my-organization)
+ [Who should use Patch Manager?](#who-should-use-patch-manager)
+ [What are the main features of Patch Manager?](#what-are-the-main-features-of-patch-manager)
+ [What is compliance in Patch Manager?](#patch-manager-definition-of-compliance)
+ [Primary components](#primary-components)
+ [Patch policy configurations in Quick Setup](patch-manager-policies.md)
+ [Patch Manager prerequisites](patch-manager-prerequisites.md)
+ [How Patch Manager operations work](patch-manager-patching-operations.md)
+ [SSM Command documents for patching managed nodes](patch-manager-ssm-documents.md)
+ [Patch baselines](patch-manager-patch-baselines.md)
+ [Using Kernel Live Patching on Amazon Linux 2 managed nodes](patch-manager-kernel-live-patching.md)
+ [Working with Patch Manager resources and compliance using the console](patch-manager-console.md)
+ [Working with Patch Manager resources using the AWS CLI](patch-manager-cli-commands.md)
+ [AWS Systems Manager Patch Manager tutorials](patch-manager-tutorials.md)
+ [Troubleshooting Patch Manager](patch-manager-troubleshooting.md)

# Patch policy configurations in Quick Setup


AWS recommends the use of *patch policies* to configure patching for your organization and AWS accounts. Patch policies were introduced in Patch Manager in December 2022. 

A patch policy is a configuration you set up using Quick Setup, a tool in AWS Systems Manager. Patch policies provide more extensive and more centralized control over your patching operations than is available with previous methods of configuring patching. Patch policies can be used with [all operating systems supported by Patch Manager](patch-manager-prerequisites.md#pm-prereqs), including supported versions of Linux, macOS, and Windows Server. For instructions for creating a patch policy, see [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md).

## Major features of patch policies


Instead of using other methods of patching your nodes, use a patch policy to take advantage of these major features:
+ **Single setup** – Setting up patching operations using a maintenance window or State Manager association can require multiple tasks in different parts of the Systems Manager console. Using a patch policy, all your patching operations can be set up in a single wizard.
+ **Multi-account/Multi-Region support** – Using a maintenance window, a State Manager association, or the **Patch now** feature in Patch Manager, you're limited to targeting managed nodes in a single AWS account-AWS Region pair. If you use multiple accounts and multiple Regions, your setup and maintenance tasks can require a great deal of time, because you must perform setup tasks in each account-Region pair. However, if you use AWS Organizations, you can set up one patch policy that applies to all your managed nodes in all AWS Regions in all your AWS accounts. Or, if you choose, a patch policy can apply to only some organizational units (OUs) in the accounts and Regions you choose. A patch policy can also apply to a single local account, if you choose.
+ **Installation support at the organizational level** – The existing Host Management configuration option in Quick Setup provides support for a daily scan of your managed nodes for patch compliance. However, this scan is done at a predetermined time and results in patch compliance information only. No patch installations are performed. Using a patch policy, you can specify different schedules for scanning and installing. You can also choose the frequency and time of these operations by using custom CRON or Rate expressions. For example, you could scan for missing patches every day to provide you with regularly updated compliance information. But, your installation schedule could be just once a week to avoid unwanted downtime.
+ **Simplified patch baseline selection** – Patch policies still incorporate patch baselines, and there are no changes to the way patch baselines are configured. However, when you create or update a patch policy, you can select the AWS managed or custom baseline you want to use for each operating system (OS) type in a single list. It’s not necessary to specify the default baseline for each OS type in separate tasks.
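As a sketch of the scheduling example above, a daily scan with a weekly install might use Systems Manager schedule expressions like the following (values are illustrative):

```
cron(0 2 ? * * *)
rate(7 days)
```

The first is a six-field Systems Manager cron expression that runs a scan every day at 2:00 AM UTC; the second is a rate expression that runs an install once every 7 days.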

**Note**  
When patching operations based on a patch policy run, they use the `AWS-RunPatchBaseline` SSM document. For more information, see [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).

**Related information**  
[Centrally deploy patching operations across your AWS Organization using Systems Manager Quick Setup](https://aws.amazon.com/blogs/mt/centrally-deploy-patching-operations-across-your-aws-organization-using-systems-manager-quick-setup/) (AWS Cloud Operations and Migrations Blog)

## Other differences with patch policies


Here are some other differences to note when using patch policies instead of previous methods of configuring patching:
+ **No patch groups required** – In previous patching operations, you could tag multiple nodes to belong to a patch group, and then specify the patch baseline to use for that patch group. If no patch group was defined, Patch Manager patched instances with the current default patch baseline for the OS type. Using patch policies, it’s no longer necessary to set up and maintain patch groups. 
**Note**  
Patch group functionality is not supported in the console for account-Region pairs that did not already use patch groups before patch policy support was released on December 22, 2022. Patch group functionality is still available in account-Region pairs that began using patch groups before this date.
+ **‘Configure patching’ page removed** – Before the release of patch policies, you could specify defaults for which nodes to patch, a patching schedule, and a patching operation on a **Configure patching** page. This page has been removed from Patch Manager. These options are now specified in patch policies. 
+ **No ‘Patch now’ support** – The ability to patch nodes on demand is still limited to a single AWS account-AWS Region pair at a time. For information, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md).
+ **Patch policies and compliance information** – When your managed nodes are scanned for compliance according to a patching policy configuration, compliance data is made available to you. You can view and work with the data in the same way as with other methods of compliance scanning. Although you can set up a patch policy for an entire organization or multiple organizational units, compliance information is reported individually for each AWS account-AWS Region pair. For more information, see [Working with patch compliance reports](patch-manager-compliance-reports.md).
+ **Association compliance status and patch policies** – The patching status for a managed node that's under a Quick Setup patch policy matches the status of the State Manager association execution for that node. If the association execution status is `Compliant`, the patching status for the managed node is also marked `Compliant`. If the association execution status is `Non-Compliant`, the patching status for the managed node is also marked `Non-Compliant`. 

## AWS Regions supported for patch policies


Patch policy configurations in Quick Setup are currently supported in the following Regions:
+ US East (Ohio) (us-east-2)
+ US East (N. Virginia) (us-east-1)
+ US West (N. California) (us-west-1)
+ US West (Oregon) (us-west-2)
+ Asia Pacific (Mumbai) (ap-south-1)
+ Asia Pacific (Seoul) (ap-northeast-2)
+ Asia Pacific (Singapore) (ap-southeast-1)
+ Asia Pacific (Sydney) (ap-southeast-2)
+ Asia Pacific (Tokyo) (ap-northeast-1)
+ Canada (Central) (ca-central-1)
+ Europe (Frankfurt) (eu-central-1)
+ Europe (Ireland) (eu-west-1)
+ Europe (London) (eu-west-2)
+ Europe (Paris) (eu-west-3)
+ Europe (Stockholm) (eu-north-1)
+ South America (São Paulo) (sa-east-1)

# Patch Manager prerequisites


Make sure that you have met the required prerequisites before using Patch Manager, a tool in AWS Systems Manager. 

**Topics**
+ [SSM Agent version](#agent-versions)
+ [Python version](#python-version)
+ [Additional package requirements](#additional-package-requirements)
+ [Connectivity to the patch source](#source-connectivity)
+ [S3 endpoint access](#s3-endpoint-access)
+ [Permissions to install patches locally](#local-installation-permissions)
+ [Supported operating systems for Patch Manager](#supported-os)

## SSM Agent version


Version 2.0.834.0 or later of SSM Agent must be running on the managed nodes you want to manage with Patch Manager.

**Note**  
An updated version of SSM Agent is released whenever new tools are added to Systems Manager or updates are made to existing tools. Failing to use the latest version of the agent can prevent your managed node from using various Systems Manager tools and features. For that reason, we recommend that you automate the process of keeping SSM Agent up to date on your machines. For information, see [Automating updates to SSM Agent](ssm-agent-automatic-updates.md). Subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub to get notifications about SSM Agent updates.
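One way to automate this, sketched here with placeholder targeting, is a State Manager association that runs the `AWS-UpdateSSMAgent` document on a schedule:

```shell
# Update SSM Agent on all managed instances every two weeks.
aws ssm create-association \
    --name "AWS-UpdateSSMAgent" \
    --targets "Key=InstanceIds,Values=*" \
    --schedule-expression "rate(14 days)"
```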

## Python version


For macOS and most Linux operating systems (OSs), Patch Manager currently supports Python versions 2.6 - 3.12. The AlmaLinux, Debian Server, and Ubuntu Server OSs require a supported version of Python 3 (3.0 - 3.12).

## Additional package requirements


For DNF-based operating systems, the `zstd`, `xz`, and `unzip` utilities might be required for decompressing repository information and patch files. DNF-based operating systems include Amazon Linux 2023, Red Hat Enterprise Linux 8 and later versions, Oracle Linux 8 and later versions, Rocky Linux, AlmaLinux, and CentOS 8 and later versions. If you see an error similar to `No such file or directory: b'zstd'` or `No such file or directory: b'unxz'`, or patching fails because `unzip` is missing, install these utilities by running the following command:

```
dnf install zstd xz unzip
```

## Connectivity to the patch source


If your managed nodes don't have a direct connection to the Internet and you're using an Amazon Virtual Private Cloud (Amazon VPC) with a VPC endpoint, you must ensure that the nodes have access to the source patch repositories (repos). On Linux nodes, patch updates are typically downloaded from the remote repos configured on the node. Therefore, the node must be able to connect to the repos so the patching can be performed. For more information, see [How security patches are selected](patch-manager-selecting-patches.md).

When patching a node that runs in an IPv6-only environment, ensure that the node has connectivity to the patch source. Review the Run Command output from the patching operation for warnings about inaccessible repositories. For DNF-based operating systems, unavailable repositories can be skipped during patching if the `skip_if_unavailable` option is set to `True` in `/etc/dnf/dnf.conf`. DNF-based operating systems include Amazon Linux 2023, Red Hat Enterprise Linux 8 and later versions, Oracle Linux 8 and later versions, Rocky Linux, AlmaLinux, and CentOS 8 and later versions. On Amazon Linux 2023, the `skip_if_unavailable` option is set to `True` by default.
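For reference, the relevant setting in `/etc/dnf/dnf.conf` looks like the following:

```
[main]
skip_if_unavailable=True
```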

**CentOS Stream: Enable the `EnableNonSecurity` flag**  
CentOS Stream nodes use DNF as the package manager, which uses the concept of an update notice. An update notice is simply a collection of packages that fix specific problems. 

However, CentOS Stream default repos aren't configured with an update notice. This means that Patch Manager doesn't detect packages on default CentOS Stream repos. To allow Patch Manager to process packages that aren't contained in an update notice, you must turn on the `EnableNonSecurity` flag in the patch baseline rules.

**Windows Server: Ensure connectivity to Windows Update Catalog or Windows Server Update Services (WSUS)**  
Windows Server managed nodes must be able to connect to the Windows Update Catalog or Windows Server Update Services (WSUS). Confirm that your nodes have connectivity to the [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/home.aspx) through an internet gateway, NAT gateway, or NAT instance. If you are using WSUS, confirm that the node has connectivity to the WSUS server in your environment. For more information, see [Issue: managed node doesn't have access to Windows Update Catalog or WSUS](patch-manager-troubleshooting.md#patch-manager-troubleshooting-instance-access).

## S3 endpoint access


Whether your managed nodes operate in a private or public network, without access to the required AWS managed Amazon Simple Storage Service (Amazon S3) buckets, patching operations fail. For information about the S3 buckets your managed nodes must be able to access, see [SSM Agent communications with AWS managed S3 buckets](ssm-agent-technical-details.md#ssm-agent-minimum-s3-permissions) and [Improve the security of EC2 instances by using VPC endpoints for Systems Manager](setup-create-vpc.md).

## Permissions to install patches locally


On Windows Server and Linux operating systems, Patch Manager assumes the Administrator and root user accounts, respectively, to install patches.

On macOS, however, for Brew and Brew Cask, Homebrew doesn't support running its commands under the root user account. As a result, Patch Manager queries for and runs Homebrew commands as either the owner of the Homebrew directory, or as a valid user belonging to the Homebrew directory’s owner group. Therefore, to install patches, the owner of the Homebrew directory also needs recursive owner permissions for the `/usr/local` directory.

**Tip**  
The following command provides this permission for the specified user:  

```
sudo chown -R $USER:admin /usr/local
```

## Supported operating systems for Patch Manager


The Patch Manager tool might not support all the same operating systems versions that are supported by other Systems Manager tools. (For the full list of Systems Manager-supported operating systems, see [Supported operating systems for Systems Manager](operating-systems-and-machine-types.md#prereqs-operating-systems).) Therefore, ensure that the managed nodes you want to use with Patch Manager are running one of the operating systems listed in the following table.

**Note**  
Patch Manager relies on the patch repositories that are configured on a managed node, such as Windows Update Catalog and Windows Server Update Services for Windows, to retrieve available patches to install. Therefore, for end of life (EOL) operating system versions, if no new updates are available, Patch Manager might not be able to report on the new updates. This can be because no new updates are released by the Linux distribution maintainer, Microsoft, or Apple, or because the managed node does not have the proper license to access the new updates.  
We strongly recommend that you avoid using OS versions that have reached End-of-Life (EOL). OS vendors including AWS typically don't provide security patches or other updates for versions that have reached EOL. Continuing to use an EOL system greatly increases the risk of not being able to apply upgrades, including security fixes, and other operational problems. AWS does not test Systems Manager functionality on OS versions that have reached EOL.  
Patch Manager reports compliance status against the available patches on the managed node. Therefore, if an instance is running an EOL operating system, and no updates are available, Patch Manager might report the node as Compliant, depending on the patch baselines configured for the patching operation.


| Operating system | Details | 
| --- | --- | 
|  Linux  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-prerequisites.html)  | 
| macOS |  *macOS is supported for Amazon EC2 instances only.* 13.0–13.7 (Ventura) 14*.x* (Sonoma) 15.*x* (Sequoia)  macOS OS updates Patch Manager doesn't support operating system (OS) updates or upgrades for macOS, such as from 13.1 to 13.2. To perform OS version updates on macOS, we recommend using Apple's built-in OS upgrade mechanisms. For more information, see [Device Management](https://developer.apple.com/documentation/devicemanagement) on the Apple Developer Documentation website.   Homebrew support Patch Manager requires Homebrew, the open-source software package management system, to be installed at either of the following default install locations:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-prerequisites.html) Patching operations using Patch Manager fail to function correctly when Homebrew is not installed.  Region support macOS is not supported in all AWS Regions. For more information about Amazon EC2 support for macOS, see [Amazon EC2 Mac instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html) in the *Amazon EC2 User Guide*.   macOS edge devices SSM Agent for AWS IoT Greengrass core devices is not supported on macOS. You can't use Patch Manager to patch macOS edge devices.   | 
|  Windows  |  Windows Server 2012 through Windows Server 2025, including R2 versions.  SSM Agent for AWS IoT Greengrass core devices is not supported on Windows 10. You can't use Patch Manager to patch Windows 10 edge devices.   Windows Server 2012 and 2012 R2 support Windows Server 2012 and 2012 R2 reached end of support on October 10, 2023. To use Patch Manager with these versions, we recommend also using Extended Security Updates (ESUs) from Microsoft. For more information, see [Windows Server 2012 and 2012 R2 reaching end of support](https://learn.microsoft.com/en-us/lifecycle/announcements/windows-server-2012-r2-end-of-support) on the Microsoft website.   | 

# How Patch Manager operations work


This section provides technical details that explain how Patch Manager, a tool in AWS Systems Manager, determines which patches to install and how it installs them on each supported operating system. For Linux operating systems, it also provides information about specifying a source repository, in a custom patch baseline, for patches other than the default configured on a managed node. This section also provides details about how patch baseline rules work on different distributions of the Linux operating system.

**Note**  
The information in the following topics applies no matter which method or type of configuration you use for your patching operations:  
+ A patch policy configured in Quick Setup
+ A Host Management option configured in Quick Setup
+ A maintenance window to run a patch `Scan` or `Install` task
+ An on-demand **Patch now** operation

**Topics**
+ [How package release dates and update dates are calculated](patch-manager-release-dates.md)
+ [How security patches are selected](patch-manager-selecting-patches.md)
+ [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md)
+ [How patches are installed](patch-manager-installing-patches.md)
+ [How patch baseline rules work on Linux-based systems](patch-manager-linux-rules.md)
+ [Patching operation differences between Linux and Windows Server](patch-manager-windows-and-linux-differences.md)

# How package release dates and update dates are calculated


**Important**  
The information on this page applies to the Amazon Linux 2 and Amazon Linux 2023 operating systems (OSs) for Amazon Elastic Compute Cloud (Amazon EC2) instances. The packages for these OS types are created and maintained by Amazon Web Services. How the manufacturers of other operating systems manage their packages and repositories affect how their release dates and update dates are calculated. For OSs besides Amazon Linux 2 and Amazon Linux 2023, such as Red Hat Enterprise Linux, refer to the manufacturer's documentation for information about how their packages are updated and maintained.

In the settings for [custom patch baselines](patch-manager-predefined-and-custom-patch-baselines.md#patch-manager-baselines-custom) you create, for most OS types, you can specify that patches are auto-approved for installation after a certain number of days. AWS provides several predefined patch baselines that include auto-approval delays of 7 days.

An *auto-approval delay* is the number of days to wait after a patch is released before the patch is automatically approved for installation. For example, suppose you create a rule using the `CriticalUpdates` classification and configure it with a 7-day auto-approval delay. As a result, a new critical patch with a release date or last update date of July 7 is automatically approved on July 14.
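The delay arithmetic can be sketched as follows (a minimal illustration; `auto_approval_date` is a hypothetical helper name, not part of any AWS SDK):

```python
from datetime import date, timedelta

def auto_approval_date(release_or_update_date: date, delay_days: int) -> date:
    """Return the date a patch becomes auto-approved: the package's
    release date (or last update date), plus the configured delay."""
    return release_or_update_date + timedelta(days=delay_days)

# A critical patch released (or last updated) on July 7 with a 7-day
# delay is auto-approved on July 14.
print(auto_approval_date(date(2025, 7, 7), 7))  # 2025-07-14
```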

To avoid unexpected results with auto-approval delays on Amazon Linux 2 and Amazon Linux 2023, it's important to understand how their release dates and update dates are calculated.

**Note**  
If an Amazon Linux 2 or Amazon Linux 2023 repository does not provide release date information for packages, Patch Manager uses the build time of the package as the date for auto-approval date specifications. If the build time of the package can't be determined, Patch Manager uses a default date of January 1st, 1970. This results in Patch Manager bypassing any auto-approval date specifications in patch baselines that are configured to approve patches for any date after January 1st, 1970.

In most cases, the auto-approval wait time before patches are installed is calculated from an `Update Date` value in `updateinfo.xml`, not a `Release Date` value. The following are important details about these date calculations:
+ The `Release Date` is the date a *notice* is released. It does not mean the package is necessarily available in the associated repositories yet. 
+ The `Update Date` is the last date the notice was updated. An update to a notice can be as small as a change to its text or description. It does not mean that the package was released on that date or that it's necessarily available yet in the associated repositories. 

  This means that a package could have an `Update Date` value of July 7 but not be available for installation until (for example) July 13. Suppose for this case that a patch baseline that specifies a 7-day auto-approval delay runs in an `Install` operation on July 14. Because the `Update Date` value is 7 days prior to the run date, the patches and updates in the package are installed on July 14. The installation happens even though only 1 day has passed since the package became available for actual installation.
+ A package containing operating system or application patches can be updated more than once after initial release.
+ A package can be released into the AWS managed repositories but then rolled back if issues are later discovered with it.

In some patching operations, these factors might not be important. For example, if a patch baseline is configured to install a patch with severity values of `Low` and `Medium`, and a classification of `Recommended`, any auto-approval delay might have little impact on your operations.

However, in cases where the timing of critical or high-severity patches is more important, you might want to exercise more control over when patches are installed. The recommended method for doing this is to use alternative patch source repositories instead of the default repositories for patching operations on a managed node. 

You can specify alternative patch source repositories when you create a custom patch baseline. In each custom patch baseline, you can specify patch source configurations for up to 20 versions of a supported Linux operating system. For more information, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md).

# How security patches are selected


The primary focus of Patch Manager, a tool in AWS Systems Manager, is on installing security-related operating system updates on managed nodes. By default, Patch Manager doesn't install all available patches, but rather a smaller set of patches focused on security.

By default, Patch Manager doesn't replace a package that has been marked as obsolete in a package repository with any differently named replacement packages unless this replacement is required by the installation of another package update. Instead, for commands that update a package, Patch Manager only reports and installs missing updates for the package that is installed but obsolete. This is because replacing an obsolete package typically requires uninstalling an existing package and installing its replacement. Replacing the obsolete package could introduce breaking changes or additional functionality that you didn't intend.

This behavior is consistent with YUM and DNF's `update-minimal` command, which focuses on security updates rather than feature upgrades. For more information, see [How patches are installed](patch-manager-installing-patches.md).

**Note**  
When you use the `ApproveUntilDate` parameter or `ApproveAfterDays` parameter in a patch baseline rule, Patch Manager evaluates patch release dates using Coordinated Universal Time (UTC).   
For example, for `ApproveUntilDate`, if you specify a date such as `2025-11-16`, patches released between `2025-11-16T00:00:00Z` and `2025-11-16T23:59:59Z` will be approved.   
Be aware that patch release dates displayed by native package managers on your managed nodes may show different times based on your system's local timezone, but Patch Manager always uses the UTC datetime for approval calculations. This ensures consistency with patch release dates published on official security advisory websites.
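The UTC cutoff behavior described in this note can be illustrated with a small check (a conceptual sketch only; `approved_by_until_date` is a hypothetical name, not a Patch Manager API):

```python
from datetime import datetime, timezone

def approved_by_until_date(release: datetime, approve_until: str) -> bool:
    """A patch qualifies if its UTC release time falls on or before the
    last instant (23:59:59Z) of the ApproveUntilDate day."""
    cutoff = datetime.strptime(approve_until, "%Y-%m-%d").replace(
        hour=23, minute=59, second=59, tzinfo=timezone.utc)
    return release <= cutoff

# Released 2025-11-16T23:00:00Z: inside the 2025-11-16 UTC window.
print(approved_by_until_date(
    datetime(2025, 11, 16, 23, 0, tzinfo=timezone.utc), "2025-11-16"))  # True
```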

For Linux-based operating system types that report a severity level for patches, Patch Manager uses the severity level reported by the software publisher for the update notice or individual patch. Patch Manager doesn't derive severity levels from third-party sources, such as the [Common Vulnerability Scoring System](https://www.first.org/cvss/) (CVSS), or from metrics released by the [National Vulnerability Database](https://nvd.nist.gov/vuln) (NVD).

**Note**  
On all Linux-based systems supported by Patch Manager, you can choose a different source repository configured for the managed node, typically to install nonsecurity updates. For information, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md).

Choose from the following tabs to learn how Patch Manager selects security patches for your operating system.

------
#### [ Amazon Linux 2 and Amazon Linux 2023 ]

Preconfigured repositories are handled differently on Amazon Linux 2 than on Amazon Linux 2023.

On Amazon Linux 2, the Systems Manager patch baseline service uses preconfigured repositories on the managed node. There are usually two preconfigured repositories (repos) on a node:

**On Amazon Linux 2**
+ **Repo ID**: `amzn2-core/2/architecture`

  **Repo name**: `Amazon Linux 2 core repository`
+ **Repo ID**: `amzn2extra-docker/2/architecture`

  **Repo name**: `Amazon Extras repo for docker`

**Note**  
*architecture* can be `x86_64` or (for Graviton processors) `aarch64`.

When you create an Amazon Linux 2023 (AL2023) instance, it contains the updates that were available in the version of AL2023 and the specific AMI you selected. Your AL2023 instance doesn't automatically receive additional critical and important security updates at launch time. Instead, with the *deterministic upgrades through versioned repositories* feature supported for AL2023, which is turned on by default, you can apply updates based on a schedule that meets your specific needs. For more information, see [Deterministic upgrades through versioned repositories](https://docs.aws.amazon.com/linux/al2023/ug/deterministic-upgrades.html) in the *Amazon Linux 2023 User Guide*. 

On AL2023, the preconfigured repository is the following:
+ **Repo ID**: `amazonlinux`

  **Repo name**: Amazon Linux 2023 repository

On Amazon Linux 2023, the preconfigured repositories are tied to *locked versions* of package updates. When new Amazon Machine Images (AMIs) for AL2023 are released, they are locked to a specific version. For patch updates, Patch Manager retrieves the latest locked version of the patch update repository and then updates packages on the managed node based on the content of that locked version.

**Package managers**  
Amazon Linux 2 managed nodes use Yum as the package manager. Amazon Linux 2023 managed nodes use DNF as the package manager. 

Both package managers use the concept of an *update notice* as a file named `updateinfo.xml`. An update notice is simply a collection of packages that fix specific problems. All packages that are in an update notice are considered Security by Patch Manager. Individual packages aren't assigned classifications or severity levels. For this reason, Patch Manager assigns the attributes of an update notice to the related packages.
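Conceptually, an update notice maps its own classification and severity onto each package it lists. The following sketch parses a simplified `updateinfo.xml` fragment to show the idea (the XML shape below is an abbreviated illustration, not the verbatim repository schema):

```python
import xml.etree.ElementTree as ET

UPDATEINFO = """
<updates>
  <update type="security">
    <severity>Important</severity>
    <pkglist><collection>
      <package name="openssl"/>
      <package name="openssl-libs"/>
    </collection></pkglist>
  </update>
</updates>
"""

def notice_attributes(xml_text):
    """Assign each package the classification (the notice type) and
    severity of the update notice it belongs to, as Patch Manager does
    conceptually, because individual packages carry neither value."""
    result = {}
    for update in ET.fromstring(xml_text).findall("update"):
        severity = update.findtext("severity", default="None")
        for pkg in update.iter("package"):
            result[pkg.get("name")] = (update.get("type"), severity)
    return result

print(notice_attributes(UPDATEINFO))
# {'openssl': ('security', 'Important'), 'openssl-libs': ('security', 'Important')}
```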

**Note**  
If you select the **Include non-security updates** check box in the **Create patch baseline** page, then packages that aren't classified in an `updateinfo.xml` file (or a package that contains a file without properly formatted Classification, Severity, and Date values) can be included in the prefiltered list of patches. However, in order for a patch to be applied, the patch must still meet the user-specified patch baseline rules.  
For more information about the **Include non-security updates** option, see [How patches are installed](patch-manager-installing-patches.md) and [How patch baseline rules work on Linux-based systems](patch-manager-linux-rules.md).

------
#### [  CentOS Stream ]

On CentOS Stream, the Systems Manager patch baseline service uses preconfigured repositories (repos) on the managed node. The following list provides examples for a fictitious CentOS 9.2 Amazon Machine Image (AMI):
+ **Repo ID**: `example-centos-9.2-base`

  **Repo name**: `Example CentOS-9.2 - Base`
+ **Repo ID**: `example-centos-9.2-extras` 

  **Repo name**: `Example CentOS-9.2 - Extras`
+ **Repo ID**: `example-centos-9.2-updates`

  **Repo name**: `Example CentOS-9.2 - Updates`
+ **Repo ID**: `example-centos-9.x-examplerepo`

  **Repo name**: `Example CentOS-9.x – Example Repo Packages`

**Note**  
All updates are downloaded from the remote repos configured on the managed node. Therefore, the node must have outbound access to the internet in order to connect to the repos so the patching can be performed.

CentOS Stream nodes use DNF as the package manager. The package manager uses the concept of an update notice. An update notice is simply a collection of packages that fix specific problems. 

However, CentOS Stream default repos aren't configured with an update notice. This means that Patch Manager doesn't detect packages on default CentOS Stream repos. To allow Patch Manager to process packages that aren't contained in an update notice, you must turn on the `EnableNonSecurity` flag in the patch baseline rules.

**Note**  
CentOS Stream update notices are supported. Repos with update notices can be downloaded after launch.

------
#### [ Debian Server ]

On Debian Server, the Systems Manager patch baseline service uses preconfigured repositories (repos) on the instance. These preconfigured repos are used to pull an updated list of available package upgrades. For this, Systems Manager performs the equivalent of a `sudo apt-get update` command. 

Packages are then filtered from `debian-security codename` repos. This means that on each version of Debian Server, Patch Manager only identifies upgrades that are part of the associated repo for that version, as follows:
+  Debian Server 11: `debian-security bullseye`
+ Debian Server 12: `debian-security bookworm`

------
#### [ Oracle Linux ]

On Oracle Linux, the Systems Manager patch baseline service uses preconfigured repositories (repos) on the managed node. There are usually two preconfigured repos on a node.

**Oracle Linux 7**:
+ **Repo ID**: `ol7_UEKR5/x86_64`

  **Repo name**: `Latest Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7Server (x86_64)`
+ **Repo ID**: `ol7_latest/x86_64`

  **Repo name**: `Oracle Linux 7Server Latest (x86_64)` 

**Oracle Linux 8**:
+ **Repo ID**: `ol8_baseos_latest` 

  **Repo name**: `Oracle Linux 8 BaseOS Latest (x86_64)`
+ **Repo ID**: `ol8_appstream`

  **Repo name**: `Oracle Linux 8 Application Stream (x86_64)` 
+ **Repo ID**: `ol8_UEKR6`

  **Repo name**: `Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux 8 (x86_64)`

**Oracle Linux 9**:
+ **Repo ID**: `ol9_baseos_latest` 

  **Repo name**: `Oracle Linux 9 BaseOS Latest (x86_64)`
+ **Repo ID**: `ol9_appstream`

  **Repo name**: `Oracle Linux 9 Application Stream Packages (x86_64)` 
+ **Repo ID**: `ol9_UEKR7`

  **Repo name**: `Oracle Linux UEK Release 7 (x86_64)`

**Note**  
All updates are downloaded from the remote repos configured on the managed node. Therefore, the node must have outbound access to the internet in order to connect to the repos so the patching can be performed.

Oracle Linux managed nodes use Yum as the package manager, and Yum uses the concept of an update notice as a file named `updateinfo.xml`. An update notice is simply a collection of packages that fix specific problems. Individual packages aren't assigned classifications or severity levels. For this reason, Patch Manager assigns the attributes of an update notice to the related packages and installs packages based on the Classification filters specified in the patch baseline.

**Note**  
If you select the **Include non-security updates** check box in the **Create patch baseline** page, then packages that aren't classified in an `updateinfo.xml` file (or a package that contains a file without properly formatted Classification, Severity, and Date values) can be included in the prefiltered list of patches. However, in order for a patch to be applied, the patch must still meet the user-specified patch baseline rules.

------
#### [ AlmaLinux, RHEL, and Rocky Linux  ]

On AlmaLinux, Red Hat Enterprise Linux, and Rocky Linux the Systems Manager patch baseline service uses preconfigured repositories (repos) on the managed node. There are usually three preconfigured repos on a node.

All updates are downloaded from the remote repos configured on the managed node. Therefore, the node must have outbound access to the internet in order to connect to the repos so the patching can be performed.

**Note**  
If you select the **Include non-security updates** check box in the **Create patch baseline** page, then packages that aren't classified in an `updateinfo.xml` file (or a package that contains a file without properly formatted Classification, Severity, and Date values) can be included in the prefiltered list of patches. However, in order for a patch to be applied, the patch must still meet the user-specified patch baseline rules.

Red Hat Enterprise Linux 7 managed nodes use Yum as the package manager. AlmaLinux, Red Hat Enterprise Linux 8, and Rocky Linux managed nodes use DNF as the package manager. Both package managers use the concept of an update notice as a file named `updateinfo.xml`. An update notice is simply a collection of packages that fix specific problems. Individual packages aren't assigned classifications or severity levels. For this reason, Patch Manager assigns the attributes of an update notice to the related packages and installs packages based on the Classification filters specified in the patch baseline.

RHEL 7  
The following repo IDs are associated with RHUI 2. RHUI 3 launched in December 2019 and introduced a different naming scheme for Yum repository IDs. Depending on the RHEL-7 AMI you create your managed nodes from, you might need to update your commands. For more information, see [Repository IDs for RHEL 7 in AWS Have Changed](https://access.redhat.com/articles/4599971) on the *Red Hat Customer Portal*.
+ **Repo ID**: `rhui-REGION-client-config-server-7/x86_64`

  **Repo name**: `Red Hat Update Infrastructure 2.0 Client Configuration Server 7`
+ **Repo ID**: `rhui-REGION-rhel-server-releases/7Server/x86_64`

  **Repo name**: `Red Hat Enterprise Linux Server 7 (RPMs)`
+ **Repo ID**: `rhui-REGION-rhel-server-rh-common/7Server/x86_64`

  **Repo name**: `Red Hat Enterprise Linux Server 7 RH Common (RPMs)`

AlmaLinux 8, RHEL 8, and Rocky Linux 8  
+ **Repo ID**: `rhel-8-appstream-rhui-rpms`

  **Repo name**: `Red Hat Enterprise Linux 8 for x86_64 - AppStream from RHUI (RPMs)`
+ **Repo ID**: `rhel-8-baseos-rhui-rpms`

  **Repo name**: `Red Hat Enterprise Linux 8 for x86_64 - BaseOS from RHUI (RPMs)`
+ **Repo ID**: `rhui-client-config-server-8`

  **Repo name**: `Red Hat Update Infrastructure 3 Client Configuration Server 8`

AlmaLinux 9, RHEL 9, and Rocky Linux 9  
+ **Repo ID**: `rhel-9-appstream-rhui-rpms`

  **Repo name**: `Red Hat Enterprise Linux 9 for x86_64 - AppStream from RHUI (RPMs)`
+ **Repo ID**: `rhel-9-baseos-rhui-rpms`

  **Repo name**: `Red Hat Enterprise Linux 9 for x86_64 - BaseOS from RHUI (RPMs)`
+ **Repo ID**: `rhui-client-config-server-9`

  **Repo name**: `Red Hat Enterprise Linux 9 Client Configuration`

------
#### [ SLES ]

On SUSE Linux Enterprise Server (SLES) managed nodes, the ZYPP library gets the list of available patches (a collection of packages) from the following locations:
+ List of repositories: `/etc/zypp/repos.d/*`
+ Package information: `/var/cache/zypp/raw/*`

SLES managed nodes use Zypper as the package manager, and Zypper uses the concept of a patch. A patch is simply a collection of packages that fix a specific problem. Patch Manager handles all packages referenced in a patch as security-related. Because individual packages aren't given classifications or severity, Patch Manager assigns the packages the attributes of the patch that they belong to.

------
#### [ Ubuntu Server ]

On Ubuntu Server, the Systems Manager patch baseline service uses preconfigured repositories (repos) on the managed node. These preconfigured repos are used to pull an updated list of available package upgrades. For this, Systems Manager performs the equivalent of a `sudo apt-get update` command. 

Packages are then filtered from `codename-security` repos, where the codename is unique to the release version, such as `trusty` for Ubuntu Server 14.04. Patch Manager only identifies upgrades that are part of these repos: 
+ Ubuntu Server 16.04 LTS: `xenial-security`
+ Ubuntu Server 18.04 LTS: `bionic-security`
+ Ubuntu Server 20.04 LTS: `focal-security`
+ Ubuntu Server 22.04 LTS: `jammy-security`
+ Ubuntu Server 24.04 LTS: `noble-security`
+ Ubuntu Server 25.04: `plucky-security`

------
#### [ Windows Server ]

On Microsoft Windows operating systems, Patch Manager retrieves the list of available updates that Microsoft publishes to Microsoft Update and that are automatically made available to Windows Server Update Services (WSUS).

**Note**  
Patch Manager only makes available patches for Windows Server operating system versions that are supported for Patch Manager. For example, Patch Manager can't be used to patch Windows RT.

Patch Manager continually monitors for new updates in every AWS Region. The list of available updates is refreshed in each Region at least once per day. When the patch information from Microsoft is processed, Patch Manager removes updates that were replaced by later updates from its patch list. Therefore, only the most recent update is displayed and made available for installation. For example, if `KB4012214` replaces `KB3135456`, only `KB4012214` is made available as an update in Patch Manager.
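The pruning of superseded updates can be sketched as follows (a conceptual model using the KB numbers from the example above; `latest_updates` is a hypothetical helper, not a Patch Manager API):

```python
def latest_updates(superseded_by):
    """Given a map of update -> the update that replaces it, keep only
    updates that nothing else supersedes, mirroring how Patch Manager
    exposes just the most recent update in a replacement chain."""
    replaced = set(superseded_by.keys())
    all_updates = replaced | set(superseded_by.values())
    return sorted(all_updates - replaced)

# KB4012214 replaces KB3135456, so only KB4012214 remains available.
print(latest_updates({"KB3135456": "KB4012214"}))  # ['KB4012214']
```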

Similarly, Patch Manager can install only patches that are available on the managed node during the time of the patching operation. By default, Windows Server 2019 and Windows Server 2022 remove updates that are replaced by later updates. As a result, if you use the `ApproveUntilDate` parameter in a Windows Server patch baseline, but the date selected in the `ApproveUntilDate` parameter is *before* the date of the latest patch, then the following scenario occurs:
+ The superseded patch is removed from the node and therefore can't be installed using Patch Manager.
+ The latest, replacement patch is present on the node but not yet approved for installation per the `ApproveUntilDate` parameter's specified date. 

This means that the managed node is compliant in terms of Systems Manager operations, even though a critical patch from the previous month might not be installed. This same scenario can occur when using the `ApproveAfterDays` parameter. Because of the Microsoft superseded patch behavior, it is possible to set a number (generally greater than 30 days) so that patches for Windows Server are never installed if the latest available patch from Microsoft is released before the number of days in `ApproveAfterDays` has elapsed. Note that this system behavior doesn't apply if you have modified your Windows Group Policy Object (GPO) settings to make the superseded patch available on your managed nodes.

**Note**  
In some cases, Microsoft releases patches for applications that don't specify an updated date and time. In these cases, an updated date and time of `01/01/1970` is supplied by default.

------

# How to specify an alternative patch source repository (Linux)


When you use the default repositories configured on a managed node for patching operations, Patch Manager, a tool in AWS Systems Manager, scans for or installs security-related patches. This is the default behavior for Patch Manager. For complete information about how Patch Manager selects and installs security patches, see [How security patches are selected](patch-manager-selecting-patches.md).

On Linux systems, however, you can also use Patch Manager to install patches that aren't related to security, or that are in a different source repository than the default one configured on the managed node. You can specify alternative patch source repositories when you create a custom patch baseline. In each custom patch baseline, you can specify patch source configurations for up to 20 versions of a supported Linux operating system. 

For example, suppose that your Ubuntu Server fleet includes managed nodes running two different release versions, such as Ubuntu Server 22.04 LTS and Ubuntu Server 25.04. In this case, you can specify alternate repositories for each version in the same custom patch baseline. For each version, you provide a name, specify the operating system version type (product), and provide a repository configuration. You can also specify a single alternative source repository that applies to all versions of a supported operating system.

**Note**  
Running a custom patch baseline that specifies alternative patch repositories for a managed node doesn't make them the new default repositories on the operating system. After the patching operation is complete, the repositories previously configured as the defaults for the node's operating system remain the defaults.

For a list of example scenarios for using this option, see [Sample uses for alternative patch source repositories](#patch-manager-alternative-source-repository-examples) later in this topic.

For information about default and custom patch baselines, see [Predefined and custom patch baselines](patch-manager-predefined-and-custom-patch-baselines.md).

**Example: Using the console**  
To specify alternative patch source repositories when you're working in the Systems Manager console, use the **Patch sources** section on the **Create patch baseline** page. For information about using the **Patch sources** options, see [Creating a custom patch baseline for Linux](patch-manager-create-a-patch-baseline-for-linux.md).

**Example: Using the AWS CLI**  
For an example of using the `--sources` option with the AWS Command Line Interface (AWS CLI), see [Create a patch baseline with custom repositories for different OS versions](patch-manager-cli-commands.md#patch-manager-cli-commands-create-patch-baseline-mult-sources).

**Topics**
+ [Important considerations for alternative repositories](#alt-source-repository-important)
+ [Sample uses for alternative patch source repositories](#patch-manager-alternative-source-repository-examples)

## Important considerations for alternative repositories


Keep in mind the following points as you plan your patching strategy using alternative patch repositories.

**Enforce repo update verifications (YUM and DNF)**  
On a Linux distribution, the package manager's default configuration might skip a package repository when a connection to it can't be established. To enforce repository update verification instead, add `skip_if_unavailable=False` to the repository configuration.

For more information about the `skip_if_unavailable` option, see [Connectivity to the patch source](patch-manager-prerequisites.md#source-connectivity).
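For example, a YUM or DNF repository definition might include the setting like this (a sketch of a hypothetical repo file, such as `/etc/yum.repos.d/example.repo`; the repo name and URL are illustrative):

```ini
[example-repo]
name=Example internal repository
baseurl=https://repo.example.com/el9/x86_64/
enabled=1
gpgcheck=1
# Fail the patching operation if this repository is unreachable,
# instead of silently skipping it.
skip_if_unavailable=False
```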

**Only specified repositories are used for patching**  
Specifying alternative repositories doesn't mean specifying *additional* repositories. You can choose to specify repositories other than those configured as defaults on a managed node. However, you must also specify the default repositories as part of the alternative patch source configuration if you want their updates to be applied.

For example, on Amazon Linux 2 managed nodes, the default repositories are `amzn2-core` and `amzn2extra-docker`. If you want to include the Extra Packages for Enterprise Linux (EPEL) repository in your patching operations, you must specify all three repositories as alternative repositories.
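Assuming a configured AWS CLI, a baseline like this could be sketched with the `create-patch-baseline` command and its `--sources` option (the baseline name is illustrative, and each `'...'` stands in for a complete YUM repository definition, which you must supply):

```shell
aws ssm create-patch-baseline \
    --name "AL2-Baseline-With-EPEL" \
    --operating-system "AMAZON_LINUX_2" \
    --approval-rules "..." \
    --sources \
        "Name=amzn2-core,Products=AmazonLinux2,Configuration='...'" \
        "Name=amzn2extra-docker,Products=AmazonLinux2,Configuration='...'" \
        "Name=epel,Products=AmazonLinux2,Configuration='...'"
```

Note that all three repositories appear in `--sources`: omitting the two defaults would exclude their updates from the patching operation.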

**Note**  
Running a custom patch baseline that specifies alternative patch repositories for a managed node doesn't make them the new default repositories on the operating system. After the patching operation is complete, the repositories previously configured as the defaults for the node's operating system remain the defaults.

**Patching behavior for YUM-based distributions depends on the updateinfo.xml manifest**  
When you specify alternative patch repositories for YUM-based distributions, such as Amazon Linux 2 or Red Hat Enterprise Linux, patching behavior depends on whether the repository includes an update manifest in the form of a complete and correctly formatted `updateinfo.xml` file. This file specifies the release date, classifications, and severities of the various packages. Any of the following will affect the patching behavior:
+ If you filter on **Classification** and **Severity**, but they aren't specified in `updateinfo.xml`, the package won't be included by the filter. This also means that packages without an `updateinfo.xml` file won't be included in patching.
+ If you filter on **ApprovalAfterDays**, but the package release date isn't in Unix Epoch format (or has no release date specified), the package won't be included by the filter.
+ There is an exception if you select the **Include non-security updates** check box in the **Create patch baseline** page. In this case, packages without an `updateinfo.xml` file (or that contains this file without properly formatted **Classification**, **Severity**, and **Date** values) *will* be included in the prefiltered list of patches. (They must still meet the other patch baseline rule requirements in order to be installed.)

## Sample uses for alternative patch source repositories


**Example 1 – Nonsecurity Updates for Ubuntu Server**  
You're already using Patch Manager to install security patches on a fleet of Ubuntu Server managed nodes using the AWS-provided predefined patch baseline `AWS-UbuntuDefaultPatchBaseline`. You can create a new patch baseline that is based on this default, but specify in the approval rules that you want nonsecurity related updates that are part of the default distribution to be installed as well. When this patch baseline is run against your nodes, patches for both security and nonsecurity issues are applied. You can also choose to approve nonsecurity patches in the patch exceptions you specify for a baseline.

**Example 2 - Personal Package Archives (PPA) for Ubuntu Server**  
Your Ubuntu Server managed nodes are running software that is distributed through a [Personal Package Archives (PPA) for Ubuntu](https://launchpad.net/ubuntu/+ppas). In this case, you create a patch baseline that specifies a PPA repository that you have configured on the managed node as the source repository for the patching operation. Then use Run Command to run the patch baseline document on the nodes.

**Example 3 – Internal Corporate Applications on supported Amazon Linux versions**  
You need to run some applications needed for industry regulatory compliance on your Amazon Linux managed nodes. You can configure a repository for these applications on the nodes, use YUM to initially install the applications, and then update or create a new patch baseline to include this new corporate repository. After this you can use Run Command to run the `AWS-RunPatchBaseline` document with the `Scan` option to see if the corporate package is listed among the installed packages and is up to date on the managed node. If it isn't up to date, you can run the document again using the `Install` option to update the applications. 

# How patches are installed


Patch Manager, a tool in AWS Systems Manager, uses the operating system's built-in package manager to install updates on managed nodes. For example, it uses the Windows Update API on Windows Server and `DNF` on Amazon Linux 2023. Patch Manager respects existing package manager and repository configurations on the nodes, including settings such as repository status, mirror URLs, GPG verification, and options like `skip_if_unavailable`.

Patch Manager doesn't install a new package that replaces an obsolete package that's currently installed. (Exceptions: the new package is a dependency of another package update being installed, or the new package has the same name as the obsolete package.) Instead, Patch Manager reports on and installs available updates to installed packages. This approach helps prevent unexpected changes to your system functionality that might occur when one package replaces another.

If you need to uninstall a package that has been made obsolete and install its replacement, you might need to use a custom script or use package manager commands outside of Patch Manager's standard operations.

Choose from the following tabs to learn how Patch Manager installs patches on an operating system.

------
#### [ Amazon Linux 2 and Amazon Linux 2023 ]

On Amazon Linux 2 and Amazon Linux 2023 managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL using the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. Apply [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing. 

1. Apply [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

   Approval rules, however, are also subject to whether the **Include non-security updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants it approval.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. The YUM update API (Amazon Linux 2) or the DNF update API (Amazon Linux 2023) is applied to approved patches as follows:
   + For predefined default patch baselines provided by AWS, only patches specified in `updateinfo.xml` are applied (security updates only). This is because the **Include nonsecurity updates** check box is not selected. The predefined baselines are equivalent to a custom baseline with the following:
     + The **Include nonsecurity updates** check box is not selected
     + A SEVERITY list of `[Critical, Important]`
     + A CLASSIFICATION list of `[Security, Bugfix]`

     For Amazon Linux 2, the equivalent yum command for this workflow is:

     ```
     sudo yum update-minimal --sec-severity=Critical,Important --bugfix -y
     ```

     For Amazon Linux 2023, the equivalent dnf command for this workflow is:

     ```
     sudo dnf upgrade-minimal --sec-severity=Critical --sec-severity=Important --bugfix -y
     ```

     If the **Include nonsecurity updates** check box is selected, patches in `updateinfo.xml` and those not in `updateinfo.xml` are all applied (security and nonsecurity updates).

     For Amazon Linux 2, if a baseline with the **Include nonsecurity updates** check box selected has a SEVERITY list of `[Critical, Important]` and a CLASSIFICATION list of `[Security, Bugfix]`, the equivalent yum command is:

     ```
     sudo yum update --security --sec-severity=Critical,Important --bugfix -y
     ```

     For Amazon Linux 2023, the equivalent dnf command is:

     ```
     sudo dnf upgrade --security --sec-severity=Critical --sec-severity=Important --bugfix -y
     ```
**Note**  
New packages that replace now-obsolete packages with different names are installed if you run these `yum` or `dnf` commands outside of Patch Manager. However, they are *not* installed by the equivalent Patch Manager operations.

**Additional patching details for Amazon Linux 2023**  
Support for severity level 'None'  
Amazon Linux 2023 also supports the patch severity level `None`, which is recognized by the DNF package manager.   
Support for severity level 'Medium'  
For Amazon Linux 2023, a patch severity level of `Medium` is equivalent to a severity of `Moderate` that might be defined in some external repositories. If you include `Medium` severity patches in the patch baseline, `Moderate` severity patches from external repositories are also installed on the instances.  
When you query for compliance data using the API action [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeInstancePatches.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeInstancePatches.html), filtering for the severity level `Medium` reports patches with severity levels of both `Medium` and `Moderate`.  
Transitive dependency handling for Amazon Linux 2023  
For Amazon Linux 2023, Patch Manager might install different versions of transitive dependencies than the equivalent `dnf` commands install. Transitive dependencies are packages that are automatically installed to satisfy the requirements of other packages (dependencies of dependencies).   
For example, `dnf upgrade-minimal --security` installs the *minimal* versions of transitive dependencies needed to resolve known security issues, while Patch Manager installs the *latest available versions* of the same transitive dependencies.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)
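As one example of supplying the `InstallOverrideList` parameter described in step 1, the document can be run through Run Command from the AWS CLI. The bucket name, object key, and instance ID below are placeholders:

```
aws ssm send-command \
    --document-name "AWS-RunPatchBaseline" \
    --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
    --parameters '{"Operation":["Install"],"InstallOverrideList":["https://s3.amazonaws.com/amzn-s3-demo-bucket/my-override-list.yaml"]}'
```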

**Note**  
A default configuration for a package manager on a Linux distribution might be set to skip an unreachable package repository without error. In such cases, the related patching operation proceeds without installing updates from the repository and concludes with success. To enforce repository updates, add `skip_if_unavailable=False` to the repository configuration.  
For more information about the `skip_if_unavailable` option, see [Connectivity to the patch source](patch-manager-prerequisites.md#source-connectivity).
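For example, a repository definition under `/etc/yum.repos.d/` that must never be silently skipped might include the option as follows. The repository name and URL are placeholders:

```
[example-repo]
name=Example Repository
baseurl=https://example.com/repo/
enabled=1
skip_if_unavailable=False
```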

------
#### [ CentOS Stream ]

On CentOS Stream managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL using the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing. 

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants it approval.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. The DNF update on CentOS Stream is applied to approved patches.
**Note**  
For CentOS Stream, Patch Manager might install different versions of transitive dependencies than the equivalent `dnf` commands install. Transitive dependencies are packages that are automatically installed to satisfy the requirements of other packages (dependencies of dependencies).   
For example, `dnf upgrade-minimal --security` installs the *minimal* versions of transitive dependencies needed to resolve known security issues, while Patch Manager installs the *latest available versions* of the same transitive dependencies.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)

------
#### [ Debian Server ]

On Debian Server instances, the patch installation workflow is as follows:

1. If a list of patches is specified using an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL using the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. If an update is available for `python3-apt` (a Python library interface to `libapt`), it is upgraded to the latest version. (This nonsecurity package is upgraded even if you did not select the **Include nonsecurity updates** option.)

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing. 

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved. 
**Note**  
Because it isn't possible to reliably determine the release dates of update packages for Debian Server, the auto-approval options aren't supported for this operating system.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.
**Note**  
For Debian Server, patch candidate versions are limited to patches included in `debian-security`.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants it approval.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. The APT library is used to upgrade packages.
**Note**  
Patch Manager does not support using the APT `Pin-Priority` option to assign priorities to packages. Patch Manager aggregates available updates from all enabled repositories and selects the most recent update that matches the baseline for each installed package.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)
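The "most recent update" selection described in step 7 amounts to a version sort across the candidates aggregated from all enabled repositories. As a minimal illustration only (not Patch Manager's actual implementation, and ignoring Debian epochs and revision suffixes), GNU `sort -V` picks the newest of several candidate version strings:

```shell
# Given several candidate versions of the same package aggregated from
# multiple repositories, keep only the newest one.
printf '%s\n' 3.0.9 3.0.11 3.0.2 | sort -V | tail -n1
# → 3.0.11
```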

------
#### [ macOS ]

On macOS managed nodes, the patch installation workflow is as follows:

1. The `/Library/Receipts/InstallHistory.plist` property list is a record of software that has been installed and upgraded using the `softwareupdate` and `installer` package managers. Patch Manager runs CLI commands with the `pkgutil` command line tool (for `installer`) and the `softwareupdate` package manager to parse this list. 

   For `installer`, the response to the CLI commands includes `package name`, `version`, `volume`, `location`, and `install-time` details, but only the `package name` and `version` are used by Patch Manager.

   For `softwareupdate`, the response to the CLI commands includes the package name (`display name`), `version`, and `date`, but only the package name and version are used by Patch Manager.

   For Brew and Brew Cask, because Homebrew doesn't support running its commands as the root user, Patch Manager queries for and runs Homebrew commands as either the owner of the Homebrew directory or as a valid user belonging to the Homebrew directory's owner group. The commands are similar to those for `softwareupdate` and `installer`. They are run through a Python subprocess to gather package data, and the output is parsed to identify package names and versions.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing. 

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants it approval.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. Patch Manager invokes the appropriate package manager CLI on the managed node to process the approved patches as follows:
**Note**  
`installer` lacks the functionality to check for and install updates. Therefore, for `installer`, Patch Manager only reports which packages are installed. As a result, `installer` packages are never reported as `Missing`.
   + For predefined default patch baselines provided by AWS, and for custom patch baselines where the **Include non-security updates** check box is *not* selected, only security updates are applied.
   + For custom patch baselines where the **Include non-security updates** check box *is* selected, both security and nonsecurity updates are applied.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)
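The inventory gathering described in step 1 can be approximated manually on a macOS machine using the same tools whose output Patch Manager parses. This is only an illustrative sketch, not the exact commands the service runs, and the package ID is a hypothetical placeholder:

```
# List receipt package IDs known to the installer package manager,
# then show the name and version details for one of them.
pkgutil --pkgs
pkgutil --pkg-info com.example.pkg

# List updates that softwareupdate knows about.
softwareupdate --list
```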

------
#### [ Oracle Linux ]

On Oracle Linux managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL using the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing. 

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants it approval.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. On version 7 managed nodes, the YUM update API is applied to approved patches as follows:
   + For predefined default patch baselines provided by AWS, and for custom patch baselines where the **Include non-security updates** check box is *not* selected, only patches specified in `updateinfo.xml` are applied (security updates only).

     The equivalent yum command for this workflow is:

     ```
     sudo yum update-minimal --sec-severity=Important,Moderate --bugfix -y
     ```
   + For custom patch baselines where the **Include non-security updates** check box *is* selected, both patches in `updateinfo.xml` and those not in `updateinfo.xml` are applied (security and nonsecurity updates).

     The equivalent yum command for this workflow is:

     ```
     sudo yum update --security --bugfix -y
     ```

   On version 8 and 9 managed nodes, the DNF update API is applied to approved patches as follows:
   + For predefined default patch baselines provided by AWS, and for custom patch baselines where the **Include non-security updates** check box is *not* selected, only patches specified in `updateinfo.xml` are applied (security updates only).

     The equivalent dnf command for this workflow is:

     ```
     sudo dnf upgrade-minimal --security --sec-severity=Moderate --sec-severity=Important
     ```
**Note**  
For Oracle Linux, Patch Manager might install different versions of transitive dependencies than the equivalent `dnf` commands install. Transitive dependencies are packages that are automatically installed to satisfy the requirements of other packages (dependencies of dependencies).   
For example, `dnf upgrade-minimal --security` installs the *minimal* versions of transitive dependencies needed to resolve known security issues, while Patch Manager installs the *latest available versions* of the same transitive dependencies.
   + For custom patch baselines where the **Include non-security updates** check box *is* selected, both patches in `updateinfo.xml` and those not in `updateinfo.xml` are applied (security and nonsecurity updates).

     The equivalent dnf command for this workflow is:

     ```
     sudo dnf upgrade --security --bugfix
     ```
**Note**  
New packages that replace now-obsolete packages with different names are installed if you run these `yum` or `dnf` commands outside of Patch Manager. However, they are *not* installed by the equivalent Patch Manager operations.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)

**Note**  
A default configuration for a package manager on a Linux distribution might be set to skip an unreachable package repository without error. In such cases, the related patching operation proceeds without installing updates from the repository and concludes with success. To enforce repository updates, add `skip_if_unavailable=False` to the repository configuration.  
For more information about the `skip_if_unavailable` option, see [Connectivity to the patch source](patch-manager-prerequisites.md#source-connectivity).

------
#### [ AlmaLinux, RHEL, and Rocky Linux  ]

On AlmaLinux, Red Hat Enterprise Linux, and Rocky Linux managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL using the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing. 

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants it approval.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. The YUM update API (on RHEL 7) or the DNF update API (on AlmaLinux 8 and 9, RHEL 8, 9, and 10, and Rocky Linux 8 and 9) is applied to approved patches according to the following rules:

**Scenario 1: Non-security updates excluded**
   + **Applies to**: Predefined default patch baselines provided by AWS and custom patch baselines.
   + **Include non-security updates** check box: *Not* selected.
   + **Patches applied**: Patches specified in `updateinfo.xml` (security updates only) are applied *only* if they both match the patch baseline configuration and are found in the configured repos.

     In some cases, a patch specified in `updateinfo.xml` might no longer be available in a configured repo. Configured repos usually have only the latest version of a patch, which is a cumulative roll-up of all prior updates. If that latest version doesn't match the patch baseline rules, it's omitted from the patching operation.
   + **Commands**: For RHEL 7, the equivalent yum command for this workflow is: 

     ```
     sudo yum update-minimal --sec-severity=Critical,Important --bugfix -y
     ```

     For AlmaLinux 8 and 9, RHEL 8, 9, and 10, and Rocky Linux 8 and 9, the equivalent dnf commands for this workflow are:

     ```
     sudo dnf update-minimal --sec-severity=Critical --bugfix -y ; \
     sudo dnf update-minimal --sec-severity=Important --bugfix -y
     ```
**Note**  
For AlmaLinux, RHEL, and Rocky Linux, Patch Manager might install different versions of transitive dependencies than the equivalent `dnf` commands install. Transitive dependencies are packages that are automatically installed to satisfy the requirements of other packages (dependencies of dependencies).   
For example, `dnf upgrade-minimal --security` installs the *minimal* versions of transitive dependencies needed to resolve known security issues, while Patch Manager installs the *latest available versions* of the same transitive dependencies.

**Scenario 2: Non-security updates included**
   + **Applies to**: Custom patch baselines.
   + **Include non-security updates** check box: Selected.
   + **Patches applied**: Patches in `updateinfo.xml` *and* those not in `updateinfo.xml` are applied (security and nonsecurity updates).
   + **Commands**: For RHEL 7, the equivalent yum command for this workflow is:

     ```
     sudo yum update --security --bugfix -y
     ```

     For AlmaLinux 8 and 9, RHEL 8, 9, and 10, and Rocky Linux 8 and 9, the equivalent dnf command for this workflow is:

     ```
     sudo dnf update --security --bugfix -y
     ```
**Note**  
New packages that replace now-obsolete packages with different names are installed if you run these `yum` or `dnf` commands outside of Patch Manager. However, they are *not* installed by the equivalent Patch Manager operations.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)

**Note**  
A default configuration for a package manager on a Linux distribution might be set to skip an unreachable package repository without error. In such cases, the related patching operation proceeds without installing updates from the repository and concludes with success. To enforce repository updates, add `skip_if_unavailable=False` to the repository configuration.  
For more information about the `skip_if_unavailable` option, see [Connectivity to the patch source](patch-manager-prerequisites.md#source-connectivity).
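As a minimal sketch, the `skip_if_unavailable` option is set per repository in its `.repo` definition. The repository ID, name, and URL below are hypothetical placeholders; on a real node the file lives under `/etc/yum.repos.d/`:

```shell
# Illustrative .repo definition written to a temporary path for inspection;
# the repo ID and baseurl are hypothetical placeholders.
cat > /tmp/example.repo <<'EOF'
[example-repo]
name=Example repository
baseurl=https://repo.example.com/packages/
enabled=1
# Fail the patching operation instead of silently skipping this repo
skip_if_unavailable=False
EOF
```

With this setting in the real repository definition, a subsequent patching operation fails, rather than reporting success, if the repository can't be reached.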

------
#### [ SLES ]

On SUSE Linux Enterprise Server (SLES) managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL using the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing. 

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants them approval.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. If multiple versions of a patch are approved, the latest version is applied.

1. The Zypper update API is applied to approved patches.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)

------
#### [ Ubuntu Server ]

On Ubuntu Server managed nodes, the patch installation workflow is as follows:

1. If a list of patches is specified using an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL using the `InstallOverrideList` parameter for the `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` documents, the listed patches are installed and steps 2-7 are skipped.

1. If an update is available for `python3-apt` (a Python library interface to `libapt`), it is upgraded to the latest version. (This nonsecurity package is upgraded even if you did not select the **Include nonsecurity updates** option.)

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) as specified in the patch baseline, keeping only the qualified packages for further processing. 

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) as specified in the patch baseline. Each approval rule can define a package as approved. 
**Note**  
Because it's not possible to reliably determine the release dates of update packages for Ubuntu Server, the auto-approval options aren't supported for this operating system.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. 

   If nonsecurity updates are included, patches from other repositories are considered as well.

**Note**  
For each version of Ubuntu Server, patch candidate versions are limited to patches that are part of the associated repo for that version, as follows:  
Ubuntu Server 16.04 LTS: `xenial-security`
Ubuntu Server 18.04 LTS: `bionic-security`
Ubuntu Server 20.04 LTS: `focal-security`
Ubuntu Server 22.04 LTS: `jammy-security`
Ubuntu Server 24.04 LTS: `noble-security`
Ubuntu Server 25.04: `plucky-security`

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) as specified in the patch baseline. The approved patches are approved for update even if they're discarded by [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters) or if no approval rule specified in [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules) grants them approval.

1. Apply [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) as specified in the patch baseline. The rejected patches are removed from the list of approved patches and won't be applied.

1. The APT library is used to upgrade packages.
**Note**  
Patch Manager does not support using the APT `Pin-Priority` option to assign priorities to packages. Patch Manager aggregates available updates from all enabled repositories and selects the most recent update that matches the baseline for each installed package.

1. The managed node is rebooted if any updates were installed. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)
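To see how the implicit security-repo rule plays out on a node, you can inspect where a package's candidate version comes from. The following sketch parses a saved, hypothetical `apt-cache policy` snippet for an Ubuntu Server 22.04 node; the package name and version numbers are illustrative:

```shell
# Hypothetical "apt-cache policy openssl" output saved to a file; the final
# grep checks whether the candidate version is served from the
# jammy-security pocket.
cat > /tmp/policy.txt <<'EOF'
openssl:
  Installed: 3.0.2-0ubuntu1.10
  Candidate: 3.0.2-0ubuntu1.12
  Version table:
     3.0.2-0ubuntu1.12 500
        500 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages
EOF
grep -c 'jammy-security' /tmp/policy.txt
```

A nonzero count indicates that the candidate version comes from the security pocket, so it would qualify even when nonsecurity updates are excluded.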

------
#### [ Windows Server ]

When a patching operation is performed on a Windows Server managed node, the node requests a snapshot of the appropriate patch baseline from Systems Manager. This snapshot contains the list of all updates available in the patch baseline that were approved for deployment. This list of updates is sent to the Windows Update API, which determines which of the updates are applicable to the managed node and installs them as needed. Windows allows only the latest available version of a KB to be installed. Patch Manager installs the latest version of a KB when it, or any previous version of the KB, matches the applied patch baseline.

If any updates are installed, the managed node is rebooted afterwards, as many times as necessary to complete all necessary patching. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)

The summary of the patching operation can be found in the output of the Run Command request. Additional logs can be found on the managed node in the `%PROGRAMDATA%\Amazon\PatchBaselineOperations\Logs` folder.

Because the Windows Update API is used to download and install KBs, all Group Policy settings for Windows Update are respected. No Group Policy settings are required to use Patch Manager, but any settings that you have defined will be applied, such as to direct managed nodes to a Windows Server Update Services (WSUS) server.

**Note**  
By default, Windows downloads all KBs from Microsoft's Windows Update site because Patch Manager uses the Windows Update API to drive the download and installation of KBs. As a result, the managed node must be able to reach the Microsoft Windows Update site or patching will fail. Alternatively, you can configure a WSUS server to serve as a KB repository and configure your managed nodes to target that WSUS server using Group Policies.  
Custom patch baselines for Windows Server might reference KB IDs, such as when an **Approved patches** list or **Rejected patches** list is included in the baseline configuration. Only updates that are assigned a KB ID in Microsoft Windows Update or a WSUS server are installed by Patch Manager. Updates that lack a KB ID are not included in patching operations.   
For information about creating custom patch baselines, see the following topics:  
[Creating a custom patch baseline for Windows Server](patch-manager-create-a-patch-baseline-for-windows.md)
[Create a patch baseline (CLI)](patch-manager-create-a-patch-baseline-for-windows.md)
[Package name formats for Windows operating systems](patch-manager-approved-rejected-package-name-formats.md#patch-manager-approved-rejected-package-name-formats-windows)

------

# How patch baseline rules work on Linux-based systems


The rules in a patch baseline for Linux distributions operate differently based on the distribution type. Unlike patch updates on Windows Server managed nodes, rules are evaluated on each node to take the configured repos on the instance into consideration. Patch Manager, a tool in AWS Systems Manager, uses the native package manager to drive the installation of patches approved by the patch baseline.

For Linux-based operating system types that report a severity level for patches, Patch Manager uses the severity level reported by the software publisher for the update notice or individual patch. Patch Manager doesn't derive severity levels from third-party sources, such as the [Common Vulnerability Scoring System](https://www.first.org/cvss/) (CVSS), or from metrics released by the [National Vulnerability Database](https://nvd.nist.gov/vuln) (NVD).

**Topics**
+ [How patch baseline rules work on Amazon Linux 2 and Amazon Linux 2023](#linux-rules-amazon-linux)
+ [How patch baseline rules work on CentOS Stream](#linux-rules-centos)
+ [How patch baseline rules work on Debian Server](#linux-rules-debian)
+ [How patch baseline rules work on macOS](#linux-rules-macos)
+ [How patch baseline rules work on Oracle Linux](#linux-rules-oracle)
+ [How patch baseline rules work on AlmaLinux, RHEL, and Rocky Linux](#linux-rules-rhel)
+ [How patch baseline rules work on SUSE Linux Enterprise Server](#linux-rules-sles)
+ [How patch baseline rules work on Ubuntu Server](#linux-rules-ubuntu)

## How patch baseline rules work on Amazon Linux 2 and Amazon Linux 2023


**Note**  
Amazon Linux 2023 (AL2023) uses versioned repositories that can be locked to a specific version through one or more system settings. For all patching operations on AL2023 EC2 instances, Patch Manager uses the latest repository versions, independent of the system configuration. For more information, see [Deterministic upgrades through versioned repositories](https://docs.aws.amazon.com/linux/al2023/ug/deterministic-upgrades.html) in the *Amazon Linux 2023 User Guide*.

On Amazon Linux 2 and Amazon Linux 2023, the patch selection process is as follows:

1. On the managed node, the YUM library (Amazon Linux 2) or the DNF library (Amazon Linux 2023) accesses the `updateinfo.xml` file for each configured repo. 

   If no `updateinfo.xml` file is found, whether patches are installed depends on the settings for **Include non-security updates** and **Auto-approval**. For example, if non-security updates are permitted, they're installed when the auto-approval time arrives.

1. Each update notice in `updateinfo.xml` includes several attributes that denote the properties of the packages in the notice, as described in the following table.  
**Update notice attributes**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

1. The product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the Product key attribute in the patch baseline's [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type.

1. Packages are selected for the update according to the following guidelines.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).
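As a sketch of what the YUM or DNF library reads, the following hypothetical `updateinfo.xml` fragment carries the classification (`type`) and severity attributes that a baseline's approval rules match against. The advisory ID, title, and values are illustrative, not taken from a real repo:

```shell
# Minimal, hypothetical update notice; real files are provided per repo and
# carry additional attributes.
cat > /tmp/updateinfo-sample.xml <<'EOF'
<updates>
  <update type="security">
    <id>ALAS2-2024-0001</id>
    <severity>Important</severity>
    <title>Sample security advisory</title>
  </update>
</updates>
EOF

# Pull out each notice's severity, the attribute an approval rule filters on
grep -o '<severity>[^<]*</severity>' /tmp/updateinfo-sample.xml | sed 's/<[^>]*>//g'
```

An approval rule with a **Severity** filter of `Important` would match this notice; a rule restricted to `Critical` would not.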

## How patch baseline rules work on CentOS Stream


The CentOS Stream default repositories do not include an `updateinfo.xml` file. However, custom repositories that you create or use might include this file. In this topic, references to `updateinfo.xml` apply only to these custom repositories.

On CentOS Stream, the patch selection process is as follows:

1. On the managed node, the DNF library accesses the `updateinfo.xml` file, if it exists in a custom repository, for each configured repo.

   If no `updateinfo.xml` file is found, which is always the case for the default repos, whether patches are installed depends on the settings for **Include non-security updates** and **Auto-approval**. For example, if non-security updates are permitted, they're installed when the auto-approval time arrives.

1. If `updateinfo.xml` is present, each update notice in the file includes several attributes that denote the properties of the packages in the notice, as described in the following table.  
**Update notice attributes**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

1. In all cases, the product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the Product key attribute in the patch baseline's [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type.

1. Packages are selected for the update according to the following guidelines.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## How patch baseline rules work on Debian Server


On Debian Server, the patch baseline service offers filtering on the *Priority* and *Section* fields. These fields are typically present for all Debian Server packages. To determine whether a patch is selected by the patch baseline, Patch Manager does the following:

1. On Debian Server systems, the equivalent of `sudo apt-get update` is run to refresh the list of available packages. Patch Manager doesn't configure any repos itself; the data is pulled from the repos already configured in the system's `sources` list.

1. If an update is available for `python3-apt` (a Python library interface to `libapt`), it is upgraded to the latest version. (This nonsecurity package is upgraded even if you did not select the **Include nonsecurity updates** option.)

1. Next, the [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters), [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules), [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches) and [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) lists are applied.
**Note**  
Because it isn't possible to reliably determine the release dates of update packages for Debian Server, the auto-approval options aren't supported for this operating system.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. For Debian Server, patch candidate versions are limited to patches included in the following repos:
   + Debian Server 11: `debian-security bullseye`
   + Debian Server 12: `debian-security bookworm`

   If nonsecurity updates are included, patches from other repositories are considered as well.

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

To view the contents of the *Priority* and *Section* fields, run the following `aptitude` command:

**Note**  
You might need to first install Aptitude on Debian Server systems.

```
aptitude search -F '%p %P %s %t %V#' '~U'
```

In the response to this command, all upgradable packages are reported in this format: 

```
name, priority, section, archive, candidate version
```
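Building on that output format, the following sketch filters saved `aptitude` output down to packages whose archive field is the security repo, mirroring the implicit rule applied when nonsecurity updates are excluded. The package names, versions, and archive label are illustrative:

```shell
# Fields per the format string above: name priority section archive version
cat > /tmp/upgradable.txt <<'EOF'
openssl important libs bookworm-security 3.0.11-1~deb12u2#
vim optional editors bookworm 2:9.0.1378-2#
EOF

# Keep only packages whose archive (field 4) is a security repo
awk '$4 ~ /security/ { print $1 }' /tmp/upgradable.txt
```

In this sample, only `openssl` qualifies, because its candidate version is served from the security archive.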

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## How patch baseline rules work on macOS


On macOS, the patch selection process is as follows:

1. On the managed node, Patch Manager accesses the parsed contents of the `InstallHistory.plist` file and identifies package names and versions. 

   For details about the parsing process, see the **macOS** tab in [How patches are installed](patch-manager-installing-patches.md).

1. The product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the Product key attribute in the patch baseline's [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type.

1. Packages are selected for the update according to the following guidelines.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).
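As a rough sketch of the parsing step, the following hypothetical fragment in the style of `InstallHistory.plist` shows the name and version pair that Patch Manager matches against the baseline; the entry, key names shown, and extraction commands are illustrative, and real entries carry additional fields:

```shell
# Hypothetical install-history entry; real files live under
# /Library/Receipts/ on the node and include more keys per entry.
cat > /tmp/install-history-sample.plist <<'EOF'
<plist version="1.0">
<array>
  <dict>
    <key>displayName</key><string>Safari</string>
    <key>displayVersion</key><string>17.1</string>
  </dict>
</array>
</plist>
EOF

# Pull out the package name and version pair for baseline matching
grep -o '<string>[^<]*</string>' /tmp/install-history-sample.plist \
  | sed 's/<[^>]*>//g' | paste -sd' ' -
```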

## How patch baseline rules work on Oracle Linux


On Oracle Linux, the patch selection process is as follows:

1. On the managed node, the YUM library accesses the `updateinfo.xml` file for each configured repo.
**Note**  
The `updateinfo.xml` file might not be available if the repo isn't one managed by Oracle. If no `updateinfo.xml` file is found, whether patches are installed depends on the settings for **Include non-security updates** and **Auto-approval**. For example, if non-security updates are permitted, they're installed when the auto-approval time arrives.

1. Each update notice in `updateinfo.xml` includes several attributes that denote the properties of the packages in the notice, as described in the following table.  
**Update notice attributes**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

1. The product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the Product key attribute in the patch baseline's [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type.

1. Packages are selected for the update according to the following guidelines.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## How patch baseline rules work on AlmaLinux, RHEL, and Rocky Linux


On AlmaLinux, Red Hat Enterprise Linux (RHEL), and Rocky Linux, the patch selection process is as follows:

1. On the managed node, the YUM library (RHEL 7) or the DNF library (AlmaLinux 8 and 9, RHEL 8, 9, and 10, and Rocky Linux 8 and 9) accesses the `updateinfo.xml` file for each configured repo.
**Note**  
The `updateinfo.xml` file might not be available if the repo isn't one managed by Red Hat. If no `updateinfo.xml` file is found, whether patches are installed depends on the settings for **Include non-security updates** and **Auto-approval**. For example, if non-security updates are permitted, they're installed when the auto-approval time arrives.

1. Each update notice in `updateinfo.xml` includes several attributes that denote the properties of the packages in the notice, as described in the following table.  
**Update notice attributes**    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

1. The product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the Product key attribute in the patch baseline's [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type.

1. Packages are selected for the update according to the following guidelines.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-linux-rules.html)

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

## How patch baseline rules work on SUSE Linux Enterprise Server


On SLES, each patch includes the following attributes that denote the properties of the packages in the patch:
+ **Category**: Corresponds to the value of the **Classification** key attribute in the patch baseline's [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type. Denotes the type of patch included in the update notice.

  You can view the list of supported values by using the AWS CLI command **[https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-properties.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-properties.html)** or the API operation **[https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchProperties.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchProperties.html)**. You can also view the list in the **Approval rules** area of the **Create patch baseline** page or **Edit patch baseline** page in the Systems Manager console.
+ **Severity**: Corresponds to the value of the **Severity** key attribute in the patch baseline's [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type. Denotes the severity of the patches.

  You can view the list of supported values by using the AWS CLI command **[https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-properties.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-properties.html)** or the API operation **[https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchProperties.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchProperties.html)**. You can also view the list in the **Approval rules** area of the **Create patch baseline** page or **Edit patch baseline** page in the Systems Manager console.

The product of the managed node is determined by SSM Agent. This attribute corresponds to the value of the **Product** key attribute in the patch baseline's [PatchFilter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html) data type. 

For each patch, the patch baseline is used as a filter, allowing only the qualified packages to be included in the update. If multiple packages are applicable after applying the patch baseline definition, the latest version is used. 
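The filter-then-take-latest behavior can be sketched as follows. This is an illustration only, not Patch Manager's implementation; the package names and versions are hypothetical.

```python
# Illustrative sketch: treat the patch baseline as a filter over candidate
# packages, then keep only the latest qualifying version of each package.
def select_latest_qualifying(candidates, is_approved):
    """candidates: iterable of (name, version) pairs; versions are comparable tuples."""
    qualifying = {}
    for name, version in candidates:
        if not is_approved(name, version):
            continue  # excluded by the baseline definition
        if name not in qualifying or version > qualifying[name]:
            qualifying[name] = version  # multiple packages applicable: latest wins
    return qualifying

# Hypothetical package data; this example baseline rejects "examplepkg".
candidates = [("openssl", (1, 1, 1)), ("openssl", (3, 0, 2)), ("examplepkg", (0, 9))]
approved = lambda name, version: name != "examplepkg"
print(select_latest_qualifying(candidates, approved))  # {'openssl': (3, 0, 2)}
```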

For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

## How patch baseline rules work on Ubuntu Server


On Ubuntu Server, the patch baseline service offers filtering on the *Priority* and *Section* fields. These fields are typically present for all Ubuntu Server packages. To determine whether a patch is selected by the patch baseline, Patch Manager does the following:

1. On Ubuntu Server systems, the equivalent of `sudo apt-get update` is run to refresh the list of available packages. Patch Manager doesn't configure repositories itself; the package data is pulled from the repositories configured in the node's `sources` list.

1. If an update is available for `python3-apt` (a Python library interface to `libapt`), it is upgraded to the latest version. (This nonsecurity package is upgraded even if you did not select the **Include nonsecurity updates** option.)

1. Next, the [GlobalFilters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-GlobalFilters), [ApprovalRules](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovalRules), [ApprovedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-ApprovedPatches), and [RejectedPatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#EC2-CreatePatchBaseline-request-RejectedPatches) lists are applied.
**Note**  
Because it's not possible to reliably determine the release dates of update packages for Ubuntu Server, the auto-approval options aren't supported for this operating system.

   Approval rules, however, are also subject to whether the **Include nonsecurity updates** check box was selected when creating or last updating a patch baseline.

   If nonsecurity updates are excluded, an implicit rule is applied in order to select only packages with upgrades in security repos. For each package, the candidate version of the package (which is typically the latest version) must be part of a security repo. In this case, for Ubuntu Server, patch candidate versions are limited to patches included in the following repos:
   + Ubuntu Server 16.04 LTS: `xenial-security`
   + Ubuntu Server 18.04 LTS: `bionic-security`
   + Ubuntu Server 20.04 LTS: `focal-security`
   + Ubuntu Server 22.04 LTS: `jammy-security`
   + Ubuntu Server 24.04 LTS: `noble-security`
   + Ubuntu Server 25.04: `plucky-security`

   If nonsecurity updates are included, patches from other repositories are considered as well.

   For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
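The implicit security-repo rule described above can be sketched as a simple check. This is a hypothetical illustration, not Patch Manager's code; the release-to-repo mapping comes from the list above.

```python
# Sketch of the implicit rule: when nonsecurity updates are excluded, a
# package's candidate version must come from the release's security repo.
SECURITY_REPOS = {
    "16.04": "xenial-security",
    "18.04": "bionic-security",
    "20.04": "focal-security",
    "22.04": "jammy-security",
    "24.04": "noble-security",
    "25.04": "plucky-security",
}

def candidate_qualifies(release, candidate_archive, include_nonsecurity):
    if include_nonsecurity:
        return True  # patches from other repositories are considered as well
    return SECURITY_REPOS.get(release) == candidate_archive

print(candidate_qualifies("22.04", "jammy-security", False))  # True
print(candidate_qualifies("22.04", "jammy-updates", False))   # False
```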

To view the contents of the *Priority* and *Section* fields, run the following `aptitude` command: 

**Note**  
You might need to first install Aptitude on Ubuntu Server 16 systems.

```
aptitude search -F '%p %P %s %t %V#' '~U'
```

In the response to this command, all upgradable packages are reported in this format: 

```
name, priority, section, archive, candidate version
```
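The fields in each line can be split apart with straightforward parsing. This sketch assumes the whitespace-separated field layout shown above; the sample line is hypothetical, shaped like the `-F '%p %P %s %t %V#'` output.

```python
# Parse one hypothetical line of the aptitude output shown above into the
# five fields: name, priority, section, archive, candidate version.
sample = "openssl optional libs jammy-security 3.0.2-0ubuntu1.6#"
name, priority, section, archive, candidate = sample.rstrip("#").split()
print(name, section, archive)  # openssl libs jammy-security
```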

For information about patch compliance status values, see [Patch compliance state values](patch-manager-compliance-states.md).

# Patching operation differences between Linux and Windows Server


This topic describes important differences between Linux and Windows Server patching in Patch Manager, a tool in AWS Systems Manager.

**Note**  
To patch Linux managed nodes, your nodes must be running SSM Agent version 2.0.834.0 or later.  
An updated version of SSM Agent is released whenever new tools are added to Systems Manager or updates are made to existing tools. Failing to use the latest version of the agent can prevent your managed node from using various Systems Manager tools and features. For that reason, we recommend that you automate the process of keeping SSM Agent up to date on your machines. For information, see [Automating updates to SSM Agent](ssm-agent-automatic-updates.md). Subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub to get notifications about SSM Agent updates.

## Difference 1: Patch evaluation


Patch Manager uses different processes on Windows managed nodes and Linux managed nodes in order to evaluate which patches should be present. 

**Linux**  
For Linux patching, Systems Manager evaluates patch baseline rules and the list of approved and rejected patches on *each* managed node. Systems Manager must evaluate patching on each node because the service retrieves the list of known patches and updates from the repositories that are configured on the managed node.

**Windows**  
For Windows patching, Systems Manager evaluates patch baseline rules and the list of approved and rejected patches *directly in the service*. It can do this because Windows patches are pulled from a single repository (Windows Update).

## Difference 2: `Not Applicable` patches


Due to the large number of available packages for Linux operating systems, Systems Manager doesn't report details about patches in the *Not Applicable* state. A `Not Applicable` patch is, for example, a patch for Apache software when the instance doesn't have Apache installed. Systems Manager does report the number of `Not Applicable` patches in the summary, but if you call the [DescribeInstancePatches](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeInstancePatches.html) API for a managed node, the returned data doesn't include patches with a state of `Not Applicable`. This behavior is different from Windows.
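The reporting difference can be illustrated with mocked-up data. This is hypothetical; the real `DescribeInstancePatches` response contains more fields.

```python
# On Linux, the summary counts Not Applicable patches, but the per-patch
# detail returned for the node omits them.
patches = [
    {"Title": "kernel-update", "State": "Installed"},
    {"Title": "httpd-fix", "State": "Not Applicable"},  # Apache isn't installed
]
not_applicable_count = sum(p["State"] == "Not Applicable" for p in patches)
reported_detail = [p for p in patches if p["State"] != "Not Applicable"]
print(not_applicable_count, [p["Title"] for p in reported_detail])  # 1 ['kernel-update']
```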

## Difference 3: SSM document support


The `AWS-ApplyPatchBaseline` Systems Manager document (SSM document) doesn't support Linux managed nodes. For applying patch baselines to Linux, macOS, and Windows Server managed nodes, the recommended SSM document is `AWS-RunPatchBaseline`. For more information, see [SSM Command documents for patching managed nodes](patch-manager-ssm-documents.md) and [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).

## Difference 4: Application patches


The primary focus of Patch Manager is applying patches to operating systems. However, you can also use Patch Manager to apply patches to some applications on your managed nodes.

**Linux**  
On Linux operating systems, Patch Manager uses the configured repositories for updates, and doesn't differentiate between operating systems and application patches. You can use Patch Manager to define which repositories to fetch updates from. For more information, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md).

**Windows**  
On Windows Server managed nodes, you can apply approval rules, as well as *Approved* and *Rejected* patch exceptions, for applications released by Microsoft, such as Microsoft Word 2016 and Microsoft Exchange Server 2016. For more information, see [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).

## Difference 5: Rejected patch list options in custom patch baselines


When you create a custom patch baseline, you can specify one or more patches for a **Rejected patches** list. For Linux managed nodes, you can also choose to allow rejected patches to be installed if they're dependencies of another patch allowed by the baseline.

Windows Server, however, doesn't support the concept of patch dependencies. You can add a patch to the **Rejected patches** list in a custom baseline for Windows Server, but the result depends on (1) whether or not the rejected patch is already installed on a managed node, and (2) which option you choose for **Rejected patches action**.

Refer to the following table for details about rejected patch options on Windows Server.


| Installation status | Option: "Allow as dependency" | Option: "Block" | 
| --- | --- | --- | 
| Patch is already installed | Reported status: INSTALLED_OTHER | Reported status: INSTALLED_REJECTED | 
| Patch is not already installed | Patch skipped | Patch skipped | 

Each patch for Windows Server that Microsoft releases typically contains all the information needed for the installation to succeed. Occasionally, however, a prerequisite package might be required, which you must install manually. Patch Manager doesn't report information about these prerequisites. For related information, see [Windows Update issues troubleshooting](https://learn.microsoft.com/en-us/troubleshoot/windows-client/installing-updates-features-roles/windows-update-issues-troubleshooting) on the Microsoft website.

# SSM Command documents for patching managed nodes


This topic describes the nine Systems Manager documents (SSM documents) available to help you keep your managed nodes patched with the latest security-related updates. 

We recommend using just five of these documents in your patching operations. Together, these five SSM documents provide a full range of patching options using AWS Systems Manager. Two of them were released later than the four legacy SSM documents they replace, and they expand or consolidate the functionality of those legacy documents.

**Recommended SSM documents for patching**  
We recommend using the following five SSM documents in your patching operations.
+ `AWS-ConfigureWindowsUpdate`
+ `AWS-InstallWindowsUpdates`
+ `AWS-RunPatchBaseline`
+ `AWS-RunPatchBaselineAssociation`
+ `AWS-RunPatchBaselineWithHooks`

**Legacy SSM documents for patching**  
The following four legacy SSM documents remain available in some AWS Regions but are no longer updated or supported. They support only IPv4 and aren't guaranteed to work in IPv6 environments or in all scenarios, and they might lose support entirely in the future. We recommend that you don't use these documents for patching operations.
+ `AWS-ApplyPatchBaseline`
+ `AWS-FindWindowsUpdates`
+ `AWS-InstallMissingWindowsUpdates`
+ `AWS-InstallSpecificWindowsUpdates`

For steps to set up patching operations in an environment that supports only IPv6, see [Tutorial: Patching a server in an IPv6 only environment](patch-manager-server-patching-iPv6-tutorial.md).

Refer to the following sections for more information about using these SSM documents in your patching operations.

**Topics**
+ [SSM documents recommended for patching managed nodes](#patch-manager-ssm-documents-recommended)
+ [Legacy SSM documents for patching managed nodes](#patch-manager-ssm-documents-legacy)
+ [Known limitations of the SSM documents for patching managed nodes](#patch-manager-ssm-documents-known-limitations)
+ [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md)
+ [SSM Command document for patching: `AWS-RunPatchBaselineAssociation`](patch-manager-aws-runpatchbaselineassociation.md)
+ [SSM Command document for patching: `AWS-RunPatchBaselineWithHooks`](patch-manager-aws-runpatchbaselinewithhooks.md)
+ [Sample scenario for using the InstallOverrideList parameter in `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation`](patch-manager-override-lists.md)
+ [Using the BaselineOverride parameter](patch-manager-baselineoverride-parameter.md)

## SSM documents recommended for patching managed nodes


The following five SSM documents are recommended for use in your managed node patching operations.

**Topics**
+ [`AWS-ConfigureWindowsUpdate`](#patch-manager-ssm-documents-recommended-AWS-ConfigureWindowsUpdate)
+ [`AWS-InstallWindowsUpdates`](#patch-manager-ssm-documents-recommended-AWS-InstallWindowsUpdates)
+ [`AWS-RunPatchBaseline`](#patch-manager-ssm-documents-recommended-AWS-RunPatchBaseline)
+ [`AWS-RunPatchBaselineAssociation`](#patch-manager-ssm-documents-recommended-AWS-RunPatchBaselineAssociation)
+ [`AWS-RunPatchBaselineWithHooks`](#patch-manager-ssm-documents-recommended-AWS-RunPatchBaselineWithHooks)

### `AWS-ConfigureWindowsUpdate`


Supports configuring basic Windows Update functions and using them to install updates automatically (or to turn off automatic updates). Available in all AWS Regions.

This SSM document prompts Windows Update to download and install the specified updates and reboot managed nodes as needed. Use this document with State Manager, a tool in AWS Systems Manager, to ensure Windows Update maintains its configuration. You can also run it manually using Run Command, a tool in AWS Systems Manager, to change the Windows Update configuration. 

The available parameters in this document support specifying a category of updates to install (or whether to turn off automatic updates), as well as specifying the day of the week and time of day to run patching operations. This SSM document is most useful if you don't need strict control over Windows updates and don't need to collect compliance information. 

**Replaces legacy SSM documents:**
+ *None*

### `AWS-InstallWindowsUpdates`


Installs updates on a Windows Server managed node. Available in all AWS Regions.

This SSM document provides basic patching functionality in cases where you either want to install a specific update (using the `Include Kbs` parameter), or want to install patches with specific classifications or categories but don't need patch compliance information. 

**Replaces legacy SSM documents:**
+ `AWS-FindWindowsUpdates`
+ `AWS-InstallMissingWindowsUpdates`
+ `AWS-InstallSpecificWindowsUpdates`

The three legacy documents perform different functions, but you can achieve the same results by using different parameter settings with the newer SSM document `AWS-InstallWindowsUpdates`. These parameter settings are described in [Legacy SSM documents for patching managed nodes](#patch-manager-ssm-documents-legacy).

### `AWS-RunPatchBaseline`


Installs patches on your managed nodes or scans nodes to determine whether any qualified patches are missing. Available in all AWS Regions.

`AWS-RunPatchBaseline` allows you to control patch approvals using the patch baseline specified as the "default" for an operating system type. Reports patch compliance information that you can view using the Systems Manager Compliance tools. These tools provide you with insights on the patch compliance state of your managed nodes, such as which nodes are missing patches and what those patches are. When you use `AWS-RunPatchBaseline`, patch compliance information is recorded using the `PutInventory` API command. For Linux operating systems, compliance information is provided for patches from both the default source repository configured on a managed node and from any alternative source repositories you specify in a custom patch baseline. For more information about alternative source repositories, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md). For more information about the Systems Manager Compliance tools, see [AWS Systems Manager Compliance](systems-manager-compliance.md).

**Replaces legacy documents:**
+ `AWS-ApplyPatchBaseline`

The legacy document `AWS-ApplyPatchBaseline` applies only to Windows Server managed nodes, and doesn't provide support for application patching. The newer `AWS-RunPatchBaseline` provides the same support for both Windows and Linux systems. Version 2.0.834.0 or later of SSM Agent is required in order to use the `AWS-RunPatchBaseline` document. 

For more information about the `AWS-RunPatchBaseline` SSM document, see [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).
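Outside a maintenance window, `AWS-RunPatchBaseline` is typically run with Run Command. The following is a minimal sketch of the request payload through boto3; the tag key and value are placeholders, and the `send_command` call is commented out because it requires AWS credentials and real managed nodes.

```python
# Illustrative Run Command request for AWS-RunPatchBaseline. Target tag and
# values are hypothetical examples, not values from this guide.
run_command_request = {
    "DocumentName": "AWS-RunPatchBaseline",
    "Targets": [{"Key": "tag:PatchGroup", "Values": ["example-patch-group"]}],
    "Parameters": {"Operation": ["Scan"], "RebootOption": ["NoReboot"]},
}
# import boto3
# response = boto3.client("ssm").send_command(**run_command_request)
print(run_command_request["Parameters"]["Operation"])  # ['Scan']
```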

### `AWS-RunPatchBaselineAssociation`


Installs patches on your instances or scans instances to determine whether any qualified patches are missing. Available in all commercial AWS Regions. 

`AWS-RunPatchBaselineAssociation` differs from `AWS-RunPatchBaseline` in a few important ways:
+ `AWS-RunPatchBaselineAssociation` is intended for use primarily with State Manager associations created using Quick Setup, a tool in AWS Systems Manager. Specifically, when you use the Quick Setup Host Management configuration type, if you choose the option **Scan instances for missing patches daily**, the system uses `AWS-RunPatchBaselineAssociation` for the operation.

  In most cases, however, when setting up your own patching operations, you should choose [`AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md) or [`AWS-RunPatchBaselineWithHooks`](patch-manager-aws-runpatchbaselinewithhooks.md) instead of `AWS-RunPatchBaselineAssociation`. 

  For more information, see the following topics:
  + [AWS Systems Manager Quick Setup](systems-manager-quick-setup.md)
  + [SSM Command document for patching: `AWS-RunPatchBaselineAssociation`](patch-manager-aws-runpatchbaselineassociation.md)
+ `AWS-RunPatchBaselineAssociation` supports the use of tags to identify which patch baseline to use with a set of targets when it runs. 
+ For patching operations that use `AWS-RunPatchBaselineAssociation`, patch compliance data is compiled in terms of a specific State Manager association. The patch compliance data collected when `AWS-RunPatchBaselineAssociation` runs is recorded using the `PutComplianceItems` API command instead of the `PutInventory` command. This prevents compliance data that isn't associated with this particular association from being overwritten.

  For Linux operating systems, compliance information is provided for patches from both the default source repository configured on an instance and from any alternative source repositories you specify in a custom patch baseline. For more information about alternative source repositories, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md). For more information about the Systems Manager Compliance tools, see [AWS Systems Manager Compliance](systems-manager-compliance.md).

**Replaces legacy documents:**
+ *None*

For more information about the `AWS-RunPatchBaselineAssociation` SSM document, see [SSM Command document for patching: `AWS-RunPatchBaselineAssociation`](patch-manager-aws-runpatchbaselineassociation.md).

### `AWS-RunPatchBaselineWithHooks`


Installs patches on your managed nodes or scans nodes to determine whether any qualified patches are missing, with optional hooks you can use to run SSM documents at three points during the patching cycle. Available in all commercial AWS Regions. Not supported on macOS.

`AWS-RunPatchBaselineWithHooks` differs from `AWS-RunPatchBaseline` in its `Install` operation.

`AWS-RunPatchBaselineWithHooks` supports lifecycle hooks that run at designated points during managed node patching. Because patch installations sometimes require managed nodes to reboot, the patching operation is divided into two events, for a total of three hooks that support custom functionality. The first hook is before the `Install with NoReboot` operation. The second hook is after the `Install with NoReboot` operation. The third hook is available after the reboot of the node.

**Replaces legacy documents:**
+ *None*

For more information about the `AWS-RunPatchBaselineWithHooks` SSM document, see [SSM Command document for patching: `AWS-RunPatchBaselineWithHooks`](patch-manager-aws-runpatchbaselinewithhooks.md).
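The three hook points described above map to three document parameters. The parameter and document names below are assumptions sketched for illustration; the hook document names passed in would be your own SSM documents.

```python
# Sketch of the three hook parameters (names are assumptions, not verified
# against the document schema here); values are placeholder SSM documents.
hook_parameters = {
    "PreInstallHookDocName": ["AWS-Noop"],   # before the Install with NoReboot operation
    "PostInstallHookDocName": ["AWS-Noop"],  # after the Install with NoReboot operation
    "OnExitHookDocName": ["AWS-Noop"],       # after the node reboots
}
print(sorted(hook_parameters))
```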

## Legacy SSM documents for patching managed nodes


The following four SSM documents are still available in some AWS Regions. However, they are no longer updated and might no longer be supported in the future, so we don't recommend their use. Instead, use the documents described in [SSM documents recommended for patching managed nodes](#patch-manager-ssm-documents-recommended).

**Topics**
+ [`AWS-ApplyPatchBaseline`](#patch-manager-ssm-documents-legacy-AWS-ApplyPatchBaseline)
+ [`AWS-FindWindowsUpdates`](#patch-manager-ssm-documents-legacy-AWS-AWS-FindWindowsUpdates)
+ [`AWS-InstallMissingWindowsUpdates`](#patch-manager-ssm-documents-legacy-AWS-InstallMissingWindowsUpdates)
+ [`AWS-InstallSpecificWindowsUpdates`](#patch-manager-ssm-documents-legacy-AWS-InstallSpecificWindowsUpdates)

### `AWS-ApplyPatchBaseline`


Supports only Windows Server managed nodes, but doesn't include support for patching applications that is found in its replacement, `AWS-RunPatchBaseline`. Not available in AWS Regions launched after August 2017.

**Note**  
The replacement for this SSM document, `AWS-RunPatchBaseline`, requires version 2.0.834.0 or a later version of SSM Agent. You can use the `AWS-UpdateSSMAgent` document to update your managed nodes to the latest version of the agent. 

### `AWS-FindWindowsUpdates`


Replaced by `AWS-InstallWindowsUpdates`, which can perform all the same actions. Not available in AWS Regions launched after April 2017.

To achieve the same result that you would from this legacy SSM document, use the following parameter configuration with the recommended replacement document, `AWS-InstallWindowsUpdates`:
+ `Action` = `Scan`
+ `Allow Reboot` = `False`

### `AWS-InstallMissingWindowsUpdates`


Replaced by `AWS-InstallWindowsUpdates`, which can perform all the same actions. Not available in any AWS Regions launched after April 2017.

To achieve the same result that you would from this legacy SSM document, use the following parameter configuration with the recommended replacement document, `AWS-InstallWindowsUpdates`:
+ `Action` = `Install`
+ `Allow Reboot` = `True`

### `AWS-InstallSpecificWindowsUpdates`


Replaced by `AWS-InstallWindowsUpdates`, which can perform all the same actions. Not available in any AWS Regions launched after April 2017.

To achieve the same result that you would from this legacy SSM document, use the following parameter configuration with the recommended replacement document, `AWS-InstallWindowsUpdates`:
+ `Action` = `Install`
+ `Allow Reboot` = `True`
+ `Include Kbs` = *comma-separated list of KB articles*
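The three legacy-to-replacement mappings above can be collected as data for quick reference. The KB values here are placeholders, not real article numbers.

```python
# Parameter settings for AWS-InstallWindowsUpdates that reproduce each
# legacy document's behavior, as listed in the sections above.
LEGACY_EQUIVALENTS = {
    "AWS-FindWindowsUpdates": {"Action": "Scan", "Allow Reboot": "False"},
    "AWS-InstallMissingWindowsUpdates": {"Action": "Install", "Allow Reboot": "True"},
    "AWS-InstallSpecificWindowsUpdates": {
        "Action": "Install",
        "Allow Reboot": "True",
        "Include Kbs": "KB0000001,KB0000002",  # placeholder KB articles
    },
}
print(LEGACY_EQUIVALENTS["AWS-FindWindowsUpdates"]["Action"])  # Scan
```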

## Known limitations of the SSM documents for patching managed nodes


### External reboot interruptions


If a reboot is initiated by the system on the node during patch installation (for example, to apply updates to firmware or features like SecureBoot), the patching document execution may be interrupted and marked as failed even though patches were successfully installed. This occurs because the SSM Agent cannot persist and resume the document execution state across external reboots.

To verify patch installation status after a failed execution, run a `Scan` patching operation, then check the patch compliance data in Patch Manager to assess the current compliance state.

# SSM Command document for patching: `AWS-RunPatchBaseline`

AWS Systems Manager supports `AWS-RunPatchBaseline`, a Systems Manager document (SSM document) for Patch Manager, a tool in AWS Systems Manager. This SSM document performs patching operations on managed nodes for both security-related and other types of updates. When the document is run, it uses the patch baseline specified as the "default" for an operating system type if no patch group is specified. Otherwise, it uses the patch baseline that is associated with the patch group. For information about patch groups, see [Patch groups](patch-manager-patch-groups.md). 

You can use the document `AWS-RunPatchBaseline` to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for applications released by Microsoft.)

This document supports Linux, macOS, and Windows Server managed nodes. The document will perform the appropriate actions for each platform. 

**Note**  
Patch Manager also supports the legacy SSM document `AWS-ApplyPatchBaseline`. However, this document supports patching on Windows managed nodes only. We encourage you to use `AWS-RunPatchBaseline` instead because it supports patching on Linux, macOS, and Windows Server managed nodes. Version 2.0.834.0 or later of SSM Agent is required in order to use the `AWS-RunPatchBaseline` document.

------
#### [ Windows Server ]

On Windows Server managed nodes, the `AWS-RunPatchBaseline` document downloads and invokes a PowerShell module, which in turn downloads a snapshot of the patch baseline that applies to the managed node. This patch baseline snapshot contains a list of approved patches that is compiled by querying the patch baseline against a Windows Server Update Services (WSUS) server. This list is passed to the Windows Update API, which controls downloading and installing the approved patches as appropriate. 

------
#### [ Linux ]

On Linux managed nodes, the `AWS-RunPatchBaseline` document invokes a Python module, which in turn downloads a snapshot of the patch baseline that applies to the managed node. This patch baseline snapshot uses the defined rules and lists of approved and blocked patches to drive the appropriate package manager for each node type: 
+ Amazon Linux 2, Oracle Linux, and RHEL 7 managed nodes use YUM. For YUM operations, Patch Manager requires `Python 2.6` or a later supported version (2.6 - 3.12).
+ Amazon Linux 2023 and RHEL 8 managed nodes use DNF. For DNF operations, Patch Manager requires a supported version of `Python 2` or `Python 3` (2.6 - 3.12). (On RHEL 8, neither version is installed by default. You must install one or the other manually.)
+ Debian Server and Ubuntu Server managed nodes use APT. For APT operations, Patch Manager requires a supported version of `Python 3` (3.0 - 3.12).

------
#### [ macOS ]

On macOS managed nodes, the `AWS-RunPatchBaseline` document invokes a Python module, which in turn downloads a snapshot of the patch baseline that applies to the managed node. Next, a Python subprocess invokes the AWS Command Line Interface (AWS CLI) on the node to retrieve the installation and update information for the specified package managers and to drive the appropriate package manager for each update package.

------

Each snapshot is specific to an AWS account, patch group, operating system, and snapshot ID. The snapshot is delivered through a presigned Amazon Simple Storage Service (Amazon S3) URL, which expires 24 hours after the snapshot is created. After the URL expires, however, if you want to apply the same snapshot content to other managed nodes, you can generate a new presigned Amazon S3 URL up to 3 days after the snapshot was created. To do this, use the [get-deployable-patch-snapshot-for-instance](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-deployable-patch-snapshot-for-instance.html) command. 

After all approved and applicable updates have been installed, with reboots performed as necessary, patch compliance information is generated on a managed node and reported back to Patch Manager. 

**Note**  
If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaseline-parameters-norebootoption).

For information about viewing patch compliance data, see [About patch compliance](compliance-about.md#compliance-monitor-patch). 

## `AWS-RunPatchBaseline` parameters


`AWS-RunPatchBaseline` supports seven parameters. The `Operation` parameter is required. The `InstallOverrideList`, `BaselineOverride`, and `RebootOption` parameters are optional. `Snapshot ID` is technically optional, but we recommend that you supply a custom value for it when you run `AWS-RunPatchBaseline` outside of a maintenance window. When the document runs as part of a maintenance window operation, Patch Manager supplies the value automatically.
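Supplying a custom snapshot ID amounts to generating one GUID and passing it to every node in the operation. A minimal sketch, assuming the parameter-dict shape used with Run Command:

```python
# Generate one GUID to share across all nodes patched in a single operation,
# per the recommendation above for runs outside a maintenance window.
import uuid

snapshot_id = str(uuid.uuid4())
parameters = {"Operation": ["Install"], "SnapshotId": [snapshot_id]}
print(len(snapshot_id))  # 36
```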

**Topics**
+ [Parameter name: `Operation`](#patch-manager-aws-runpatchbaseline-parameters-operation)
+ [Parameter name: `AssociationId`](#patch-manager-aws-runpatchbaseline-parameters-association-id)
+ [Parameter name: `Snapshot ID`](#patch-manager-aws-runpatchbaseline-parameters-snapshot-id)
+ [Parameter name: `InstallOverrideList`](#patch-manager-aws-runpatchbaseline-parameters-installoverridelist)
+ [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaseline-parameters-norebootoption)
+ [Parameter name: `BaselineOverride`](#patch-manager-aws-runpatchbaseline-parameters-baselineoverride)
+ [Parameter name: `StepTimeoutSeconds`](#patch-manager-aws-runpatchbaseline-parameters-steptimeoutseconds)

### Parameter name: `Operation`


**Usage**: Required.

**Options**: `Scan` | `Install`. 

Scan  
When you choose the `Scan` option, `AWS-RunPatchBaseline` determines the patch compliance state of the managed node and reports this information back to Patch Manager. `Scan` doesn't prompt updates to be installed or managed nodes to be rebooted. Instead, the operation identifies the approved and applicable updates that are missing from the node. 

Install  
When you choose the `Install` option, `AWS-RunPatchBaseline` attempts to install the approved and applicable updates that are missing from the managed node. Patch compliance information generated as part of an `Install` operation doesn't list any missing updates, but might report updates that are in a failed state if the installation of the update didn't succeed for any reason. Whenever an update is installed on a managed node, the node is rebooted to ensure the update is both installed and active. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaseline-parameters-norebootoption).)  
If a patch specified by the baseline rules is installed *before* Patch Manager updates the managed node, the system might not reboot as expected. This can happen when a patch is installed manually by a user or installed automatically by another program, such as the `unattended-upgrades` package on Ubuntu Server.

### Parameter name: `AssociationId`


**Usage**: Optional.

`AssociationId` is the ID of an existing association in State Manager, a tool in AWS Systems Manager. It's used by Patch Manager to add compliance data to a specified association. This association is related to a patching operation that's [set up in a patch policy in Quick Setup](quick-setup-patch-manager.md). 

**Note**  
With the `AWS-RunPatchBaseline` document, if an `AssociationId` value is provided along with a patch policy baseline override, patching is done as a `PatchPolicy` operation and the `ExecutionType` value reported in `AWS:ComplianceItem` is also `PatchPolicy`. If no `AssociationId` value is provided, patching is done as a `Command` operation and the `ExecutionType` value reported in the `AWS:ComplianceItem` submitted is also `Command`. 

If you don't already have an association you want to use, you can create one by running the [create-association](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html) command. 

### Parameter name: `Snapshot ID`


**Usage**: Optional.

`Snapshot ID` is a unique ID (GUID) used by Patch Manager to ensure that a set of managed nodes that are patched in a single operation all have the exact same set of approved patches. Although the parameter is defined as optional, our best practice recommendation depends on whether or not you're running `AWS-RunPatchBaseline` in a maintenance window, as described in the following table.


**`AWS-RunPatchBaseline` best practices**  

| Mode | Best practice | Details | 
| --- | --- | --- | 
| Running AWS-RunPatchBaseline inside a maintenance window | Don't supply a Snapshot ID. Patch Manager will supply it for you. |  If you use a maintenance window to run `AWS-RunPatchBaseline`, you shouldn't provide your own generated Snapshot ID. In this scenario, Systems Manager provides a GUID value based on the maintenance window execution ID. This ensures that a correct ID is used for all the invocations of `AWS-RunPatchBaseline` in that maintenance window.  If you do specify a value in this scenario, note that the snapshot of the patch baseline might not remain in place for more than 3 days. After that, a new snapshot will be generated even if you specify the same ID after the snapshot expires.   | 
| Running AWS-RunPatchBaseline outside of a maintenance window | Generate and specify a custom GUID value for the Snapshot ID.¹ |  When you aren't using a maintenance window to run `AWS-RunPatchBaseline`, we recommend that you generate and specify a unique Snapshot ID for each patch baseline, particularly if you're running the `AWS-RunPatchBaseline` document on multiple managed nodes in the same operation. If you don't specify an ID in this scenario, Systems Manager generates a different Snapshot ID for each managed node the command is sent to. This might result in varying sets of patches being specified among the managed nodes. For instance, say that you're running the `AWS-RunPatchBaseline` document directly through Run Command, a tool in AWS Systems Manager, and targeting a group of 50 managed nodes. Specifying a custom Snapshot ID results in the generation of a single baseline snapshot that is used to evaluate and patch all the nodes, ensuring that they end up in a consistent state.   | 
|  ¹ You can use any tool capable of generating a GUID to generate a value for the Snapshot ID parameter. For example, in PowerShell, you can use the `New-Guid` cmdlet to generate a GUID in the format of `12345699-9405-4f69-bc5e-9315aEXAMPLE`.  | 
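In addition to the PowerShell `New-Guid` cmdlet mentioned in the table note, Python's standard `uuid` module generates a suitable value on any platform. The check against the 8-4-4-4-12 layout is only a sanity test for this sketch:

```python
import re
import uuid

# Generate a GUID to use as the Snapshot ID for a patching operation.
# Sending the same value to every targeted node means they are all
# evaluated against the same baseline snapshot.
snapshot_id = str(uuid.uuid4())
print(snapshot_id)

# Sanity check: canonical 8-4-4-4-12 lowercase hexadecimal GUID layout.
assert re.fullmatch(r"[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}", snapshot_id)
```

You might then pass this value in the document's Snapshot ID parameter when calling `aws ssm send-command`; check the parameter spelling against the document version you run.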

### Parameter name: `InstallOverrideList`


**Usage**: Optional.

Using `InstallOverrideList`, you specify an https URL or an Amazon S3 path-style URL to a list of patches to be installed. This patch installation list, which you maintain in YAML format, overrides the patches specified by the current default patch baseline. This provides you with more granular control over which patches are installed on your managed nodes. 

**Important**  
The `InstallOverrideList` file name can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign ($).

The patching operation behavior when using the `InstallOverrideList` parameter differs between Linux & macOS managed nodes and Windows Server managed nodes. On Linux & macOS, Patch Manager attempts to apply patches included in the `InstallOverrideList` patch list that are present in any repository enabled on the node, whether or not the patches match the patch baseline rules. On Windows Server nodes, however, patches in the `InstallOverrideList` patch list are applied *only* if they also match the patch baseline rules.

On Linux & macOS managed nodes, patches specified in the `InstallOverrideList` are applied only as updates to packages that are already installed on the node. If the `InstallOverrideList` includes patches for packages that are not currently installed on the node, those patches are not installed.

Be aware that compliance reports reflect patch states according to what’s specified in the patch baseline, not what you specify in an `InstallOverrideList` list of patches. In other words, Scan operations ignore the `InstallOverrideList` parameter. This is to ensure that compliance reports consistently reflect patch states according to policy rather than what was approved for a specific patching operation. 

**Note**  
When you're patching a node that only uses IPv6, ensure that the provided URL is reachable from the node. If the SSM Agent config option `UseDualStackEndpoint` is set to `true`, then a dualstack S3 client is used when an S3 URL is provided. See [Tutorial: Patching a server in an IPv6 only environment](patch-manager-server-patching-iPv6-tutorial.md) for more information on configuring the agent to use dualstack.

For a description of how you might use the `InstallOverrideList` parameter to apply different types of patches to a target group, on different maintenance window schedules, while still using a single patch baseline, see [Sample scenario for using the InstallOverrideList parameter in `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation`](patch-manager-override-lists.md).

**Valid URL formats**

**Note**  
If your file is stored in a publicly available bucket, you can specify either an https URL format or an Amazon S3 path-style URL. If your file is stored in a private bucket, you must specify an Amazon S3 path-style URL.
+ **https URL format**:

  ```
  https://s3.aws-api-domain/amzn-s3-demo-bucket/my-windows-override-list.yaml
  ```
+ **Amazon S3 path-style URL**:

  ```
  s3://amzn-s3-demo-bucket/my-windows-override-list.yaml
  ```
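Both URL formats identify the same object. If you script around these values, a small helper can split the path-style form into its bucket and key. This is an illustrative sketch, not part of any AWS tooling, and it uses the documentation's placeholder bucket and file names:

```python
from urllib.parse import urlparse

def s3_bucket_and_key(url: str) -> tuple[str, str]:
    """Split an Amazon S3 path-style URL (s3://bucket/key) into bucket and key."""
    parsed = urlparse(url)
    if parsed.scheme != "s3":
        raise ValueError(f"not an S3 path-style URL: {url}")
    # netloc holds the bucket name; path holds the object key with a leading slash.
    return parsed.netloc, parsed.path.lstrip("/")

bucket, key = s3_bucket_and_key("s3://amzn-s3-demo-bucket/my-windows-override-list.yaml")
print(bucket, key)  # amzn-s3-demo-bucket my-windows-override-list.yaml
```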

**Valid YAML content formats**

The formats you use to specify patches in your list depend on the operating system of your managed node. The general format, however, is as follows:

```
patches:
    - 
        id: '{patch-id}'
        title: '{patch-title}'
        {additional-fields}:{values}
```

Although you can provide additional fields in your YAML file, they're ignored during patch operations.

In addition, we recommend verifying that the format of your YAML file is valid before adding or updating the list in your S3 bucket. For more information about the YAML format, see [yaml.org](http://www.yaml.org). For validation tool options, perform a web search for "yaml format validators".
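As a complement to a YAML validator, you can also sanity-check the parsed structure. The sketch below assumes the file has already been loaded into Python lists and dictionaries (for example, with a YAML library) and verifies only the general shape shown above: a top-level `patches` list whose entries each carry the required `id` field:

```python
def validate_override_list(doc: dict) -> list[str]:
    """Return a list of problems found in a parsed InstallOverrideList document."""
    problems = []
    patches = doc.get("patches")
    if not isinstance(patches, list) or not patches:
        return ["top-level 'patches' must be a non-empty list"]
    for i, entry in enumerate(patches):
        # Each patch entry must be a mapping with at least an 'id' field;
        # extra fields are allowed but ignored during patch operations.
        if not isinstance(entry, dict) or "id" not in entry:
            problems.append(f"patches[{i}]: missing required 'id' field")
    return problems

parsed = {"patches": [{"id": "dhclient.x86_64", "title": "*4.2.5-58.amzn2"},
                      {"title": "no id here"}]}
print(validate_override_list(parsed))  # flags the second entry, which has no 'id'
```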

------
#### [ Linux ]

**id**  
The **id** field is required. Use it to specify patches using the package name and architecture. For example: `'dhclient.x86_64'`. You can use wildcards in id to indicate multiple packages. For example: `'dhcp*'` and `'dhcp*1.*'`.

**Title**  
The **title** field is optional, but on Linux systems it does provide additional filtering capabilities. If you use **title**, it should contain the package version information in one of the following formats:

YUM/SUSE Linux Enterprise Server (SLES):

```
{name}.{architecture}:{epoch}:{version}-{release}
```

APT:

```
{name}.{architecture}:{version}
```

For Linux patch titles, you can use one or more wildcards in any position to expand the number of package matches. For example: `'*32:9.8.2-0.*.rc1.57.amzn1'`. 

For example: 
+ apt package version 1.2.25 is currently installed on your managed node, but version 1.2.27 is now available. 
+ You add apt.amd64 version 1.2.27 to the patch list. It depends on apt-utils.amd64 version 1.2.27, but apt-utils.amd64 version 1.2.25 is specified in the list. 

In this case, apt version 1.2.27 will be blocked from installation and reported as “Failed-NonCompliant.”

------
#### [ Windows Server ]

**id**  
The **id** field is required. Use it to specify patches using Microsoft Knowledge Base IDs (for example, KB2736693) and Microsoft Security Bulletin IDs (for example, MS17-023). 

Any other fields you want to provide in a patch list for Windows are optional and are for your own informational use only. You can use additional fields such as **title**, **classification**, **severity**, or anything else for providing more detailed information about the specified patches.

------
#### [ macOS ]

**id**  
The **id** field is required. The value for the **id** field can be supplied using either a `{package-name}.{package-version}` format or a `{package-name}` format.

------

**Sample patch lists**
+ **Amazon Linux 2**

  ```
  patches:
      -
          id: 'kernel.x86_64'
      -
          id: 'bind*.x86_64'
          title: '39.11.4-26.P2.amzn2.5.2'
      -
          id: 'glibc*'
      -
          id: 'dhclient*'
          title: '*4.2.5-58.amzn2'
      -
          id: 'dhcp*'
          title: '*4.2.5-77.amzn2'
  ```
+ **Debian Server**

  ```
  patches:
      -
          id: 'apparmor.amd64'
          title: '2.10.95-0ubuntu2.9'
      -
          id: 'cryptsetup.amd64'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'cryptsetup-bin.*'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'apt.amd64'
          title: '*1.2.27'
      -
          id: 'apt-utils.amd64'
          title: '*1.2.25'
  ```
+ **macOS**

  ```
  patches:
      -
          id: 'XProtectPlistConfigData'
      -
          id: 'MRTConfigData.1.61'
      -
          id: 'Command Line Tools for Xcode.11.5'
      -
          id: 'Gatekeeper Configuration Data'
  ```
+ **Oracle Linux**

  ```
  patches:
      -
          id: 'audit-libs.x86_64'
          title: '*2.8.5-4.el7'
      -
          id: 'curl.x86_64'
          title: '*.el7'
      -
          id: 'grub2.x86_64'
          title: 'grub2.x86_64:1:2.02-0.81.0.1.el7'
      -
          id: 'grub2.x86_64'
          title: 'grub2.x86_64:1:*-0.81.0.1.el7'
  ```
+ **Red Hat Enterprise Linux (RHEL)**

  ```
  patches:
      -
          id: 'NetworkManager.x86_64'
          title: '*1:1.10.2-14.el7_5'
      -
          id: 'NetworkManager-*.x86_64'
          title: '*1:1.10.2-14.el7_5'
      -
          id: 'audit.x86_64'
          title: '*0:2.8.1-3.el7'
      -
          id: 'dhclient.x86_64'
          title: '*.el7_5.1'
      -
          id: 'dhcp*.x86_64'
          title: '*12:5.2.5-68.el7'
  ```
+ **SUSE Linux Enterprise Server (SLES)**

  ```
  patches:
      -
          id: 'amazon-ssm-agent.x86_64'
      -
          id: 'binutils'
          title: '*0:2.26.1-9.12.1'
      -
          id: 'glibc*.x86_64'
          title: '*2.19*'
      -
          id: 'dhcp*'
          title: '0:4.3.3-9.1'
      -
          id: 'lib*'
  ```
+ **Ubuntu Server**

  ```
  patches:
      -
          id: 'apparmor.amd64'
          title: '2.10.95-0ubuntu2.9'
      -
          id: 'cryptsetup.amd64'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'cryptsetup-bin.*'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'apt.amd64'
          title: '*1.2.27'
      -
          id: 'apt-utils.amd64'
          title: '*1.2.25'
  ```
+ **Windows**

  ```
  patches:
      -
          id: 'KB4284819'
          title: '2018-06 Cumulative Update for Windows Server 2016 (1709) for x64-based Systems (KB4284819)'
      -
          id: 'KB4284833'
      -
          id: 'KB4284835'
          title: '2018-06 Cumulative Update for Windows Server 2016 (1803) for x64-based Systems (KB4284835)'
      -
          id: 'KB4284880'
      -
          id: 'KB4338814'
  ```

### Parameter name: `RebootOption`


**Usage**: Optional.

**Options**: `RebootIfNeeded` | `NoReboot` 

**Default**: `RebootIfNeeded`

**Warning**  
The default option is `RebootIfNeeded`. Be sure to select the correct option for your use case. For example, if your managed nodes must reboot immediately to complete a configuration process, choose `RebootIfNeeded`. Or, if you need to maintain managed node availability until a scheduled reboot time, choose `NoReboot`.

**Important**  
We don’t recommend using Patch Manager for patching cluster instances in Amazon EMR (previously called Amazon Elastic MapReduce). In particular, don’t select the `RebootIfNeeded` option for the `RebootOption` parameter. (This option is available in the SSM Command documents for patching `AWS-RunPatchBaseline`, `AWS-RunPatchBaselineAssociation`, and `AWS-RunPatchBaselineWithHooks`.)  
Patching with Patch Manager relies on the underlying `yum` and `dnf` commands. Therefore, the operations result in incompatibilities because of how packages are installed. For information about the preferred methods for updating software on Amazon EMR clusters, see [Using the default AMI for Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-default-ami.html) in the *Amazon EMR Management Guide*.

RebootIfNeeded  
When you choose the `RebootIfNeeded` option, the managed node is rebooted in either of the following cases:   
+ Patch Manager installed one or more patches. 

  Patch Manager doesn't evaluate whether a reboot is *required* by the patch. The system is rebooted even if the patch doesn't require a reboot.
+ Patch Manager detects one or more patches with a status of `INSTALLED_PENDING_REBOOT` during the `Install` operation. 

  The `INSTALLED_PENDING_REBOOT` status can mean that the option `NoReboot` was selected the last time the `Install` operation was run, or that a patch was installed outside of Patch Manager since the last time the managed node was rebooted.
Rebooting managed nodes in these two cases ensures that updated packages are flushed from memory and keeps patching and rebooting behavior consistent across all operating systems.

NoReboot  
When you choose the `NoReboot` option, Patch Manager doesn't reboot a managed node even if it installed patches during the `Install` operation. This option is useful if you know that your managed nodes don't require rebooting after patches are applied, or you have applications or processes running on a node that shouldn't be disrupted by a patching operation reboot. It's also useful when you want more control over the timing of managed node reboots, such as by using a maintenance window.  
If you choose the `NoReboot` option and a patch is installed, the patch is assigned a status of `InstalledPendingReboot`. The managed node itself, however, is marked as `Non-Compliant`. After a reboot occurs and a `Scan` operation is run, the managed node status is updated to `Compliant`.  
The `NoReboot` option only prevents operating system-level restarts. Service-level restarts can still occur as part of the patching process. For example, when Docker is updated, dependent services such as Amazon Elastic Container Service might automatically restart even with `NoReboot` enabled. If you have critical services that must not be disrupted, consider additional measures such as temporarily removing instances from service or scheduling patching during maintenance windows.

**Patch installation tracking file**: To track patch installation, especially patches that were installed since the last system reboot, Systems Manager maintains a file on the managed node.

**Important**  
Don't delete or modify the tracking file. If this file is deleted or corrupted, the patch compliance report for the managed node is inaccurate. If this happens, reboot the node and run a patch Scan operation to restore the file.

This tracking file is stored in the following locations on your managed nodes:
+ Linux operating systems: 
  + `/var/log/amazon/ssm/patch-configuration/patch-states-configuration.json`
  + `/var/log/amazon/ssm/patch-configuration/patch-inventory-from-last-operation.json`
+ Windows Server operating system:
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchStatesConfiguration.json`
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchInventoryFromLastOperation.json`

### Parameter name: `BaselineOverride`


**Usage**: Optional.

You can define patching preferences at runtime using the `BaselineOverride` parameter. This baseline override is maintained as a JSON object in an S3 bucket. It ensures patching operations use the provided baselines that match the host operating system instead of applying the rules from the default patch baseline.

**Important**  
The `BaselineOverride` file name can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign ($).

For more information about how to use the `BaselineOverride` parameter, see [Using the BaselineOverride parameter](patch-manager-baselineoverride-parameter.md).

### Parameter name: `StepTimeoutSeconds`


**Usage**: Required.

The time in seconds—between 1 and 36000 seconds (10 hours)—for a command to be completed before it is considered to have failed.
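If you generate this value programmatically, a bounds check against the documented range avoids submitting an invalid command. This is an illustrative sketch, not AWS code:

```python
def check_step_timeout(seconds: int) -> int:
    """Validate a StepTimeoutSeconds value against the documented 1-36000 range."""
    if not 1 <= seconds <= 36000:
        raise ValueError("StepTimeoutSeconds must be between 1 and 36000 (10 hours)")
    return seconds

print(check_step_timeout(3600))  # 3600 (one hour) is within range
```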

# SSM Command document for patching: `AWS-RunPatchBaselineAssociation`

Like the `AWS-RunPatchBaseline` document, `AWS-RunPatchBaselineAssociation` performs patching operations on instances for both security-related and other types of updates. You can also use the `AWS-RunPatchBaselineAssociation` document to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for applications released by Microsoft.)

This document supports Amazon Elastic Compute Cloud (Amazon EC2) instances for Linux, macOS, and Windows Server. It does not support non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. The document will perform the appropriate actions for each platform, invoking a Python module on Linux and macOS instances, and a PowerShell module on Windows instances.

`AWS-RunPatchBaselineAssociation`, however, differs from `AWS-RunPatchBaseline` in the following ways: 
+ `AWS-RunPatchBaselineAssociation` is intended for use primarily with State Manager associations created using [Quick Setup](systems-manager-quick-setup.md), a tool in AWS Systems Manager. Specifically, when you use the Quick Setup Host Management configuration type, if you choose the option **Scan instances for missing patches daily**, the system uses `AWS-RunPatchBaselineAssociation` for the operation.

  In most cases, however, when setting up your own patching operations, you should choose [`AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md) or [`AWS-RunPatchBaselineWithHooks`](patch-manager-aws-runpatchbaselinewithhooks.md) instead of `AWS-RunPatchBaselineAssociation`.
+ When you use the `AWS-RunPatchBaselineAssociation` document, you can specify a tag key pair in the document's `BaselineTags` parameter field. If a custom patch baseline in your AWS account shares these tags, Patch Manager, a tool in AWS Systems Manager, uses that tagged baseline when it runs on the target instances instead of the currently specified "default" patch baseline for the operating system type.
**Note**  
If you choose to use `AWS-RunPatchBaselineAssociation` in patching operations other than those set up using Quick Setup, and you want to use its optional `BaselineTags` parameter, you must provide some additional permissions to the [instance profile](setup-instance-permissions.md) for Amazon Elastic Compute Cloud (Amazon EC2) instances. For more information, see [Parameter name: `BaselineTags`](#patch-manager-aws-runpatchbaselineassociation-parameters-baselinetags).

  Both of the following formats are valid for your `BaselineTags` parameter:

  `Key=tag-key,Values=tag-value`

  `Key=tag-key,Values=tag-value1,tag-value2,tag-value3`
**Important**  
Tag keys and values can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign ($).
+ When `AWS-RunPatchBaselineAssociation` runs, the patch compliance data it collects is recorded using the `PutComplianceItems` API command instead of the `PutInventory` command, which is used by `AWS-RunPatchBaseline`. This difference means that patch compliance information is stored and reported per a specific *association*. Patch compliance data generated outside of this association isn't overwritten.
+ The patch compliance information reported after running `AWS-RunPatchBaselineAssociation` indicates whether or not an instance is in compliance. It doesn't include patch-level details, as demonstrated by the output of the following AWS Command Line Interface (AWS CLI) command. The command filters on `Association` as the compliance type:

  ```
  aws ssm list-compliance-items \
      --resource-ids "i-02573cafcfEXAMPLE" \
      --resource-types "ManagedInstance" \
      --filters "Key=ComplianceType,Values=Association,Type=EQUAL" \
      --region us-east-2
  ```

  The system returns information like the following.

  ```
  {
      "ComplianceItems": [
          {
              "Status": "NON_COMPLIANT", 
              "Severity": "UNSPECIFIED", 
              "Title": "MyPatchAssociation", 
              "ResourceType": "ManagedInstance", 
              "ResourceId": "i-02573cafcfEXAMPLE", 
              "ComplianceType": "Association", 
              "Details": {
                  "DocumentName": "AWS-RunPatchBaselineAssociation", 
                  "PatchBaselineId": "pb-0c10e65780EXAMPLE", 
                  "DocumentVersion": "1"
              }, 
              "ExecutionSummary": {
                  "ExecutionTime": 1590698771.0
              }, 
              "Id": "3e5d5694-cd07-40f0-bbea-040e6EXAMPLE"
          }
      ]
  }
  ```

If a tag key pair value has been specified as a parameter for the `AWS-RunPatchBaselineAssociation` document, Patch Manager searches for a custom patch baseline that matches the operating system type and has been tagged with that same tag key pair. This search isn't limited to the current specified default patch baseline or the baseline assigned to a patch group. If no baseline is found with the specified tags, Patch Manager next looks for a patch group, if one was specified in the command that runs `AWS-RunPatchBaselineAssociation`. If no patch group is matched, Patch Manager falls back to the current default patch baseline for the operating system in the account. 

If more than one patch baseline is found with the tags specified in the `AWS-RunPatchBaselineAssociation` document, Patch Manager returns an error message indicating that only one patch baseline can be tagged with that key-value pair in order for the operation to proceed.

**Note**  
On Linux nodes, the appropriate package manager for each node type is used to install packages:   
Amazon Linux 2, Oracle Linux, and RHEL instances use YUM. For YUM operations, Patch Manager requires `Python 2.6` or a later supported version (2.6 - 3.12). Amazon Linux 2023 uses DNF. For DNF operations, Patch Manager requires a supported version of `Python 2` or `Python 3` (2.6 - 3.12).
Debian Server and Ubuntu Server instances use APT. For APT operations, Patch Manager requires a supported version of `Python 3` (3.0 - 3.12).

After a scan is complete, or after all approved and applicable updates have been installed, with reboots performed as necessary, patch compliance information is generated on an instance and reported back to the Patch Compliance service. 

**Note**  
If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaselineAssociation` document, the instance isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaselineassociation-parameters-norebootoption).

For information about viewing patch compliance data, see [About patch compliance](compliance-about.md#compliance-monitor-patch). 

## `AWS-RunPatchBaselineAssociation` parameters


`AWS-RunPatchBaselineAssociation` supports five parameters. The `Operation` and `AssociationId` parameters are required. The `InstallOverrideList`, `RebootOption`, and `BaselineTags` parameters are optional. 

**Topics**
+ [

### Parameter name: `Operation`
](#patch-manager-aws-runpatchbaselineassociation-parameters-operation)
+ [

### Parameter name: `BaselineTags`
](#patch-manager-aws-runpatchbaselineassociation-parameters-baselinetags)
+ [

### Parameter name: `AssociationId`
](#patch-manager-aws-runpatchbaselineassociation-parameters-association-id)
+ [

### Parameter name: `InstallOverrideList`
](#patch-manager-aws-runpatchbaselineassociation-parameters-installoverridelist)
+ [

### Parameter name: `RebootOption`
](#patch-manager-aws-runpatchbaselineassociation-parameters-norebootoption)

### Parameter name: `Operation`


**Usage**: Required.

**Options**: `Scan` | `Install`. 

Scan  
When you choose the `Scan` option, `AWS-RunPatchBaselineAssociation` determines the patch compliance state of the instance and reports this information back to Patch Manager. `Scan` doesn't prompt updates to be installed or instances to be rebooted. Instead, the operation identifies where updates are missing that are approved and applicable to the instance. 

Install  
When you choose the `Install` option, `AWS-RunPatchBaselineAssociation` attempts to install the approved and applicable updates that are missing from the instance. Patch compliance information generated as part of an `Install` operation doesn't list any missing updates, but might report updates that are in a failed state if the installation of the update didn't succeed for any reason. Whenever an update is installed on an instance, the instance is rebooted to ensure the update is both installed and active. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaselineAssociation` document, the instance isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaselineassociation-parameters-norebootoption).)  
If a patch specified by the baseline rules is installed *before* Patch Manager updates the instance, the system might not reboot as expected. This can happen when a patch is installed manually by a user or installed automatically by another program, such as the `unattended-upgrades` package on Ubuntu Server.

### Parameter name: `BaselineTags`


**Usage**: Optional. 

`BaselineTags` is a unique tag key-value pair that you choose and assign to an individual custom patch baseline. You can specify one or more values for this parameter. Both of the following formats are valid:

`Key=tag-key,Values=tag-value`

`Key=tag-key,Values=tag-value1,tag-value2,tag-value3`

**Important**  
Tag keys and values can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign ($).

The `BaselineTags` value is used by Patch Manager to ensure that a set of instances that are patched in a single operation all have the exact same set of approved patches. When the patching operation runs, Patch Manager checks whether a patch baseline for the operating system type is tagged with the same key-value pair you specify for `BaselineTags`. If there is a match, this custom patch baseline is used. If there isn't a match, a patch baseline is identified according to any patch group specified for the patching operation. If there is none, the AWS managed predefined patch baseline for that operating system is used. 
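The selection order described above (tagged custom baseline, then patch group, then AWS managed default) can be sketched in Python. The record layout and field names here are illustrative only, not the service's actual data model:

```python
def select_baseline(baselines, os_type, baseline_tags=None, patch_group=None):
    """Illustrative fallback: tagged custom baseline -> patch group -> AWS default."""
    same_os = [b for b in baselines if b["os"] == os_type]
    if baseline_tags:
        # A tagged baseline matches only if it carries the exact key-value pair.
        tagged = [b for b in same_os
                  if baseline_tags.items() <= b.get("tags", {}).items()]
        if len(tagged) > 1:
            # Mirrors the documented error: only one baseline may carry the tag pair.
            raise RuntimeError("only one patch baseline can be tagged with that pair")
        if tagged:
            return tagged[0]
    if patch_group:
        grouped = [b for b in same_os if patch_group in b.get("patch_groups", [])]
        if grouped:
            return grouped[0]
    # Fall back to the AWS managed predefined baseline for the OS type.
    return next(b for b in same_os if b.get("aws_default"))

baselines = [
    {"id": "pb-custom1", "os": "AMAZON_LINUX_2", "tags": {"team": "web"}},
    {"id": "pb-default", "os": "AMAZON_LINUX_2", "aws_default": True},
]
print(select_baseline(baselines, "AMAZON_LINUX_2", {"team": "web"})["id"])  # pb-custom1
```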

**Additional permission requirements**  
If you use `AWS-RunPatchBaselineAssociation` in patching operations other than those set up using Quick Setup, and you want to use the optional `BaselineTags` parameter, you must add the following permissions to the [instance profile](setup-instance-permissions.md) for Amazon Elastic Compute Cloud (Amazon EC2) instances.

**Note**  
Quick Setup and `AWS-RunPatchBaselineAssociation` don't support on-premises servers and virtual machines (VMs).

```
{
    "Effect": "Allow",
    "Action": [
        "ssm:DescribePatchBaselines",
        "tag:GetResources"
    ],
    "Resource": "*"
},
{
    "Effect": "Allow",
    "Action": [
        "ssm:GetPatchBaseline",
        "ssm:DescribeEffectivePatchesForPatchBaseline"
    ],
    "Resource": "patch-baseline-arn"
}
```

Replace *patch-baseline-arn* with the Amazon Resource Name (ARN) of the patch baseline to which you want to provide access, in the format `arn:aws:ssm:us-east-2:123456789012:patchbaseline/pb-0c10e65780EXAMPLE`.
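If you script this policy setup, a small helper can assemble the ARN string. The format is taken from the example above; the `partition` default of `aws` is an assumption (other partitions such as `aws-cn` also exist):

```python
def patch_baseline_arn(region: str, account_id: str, baseline_id: str,
                       partition: str = "aws") -> str:
    """Assemble a patch baseline ARN in the documented format."""
    return f"arn:{partition}:ssm:{region}:{account_id}:patchbaseline/{baseline_id}"

arn = patch_baseline_arn("us-east-2", "123456789012", "pb-0c10e65780EXAMPLE")
print(arn)  # arn:aws:ssm:us-east-2:123456789012:patchbaseline/pb-0c10e65780EXAMPLE
```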

### Parameter name: `AssociationId`


**Usage**: Required.

`AssociationId` is the ID of an existing association in State Manager, a tool in AWS Systems Manager. It's used by Patch Manager to add compliance data to a specified association. This association is related to a patch `Scan` operation enabled in a [Host Management configuration created in Quick Setup](quick-setup-host-management.md). By sending patching results as association compliance data instead of inventory compliance data, existing inventory compliance information for your instances and for other association IDs isn't overwritten after a patching operation. If you don't already have an association you want to use, you can create one by running the [create-association](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-association.html) command. For example:

------
#### [ Linux & macOS ]

```
aws ssm create-association \
    --name "AWS-RunPatchBaselineAssociation" \
    --association-name "MyPatchHostConfigAssociation" \
    --targets "Key=instanceids,Values=[i-02573cafcfEXAMPLE,i-07782c72faEXAMPLE,i-07782c72faEXAMPLE]" \
    --parameters "Operation=Scan" \
    --schedule-expression "cron(0 */30 * * * ? *)" \
    --sync-compliance "MANUAL" \
    --region us-east-2
```

------
#### [ Windows Server ]

```
aws ssm create-association ^
    --name "AWS-RunPatchBaselineAssociation" ^
    --association-name "MyPatchHostConfigAssociation" ^
    --targets "Key=instanceids,Values=[i-02573cafcfEXAMPLE,i-07782c72faEXAMPLE,i-07782c72faEXAMPLE]" ^
    --parameters "Operation=Scan" ^
    --schedule-expression "cron(0 */30 * * * ? *)" ^
    --sync-compliance "MANUAL" ^
    --region us-east-2
```

------

### Parameter name: `InstallOverrideList`


**Usage**: Optional.

Using `InstallOverrideList`, you specify an https URL or an Amazon Simple Storage Service (Amazon S3) path-style URL to a list of patches to be installed. This patch installation list, which you maintain in YAML format, overrides the patches specified by the current default patch baseline. This provides you with more granular control over which patches are installed on your instances.

**Important**  
The `InstallOverrideList` file name can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign ($).

The patching operation behavior when using the `InstallOverrideList` parameter differs between Linux & macOS managed nodes and Windows Server managed nodes. On Linux & macOS, Patch Manager attempts to apply patches included in the `InstallOverrideList` patch list that are present in any repository enabled on the node, whether or not the patches match the patch baseline rules. On Windows Server nodes, however, patches in the `InstallOverrideList` patch list are applied *only* if they also match the patch baseline rules.

On Linux & macOS managed nodes, patches specified in the `InstallOverrideList` are applied only as updates to packages that are already installed on the node. If the `InstallOverrideList` includes patches for packages that are not currently installed on the node, those patches are not installed.

Be aware that compliance reports reflect patch states according to what’s specified in the patch baseline, not what you specify in an `InstallOverrideList` list of patches. In other words, Scan operations ignore the `InstallOverrideList` parameter. This is to ensure that compliance reports consistently reflect patch states according to policy rather than what was approved for a specific patching operation. 
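This platform difference can be summarized with a short Python sketch. The function below is an illustrative model under simplified assumptions, not Patch Manager's implementation:

```python
def effective_install_set(platform, override_ids, baseline_approved, installed):
    """Illustrative model of which InstallOverrideList entries are attempted.

    platform:          'linux' or 'windows'
    override_ids:      patch IDs from the InstallOverrideList file
    baseline_approved: patch IDs that match the patch baseline rules
    installed:         package IDs already installed on the node (Linux only)
    """
    if platform == "windows":
        # Windows Server: override-list patches apply only if they
        # also match the patch baseline rules.
        return set(override_ids) & set(baseline_approved)
    # Linux/macOS: baseline rules are bypassed, but only updates to
    # packages already installed on the node are applied.
    return {p for p in override_ids if p in installed}
```

Note that this models the `Install` operation only; as described above, `Scan` operations ignore the override list entirely.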

**Valid URL formats**

**Note**  
If your file is stored in a publicly available bucket, you can specify either an https URL format or an Amazon S3 path-style URL. If your file is stored in a private bucket, you must specify an Amazon S3 path-style URL.
+ **https URL format example**:

  ```
  https://s3.amazonaws.com/amzn-s3-demo-bucket/my-windows-override-list.yaml
  ```
+ **Amazon S3 path-style URL example**:

  ```
  s3://amzn-s3-demo-bucket/my-windows-override-list.yaml
  ```
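
If your tooling needs to convert between the two URL forms, a minimal Python helper might look like the following. This is a hypothetical sketch that handles only the path-style layout shown above and performs no validation:

```python
from urllib.parse import urlparse

def https_to_s3_uri(https_url):
    """Convert a path-style https URL (https://s3.amazonaws.com/<bucket>/<key>)
    to the equivalent s3://<bucket>/<key> form."""
    parsed = urlparse(https_url)
    # The first path segment is the bucket; the remainder is the object key.
    bucket, _, key = parsed.path.lstrip("/").partition("/")
    return f"s3://{bucket}/{key}"
```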

**Valid YAML content formats**

The format you use to specify patches in your list depends on the operating system of your instance. The general format, however, is as follows:

```
patches:
    - 
        id: '{patch-id}'
        title: '{patch-title}'
        {additional-fields}:{values}
```

Although you can provide additional fields in your YAML file, they're ignored during patch operations.

In addition, we recommend verifying that the format of your YAML file is valid before adding or updating the list in your S3 bucket. For more information about the YAML format, see [yaml.org](http://www.yaml.org). For validation tool options, perform a web search for "yaml format validators".
+ Microsoft Windows

**id**  
The **id** field is required. Use it to specify patches using Microsoft Knowledge Base IDs (for example, KB2736693) and Microsoft Security Bulletin IDs (for example, MS17-023). 

  Any other fields you want to provide in a patch list for Windows are optional and are for your own informational use only. You can use additional fields such as **title**, **classification**, **severity**, or anything else for providing more detailed information about the specified patches.
+ Linux

**id**  
The **id** field is required. Use it to specify patches using the package name and architecture. For example: `'dhclient.x86_64'`. You can use wildcards in id to indicate multiple packages. For example: `'dhcp*'` and `'dhcp*1.*'`.

**title**  
The **title** field is optional, but on Linux systems it does provide additional filtering capabilities. If you use **title**, it should contain the package version information in one of the following formats:

  YUM/Red Hat Enterprise Linux (RHEL):

  ```
  {name}.{architecture}:{epoch}:{version}-{release}
  ```

  APT

  ```
  {name}.{architecture}:{version}
  ```

  For Linux patch titles, you can use one or more wildcards in any position to expand the number of package matches. For example: `'*32:9.8.2-0.*.rc1.57.amzn1'`. 

  For example: 
  + apt package version 1.2.25 is currently installed on your instance, but version 1.2.27 is now available. 
  + You add apt.amd64 version 1.2.27 to the patch list. It depends on apt-utils.amd64 version 1.2.27, but apt-utils.amd64 version 1.2.25 is specified in the list. 

  In this case, apt version 1.2.27 will be blocked from installation and reported as “Failed-NonCompliant.”

**Other fields**  
Any other fields you want to provide in a patch list for Linux are optional and are for your own informational use only. You can use additional fields such as **classification**, **severity**, or anything else for providing more detailed information about the specified patches.
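
The apt dependency example above can be sketched as a toy model in Python. The function below is purely illustrative (it is not how the package manager's resolver actually works); it flags a package as blocked when one of its dependencies is pinned to a different version elsewhere in the list:

```python
def blocked_by_pins(pinned_versions, dependencies):
    """Toy model of the dependency-pinning failure mode.

    pinned_versions: {package: version pinned in the override list}
    dependencies:    {package: {dependency: version it requires}}
    Returns packages whose installation would be blocked (reported
    as Failed-NonCompliant) because a dependency is pinned to a
    different version in the list."""
    blocked = []
    for package, deps in dependencies.items():
        for dependency, needed in deps.items():
            pinned = pinned_versions.get(dependency)
            if pinned is not None and pinned != needed:
                blocked.append(package)
                break
    return blocked
```

With the pins from the example (`apt.amd64` at 1.2.27 requiring `apt-utils.amd64` 1.2.27, while the list pins `apt-utils.amd64` to 1.2.25), the function reports `apt.amd64` as blocked.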

**Sample patch lists**
+ **Windows**

  ```
  patches:
      -
          id: 'KB4284819'
          title: '2018-06 Cumulative Update for Windows Server 2016 (1709) for x64-based Systems (KB4284819)'
      -
          id: 'KB4284833'
      -
          id: 'KB4284835'
          title: '2018-06 Cumulative Update for Windows Server 2016 (1803) for x64-based Systems (KB4284835)'
      -
          id: 'KB4284880'
      -
          id: 'KB4338814'
  ```
+ **APT**

  ```
  patches:
      -
          id: 'apparmor.amd64'
          title: '2.10.95-0ubuntu2.9'
      -
          id: 'cryptsetup.amd64'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'cryptsetup-bin.*'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'apt.amd64'
          title: '*1.2.27'
      -
          id: 'apt-utils.amd64'
          title: '*1.2.25'
  ```
+ **Amazon Linux 2**

  ```
  patches:
      -
          id: 'kernel.x86_64'
      -
          id: 'bind*.x86_64'
          title: '32:9.11.4-26.P2.amzn2.5.2'
      -
          id: 'glibc*'
      -
          id: 'dhclient*'
          title: '*4.2.5-58.amzn2'
      -
          id: 'dhcp*'
          title: '*4.2.5-77.amzn2'
  ```
+ **Red Hat Enterprise Linux (RHEL)**

  ```
  patches:
      -
          id: 'NetworkManager.x86_64'
          title: '*1:1.10.2-14.el7_5'
      -
          id: 'NetworkManager-*.x86_64'
          title: '*1:1.10.2-14.el7_5'
      -
          id: 'audit.x86_64'
          title: '*0:2.8.1-3.el7'
      -
          id: 'dhclient.x86_64'
          title: '*.el7_5.1'
      -
          id: 'dhcp*.x86_64'
          title: '*12:5.2.5-68.el7'
  ```
+ **SUSE Linux Enterprise Server (SLES)**

  ```
  patches:
      -
          id: 'amazon-ssm-agent.x86_64'
      -
          id: 'binutils'
          title: '*0:2.26.1-9.12.1'
      -
          id: 'glibc*.x86_64'
          title: '*2.19*'
      -
          id: 'dhcp*'
          title: '0:4.3.3-9.1'
      -
          id: 'lib*'
  ```
+ **Ubuntu Server**

  ```
  patches:
      -
          id: 'apparmor.amd64'
          title: '2.10.95-0ubuntu2.9'
      -
          id: 'cryptsetup.amd64'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'cryptsetup-bin.*'
          title: '*2:1.6.6-5ubuntu2.1'
      -
          id: 'apt.amd64'
          title: '*1.2.27'
      -
          id: 'apt-utils.amd64'
          title: '*1.2.25'
  ```
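
If you generate your `InstallOverrideList` files programmatically, a minimal Python sketch using only standard-library string formatting (so the output matches the layout of the samples above) might look like this:

```python
def render_override_list(patches):
    """Render a minimal InstallOverrideList YAML document from a list
    of dicts, each with a required 'id' key and optional extra fields
    such as 'title'."""
    lines = ["patches:"]
    for patch in patches:
        lines.append("    -")
        lines.append(f"        id: '{patch['id']}'")
        for field, value in patch.items():
            if field != "id":
                lines.append(f"        {field}: '{value}'")
    return "\n".join(lines) + "\n"

# Example: two entries, one with a version-filtering title.
doc = render_override_list([
    {"id": "kernel.x86_64"},
    {"id": "dhclient*", "title": "*4.2.5-58.amzn2"},
])
```

As recommended above, validate the generated YAML before uploading it to your S3 bucket.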

### Parameter name: `RebootOption`


**Usage**: Optional.

**Options**: `RebootIfNeeded` | `NoReboot` 

**Default**: `RebootIfNeeded`

**Warning**  
The default option is `RebootIfNeeded`. Be sure to select the correct option for your use case. For example, if your instances must reboot immediately to complete a configuration process, choose `RebootIfNeeded`. Or, if you need to maintain instance availability until a scheduled reboot time, choose `NoReboot`.

**Important**  
The `NoReboot` option only prevents operating system-level restarts. Service-level restarts can still occur as part of the patching process. For example, when Docker is updated, dependent services such as Amazon Elastic Container Service might automatically restart even with `NoReboot` enabled. If you have critical services that must not be disrupted, consider additional measures such as temporarily removing instances from service or scheduling patching during maintenance windows.

**Important**  
We don’t recommend using Patch Manager for patching cluster instances in Amazon EMR (previously called Amazon Elastic MapReduce). In particular, don’t select the `RebootIfNeeded` option for the `RebootOption` parameter. (This option is available in the SSM Command documents for patching `AWS-RunPatchBaseline`, `AWS-RunPatchBaselineAssociation`, and `AWS-RunPatchBaselineWithHooks`.)  
The underlying commands for patching using Patch Manager use `yum` and `dnf` commands. Therefore, the operations result in incompatibilities because of how packages are installed. For information about the preferred methods for updating software on Amazon EMR clusters, see [Using the default AMI for Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-default-ami.html) in the *Amazon EMR Management Guide*.

RebootIfNeeded  
When you choose the `RebootIfNeeded` option, the instance is rebooted in either of the following cases:   
+ Patch Manager installed one or more patches. 

  Patch Manager doesn't evaluate whether a reboot is *required* by the patch. The system is rebooted even if the patch doesn't require a reboot.
+ Patch Manager detects one or more patches with a status of `INSTALLED_PENDING_REBOOT` during the `Install` operation. 

  The `INSTALLED_PENDING_REBOOT` status can mean that the option `NoReboot` was selected the last time the `Install` operation was run, or that a patch was installed outside of Patch Manager since the last time the managed node was rebooted.
Rebooting instances in these two cases ensures that updated packages are flushed from memory and keeps patching and rebooting behavior consistent across all operating systems.

NoReboot  
When you choose the `NoReboot` option, Patch Manager doesn't reboot an instance even if it installed patches during the `Install` operation. This option is useful if you know that your instances don't require rebooting after patches are applied, or you have applications or processes running on an instance that shouldn't be disrupted by a patching operation reboot. It's also useful when you want more control over the timing of instance reboots, such as by using a maintenance window.

**Patch installation tracking file**: To track patch installation, especially patches that have been installed since the last system reboot, Systems Manager maintains a file on the managed instance.

**Important**  
Don't delete or modify the tracking file. If this file is deleted or corrupted, the patch compliance report for the instance is inaccurate. If this happens, reboot the instance and run a patch Scan operation to restore the file.

This tracking file is stored in the following locations on your managed instances:
+ Linux operating systems: 
  + `/var/log/amazon/ssm/patch-configuration/patch-states-configuration.json`
  + `/var/log/amazon/ssm/patch-configuration/patch-inventory-from-last-operation.json`
+ Windows Server operating system:
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchStatesConfiguration.json`
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchInventoryFromLastOperation.json`
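
If you want to inspect the tracking file for troubleshooting, read it without modifying it. The following Python sketch treats the file as opaque JSON, since its schema is internal to Patch Manager and subject to change:

```python
import json
from pathlib import Path

# Default location on Linux; see the list above for the Windows path.
STATE_FILE = "/var/log/amazon/ssm/patch-configuration/patch-states-configuration.json"

def read_patch_states(path=STATE_FILE):
    """Read-only peek at the tracking file. Returns the parsed JSON,
    or None if the file doesn't exist. Never write this file back."""
    p = Path(path)
    if not p.exists():
        return None
    return json.loads(p.read_text())
```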

# SSM Command document for patching: `AWS-RunPatchBaselineWithHooks`

AWS Systems Manager supports `AWS-RunPatchBaselineWithHooks`, a Systems Manager document (SSM document) for Patch Manager, a tool in AWS Systems Manager. This SSM document performs patching operations on managed nodes for both security related and other types of updates. 

`AWS-RunPatchBaselineWithHooks` differs from `AWS-RunPatchBaseline` in the following ways:
+ **A wrapper document** – `AWS-RunPatchBaselineWithHooks` is a wrapper for `AWS-RunPatchBaseline` and relies on `AWS-RunPatchBaseline` for some of its operations.
+ **The `Install` operation** – `AWS-RunPatchBaselineWithHooks` supports lifecycle hooks that run at designated points during managed node patching. Because patch installations sometimes require managed nodes to reboot, the patching operation is divided into two events, for a total of three hooks that support custom functionality. The first hook is before the `Install with NoReboot` operation. The second hook is after the `Install with NoReboot` operation. The third hook is available after the reboot of the managed node.
+ **No custom patch list support** – `AWS-RunPatchBaselineWithHooks` doesn't support the `InstallOverrideList` parameter.
+ **SSM Agent support** – `AWS-RunPatchBaselineWithHooks` requires that SSM Agent 3.0.502 or later be installed on the managed node to patch.

When the document is run, it uses the patch baseline currently specified as the "default" for an operating system type if no patch group is specified. Otherwise, it uses the patch baseline that is associated with the patch group. For information about patch groups, see [Patch groups](patch-manager-patch-groups.md). 

You can use the document `AWS-RunPatchBaselineWithHooks` to apply patches for both operating systems and applications. (On Windows Server, application support is limited to updates for applications released by Microsoft.)

This document supports Linux and Windows Server managed nodes. The document will perform the appropriate actions for each platform.

**Note**  
`AWS-RunPatchBaselineWithHooks` isn't supported on macOS.

------
#### [ Linux ]

On Linux managed nodes, the `AWS-RunPatchBaselineWithHooks` document invokes a Python module, which in turn downloads a snapshot of the patch baseline that applies to the managed node. This patch baseline snapshot uses the defined rules and lists of approved and blocked patches to drive the appropriate package manager for each node type: 
+ Amazon Linux 2, Oracle Linux, and RHEL 7 managed nodes use YUM. For YUM operations, Patch Manager requires `Python 2.6` or a later supported version (2.6 - 3.12). Amazon Linux 2023 uses DNF. For DNF operations, Patch Manager requires a supported version of `Python 2` or `Python 3` (2.6 - 3.12).
+ RHEL 8 managed nodes use DNF. For DNF operations, Patch Manager requires a supported version of `Python 2` or `Python 3` (2.6 - 3.12). (Neither version is installed by default on RHEL 8. You must install one or the other manually.)
+ Debian Server and Ubuntu Server instances use APT. For APT operations, Patch Manager requires a supported version of `Python 3` (3.0 - 3.12).

------
#### [ Windows Server ]

On Windows Server managed nodes, the `AWS-RunPatchBaselineWithHooks` document downloads and invokes a PowerShell module, which in turn downloads a snapshot of the patch baseline that applies to the managed node. This patch baseline snapshot contains a list of approved patches that is compiled by querying the patch baseline against a Windows Server Update Services (WSUS) server. This list is passed to the Windows Update API, which controls downloading and installing the approved patches as appropriate. 

------

Each snapshot is specific to an AWS account, patch group, operating system, and snapshot ID. The snapshot is delivered through a presigned Amazon Simple Storage Service (Amazon S3) URL, which expires 24 hours after the snapshot is created. After the URL expires, however, if you want to apply the same snapshot content to other managed nodes, you can generate a new presigned Amazon S3 URL up to three days after the snapshot was created. To do this, use the [https://docs.aws.amazon.com/cli/latest/reference/ssm/get-deployable-patch-snapshot-for-instance.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-deployable-patch-snapshot-for-instance.html) command. 

After all approved and applicable updates have been installed, with reboots performed as necessary, patch compliance information is generated on a managed node and reported back to Patch Manager. 

If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaselineWithHooks` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).

**Important**  
While the `NoReboot` option prevents operating system restarts, it does not prevent service-level restarts that might occur when certain packages are updated. For example, updating packages like Docker may trigger automatic restarts of dependent services (such as container orchestration services) even when `NoReboot` is specified.

For information about viewing patch compliance data, see [About patch compliance](compliance-about.md#compliance-monitor-patch).

## `AWS-RunPatchBaselineWithHooks` operational steps


When the `AWS-RunPatchBaselineWithHooks` document runs, the following steps are performed:

1. **Scan** - A `Scan` operation using `AWS-RunPatchBaseline` is run on the managed node, and a compliance report is generated and uploaded. 

1. **Verify local patch states** - A script is run to determine what steps will be performed based on the selected operation and `Scan` result from Step 1. 

   1. If the selected operation is `Scan`, the operation is marked complete. The operation concludes. 

   1. If the selected operation is `Install`, Patch Manager evaluates the `Scan` result from Step 1 to determine what to run next: 

      1. If no missing patches are detected and no pending reboots are required, the operation proceeds directly to the final step (Step 8), which includes a hook you have provided. Any steps in between are skipped. 

      1. If no missing patches are detected, but pending reboots are required and the selected reboot option is `NoReboot`, the operation proceeds directly to the final step (Step 8), which includes a hook you have provided. Any steps in between are skipped. 

      1. Otherwise, the operation proceeds to the next step.

1. **Pre-patch hook operation** - The SSM document you have provided for the first lifecycle hook, `PreInstallHookDocName`, is run on the managed node. 

1. **Install with NoReboot** - An `Install` operation with the reboot option of `NoReboot` using `AWS-RunPatchBaseline` is run on the managed node, and a compliance report is generated and uploaded. 

1. **Post-install hook operation** - The SSM document you have provided for the second lifecycle hook, `PostInstallHookDocName`, is run on the managed node.

1. **Verify reboot** - A script runs to determine whether a reboot is needed for the managed node and what steps to run:

   1. If the selected reboot option is `NoReboot`, the operation proceeds directly to the final step (Step 8), which includes a hook you have provided. Any steps in between are skipped. 

   1. If the selected reboot option is `RebootIfNeeded`, Patch Manager checks whether there are any pending reboots required from the inventory collected in Step 4. This means that the operation continues to Step 7 and the managed node is rebooted in either of the following cases:

      1. Patch Manager installed one or more patches. (Patch Manager doesn't evaluate whether a reboot is required by the patch. The system is rebooted even if the patch doesn't require a reboot.)

      1. Patch Manager detects one or more patches with a status of `INSTALLED_PENDING_REBOOT` during the Install operation. The `INSTALLED_PENDING_REBOOT` status can mean that the option `NoReboot` was selected the last time the Install operation was run, or that a patch was installed outside of Patch Manager since the last time the managed node was rebooted. 

      If no patches meeting these criteria are found, the managed node patching operation is complete, and the operation proceeds directly to the final step (Step 8), which includes a hook you have provided. Any steps in between are skipped.

1. **Reboot and report** - An installation operation with the reboot option of `RebootIfNeeded` runs on the managed node using `AWS-RunPatchBaseline`, and a compliance report is generated and uploaded. 

1. **Post-reboot hook operation** - The SSM document you have provided for the third lifecycle hook, `OnExitHookDocName`, is run on the managed node. 

For a `Scan` operation, if Step 1 fails, the process of running the document stops and the step is reported as failed, although subsequent steps are reported as successful. 

For an `Install` operation, if any of the `aws:runDocument` steps fail, that step is reported as failed and the operation proceeds directly to the final step (Step 8), which includes a hook you have provided. Any steps in between are skipped and reported as successful, and the final step reports the status of its own operation result.
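
The branching described in the steps above can be summarized in a short Python sketch. This is an illustrative model of the documented flow, not the document's actual logic:

```python
def plan_steps(operation, missing_patches, pending_reboot, reboot_option):
    """Model of the AWS-RunPatchBaselineWithHooks step dispatch.

    operation:       'Scan' or 'Install'
    missing_patches: whether the initial Scan found missing patches
    pending_reboot:  whether any patch has status INSTALLED_PENDING_REBOOT
    reboot_option:   'RebootIfNeeded' or 'NoReboot'
    """
    steps = ["Scan"]
    if operation == "Scan":
        return steps  # Step 2a: the operation concludes after the scan
    # Step 2b: nothing to install and no actionable pending reboot
    if not missing_patches and (not pending_reboot or reboot_option == "NoReboot"):
        return steps + ["OnExitHook"]  # skip straight to the final hook
    steps += ["PreInstallHook", "InstallNoReboot", "PostInstallHook"]
    if reboot_option == "RebootIfNeeded":  # Step 6: verify reboot
        steps.append("RebootAndReport")
    return steps + ["OnExitHook"]
```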

## `AWS-RunPatchBaselineWithHooks` parameters


`AWS-RunPatchBaselineWithHooks` supports six parameters. 

The `Operation` parameter is required. 

The `RebootOption`, `PreInstallHookDocName`, `PostInstallHookDocName`, and `OnExitHookDocName` parameters are optional. 

`Snapshot ID` is technically optional, but we recommend that you supply a custom value for it when you run `AWS-RunPatchBaselineWithHooks` outside of a maintenance window. Let Patch Manager supply the value automatically when the document is run as part of a maintenance window operation.

**Topics**
+ [Parameter name: `Operation`](#patch-manager-aws-runpatchbaseline-parameters-operation)
+ [Parameter name: `Snapshot ID`](#patch-manager-aws-runpatchbaselinewithhook-parameters-snapshot-id)
+ [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaselinewithhooks-parameters-norebootoption)
+ [Parameter name: `PreInstallHookDocName`](#patch-manager-aws-runpatchbaselinewithhooks-parameters-preinstallhookdocname)
+ [Parameter name: `PostInstallHookDocName`](#patch-manager-aws-runpatchbaselinewithhooks-parameters-postinstallhookdocname)
+ [Parameter name: `OnExitHookDocName`](#patch-manager-aws-runpatchbaselinewithhooks-parameters-onexithookdocname)

### Parameter name: `Operation`


**Usage**: Required.

**Options**: `Scan` | `Install`. 

Scan  
When you choose the `Scan` option, the system uses the `AWS-RunPatchBaseline` document to determine the patch compliance state of the managed node and reports this information back to Patch Manager. `Scan` doesn't prompt updates to be installed or managed nodes to be rebooted. Instead, the operation identifies which approved and applicable updates are missing from the node. 

Install  
When you choose the `Install` option, `AWS-RunPatchBaselineWithHooks` attempts to install the approved and applicable updates that are missing from the managed node. Patch compliance information generated as part of an `Install` operation doesn't list any missing updates, but might report updates that are in a failed state if the installation of the update didn't succeed for any reason. Whenever an update is installed on a managed node, the node is rebooted to ensure the update is both installed and active. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaselineWithHooks` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](#patch-manager-aws-runpatchbaselinewithhooks-parameters-norebootoption).)  
If a patch specified by the baseline rules is installed *before* Patch Manager updates the managed node, the system might not reboot as expected. This can happen when a patch is installed manually by a user or installed automatically by another program, such as the `unattended-upgrades` package on Ubuntu Server.

### Parameter name: `Snapshot ID`


**Usage**: Optional.

`Snapshot ID` is a unique ID (GUID) used by Patch Manager to ensure that a set of managed nodes that are patched in a single operation all have the exact same set of approved patches. Although the parameter is defined as optional, our best practice recommendation depends on whether or not you're running `AWS-RunPatchBaselineWithHooks` in a maintenance window, as described in the following table.


**`AWS-RunPatchBaselineWithHooks` best practices**  

| Mode | Best practice | Details | 
| --- | --- | --- | 
| Running AWS-RunPatchBaselineWithHooks inside a maintenance window | Don't supply a Snapshot ID. Patch Manager will supply it for you. |  If you use a maintenance window to run `AWS-RunPatchBaselineWithHooks`, you shouldn't provide your own generated Snapshot ID. In this scenario, Systems Manager provides a GUID value based on the maintenance window execution ID. This ensures that a correct ID is used for all the invocations of `AWS-RunPatchBaselineWithHooks` in that maintenance window.  If you do specify a value in this scenario, note that the snapshot of the patch baseline might not remain in place for more than 3 days. After that, a new snapshot will be generated even if you specify the same ID after the snapshot expires.   | 
| Running AWS-RunPatchBaselineWithHooks outside of a maintenance window | Generate and specify a custom GUID value for the Snapshot ID.¹ |  When you aren't using a maintenance window to run `AWS-RunPatchBaselineWithHooks`, we recommend that you generate and specify a unique Snapshot ID for each patch baseline, particularly if you're running the `AWS-RunPatchBaselineWithHooks` document on multiple managed nodes in the same operation. If you don't specify an ID in this scenario, Systems Manager generates a different Snapshot ID for each managed node the command is sent to. This might result in varying sets of patches being specified among the nodes. For instance, say that you're running the `AWS-RunPatchBaselineWithHooks` document directly through Run Command, a tool in AWS Systems Manager, and targeting a group of 50 managed nodes. Specifying a custom Snapshot ID results in the generation of a single baseline snapshot that is used to evaluate and patch all the managed nodes, ensuring that they end up in a consistent state.   | 
|  ¹ You can use any tool capable of generating a GUID to generate a value for the Snapshot ID parameter. For example, in PowerShell, you can use the `New-Guid` cmdlet to generate a GUID in the format of `12345699-9405-4f69-bc5e-9315aEXAMPLE`.  | 
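
As an alternative to the PowerShell `New-Guid` cmdlet mentioned in the table footnote, the following Python sketch generates a Snapshot ID with the standard library. The parameters payload shape and the `SnapshotId` key name shown here are illustrative assumptions:

```python
import uuid

# One Snapshot ID per patching operation ensures that every targeted
# node is evaluated against the same patch baseline snapshot.
snapshot_id = str(uuid.uuid4())

# Hypothetical parameters payload for invoking AWS-RunPatchBaselineWithHooks
# outside a maintenance window.
parameters = {
    "Operation": ["Install"],
    "RebootOption": ["RebootIfNeeded"],
    "SnapshotId": [snapshot_id],
}
```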

### Parameter name: `RebootOption`


**Usage**: Optional.

**Options**: `RebootIfNeeded` | `NoReboot` 

**Default**: `RebootIfNeeded`

**Warning**  
The default option is `RebootIfNeeded`. Be sure to select the correct option for your use case. For example, if your managed nodes must reboot immediately to complete a configuration process, choose `RebootIfNeeded`. Or, if you need to maintain managed node availability until a scheduled reboot time, choose `NoReboot`.

**Important**  
We don’t recommend using Patch Manager for patching cluster instances in Amazon EMR (previously called Amazon Elastic MapReduce). In particular, don’t select the `RebootIfNeeded` option for the `RebootOption` parameter. (This option is available in the SSM Command documents for patching `AWS-RunPatchBaseline`, `AWS-RunPatchBaselineAssociation`, and `AWS-RunPatchBaselineWithHooks`.)  
The underlying commands for patching using Patch Manager use `yum` and `dnf` commands. Therefore, the operations result in incompatibilities because of how packages are installed. For information about the preferred methods for updating software on Amazon EMR clusters, see [Using the default AMI for Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-default-ami.html) in the *Amazon EMR Management Guide*.

RebootIfNeeded  
When you choose the `RebootIfNeeded` option, the managed node is rebooted in either of the following cases:   
+ Patch Manager installed one or more patches. 

  Patch Manager doesn't evaluate whether a reboot is *required* by the patch. The system is rebooted even if the patch doesn't require a reboot.
+ Patch Manager detects one or more patches with a status of `INSTALLED_PENDING_REBOOT` during the `Install` operation. 

  The `INSTALLED_PENDING_REBOOT` status can mean that the option `NoReboot` was selected the last time the `Install` operation was run, or that a patch was installed outside of Patch Manager since the last time the managed node was rebooted.
Rebooting managed nodes in these two cases ensures that updated packages are flushed from memory and keeps patching and rebooting behavior consistent across all operating systems.

NoReboot  
When you choose the `NoReboot` option, Patch Manager doesn't reboot a managed node even if it installed patches during the `Install` operation. This option is useful if you know that your managed nodes don't require rebooting after patches are applied, or you have applications or processes running on a node that shouldn't be disrupted by a patching operation reboot. It's also useful when you want more control over the timing of managed node reboots, such as by using a maintenance window.  
If you choose the `NoReboot` option and a patch is installed, the patch is assigned a status of `InstalledPendingReboot`. The managed node itself, however, is marked as `Non-Compliant`. After a reboot occurs and a `Scan` operation is run, the node status is updated to `Compliant`.

**Patch installation tracking file**: To track patch installation, especially patches that were installed since the last system reboot, Systems Manager maintains a file on the managed node.

**Important**  
Don't delete or modify the tracking file. If this file is deleted or corrupted, the patch compliance report for the managed node is inaccurate. If this happens, reboot the node and run a patch Scan operation to restore the file.

This tracking file is stored in the following locations on your managed nodes:
+ Linux operating systems: 
  + `/var/log/amazon/ssm/patch-configuration/patch-states-configuration.json`
  + `/var/log/amazon/ssm/patch-configuration/patch-inventory-from-last-operation.json`
+ Windows Server operating system:
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchStatesConfiguration.json`
  + `C:\ProgramData\Amazon\PatchBaselineOperations\State\PatchInventoryFromLastOperation.json`

### Parameter name: `PreInstallHookDocName`


**Usage**: Optional.

**Default**: `AWS-Noop`. 

The value to provide for the `PreInstallHookDocName` parameter is the name or Amazon Resource Name (ARN) of an SSM document of your choice. You can provide the name of an AWS managed document or the name or ARN of a custom SSM document that you have created or that has been shared with you. (For an SSM document that has been shared with you from a different AWS account, you must specify the full resource ARN, such as `arn:aws:ssm:us-east-2:123456789012:document/MySharedDocument`.)

The SSM document you specify is run before the `Install` operation and performs any actions supported by SSM Agent, such as a shell script to check application health before patching is performed on the managed node. (For a list of actions, see [Command document plugin reference](documents-command-ssm-plugin-reference.md)). The default SSM document name is `AWS-Noop`, which doesn't perform any operation on the managed node. 

For information about creating a custom SSM document, see [Creating SSM document content](documents-creating-content.md). 

### Parameter name: `PostInstallHookDocName`


**Usage**: Optional.

**Default**: `AWS-Noop`. 

The value to provide for the `PostInstallHookDocName` parameter is the name or Amazon Resource Name (ARN) of an SSM document of your choice. You can provide the name of an AWS managed document or the name or ARN of a custom SSM document that you have created or that has been shared with you. (For an SSM document that has been shared with you from a different AWS account, you must specify the full resource ARN, such as `arn:aws:ssm:us-east-2:123456789012:document/MySharedDocument`.)

The SSM document you specify is run after the `Install with NoReboot` operation and performs any actions supported by SSM Agent, such as a shell script for installing third-party updates before reboot. (For a list of actions, see [Command document plugin reference](documents-command-ssm-plugin-reference.md)). The default SSM document name is `AWS-Noop`, which doesn't perform any operation on the managed node. 

For information about creating a custom SSM document, see [Creating SSM document content](documents-creating-content.md). 

### Parameter name: `OnExitHookDocName`


**Usage**: Optional.

**Default**: `AWS-Noop`. 

The value to provide for the `OnExitHookDocName` parameter is the name or Amazon Resource Name (ARN) of an SSM document of your choice. You can provide the name of an AWS managed document or the name or ARN of a custom SSM document that you have created or that has been shared with you. (For an SSM document that has been shared with you from a different AWS account, you must specify the full resource ARN, such as `arn:aws:ssm:us-east-2:123456789012:document/MySharedDocument`.)

The SSM document you specify is run after the managed node reboot operation and performs any actions supported by SSM Agent, such as a shell script to verify node health after the patching operation is complete. (For a list of actions, see [Command document plugin reference](documents-command-ssm-plugin-reference.md)). The default SSM document name is `AWS-Noop`, which doesn't perform any operation on the managed node. 

For information about creating a custom SSM document, see [Creating SSM document content](documents-creating-content.md). 
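
Taken together, the three hook parameters are supplied to a single run of the patching document. The following boto3 sketch only assembles the parameter map; the two non-default document names are hypothetical placeholders for custom SSM documents you would create, and the commented-out `send_command` call (which assumes the `AWS-RunPatchBaselineWithHooks` document and a `tag:PatchGroup` target) requires valid AWS credentials.

```python
# Sketch only: the two non-default hook document names below are
# hypothetical placeholders for custom SSM documents you would create.
parameters = {
    "Operation": ["Install"],
    "PreInstallHookDocName": ["AWS-Noop"],                           # default no-op
    "PostInstallHookDocName": ["Example-InstallThirdPartyUpdates"],  # hypothetical
    "OnExitHookDocName": ["Example-VerifyNodeHealth"],               # hypothetical
}

# With AWS credentials configured, the run itself would look like:
# import boto3
# ssm = boto3.client("ssm")
# ssm.send_command(
#     DocumentName="AWS-RunPatchBaselineWithHooks",
#     Targets=[{"Key": "tag:PatchGroup", "Values": ["DEV"]}],
#     Parameters=parameters,
# )
```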

# Sample scenario for using the InstallOverrideList parameter in `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation`

You can use the `InstallOverrideList` parameter when you want to override the patches specified by the current default patch baseline in Patch Manager, a tool in AWS Systems Manager. This topic provides examples that show how to use this parameter to achieve the following:
+ Apply different sets of patches to a target group of managed nodes.
+ Apply these patch sets on different frequencies.
+ Use the same patch baseline for both operations.

Say that you want to install two different categories of patches on your Amazon Linux 2 managed nodes. You want to install these patches on different schedules using maintenance windows. You want one maintenance window to run every week and install all `Security` patches. You want another maintenance window to run once a month and install all available patches, or categories of patches other than `Security`. 

However, only one patch baseline at a time can be defined as the default for an operating system. This requirement helps avoid situations where one patch baseline approves a patch while another blocks it, which can lead to issues between conflicting versions.

With the following strategy, you use the `InstallOverrideList` parameter to apply different types of patches to a target group, on different schedules, while still using the same patch baseline:

1. In the default patch baseline, ensure that only `Security` updates are specified.

1. Create a maintenance window that runs `AWS-RunPatchBaseline` or `AWS-RunPatchBaselineAssociation` each week. Don't specify an override list.

1. Create an override list of the patches of all types that you want to apply on a monthly basis and store it in an Amazon Simple Storage Service (Amazon S3) bucket. 

1. Create a second maintenance window that runs once a month. However, for the Run Command task you register for this maintenance window, specify the location of your override list.

The result: Only `Security` patches, as defined in your default patch baseline, are installed each week. All available patches, or whatever subset of patches you define, are installed each month.
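
The monthly task from the last step can be sketched with boto3's `register_maintenance_window_task` call. This is a hedged illustration only: the window ID, target ID, bucket, and file name are placeholder values, and the call itself is shown commented out because it requires valid AWS credentials and existing resources.

```python
# Sketch: Run Command task for the monthly maintenance window, pointing
# AWS-RunPatchBaseline at an override list stored in S3. All IDs, the
# bucket name, and the file name are hypothetical placeholders.
task_kwargs = {
    "WindowId": "mw-0c50858d01EXAMPLE",
    "Targets": [{"Key": "WindowTargetIds", "Values": ["e32eecb2-646c-4f4b-8ed1-EXAMPLE"]}],
    "TaskArn": "AWS-RunPatchBaseline",
    "TaskType": "RUN_COMMAND",
    "MaxConcurrency": "10%",
    "MaxErrors": "1",
    "TaskInvocationParameters": {
        "RunCommand": {
            "Parameters": {
                "Operation": ["Install"],
                "InstallOverrideList": [
                    "https://s3.amazonaws.com/amzn-s3-demo-bucket/MyOverrideList.yaml"
                ],
            }
        }
    },
}

# With AWS credentials configured:
# import boto3
# boto3.client("ssm").register_maintenance_window_task(**task_kwargs)
```

The weekly window registers the same task without the `InstallOverrideList` entry, so it falls back to the default patch baseline.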

**Note**  
When you're patching a node that only uses IPv6, ensure that the provided URL is reachable from the node. If the SSM Agent config option `UseDualStackEndpoint` is set to `true`, then a dualstack S3 client is used when an S3 URL is provided. See [Tutorial: Patching a server in an IPv6 only environment](patch-manager-server-patching-iPv6-tutorial.md) for more information on configuring the agent to use dualstack.

For more information and sample lists, see [Parameter name: `InstallOverrideList`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-installoverridelist).

# Using the BaselineOverride parameter


You can define patching preferences at runtime using the baseline override feature in Patch Manager, a tool in AWS Systems Manager. Do this by specifying an Amazon Simple Storage Service (Amazon S3) bucket containing a JSON object with a list of patch baselines. The patching operation uses the baselines provided in the JSON object that match the host operating system instead of applying the rules from the default patch baseline.

**Important**  
The `BaselineOverride` file name can't contain the following characters: backtick (`), single quote ('), double quote ("), and dollar sign (\$).
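
As a quick illustration of this restriction, a small helper can screen file names before you upload them to Amazon S3. This is a hypothetical convenience check, not part of any AWS API.

```python
# Hypothetical helper: reject BaselineOverride file names that contain
# any of the disallowed characters (backtick, single quote, double
# quote, dollar sign).
FORBIDDEN = set("`'\"$")

def is_valid_override_name(name):
    """Return True if the file name contains none of the forbidden characters."""
    return not FORBIDDEN.intersection(name)
```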

Except when a patching operation uses a patch policy, using the `BaselineOverride` parameter doesn't overwrite the patch compliance of the baseline provided in the parameter. The output results are recorded in the Stdout logs from Run Command, a tool in AWS Systems Manager. The results only print out packages that are marked as `NON_COMPLIANT`. This means the package is marked as `Missing`, `Failed`, `InstalledRejected`, or `InstalledPendingReboot`.

When a patch operation uses a patch policy, however, the system passes the override parameter from the associated S3 bucket, and the compliance value is updated for the managed node. For more information about patch policy behaviors, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).

**Note**  
When you're patching a node that only uses IPv6, ensure that the provided URL is reachable from the node. If the SSM Agent config option `UseDualStackEndpoint` is set to `true`, then a dualstack S3 client is used when an S3 URL is provided. See [Tutorial: Patching a server in an IPv6 only environment](patch-manager-server-patching-iPv6-tutorial.md) for more information on configuring the agent to use dualstack.

## Using the patch baseline override with Snapshot Id or Install Override List parameters


There are two cases where the patch baseline override has noteworthy behavior.

**Using baseline override and Snapshot Id at the same time**  
Snapshot IDs ensure that all managed nodes targeted by a single patching command apply the same set of patches. For example, if you patch 1,000 nodes at one time, the same patches are applied to every node.

When you use both a Snapshot ID and a patch baseline override, the Snapshot ID takes precedence. The baseline override rules are still used, but they are evaluated only once. Continuing the earlier example, the patches applied across your 1,000 managed nodes remain identical even if, midway through the patching operation, you change the JSON file in the referenced S3 bucket. This is because the Snapshot ID was provided.

**Using baseline override and Install Override List at the same time**  
You can't use these two parameters at the same time. The patching document fails if both parameters are supplied, and it doesn't perform any scans or installs on the managed node.

## Code examples


The following code example for Python shows how to generate the patch baseline override.

```
import boto3
import json

ssm = boto3.client('ssm')
s3 = boto3.resource('s3')
s3_bucket_name = 'my-baseline-override-bucket'
s3_file_name = 'MyBaselineOverride.json'
baseline_ids_to_export = ['pb-0000000000000000', 'pb-0000000000000001']

# Export each baseline, dropping the response metadata that
# get_patch_baseline returns alongside the baseline attributes.
baseline_overrides = []
for baseline_id in baseline_ids_to_export:
    baseline = ssm.get_patch_baseline(BaselineId=baseline_id)
    baseline.pop('ResponseMetadata', None)
    baseline_overrides.append(baseline)

# Serialize the baselines and upload the override file to Amazon S3.
json_content = json.dumps(baseline_overrides, indent=4, sort_keys=True, default=str)
s3.Object(bucket_name=s3_bucket_name, key=s3_file_name).put(Body=json_content)
```

This produces a patch baseline override like the following.

```
[
    {
        "ApprovalRules": {
            "PatchRules": [
                {
                    "ApproveAfterDays": 0, 
                    "ComplianceLevel": "UNSPECIFIED", 
                    "EnableNonSecurity": false, 
                    "PatchFilterGroup": {
                        "PatchFilters": [
                            {
                                "Key": "PRODUCT", 
                                "Values": [
                                    "*"
                                ]
                            }, 
                            {
                                "Key": "CLASSIFICATION", 
                                "Values": [
                                    "*"
                                ]
                            }, 
                            {
                                "Key": "SEVERITY", 
                                "Values": [
                                    "*"
                                ]
                            }
                        ]
                    }
                }
            ]
        }, 
        "ApprovedPatches": [], 
        "ApprovedPatchesComplianceLevel": "UNSPECIFIED", 
        "ApprovedPatchesEnableNonSecurity": false, 
        "GlobalFilters": {
            "PatchFilters": []
        }, 
        "OperatingSystem": "AMAZON_LINUX_2", 
        "RejectedPatches": [], 
        "RejectedPatchesAction": "ALLOW_AS_DEPENDENCY", 
        "Sources": []
    }, 
    {
        "ApprovalRules": {
            "PatchRules": [
                {
                    "ApproveUntilDate": "2021-01-06", 
                    "ComplianceLevel": "UNSPECIFIED", 
                    "EnableNonSecurity": true, 
                    "PatchFilterGroup": {
                        "PatchFilters": [
                            {
                                "Key": "PRODUCT", 
                                "Values": [
                                    "*"
                                ]
                            }, 
                            {
                                "Key": "CLASSIFICATION", 
                                "Values": [
                                    "*"
                                ]
                            }, 
                            {
                                "Key": "SEVERITY", 
                                "Values": [
                                    "*"
                                ]
                            }
                        ]
                    }
                }
            ]
        }, 
        "ApprovedPatches": [
            "open-ssl*"
        ], 
        "ApprovedPatchesComplianceLevel": "UNSPECIFIED", 
        "ApprovedPatchesEnableNonSecurity": false, 
        "GlobalFilters": {
            "PatchFilters": []
        }, 
        "OperatingSystem": "SUSE", 
        "RejectedPatches": [
            "python*"
        ], 
        "RejectedPatchesAction": "ALLOW_AS_DEPENDENCY", 
        "Sources": []
    }
]
```

# Patch baselines


The topics in this section provide information about how patch baselines work in Patch Manager, a tool in AWS Systems Manager, when you run a `Scan` or `Install` operation on your managed nodes.

**Topics**
+ [Predefined and custom patch baselines](patch-manager-predefined-and-custom-patch-baselines.md)
+ [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md)
+ [Patch groups](patch-manager-patch-groups.md)
+ [Patching applications released by Microsoft on Windows Server](patch-manager-patching-windows-applications.md)

# Predefined and custom patch baselines


Patch Manager, a tool in AWS Systems Manager, provides predefined patch baselines for each of the operating systems supported by Patch Manager. You can use these baselines as they are currently configured (you can't customize them) or you can create your own custom patch baselines. Custom patch baselines allow you greater control over which patches are approved or rejected for your environment. Also, the predefined baselines assign a compliance level of `Unspecified` to all patches installed using those baselines. For compliance values to be assigned, you can create a copy of a predefined baseline and specify the compliance values you want to assign to patches. For more information, see [Custom baselines](#patch-manager-baselines-custom) and [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).

**Note**  
The information in this topic applies no matter which method or type of configuration you are using for your patching operations:
+ A patch policy configured in Quick Setup
+ A Host Management option configured in Quick Setup
+ A maintenance window to run a patch `Scan` or `Install` task
+ An on-demand **Patch now** operation

**Topics**
+ [Predefined baselines](#patch-manager-baselines-pre-defined)
+ [Custom baselines](#patch-manager-baselines-custom)

## Predefined baselines


The following table describes the predefined patch baselines provided with Patch Manager.

For information about which versions of each operating system Patch Manager supports, see [Patch Manager prerequisites](patch-manager-prerequisites.md).


| Name | Supported operating system | Details | 
| --- | --- | --- | 
| `AWS-AlmaLinuxDefaultPatchBaseline` | AlmaLinux | Approves all operating system patches that are classified as "Security" and that have a severity level of "Critical" or "Important". Also approves all patches that are classified as "Bugfix". Patches are auto-approved 7 days after they are released or updated.¹ | 
| `AWS-AmazonLinux2DefaultPatchBaseline` | Amazon Linux 2 | Approves all operating system patches that are classified as "Security" and that have a severity level of "Critical" or "Important". Also approves all patches with a classification of "Bugfix". Patches are auto-approved 7 days after release.¹ | 
| `AWS-AmazonLinux2023DefaultPatchBaseline` | Amazon Linux 2023 | Approves all operating system patches that are classified as "Security" and that have a severity level of "Critical" or "Important". Also approves all patches with a classification of "Bugfix". Patches in both categories are auto-approved 7 days after release. | 
| `AWS-CentOSDefaultPatchBaseline` | CentOS Stream | Approves all updates 7 days after they become available, including nonsecurity updates. | 
| `AWS-DebianDefaultPatchBaseline` | Debian Server | Immediately approves all operating system security-related patches that have a priority of "Required", "Important", "Standard", "Optional", or "Extra". There is no wait before approval because reliable release dates aren't available in the repositories. | 
| `AWS-MacOSDefaultPatchBaseline` | macOS | Approves all operating system patches that are classified as "Security". Also approves all packages with a current update. | 
| `AWS-OracleLinuxDefaultPatchBaseline` | Oracle Linux | Approves all operating system patches that are classified as "Security" and that have a severity level of "Important" or "Moderate". Also approves all patches that are classified as "Bugfix" 7 days after release. Patches are auto-approved 7 days after they are released or updated.¹ | 
| `AWS-RedHatDefaultPatchBaseline` | Red Hat Enterprise Linux (RHEL) | Approves all operating system patches that are classified as "Security" and that have a severity level of "Critical" or "Important". Also approves all patches that are classified as "Bugfix". Patches are auto-approved 7 days after they are released or updated.¹ | 
| `AWS-RockyLinuxDefaultPatchBaseline` | Rocky Linux | Approves all operating system patches that are classified as "Security" and that have a severity level of "Critical" or "Important". Also approves all patches that are classified as "Bugfix". Patches are auto-approved 7 days after they are released or updated.¹ | 
| `AWS-SuseDefaultPatchBaseline` | SUSE Linux Enterprise Server (SLES) | Approves all operating system patches that are classified as "Security" and that have a severity level of "Critical" or "Important". Patches are auto-approved 7 days after they are released or updated.¹ | 
| `AWS-UbuntuDefaultPatchBaseline` | Ubuntu Server | Immediately approves all operating system security-related patches that have a priority of "Required", "Important", "Standard", "Optional", or "Extra". There is no wait before approval because reliable release dates aren't available in the repositories. | 
| `AWS-DefaultPatchBaseline` | Windows Server | Approves all Windows Server operating system patches that are classified as "CriticalUpdates" or "SecurityUpdates" and that have an MSRC severity of "Critical" or "Important". Patches are auto-approved 7 days after they are released or updated.² | 
| `AWS-WindowsPredefinedPatchBaseline-OS` | Windows Server | Approves all Windows Server operating system patches that are classified as "CriticalUpdates" or "SecurityUpdates" and that have an MSRC severity of "Critical" or "Important". Patches are auto-approved 7 days after they are released or updated.² | 
| `AWS-WindowsPredefinedPatchBaseline-OS-Applications` | Windows Server | For the Windows Server operating system, approves all patches that are classified as "CriticalUpdates" or "SecurityUpdates" and that have an MSRC severity of "Critical" or "Important". For applications released by Microsoft, approves all patches. Patches for both OS and applications are auto-approved 7 days after they are released or updated.² | 

¹ For Amazon Linux 2, the 7-day wait before patches are auto-approved is calculated from an `Updated Date` value in `updateinfo.xml`, not a `Release Date` value. Various factors can affect the `Updated Date` value. Other operating systems handle release and update dates differently. For information to help you avoid unexpected results with auto-approval delays, see [How package release dates and update dates are calculated](patch-manager-release-dates.md).

² For Windows Server, default baselines include a 7-day auto-approval delay. To install a patch within 7 days after release, you must create a custom baseline.

## Custom baselines


Use the following information to help you create custom patch baselines to meet your patching goals.

**Topics**
+ [Using auto-approvals in custom baselines](#baselines-auto-approvals)
+ [Additional information for creating patch baselines](#baseline-additional-info)

### Using auto-approvals in custom baselines


If you create your own patch baseline, you can choose which patches to auto-approve by using the following categories.
+ **Operating system**: Supported versions of Windows Server, Amazon Linux, Ubuntu Server, and so on.
+ **Product name** (for operating systems): For example, RHEL 7.5, Amazon Linux 2023 2023.8.20250808, Windows Server 2012, Windows Server 2012 R2, and so on.
+ **Product name** (for applications released by Microsoft on Windows Server only): For example, Word 2016, BizTalk Server, and so on.
+ **Classification**: For example, Critical updates, Security updates, and so on.
+ **Severity**: For example, Critical, Important, and so on.

For each approval rule that you create, you can choose to specify an auto-approval delay or specify a patch approval cutoff date. 

**Note**  
Because it's not possible to reliably determine the release dates of update packages for Ubuntu Server, the auto-approval options aren't supported for this operating system.

An auto-approval delay is the number of days to wait, after a patch is released or last updated, before the patch is automatically approved for patching. For example, if you create a rule using the `CriticalUpdates` classification and configure it with a 7-day auto-approval delay, a new critical patch released on July 7 is automatically approved on July 14.

If a Linux repository does not provide release date information for packages, Patch Manager uses the build time of the package as the date for auto-approval date specifications for Amazon Linux 2, Amazon Linux 2023, and Red Hat Enterprise Linux (RHEL). If the build time of the package can't be determined, Patch Manager uses a default date of January 1st, 1970. This results in Patch Manager bypassing any auto-approval date specifications in patch baselines that are configured to approve patches for any date after January 1st, 1970.

When you specify an auto-approval cutoff date, Patch Manager automatically applies all patches released or last updated on or before that date. For example, if you specify July 7, 2023 as the cutoff date, no patches released or last updated on or after July 8, 2023 are installed automatically.
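
The two approval options translate to simple date arithmetic, sketched here with Python's standard `datetime` module. The helper function is illustrative only, not a Patch Manager API.

```python
from datetime import date, timedelta

# Auto-approval delay: a patch released on July 7 with a 7-day delay
# becomes approved on July 14.
release = date(2023, 7, 7)
approval_date = release + timedelta(days=7)

# Approval cutoff: patches released or last updated on or before the
# cutoff date are approved; anything later is not (illustrative helper).
cutoff = date(2023, 7, 7)

def approved_by_cutoff(released, cutoff):
    return released <= cutoff
```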

When you create a custom patch baseline, you can specify a compliance severity level for patches approved by that patch baseline, such as `Critical` or `High`. If the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.

### Additional information for creating patch baselines


Keep the following in mind when you create a patch baseline:
+ Patch Manager provides one predefined patch baseline for each supported operating system. These predefined patch baselines are used as the default patch baselines for each operating system type unless you create your own patch baseline and designate it as the default for the corresponding operating system type. 
**Note**  
For Windows Server, three predefined patch baselines are provided. The patch baselines `AWS-DefaultPatchBaseline` and `AWS-WindowsPredefinedPatchBaseline-OS` support only operating system updates on the Windows operating system itself. `AWS-DefaultPatchBaseline` is used as the default patch baseline for Windows Server managed nodes unless you specify a different patch baseline. The configuration settings in these two patch baselines are the same. The newer of the two, `AWS-WindowsPredefinedPatchBaseline-OS`, was created to distinguish it from the third predefined patch baseline for Windows Server. That patch baseline, `AWS-WindowsPredefinedPatchBaseline-OS-Applications`, can be used to apply patches to both the Windows Server operating system and supported applications released by Microsoft.
+ By default, Windows Server 2019 and Windows Server 2022 remove updates that are replaced by later updates. As a result, if you use the `ApproveUntilDate` parameter in a Windows Server patch baseline, but the date selected in the `ApproveUntilDate` parameter is before the date of the latest patch, then the new patch isn't installed when the patching operation runs. For more information about Windows Server patching rules, see the Windows Server tab in [How security patches are selected](patch-manager-selecting-patches.md).

  This means that the managed node is compliant in terms of Systems Manager operations, even though a critical patch from the previous month might not be installed. This same scenario can occur when using the `ApproveAfterDays` parameter. Because of the Microsoft superseded patch behavior, if you set a large enough value for `ApproveAfterDays` (generally greater than 30 days), patches for Windows Server might never be installed, because Microsoft releases a newer superseding patch before the specified number of days has elapsed. 
+ For Windows Server only, an available security update patch that is not approved by the patch baseline can have a compliance value of `Compliant` or `Non-Compliant`, as defined in a custom patch baseline. 

  When you create or update a patch baseline, you choose the status you want to assign to security patches that are available but not approved because they don't meet the installation criteria specified in the patch baseline. For example, security patches that you might want installed can be skipped if you have specified a long period to wait after a patch is released before installation. If an update to the patch is released during your specified waiting period, the waiting period for installing the patch starts over. If the waiting period is too long, multiple versions of the patch could be released but never installed.

  Using the console to create or update a patch baseline, you specify this option in the **Available security updates compliance status** field. Using the AWS CLI to run the [https://docs.aws.amazon.com/cli/latest/reference/ssm/create-patch-baseline.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-patch-baseline.html) or [https://docs.aws.amazon.com/cli/latest/reference/ssm/update-patch-baseline.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-patch-baseline.html) command, you specify this option in the `available-security-updates-compliance-status` parameter. 
+ For on-premises servers and virtual machines (VMs), Patch Manager attempts to use your custom default patch baseline. If no custom default patch baseline exists, the system uses the predefined patch baseline for the corresponding operating system.
+ If a patch is listed as both approved and rejected in the same patch baseline, the patch is rejected.
+ A managed node can have only one patch baseline defined for it.
+ The formats of package names you can add to lists of approved patches and rejected patches for a patch baseline depend on the type of operating system you're patching.

  For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
+ If you are using a [patch policy configuration](patch-manager-policies.md) in Quick Setup, updates you make to custom patch baselines are synchronized with Quick Setup once an hour. 

  If a custom patch baseline that was referenced in a patch policy is deleted, a banner displays on the Quick Setup **Configuration details** page for your patch policy. The banner informs you that the patch policy references a patch baseline that no longer exists, and that subsequent patching operations will fail. In this case, return to the Quick Setup **Configurations** page, select the Patch Manager configuration, and choose **Actions**, **Edit configuration**. The deleted patch baseline name is highlighted, and you must select a new patch baseline for the affected operating system.
+ When you create an approval rule with multiple `Classification` and `Severity` values, patches are approved based on their available attributes. Packages with both `Classification` and `Severity` attributes will match the selected baseline values for both fields. Packages with only `Classification` attributes are matched only against the selected baseline `Classification` values. Severity requirements in the same rule are ignored for packages that don't have `Severity` attributes. 
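
The matching behavior in that last point can be sketched as a small, hypothetical helper (not a Patch Manager API): a package is tested only against the rule fields for which it actually carries an attribute.

```python
# Hypothetical sketch of the rule-matching behavior: a package must
# satisfy every rule field for which it carries an attribute; fields the
# package lacks (such as Severity) are ignored.
def rule_matches(package, classifications, severities):
    """package is a dict that may carry 'classification' and/or 'severity'."""
    if package.get("classification") is not None:
        if package["classification"] not in classifications:
            return False
    if package.get("severity") is not None:
        if package["severity"] not in severities:
            return False
    return True
```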

For information about creating a patch baseline, see [Working with custom patch baselines](patch-manager-manage-patch-baselines.md) and [Tutorial: Patch a server environment using the AWS CLI](patch-manager-patch-servers-using-the-aws-cli.md).

# Package name formats for approved and rejected patch lists


The formats of package names you can add to lists of approved patches and rejected patches depend on the type of operating system you're patching.

## Package name formats for Linux operating systems


The formats you can specify for approved and rejected patches in your patch baseline vary by Linux type. More specifically, the formats that are supported depend on the package manager used by the type of Linux operating system.

**Topics**
+ [Amazon Linux 2, Amazon Linux 2023, Oracle Linux, and Red Hat Enterprise Linux (RHEL)](#patch-manager-approved-rejected-package-name-formats-standard)
+ [Debian Server and Ubuntu Server](#patch-manager-approved-rejected-package-name-formats-ubuntu)
+ [SUSE Linux Enterprise Server (SLES)](#patch-manager-approved-rejected-package-name-formats-sles)

### Amazon Linux 2, Amazon Linux 2023, Oracle Linux, and Red Hat Enterprise Linux (RHEL)


**Package manager**: YUM, except for Amazon Linux 2023 and RHEL 8, which use DNF.

**Approved patches**: For approved patches, you can specify any of the following:
+ Bugzilla IDs, in the format `1234567` (The system processes numbers-only strings as Bugzilla IDs.)
+ CVE IDs, in the format `CVE-2018-1234567`
+ Advisory IDs, in formats such as `RHSA-2017:0864` and `ALAS-2018-123`
+ Package names that are constructed using one or more of the available components for package naming. To illustrate, for the package named `dbus.x86_64:1:1.12.28-1.amzn2023.0.1`, the components are as follows: 
  + `name`: `dbus`
  + `architecture`: `x86_64`
  + `epoch`: `1`
  + `version`: `1.12.28`
  + `release`: `1.amzn2023.0.1`

  Package names with the following constructions are supported:
  + `name`
  + `name.arch`
  + `name-version`
  + `name-version-release`
  + `name-version-release.arch`
  + `version`
  + `version-release`
  + `epoch:version-release`
  + `name-epoch:version-release`
  + `name-epoch:version-release.arch`
  + `epoch:name-version-release.arch`
  + `name.arch:epoch:version-release`

  Some examples:
  + `dbus.x86_64`
  + `dbus-1.12.28`
  + `dbus-1.12.28-1.amzn2023.0.1`
  + `dbus-1:1.12.28-1.amzn2023.0.1.x86_64`
+ We also support package name components with a single wildcard in the above formats, such as the following:
  + `dbus*` 
  + `dbus-1.12.2*`
  + `dbus-*:1.12.28-1.amzn2023.0.1.x86_64`

**Rejected patches**: For rejected patches, you can specify any of the following:
+ Package names that are constructed using one or more of the available components for package naming. To illustrate, for the package named `dbus.x86_64:1:1.12.28-1.amzn2023.0.1`, the components are as follows: 
  + `name`: `dbus`
  + `architecture`: `x86_64`
  + `epoch`: `1`
  + `version`: `1.12.28`
  + `release`: `1.amzn2023.0.1`

  Package names with the following constructions are supported:
  + `name`
  + `name.arch`
  + `name-version`
  + `name-version-release`
  + `name-version-release.arch`
  + `version`
  + `version-release`
  + `epoch:version-release`
  + `name-epoch:version-release`
  + `name-epoch:version-release.arch`
  + `epoch:name-version-release.arch`
  + `name.arch:epoch:version-release`

  Some examples:
  + `dbus.x86_64`
  + `dbus-1.12.28`
  + `dbus-1.12.28-1.amzn2023.0.1`
  + `dbus-1:1.12.28-1.amzn2023.0.1.x86_64` 
+ We also support package name components with a single wildcard in the above formats, such as the following:
  + `dbus*` 
  + `dbus-1.12.2*`
  + `dbus-*:1.12.28-1.amzn2023.0.1.x86_64`
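
The single-wildcard patterns above behave like shell-style globs, which you can approximate with Python's standard `fnmatch` module. This is an illustration only; Patch Manager's own matcher may differ in edge cases.

```python
from fnmatch import fnmatch

# Illustrative shell-style match of a package-name string against a
# single-wildcard pattern like those listed above.
def pattern_matches(package_name, pattern):
    return fnmatch(package_name, pattern)
```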

### Debian Server and Ubuntu Server


**Package manager**: APT

**Approved patches** and **rejected patches**: For both approved and rejected patches, specify the following:
+ Package names, in the format `ExamplePkg33`
**Note**  
For Debian Server and Ubuntu Server lists, don't include elements such as architecture or versions. For example, you specify the package name `ExamplePkg33` to include all the following in a patch list:  
`ExamplePkg33.x86.1`
`ExamplePkg33.x86.2`
`ExamplePkg33.x64.1`
`ExamplePkg33.3.2.5-364.noarch`
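
Because only the base package name appears in the list for APT-based systems, the inclusion behavior shown in the note can be sketched as a simple name test. This is a hypothetical helper, not Patch Manager's implementation.

```python
# Hypothetical sketch: a name-only list entry covers the bare package
# name and any variant that extends it with architecture or version
# suffixes after a dot.
def apt_list_entry_matches(entry, package_full_name):
    return package_full_name == entry or package_full_name.startswith(entry + ".")
```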

### SUSE Linux Enterprise Server (SLES)


**Package manager**: Zypper

**Approved patches** and **rejected patches**: For both approved and rejected patch lists, you can specify any of the following:
+ Full package names, in formats such as:
  + `SUSE-SLE-Example-Package-15-2023-123`
  + `example-pkg-2023.15.4-46.17.1.x86_64.rpm`
+ Package names with a single wildcard, such as:
  + `SUSE-SLE-Example-Package-15-2023-*`
  + `example-pkg-2023.15.4-46.17.1.*.rpm`

## Package name formats for macOS


**Supported package managers**: softwareupdate, installer, Brew, Brew Cask

**Approved patches** and **rejected patches**: For both approved and rejected patch lists, you specify full package names, in formats such as:
+ `XProtectPlistConfigData`
+ `MRTConfigData`

Wildcards aren't supported in approved and rejected patch lists for macOS.

## Package name formats for Windows operating systems


For Windows operating systems, specify patches using Microsoft Knowledge Base IDs and Microsoft Security Bulletin IDs; for example:

```
KB2032276,KB2124261,MS10-048
```

# Patch groups


**Note**  
Patch groups are not used in patching operations that are based on *patch policies*. For information about working with patch policies, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).  
Patch group functionality is not supported in the console for account-Region pairs that did not already use patch groups before patch policy support was released on December 22, 2022. Patch group functionality is still available in account-Region pairs that began using patch groups before this date.

You can use a *patch group* to associate managed nodes with a specific patch baseline in Patch Manager, a tool in AWS Systems Manager. Patch groups help ensure that you're deploying the appropriate patches, based on the associated patch baseline rules, to the correct set of nodes. Patch groups can also help you avoid deploying patches before they have been adequately tested. For example, you can create patch groups for different environments (such as Development, Test, and Production) and register each patch group to an appropriate patch baseline. 

When you run `AWS-RunPatchBaseline` or other SSM Command documents for patching, you can target managed nodes using their ID or tags. SSM Agent and Patch Manager then evaluate which patch baseline to use based on the patch group value that you added to the managed node.

## Using tags to define patch groups


You create a patch group by using tags applied to your Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. Note the following details about using tags for patch groups:
+ 

  A patch group must be defined using either the tag key `Patch Group` or `PatchGroup` applied to your managed nodes. When registering a patch group for a patch baseline, any identical *values* specified for these two keys are interpreted to be part of the same group. For instance, say that you have tagged five nodes with the first of the following key-value pairs, and five with the second:
  + `key=PatchGroup,value=DEV` 
  + `key=Patch Group,value=DEV`

  Patch Manager combines these 10 managed nodes into a single patch group based on the value `DEV`. The AWS CLI command to register a patch group with a patch baseline is as follows:

  ```
  aws ssm register-patch-baseline-for-patch-group \
      --baseline-id pb-0c10e65780EXAMPLE \
      --patch-group DEV
  ```

  Combining values from different keys into a single target is unique to this Patch Manager command for creating a new patch group and not supported by other API actions. For example, if you run [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) actions using `PatchGroup` and `Patch Group` keys with the same values, you are targeting two completely different sets of nodes:

  ```
  aws ssm send-command \
      --document-name AWS-RunPatchBaseline \
      --targets "Key=tag:PatchGroup,Values=DEV"
  ```

  ```
  aws ssm send-command \
      --document-name AWS-RunPatchBaseline \
      --targets "Key=tag:Patch Group,Values=DEV"
  ```
+ There are limits on tag-based targeting. Each array of targets for `SendCommand` can contain a maximum of five key-value pairs.
+ We recommend that you choose only one of these tag key conventions, either `PatchGroup` (without a space) or `Patch Group` (with a space). However, if you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS) on an instance, you must use `PatchGroup`.
+ The key is case-sensitive. You can specify any *value* to help you identify and target the resources in that group, for example "web servers" or "US-EAST-PROD", but the key must be `Patch Group` or `PatchGroup`.
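The asymmetry described above — registration merging the two tag keys by value, while `send-command` targets an exact key — can be sketched as follows. This is an illustrative model only, not Patch Manager internals; the node IDs are hypothetical:

```python
# Hypothetical fleet: two tag-key conventions, same value
nodes = [
    {"id": "i-111", "tags": {"PatchGroup": "DEV"}},
    {"id": "i-222", "tags": {"Patch Group": "DEV"}},
    {"id": "i-333", "tags": {"PatchGroup": "QA"}},
]

def patch_group_members(nodes, value):
    # Registering a patch group treats identical values under
    # either key as one group
    return [n["id"] for n in nodes
            if n["tags"].get("PatchGroup") == value
            or n["tags"].get("Patch Group") == value]

def send_command_targets(nodes, key, value):
    # send-command matches only the exact tag key specified
    return [n["id"] for n in nodes if n["tags"].get(key) == value]

print(patch_group_members(nodes, "DEV"))                 # i-111 and i-222
print(send_command_targets(nodes, "PatchGroup", "DEV"))  # i-111 only
```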

After you create a patch group and tag managed nodes, you can register the patch group with a patch baseline. Registering the patch group with a patch baseline ensures that the nodes within the patch group use the rules defined in the associated patch baseline. 

For more information about how to create a patch group and associate the patch group to a patch baseline, see [Creating and managing patch groups](patch-manager-tag-a-patch-group.md) and [Add a patch group to a patch baseline](patch-manager-tag-a-patch-group.md#sysman-patch-group-patchbaseline).

To view an example of creating a patch baseline and patch groups by using the AWS Command Line Interface (AWS CLI), see [Tutorial: Patch a server environment using the AWS CLI](patch-manager-patch-servers-using-the-aws-cli.md). For more information about Amazon EC2 tags, see [Tag your Amazon EC2 resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) in the *Amazon EC2 User Guide*.

## How it works


When the system runs the task to apply a patch baseline to a managed node, SSM Agent verifies that a patch group value is defined for the node. If the node is assigned to a patch group, Patch Manager then verifies which patch baseline is registered to that group. If a patch baseline is found for that group, Patch Manager notifies SSM Agent to use the associated patch baseline. If a node isn't configured for a patch group, Patch Manager automatically notifies SSM Agent to use the currently configured default patch baseline.

**Important**  
A managed node can only be in one patch group.  
A patch group can be registered with only one patch baseline for each operating system type.  
You can't apply the `Patch Group` tag (with a space) to an Amazon EC2 instance if the **Allow tags in instance metadata** option is enabled on the instance. Allowing tags in instance metadata prevents tag key names from containing spaces. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use the tag key `PatchGroup` (without a space).

**Diagram 1: General example of patching operations process flow**

The following illustration shows a general example of the processes that Systems Manager performs when sending a Run Command task to your fleet of servers to patch using Patch Manager. These processes determine which patch baselines to use in patching operations. (A similar process is used when a maintenance window is configured to send a command to patch using Patch Manager.)

The full process is explained below the illustration.

![\[Patch Manager workflow for determining which patch baselines to use when performing patching operations.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/patch-groups-how-it-works.png)


In this example, we have three groups of EC2 instances for Windows Server with the following tags applied:



| EC2 instances group | Tags | 
| --- | --- | 
|  Group 1  |  `key=OS,value=Windows` `key=PatchGroup,value=DEV`  | 
|  Group 2  |  `key=OS,value=Windows`  | 
|  Group 3  |  `key=OS,value=Windows` `key=PatchGroup,value=QA`  | 

For this example, we also have these two Windows Server patch baselines:



| Patch baseline ID | Default | Associated patch group | 
| --- | --- | --- | 
|  `pb-0123456789abcdef0`  |  Yes  |  `Default`  | 
|  `pb-9876543210abcdef0`  |  No  |  `DEV`  | 

The general process to scan or install patches using Run Command, a tool in AWS Systems Manager, and Patch Manager is as follows:

1. **Send a command to patch**: Use the Systems Manager console, SDK, AWS Command Line Interface (AWS CLI), or AWS Tools for Windows PowerShell to send a Run Command task using the document `AWS-RunPatchBaseline`. The diagram shows a Run Command task to patch managed instances by targeting the tag `key=OS,value=Windows`.

1. **Patch baseline determination**: SSM Agent verifies the patch group tags applied to the EC2 instance and queries Patch Manager for the corresponding patch baseline.
   + **Matching patch group value associated with patch baseline:**

     1. SSM Agent, which is installed on EC2 instances in group one, receives the command issued in Step 1 to begin a patching operation. SSM Agent validates that the EC2 instances have the patch group tag-value `DEV` applied and queries Patch Manager for an associated patch baseline.

     1. Patch Manager verifies that patch baseline `pb-9876543210abcdef0` has the patch group `DEV` associated and notifies SSM Agent.

     1. SSM Agent retrieves a patch baseline snapshot from Patch Manager based on the approval rules and exceptions configured in `pb-9876543210abcdef0` and proceeds to the next step.
   + **No patch group tag added to instance:**

     1. SSM Agent, which is installed on EC2 instances in group two, receives the command issued in Step 1 to begin a patching operation. SSM Agent validates that the EC2 instances don't have a `Patch Group` or `PatchGroup` tag applied and as a result, SSM Agent queries Patch Manager for the default Windows patch baseline.

     1. Patch Manager verifies that the default Windows Server patch baseline is `pb-0123456789abcdef0` and notifies SSM Agent.

     1. SSM Agent retrieves a patch baseline snapshot from Patch Manager based on the approval rules and exceptions configured in the default patch baseline `pb-0123456789abcdef0` and proceeds to the next step.
   + **No matching patch group value associated with a patch baseline:**

     1. SSM Agent, which is installed on EC2 instances in group three, receives the command issued in Step 1 to begin a patching operation. SSM Agent validates that the EC2 instances have the patch group tag-value `QA` applied and queries Patch Manager for an associated patch baseline.

     1. Patch Manager doesn't find a patch baseline that has the patch group `QA` associated.

     1. Patch Manager notifies SSM Agent to use the default Windows patch baseline `pb-0123456789abcdef0`.

     1. SSM Agent retrieves a patch baseline snapshot from Patch Manager based on the approval rules and exceptions configured in the default patch baseline `pb-0123456789abcdef0` and proceeds to the next step.

1. **Patch scan or install**: After determining the appropriate patch baseline to use, SSM Agent begins either scanning for or installing patches based on the operation value specified in Step 1. The patches that are scanned for or installed are determined by the approval rules and patch exceptions defined in the patch baseline snapshot provided by Patch Manager.
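The baseline-determination flow in the steps above can be sketched as follows. This is an illustrative model of the decision logic, not SSM Agent's actual implementation; the baseline IDs match the example tables above:

```python
# Patch group -> registered baseline (from the example tables above)
registered = {"DEV": "pb-9876543210abcdef0"}
default_baseline = "pb-0123456789abcdef0"  # default Windows baseline

def resolve_baseline(tags):
    # A patch group is defined by either tag key
    group = tags.get("PatchGroup") or tags.get("Patch Group")
    # Matching patch group value associated with a baseline
    if group in registered:
        return registered[group]
    # No patch group tag, or no baseline registered for the group:
    # fall back to the default baseline for the operating system
    return default_baseline

print(resolve_baseline({"OS": "Windows", "PatchGroup": "DEV"}))  # group 1
print(resolve_baseline({"OS": "Windows"}))                       # group 2
print(resolve_baseline({"OS": "Windows", "PatchGroup": "QA"}))   # group 3
```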

**More info**  
+ [Patch compliance state values](patch-manager-compliance-states.md)

# Patching applications released by Microsoft on Windows Server


Use the information in this topic to help you prepare to patch applications on Windows Server using Patch Manager, a tool in AWS Systems Manager.

**Microsoft application patching**  
Patching support for applications on Windows Server managed nodes is limited to applications released by Microsoft.

**Note**  
In some cases, Microsoft releases patches for applications that don't specify an updated date and time. In these cases, an updated date and time of `01/01/1970` is supplied by default.

**Patch baselines to patch applications released by Microsoft**  
For Windows Server, three predefined patch baselines are provided. The patch baselines `AWS-DefaultPatchBaseline` and `AWS-WindowsPredefinedPatchBaseline-OS` support only operating system updates on the Windows operating system itself. `AWS-DefaultPatchBaseline` is used as the default patch baseline for Windows Server managed nodes unless you specify a different patch baseline. The configuration settings in these two patch baselines are the same. The newer of the two, `AWS-WindowsPredefinedPatchBaseline-OS`, was created to distinguish it from the third predefined patch baseline for Windows Server. That patch baseline, `AWS-WindowsPredefinedPatchBaseline-OS-Applications`, can be used to apply patches to both the Windows Server operating system and supported applications released by Microsoft.

You can also create a custom patch baseline to update applications released by Microsoft on Windows Server machines.

**Support for patching applications released by Microsoft on on-premises servers, edge devices, VMs, and other non-EC2 nodes**  
To patch applications released by Microsoft on virtual machines (VMs) and other non-EC2 managed nodes, you must turn on the advanced-instances tier. There is a charge to use the advanced-instances tier. **However, there is no additional charge to patch applications released by Microsoft on Amazon Elastic Compute Cloud (Amazon EC2) instances.** For more information, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md).

**Windows update option for "other Microsoft products"**  
In order for Patch Manager to be able to patch applications released by Microsoft on your Windows Server managed nodes, the Windows Update option **Give me updates for other Microsoft products when I update Windows** must be activated on the managed node. 

For information about allowing this option on a single managed node, see [Update Office with Microsoft Update](https://support.microsoft.com/en-us/office/update-office-with-microsoft-update-f59d3f9d-bd5d-4d3b-a08e-1dd659cf5282) on the Microsoft Support website.

For a fleet of managed nodes running Windows Server 2016 and later, you can use a Group Policy Object (GPO) to turn on the setting. In the Group Policy Management Editor, go to **Computer Configuration**, **Administrative Templates**, **Windows Components**, **Windows Updates**, and choose **Install updates for other Microsoft products**. We also recommend configuring the GPO with additional parameters that prevent unplanned automatic updates and reboots outside of Patch Manager. For more information, see [Configuring Automatic Updates in a Non-Active Directory Environment](https://docs.microsoft.com/de-de/security-updates/windowsupdateservices/18127499) on the Microsoft technical documentation website.

For a fleet of managed nodes running Windows Server 2012 or 2012 R2, you can turn on the option by using a script, as described in [Enabling and Disabling Microsoft Update in Windows 7 via Script](https://docs.microsoft.com/en-us/archive/blogs/technet/danbuche/enabling-and-disabling-microsoft-update-in-windows-7-via-script) on the Microsoft Docs Blog website. For example, you could do the following:

1. Save the script from the blog post in a file.

1. Upload the file to an Amazon Simple Storage Service (Amazon S3) bucket or other accessible location.

1. Use Run Command, a tool in AWS Systems Manager, to run the script on your managed nodes using the Systems Manager document (SSM document) `AWS-RunPowerShellScript` with a command similar to the following.

   ```
   Invoke-WebRequest `
       -Uri "https://s3.aws-api-domain/amzn-s3-demo-bucket/script.vbs" `
       -Outfile "C:\script.vbs"
   cscript C:\script.vbs
   ```

**Minimum parameter requirements**  
To include applications released by Microsoft in your custom patch baseline, you must, at a minimum, specify the product that you want to patch. The following AWS Command Line Interface (AWS CLI) command demonstrates the minimal requirements to patch a product, such as Microsoft Office 2016.

------
#### [ Linux & macOS ]

```
aws ssm create-patch-baseline \
    --name "My-Windows-App-Baseline" \
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=PRODUCT,Values='Office 2016'},{Key=PATCH_SET,Values='APPLICATION'}]},ApproveAfterDays=5}]"
```

------
#### [ Windows Server ]

```
aws ssm create-patch-baseline ^
    --name "My-Windows-App-Baseline" ^
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=PRODUCT,Values='Office 2016'},{Key=PATCH_SET,Values='APPLICATION'}]},ApproveAfterDays=5}]"
```

------

If you specify the Microsoft application product family, each product you specify must be a supported member of the selected product family. For example, to patch the product "Active Directory Rights Management Services Client 2.0," you must specify its product family as "Active Directory" and not, for example, "Office" or "SQL Server." The following AWS CLI command demonstrates a matched pairing of product family and product.

------
#### [ Linux & macOS ]

```
aws ssm create-patch-baseline \
    --name "My-Windows-App-Baseline" \
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=PRODUCT_FAMILY,Values='Active Directory'},{Key=PRODUCT,Values='Active Directory Rights Management Services Client 2.0'},{Key=PATCH_SET,Values='APPLICATION'}]},ApproveAfterDays=5}]"
```

------
#### [ Windows Server ]

```
aws ssm create-patch-baseline ^
    --name "My-Windows-App-Baseline" ^
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=PRODUCT_FAMILY,Values='Active Directory'},{Key=PRODUCT,Values='Active Directory Rights Management Services Client 2.0'},{Key=PATCH_SET,Values='APPLICATION'}]},ApproveAfterDays=5}]"
```

------

**Note**  
If you receive an error message about a mismatched product and family pairing, see [Issue: mismatched product family/product pairs](patch-manager-troubleshooting.md#patch-manager-troubleshooting-product-family-mismatch) for help resolving the issue.

# Using Kernel Live Patching on Amazon Linux 2 managed nodes


Kernel Live Patching for Amazon Linux 2 allows you to apply security vulnerability and critical bug patches to a running Linux kernel without reboots or disruptions to running applications. This allows you to benefit from improved service and application availability, while keeping your infrastructure secure and up to date. Kernel Live Patching is supported on Amazon EC2 instances, AWS IoT Greengrass core devices, and [on-premises virtual machines](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/amazon-linux-2-virtual-machine.html) running Amazon Linux 2.

For general information about Kernel Live Patching, see [Kernel Live Patching on AL2](https://docs.aws.amazon.com/linux/al2/ug/al2-live-patching.html) in the *Amazon Linux 2 User Guide*.

After you turn on Kernel Live Patching on an Amazon Linux 2 managed node, you can use Patch Manager, a tool in AWS Systems Manager, to apply kernel live patches to the managed node. Using Patch Manager is an alternative to using existing yum workflows on the node to apply the updates.

**Before you begin**  
To use Patch Manager to apply kernel live patches to your Amazon Linux 2 managed nodes, ensure your nodes are based on the correct architecture and kernel version. For information, see [Supported configurations and prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/al2-live-patching.html#al2-live-patching-prereq) in the *Amazon EC2 User Guide*.

**Topics**
+ [Kernel Live Patching using Patch Manager](#about-klp)
+ [How Kernel Live Patching using Patch Manager works](#how-klp-works)
+ [Turning on Kernel Live Patching using Run Command](enable-klp.md)
+ [Applying kernel live patches using Run Command](install-klp.md)
+ [Turning off Kernel Live Patching using Run Command](disable-klp.md)

## Kernel Live Patching using Patch Manager


Updating the kernel version  
You don't need to reboot a managed node after applying a kernel live patch update. However, AWS provides kernel live patches for an Amazon Linux 2 kernel version for up to three months after its release. After the three-month period, you must update to a later kernel version to continue to receive kernel live patches. We recommend using a maintenance window to schedule a reboot of your node at least once every three months to prompt the kernel version update.

Uninstalling kernel live patches  
Kernel live patches can't be uninstalled using Patch Manager. Instead, you can turn off Kernel Live Patching, which removes the RPM packages for the applied kernel live patches. For more information, see [Turning off Kernel Live Patching using Run Command](disable-klp.md).

Kernel compliance  
In some cases, installing all CVE fixes from live patches for the current kernel version can bring that kernel into the same compliance state that a newer kernel version would have. When that happens, the newer version is reported as `Installed`, and the managed node is reported as `Compliant`. However, no installation time is reported for the newer kernel version.

One kernel live patch, multiple CVEs  
If a kernel live patch addresses multiple CVEs, and those CVEs have various classification and severity values, only the highest classification and severity from among the CVEs is reported for the patch. 

The remainder of this section describes how to use Patch Manager to apply kernel live patches to managed nodes that meet these requirements.

## How Kernel Live Patching using Patch Manager works


AWS releases two types of kernel live patches for Amazon Linux 2: security updates and bug fixes. To apply those types of patches, you use a patch baseline document that targets only the classifications and severities listed in the following table.


| Classification | Severity | 
| --- | --- | 
| Security | Critical, Important | 
| Bugfix | All | 
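The classification and severity filter in the preceding table can be expressed as a simple predicate. The following is an illustrative sketch of that filter, not the service's implementation:

```python
def is_live_patch_candidate(classification: str, severity: str) -> bool:
    """Return True if a patch matches the table above:
    Security patches at Critical or Important severity,
    and Bugfix patches at any severity."""
    if classification == "Security":
        return severity in ("Critical", "Important")
    if classification == "Bugfix":
        return True  # all severities
    return False

print(is_live_patch_candidate("Security", "Critical"))  # True
print(is_live_patch_candidate("Security", "Moderate"))  # False
print(is_live_patch_candidate("Bugfix", "Low"))         # True
```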

You can create a custom patch baseline that targets only these patches, or use the predefined `AWS-AmazonLinux2DefaultPatchBaseline` patch baseline. In other words, you can use `AWS-AmazonLinux2DefaultPatchBaseline` with Amazon Linux 2 managed nodes on which Kernel Live Patching is turned on, and kernel live updates will be applied.

**Note**  
The `AWS-AmazonLinux2DefaultPatchBaseline` configuration specifies a 7-day waiting period after a patch is released or last updated before it's installed automatically. If you don't want to wait 7 days for kernel live patches to be auto-approved, you can create and use a custom patch baseline. In your patch baseline, you can specify no auto-approval waiting period, or specify a shorter or longer one. For more information, see [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).

We recommend the following strategy to patch your managed nodes with kernel live updates:

1. Turn on Kernel Live Patching on your Amazon Linux 2 managed nodes.

1. Use Run Command, a tool in AWS Systems Manager, to run a `Scan` operation on your managed nodes using the predefined `AWS-AmazonLinux2DefaultPatchBaseline` or a custom patch baseline that also targets only `Security` updates with severity classified as `Critical` and `Important`, and the `Bugfix` severity of `All`. 

1. Use Compliance, a tool in AWS Systems Manager, to review whether non-compliance for patching is reported for any of the managed nodes that were scanned. If so, view the node compliance details to determine whether any kernel live patches are missing from the managed node.

1. To install missing kernel live patches, use Run Command with the same patch baseline you specified before, but this time run an `Install` operation instead of a `Scan` operation.

   Because kernel live patches are installed without the need to reboot, you can choose the `NoReboot` reboot option for this operation. 
**Note**  
You can still reboot the managed node if required for other types of patches installed on it, or if you want to update to a newer kernel. In these cases, choose the `RebootIfNeeded` reboot option instead.

1. Return to Compliance to verify that the kernel live patches were installed.

# Turning on Kernel Live Patching using Run Command

To turn on Kernel Live Patching, you can either run `yum` commands on your managed nodes or use Run Command and the SSM document `AWS-ConfigureKernelLivePatching`.

For information about turning on Kernel Live Patching by running `yum` commands directly on the managed node, see [Enable Kernel Live Patching](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/al2-live-patching.html#al2-live-patching-prereq) in the *Amazon EC2 User Guide*.

**Note**  
When you turn on Kernel Live Patching, if the kernel already running on the managed node is *earlier* than `kernel-4.14.165-131.185.amzn2.x86_64` (the minimum supported version), the process installs the latest available kernel version and reboots the managed node. If the node is already running `kernel-4.14.165-131.185.amzn2.x86_64` or later, the process doesn't install a newer version and doesn't reboot the node.

**To turn on Kernel Live Patching using Run Command (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose the SSM document `AWS-ConfigureKernelLivePatching`.

1. In the **Command parameters** section, specify whether you want managed nodes to reboot as part of this operation.

1. For information about working with the remaining controls on this page, see [Running commands from the console](running-commands-console.md).

1. Choose **Run**.

**To turn on Kernel Live Patching (AWS CLI)**
+ Run the following command on your local machine.

------
#### [ Linux & macOS ]

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureKernelLivePatching" \
      --parameters "EnableOrDisable=Enable" \
      --targets "Key=instanceids,Values=instance-id"
  ```

------
#### [ Windows Server ]

  ```
  aws ssm send-command ^
      --document-name "AWS-ConfigureKernelLivePatching" ^
      --parameters "EnableOrDisable=Enable" ^
      --targets "Key=instanceids,Values=instance-id"
  ```

------

  Replace *instance-id* with the ID of the Amazon Linux 2 managed node on which you want to turn on the feature, such as i-02573cafcfEXAMPLE. To turn on the feature on multiple managed nodes, you can use either of the following formats.
  + `--targets "Key=instanceids,Values=instance-id1,instance-id2"`
  + `--targets "Key=tag:tag-key,Values=tag-value"`

  For information about other options you can use in the command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the *AWS CLI Command Reference*.

# Applying kernel live patches using Run Command

To apply kernel live patches, you can either run `yum` commands on your managed nodes or use Run Command and the SSM document `AWS-RunPatchBaseline`. 

For information about applying kernel live patches by running `yum` commands directly on the managed node, see [Apply kernel live patches](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/al2-live-patching.html#al2-live-patching-apply) in the *Amazon EC2 User Guide*.

**To apply kernel live patches using Run Command (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose the SSM document `AWS-RunPatchBaseline`.

1. In the **Command parameters** section, do one of the following:
   + If you're checking whether new kernel live patches are available, for **Operation**, choose `Scan`. For **Reboot Option**, if you don't want your managed nodes to reboot after this operation, choose `NoReboot`. After the operation is complete, you can check for new patches and compliance status in Compliance.
   + If you checked patch compliance already and are ready to apply available kernel live patches, for **Operation**, choose `Install`. For **Reboot Option**, if you don't want your managed nodes to reboot after this operation, choose `NoReboot`.

1. For information about working with the remaining controls on this page, see [Running commands from the console](running-commands-console.md).

1. Choose **Run**.

**To apply kernel live patches using Run Command (AWS CLI)**

1. To perform a `Scan` operation before checking your results in Compliance, run the following command from your local machine.

------
#### [ Linux & macOS ]

   ```
   aws ssm send-command \
       --document-name "AWS-RunPatchBaseline" \
       --targets "Key=InstanceIds,Values=instance-id" \
       --parameters '{"Operation":["Scan"],"RebootOption":["RebootIfNeeded"]}'
   ```

------
#### [ Windows Server ]

   ```
   aws ssm send-command ^
       --document-name "AWS-RunPatchBaseline" ^
       --targets "Key=InstanceIds,Values=instance-id" ^
       --parameters {\"Operation\":[\"Scan\"],\"RebootOption\":[\"RebootIfNeeded\"]}
   ```

------

   For information about other options you can use in the command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the *AWS CLI Command Reference*.

1. To perform an `Install` operation after checking your results in Compliance, run the following command from your local machine.

------
#### [ Linux & macOS ]

   ```
   aws ssm send-command \
       --document-name "AWS-RunPatchBaseline" \
       --targets "Key=InstanceIds,Values=instance-id" \
       --parameters '{"Operation":["Install"],"RebootOption":["NoReboot"]}'
   ```

------
#### [ Windows Server ]

   ```
   aws ssm send-command ^
       --document-name "AWS-RunPatchBaseline" ^
       --targets "Key=InstanceIds,Values=instance-id" ^
       --parameters {\"Operation\":[\"Install\"],\"RebootOption\":[\"NoReboot\"]}
   ```

------

In both of the preceding commands, replace *instance-id* with the ID of the Amazon Linux 2 managed node on which you want to apply kernel live patches, such as i-02573cafcfEXAMPLE. To run the operation on multiple managed nodes, you can use either of the following formats.
+ `--targets "Key=instanceids,Values=instance-id1,instance-id2"`
+ `--targets "Key=tag:tag-key,Values=tag-value"`

For information about other options you can use in these commands, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the *AWS CLI Command Reference*.

# Turning off Kernel Live Patching using Run Command

To turn off Kernel Live Patching, you can either run `yum` commands on your managed nodes or use Run Command and the SSM document `AWS-ConfigureKernelLivePatching`.

**Note**  
If you no longer need to use Kernel Live Patching, you can turn it off at any time. In most cases, turning off the feature isn't necessary.

For information about turning off Kernel Live Patching by running `yum` commands directly on the managed node, see [Enable Kernel Live Patching](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/al2-live-patching.html#al2-live-patching-enable) in the *Amazon EC2 User Guide*.

**Note**  
When you turn off Kernel Live Patching, the process uninstalls the Kernel Live Patching plugin and then reboots the managed node.

**To turn off Kernel Live Patching using Run Command (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose the SSM document `AWS-ConfigureKernelLivePatching`.

1. In the **Command parameters** section, specify values for required parameters.

1. For information about working with the remaining controls on this page, see [Running commands from the console](running-commands-console.md).

1. Choose **Run**.

**To turn off Kernel Live Patching (AWS CLI)**
+ Run a command similar to the following.

------
#### [ Linux & macOS ]

  ```
  aws ssm send-command \
      --document-name "AWS-ConfigureKernelLivePatching" \
      --targets "Key=instanceIds,Values=instance-id" \
      --parameters "EnableOrDisable=Disable"
  ```

------
#### [ Windows Server ]

  ```
  aws ssm send-command ^
      --document-name "AWS-ConfigureKernelLivePatching" ^
      --targets "Key=InstanceIds,Values=instance-id" ^
      --parameters "EnableOrDisable=Disable"
  ```

------

  Replace *instance-id* with the ID of the Amazon Linux 2 managed node on which you want to turn off the feature, such as i-02573cafcfEXAMPLE. To turn off the feature on multiple managed nodes, you can use either of the following formats.
  + `--targets "Key=InstanceIds,Values=instance-id1,instance-id2"`
  + `--targets "Key=tag:tag-key,Values=tag-value"`
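When you target many managed nodes, it can help to build the `--targets` value programmatically. The following Python sketch is a hypothetical helper (not part of the AWS CLI or any AWS SDK) that assembles the instance-ID form of the argument:

```python
def build_targets(instance_ids):
    """Build the --targets value for aws ssm send-command
    from a list of managed node IDs."""
    if not instance_ids:
        raise ValueError("at least one instance ID is required")
    return "Key=InstanceIds,Values=" + ",".join(instance_ids)

# Example with two placeholder node IDs
print(build_targets(["i-02573cafcfEXAMPLE", "i-0471e04240EXAMPLE"]))
```

You would pass the resulting string directly as the `--targets` argument of the `send-command` call shown above.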

  For information about other options you can use in the command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-command.html) in the *AWS CLI Command Reference*.

# Working with Patch Manager resources and compliance using the console


To use Patch Manager, a tool in AWS Systems Manager, complete the following tasks. These tasks are described in more detail in this section.

1. Verify that the AWS predefined patch baseline for each operating system type that you use meets your needs. If it doesn't, create a patch baseline that defines a standard set of patches for that managed node type and set it as the default instead.

1. Organize managed nodes into patch groups by using Amazon Elastic Compute Cloud (Amazon EC2) tags (optional, but recommended).

1. Do one of the following:
   + (Recommended) Configure a patch policy in Quick Setup, a tool in Systems Manager, that lets you install missing patches on a schedule for an entire organization, a subset of organizational units, or a single AWS account. For more information, see [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md).
   + Create a maintenance window that uses the Systems Manager document (SSM document) `AWS-RunPatchBaseline` in a Run Command task type. For more information, see [Tutorial: Create a maintenance window for patching using the console](maintenance-window-tutorial-patching.md).
   + Manually run `AWS-RunPatchBaseline` in a Run Command operation. For more information, see [Running commands from the console](running-commands-console.md).
   + Manually patch nodes on demand using the **Patch now** feature. For more information, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md).

1. Monitor patching to verify compliance and investigate failures.
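Step 2 above relies on an Amazon EC2 tag whose key is exactly `Patch Group`. As a sketch, the following Python helper (hypothetical, for illustration only) assembles the arguments for an `aws ec2 create-tags` invocation that assigns nodes to a patch group:

```python
def create_tags_args(instance_ids, patch_group):
    """Assemble arguments for: aws ec2 create-tags
    The tag key must be exactly 'Patch Group' for Patch Manager
    to associate the tagged nodes with the patch group."""
    return [
        "ec2", "create-tags",
        "--resources", *instance_ids,
        "--tags", f"Key='Patch Group',Value={patch_group}",
    ]

args = create_tags_args(["i-02573cafcfEXAMPLE"], "web-servers")
print(" ".join(args))
```

The instance ID and group name here are placeholders; substitute your own values.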

**Topics**
+ [Creating a patch policy](patch-manager-create-a-patch-policy.md)
+ [Viewing patch Dashboard summaries](patch-manager-view-dashboard-summaries.md)
+ [Working with patch compliance reports](patch-manager-compliance-reports.md)
+ [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md)
+ [Working with patch baselines](patch-manager-create-a-patch-baseline.md)
+ [Viewing available patches](patch-manager-view-available-patches.md)
+ [Creating and managing patch groups](patch-manager-tag-a-patch-group.md)
+ [Integrating Patch Manager with AWS Security Hub CSPM](patch-manager-security-hub-integration.md)

# Creating a patch policy


A patch policy is a configuration you set up using Quick Setup, a tool in AWS Systems Manager. Patch policies provide more extensive and more centralized control over your patching operations than is available with other methods of configuring patching. A patch policy defines the schedule and baseline to use when automatically patching your nodes and applications.

For more information, see the following topics:
+ [Patch policy configurations in Quick Setup](patch-manager-policies.md)
+ [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md)

# Viewing patch Dashboard summaries

The **Dashboard** tab in Patch Manager provides you with a summary view in the console that you can use to monitor your patching operations in a consolidated view. Patch Manager is a tool in AWS Systems Manager. On the **Dashboard** tab, you can view the following:
+ A snapshot of how many managed nodes are compliant and noncompliant with patching rules.
+ A snapshot of the age of patch compliance results for your managed nodes.
+ A linked count of how many noncompliant managed nodes there are for each of the most common reasons for noncompliance.
+ A linked list of the most recent patching operations.
+ A linked list of the recurring patching tasks that have been set up.

**To view patch Dashboard summaries**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Dashboard** tab.

1. Scroll to the section containing summary data that you want to view:
   + **Amazon EC2 instance management**
   + **Compliance summary**
   + **Noncompliance counts**
   + **Compliance reports**
   + **Non-patch policy-based operations**
   + **Non-patch policy-based recurring tasks**

# Working with patch compliance reports

Use the information in the following topics to help you generate and work with patch compliance reports in Patch Manager, a tool in AWS Systems Manager.

The information in the following topics applies no matter which method or type of configuration you're using for your patching operations:
+ A patch policy configured in Quick Setup
+ A Host Management option configured in Quick Setup
+ A maintenance window to run a patch `Scan` or `Install` task
+ An on-demand **Patch now** operation

**Important**  
Patch compliance reports are point-in-time snapshots generated only by successful patching operations. Each report contains a capture time that identifies when the compliance status was calculated.  
If you have multiple types of operations in place to scan your instances for patch compliance, note that each scan overwrites the patch compliance data of previous scans. As a result, you might end up with unexpected results in your patch compliance data. For more information, see [Identifying the execution that created patch compliance data](patch-manager-compliance-data-overwrites.md).  
To verify which patch baseline was used to generate the latest compliance information, navigate to the **Compliance reporting** tab in Patch Manager, locate the row for the managed node you want information about, and then choose the baseline ID in the **Baseline ID used** column.

**Topics**
+ [Viewing patch compliance results](patch-manager-view-compliance-results.md)
+ [Generating .csv patch compliance reports](patch-manager-store-compliance-results-in-s3.md)
+ [Remediating noncompliant managed nodes with Patch Manager](patch-manager-noncompliant-nodes.md)
+ [Identifying the execution that created patch compliance data](patch-manager-compliance-data-overwrites.md)

# Viewing patch compliance results

Use these procedures to view patch compliance information about your managed nodes.

This procedure applies to patch operations that use the `AWS-RunPatchBaseline` document. For information about viewing patch compliance information for patch operations that use the `AWS-RunPatchBaselineAssociation` document, see [Identifying noncompliant managed nodes](patch-manager-find-noncompliant-nodes.md).

**Note**  
The patch scanning operations for Quick Setup and Explorer use the `AWS-RunPatchBaselineAssociation` document. Quick Setup and Explorer are both tools in AWS Systems Manager.

**Identify the patch solution for a specific CVE issue (Linux)**  
For many Linux-based operating systems, patch compliance results indicate which Common Vulnerabilities and Exposure (CVE) bulletin issues are resolved by which patches. This information can help you determine how urgently you need to install a missing or failed patch.

CVE details are included for supported versions of the following operating system types:
+ AlmaLinux
+ Amazon Linux 2
+ Amazon Linux 2023
+ Oracle Linux
+ Red Hat Enterprise Linux (RHEL)
+ Rocky Linux

**Note**  
By default, CentOS Stream doesn't provide CVE information about updates. You can, however, allow this support by using third-party repositories such as the Extra Packages for Enterprise Linux (EPEL) repository published by Fedora. For information, see [EPEL](https://fedoraproject.org/wiki/EPEL) on the Fedora Wiki.  
Currently, CVE ID values are reported only for patches with a status of `Missing` or `Failed`.

You can also add CVE IDs to your lists of approved or rejected patches in your patch baselines, as the situation and your patching goals warrant.
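As a sketch of how you might surface that information yourself, the following Python snippet filters the `Patches` list from a `describe-instance-patches` response down to the CVE IDs attached to `Missing` or `Failed` patches. The field names (`State`, `CVEIds`) follow the DescribeInstancePatches API; the sample data is invented for illustration:

```python
def cves_needing_attention(patches):
    """Return the sorted CVE IDs attached to patches whose
    state is Missing or Failed (the only states for which
    Patch Manager currently reports CVE IDs)."""
    cves = set()
    for patch in patches:
        if patch.get("State") in ("Missing", "Failed"):
            # CVEIds is a comma-separated string in the API response.
            for cve in patch.get("CVEIds", "").split(","):
                if cve:
                    cves.add(cve)
    return sorted(cves)

sample = [
    {"Title": "kernel-4.14", "State": "Missing",
     "CVEIds": "CVE-2023-1111,CVE-2023-2222"},
    {"Title": "openssl", "State": "Installed", "CVEIds": ""},
]
print(cves_needing_attention(sample))  # ['CVE-2023-1111', 'CVE-2023-2222']
```

In practice you would feed it the `Patches` array returned by `aws ssm describe-instance-patches --instance-id instance-id`.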

For information about working with approved and rejected patch lists, see the following topics:
+ [Working with custom patch baselines](patch-manager-manage-patch-baselines.md)
+ [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md)
+ [How patch baseline rules work on Linux-based systems](patch-manager-linux-rules.md)
+ [How patches are installed](patch-manager-installing-patches.md)

**Note**  
In some cases, Microsoft releases patches for applications that don't specify an updated date and time. In these cases, an updated date and time of `01/01/1970` is supplied by default.

## Viewing patching compliance results


Use the following procedures to view patch compliance results in the AWS Systems Manager console. 

**Note**  
For information about generating patch compliance reports that are downloaded to an Amazon Simple Storage Service (Amazon S3) bucket, see [Generating .csv patch compliance reports](patch-manager-store-compliance-results-in-s3.md).

**To view patch compliance results**

1. Do one of the following.

   **Option 1** (recommended) – Navigate from Patch Manager, a tool in AWS Systems Manager:
   + In the navigation pane, choose **Patch Manager**.
   + Choose the **Compliance reporting** tab.
   + In the **Node patching details** area, choose the node ID of the managed node for which you want to review patch compliance results. Nodes that are `stopped` or `terminated` will not be displayed here.
   + In the **Details** area, in the **Properties** list, choose **Patches**.

   **Option 2** – Navigate from Compliance, a tool in AWS Systems Manager:
   + In the navigation pane, choose **Compliance**.
   + For **Compliance resources summary**, choose a number in the column for the types of patch resources you want to review, such as **Non-Compliant resources**.
   + Below, in the **Resource** list, choose the ID of the managed node for which you want to review patch compliance results.
   + In the **Details** area, in the **Properties** list, choose **Patches**.

   **Option 3** – Navigate from Fleet Manager, a tool in AWS Systems Manager.
   + In the navigation pane, choose **Fleet Manager**.
   + In the **Managed instances** area, choose the ID of the managed node for which you want to review patch compliance results.
   + In the **Details** area, in the **Properties** list, choose **Patches**.

1. (Optional) In the Search box (![\[The Search icon\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/search-icon.png)), choose from the available filters.

   For example, for Red Hat Enterprise Linux (RHEL), choose from the following:
   + Name
   + Classification
   + State
   + Severity

    For Windows Server, choose from the following:
   + KB
   + Classification
   + State
   + Severity

1. Choose one of the available values for the filter type you chose. For example, if you chose **State**, now choose a compliance state such as **InstalledPendingReboot**, **Failed**, or **Missing**.
**Note**  
Currently, CVE ID values are reported only for patches with a status of `Missing` or `Failed`.

1. Depending on the compliance state of the managed node, you can choose what action to take to remedy any noncompliant nodes.

   For example, you can choose to patch your noncompliant managed nodes immediately. For information about patching your managed nodes on demand, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md).

   For information about patch compliance states, see [Patch compliance state values](patch-manager-compliance-states.md).

# Generating .csv patch compliance reports

You can use the AWS Systems Manager console to generate patch compliance reports that are saved as a .csv file to an Amazon Simple Storage Service (Amazon S3) bucket of your choice. You can generate a single on-demand report or specify a schedule for generating the reports automatically. 

Reports can be generated for a single managed node or for all managed nodes in your selected AWS account and AWS Region. For a single node, a report contains comprehensive details, including the IDs of patches related to a node being noncompliant. For a report on all managed nodes, only summary information and counts of noncompliant nodes' patches are provided.

After a report is generated, you can use a tool like Amazon Quick Suite to import and analyze the data. Amazon Quick Suite is a business intelligence (BI) service you can use to explore and interpret information in an interactive visual environment. For more information, see the [Amazon Quick Suite User Guide](https://docs.aws.amazon.com/quicksuite/latest/userguide/what-is.html).

**Note**  
When you create a custom patch baseline, you can specify a compliance severity level for patches approved by that patch baseline, such as `Critical` or `High`. If the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.
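The rule in the note above can be sketched in code. The following Python snippet is a hypothetical illustration of the reporting behavior, not Patch Manager's actual implementation:

```python
def reported_severity(patch_states, baseline_severity):
    """Illustrate the compliance severity reported for a node:
    the baseline's configured severity (for example 'Critical')
    if any approved patch is Missing, otherwise COMPLIANT."""
    if "Missing" in patch_states:
        return baseline_severity
    return "COMPLIANT"

print(reported_severity(["Installed", "Missing"], "Critical"))  # Critical
print(reported_severity(["Installed", "Installed"], "Critical"))  # COMPLIANT
```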

You can also specify an Amazon Simple Notification Service (Amazon SNS) topic to use for sending notifications when a report is generated.

**Service roles for generating patch compliance reports**  
The first time you generate a report, Systems Manager creates an Automation assume role named `AWS-SystemsManager-PatchSummaryExportRole` to use for the export process to S3.

**Note**  
If you are exporting compliance data to an encrypted S3 bucket, you must update its associated AWS KMS key policy to provide the necessary permissions for `AWS-SystemsManager-PatchSummaryExportRole`. For instance, add a permission similar to this to your S3 bucket's AWS KMS policy:  

```
{
    "Effect": "Allow",
    "Principal": {
        "AWS": "role-arn"
    },
    "Action": [
        "kms:GenerateDataKey"
    ],
    "Resource": "*"
}
```
Replace *role-arn* with the Amazon Resource Name (ARN) of the role created in your account, in the format `arn:aws:iam::111222333444:role/service-role/AWS-SystemsManager-PatchSummaryExportRole`.  
For more information, see [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the *AWS Key Management Service Developer Guide*.

The first time you generate a report on a schedule, Systems Manager creates another service role named `AWS-EventBridge-Start-SSMAutomationRole`, along with the service role `AWS-SystemsManager-PatchSummaryExportRole` (if not created already) to use for the export process. `AWS-EventBridge-Start-SSMAutomationRole` enables Amazon EventBridge to start an automation using the runbook [AWS-ExportPatchReportToS3](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-aws-exportpatchreporttos3).

We recommend against attempting to modify these policies and roles. Doing so could cause patch compliance report generation to fail. For more information, see [Troubleshooting patch compliance report generation](#patch-compliance-reports-troubleshooting).

**Topics**
+ [What's in a generated patch compliance report?](#patch-compliance-reports-to-s3-examples)
+ [Generating patch compliance reports for a single managed node](#patch-compliance-reports-to-s3-one-instance)
+ [Generating patch compliance reports for all managed nodes](#patch-compliance-reports-to-s3-all-instances)
+ [Viewing patch compliance reporting history](#patch-compliance-reporting-history)
+ [Viewing patch compliance reporting schedules](#patch-compliance-reporting-schedules)
+ [Troubleshooting patch compliance report generation](#patch-compliance-reports-troubleshooting)

## What's in a generated patch compliance report?


This topic provides information about the types of content included in the patch compliance reports that are generated and downloaded to a specified S3 bucket.

### Report format for a single managed node


A report generated for a single managed node provides both summary and detailed information.

[Download a sample report (single node)](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/Sample-single-instance-patch-compliance-report.zip)

Summary information for a single managed node includes the following:
+ Index
+ Instance ID
+ Instance name
+ Instance IP
+ Platform name
+ Platform version
+ SSM Agent version
+ Patch baseline
+ Patch group
+ Compliance status
+ Compliance severity
+ Noncompliant Critical severity patch count
+ Noncompliant High severity patch count
+ Noncompliant Medium severity patch count
+ Noncompliant Low severity patch count
+ Noncompliant Informational severity patch count
+ Noncompliant Unspecified severity patch count
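Because the report is a .csv file, the summary section can be loaded with standard tooling. The following Python sketch assumes the column headers match the field names listed above (the sample row is invented):

```python
import csv
import io

# Sample data using a subset of the summary columns listed above.
sample = io.StringIO(
    "Index,Instance ID,Compliance status,"
    "Noncompliant Critical severity patch count\n"
    "1,i-02573cafcfEXAMPLE,NON_COMPLIANT,3\n"
)
rows = list(csv.DictReader(sample))
print(rows[0]["Instance ID"], rows[0]["Compliance status"])
```

With a downloaded report, you would open the file from your S3 bucket instead of the in-memory sample.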

Detailed information for a single managed node includes the following:
+ Index
+ Instance ID
+ Instance name
+ Patch name
+ KB ID/Patch ID
+ Patch state
+ Last report time
+ Compliance level
+ Patch severity
+ Patch classification
+ CVE ID
+ Patch baseline
+ Logs URL
+ Instance IP
+ Platform name
+ Platform version
+ SSM Agent version

**Note**  
When you create a custom patch baseline, you can specify a compliance severity level for patches approved by that patch baseline, such as `Critical` or `High`. If the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.

### Report format for all managed nodes


A report generated for all managed nodes provides only summary information.

[Download a sample report (all managed nodes)](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/Sample-all-instances-patch-compliance-report.zip)

Summary information for all managed nodes includes the following:
+ Index
+ Instance ID
+ Instance name
+ Instance IP
+ Platform name
+ Platform version
+ SSM Agent version
+ Patch baseline
+ Patch group
+ Compliance status
+ Compliance severity
+ Noncompliant Critical severity patch count
+ Noncompliant High severity patch count
+ Noncompliant Medium severity patch count
+ Noncompliant Low severity patch count
+ Noncompliant Informational severity patch count
+ Noncompliant Unspecified severity patch count

## Generating patch compliance reports for a single managed node


Use the following procedure to generate a patch summary report for a single managed node in your AWS account. The report for a single managed node provides details about each patch that is out of compliance, including patch names and IDs. 

**To generate patch compliance reports for a single managed node**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Compliance reporting** tab.

1. Choose the button for the row of the managed node for which you want to generate a report, and then choose **View detail**.

1. In the **Patch summary** section, choose **Export to S3**.

1. For **Report name**, enter a name to help you identify the report later.

1. For **Reporting frequency**, choose one of the following:
   + **On demand** – Create a one-time report. Skip to Step 9.
   + **On a schedule** – Specify a recurring schedule for automatically generating reports. Continue to Step 8.

1. For **Schedule type**, specify either a rate expression, such as every 3 days, or provide a cron expression to set the report frequency.

   For information about cron expressions, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

1. For **Bucket name**, select the name of an S3 bucket where you want to store the .csv report files.
**Important**  
If you're working in an AWS Region that was launched after March 20, 2019, you must select an S3 bucket in that same Region. Regions launched after that date are disabled by default. For more information and a list of these Regions, see [Enabling a Region](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#rande-manage-enable) in the *Amazon Web Services General Reference*.

1. (Optional) To send notifications when the report is generated, expand the **SNS topic** section, and then choose an existing Amazon SNS topic from **SNS topic Amazon Resource Name (ARN)**.

1. Choose **Submit**.

For information about viewing a history of generated reports, see [Viewing patch compliance reporting history](#patch-compliance-reporting-history).

For information about viewing details of reporting schedules you have created, see [Viewing patch compliance reporting schedules](#patch-compliance-reporting-schedules).
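For the **Schedule type** step above, the `rate(...)` form must use a singular unit with a value of 1 and a plural unit otherwise. As a sketch, the following Python snippet (a hypothetical validator, not part of Systems Manager) checks that agreement:

```python
import re

RATE_PATTERN = re.compile(
    r"^rate\((\d+) (minute|minutes|hour|hours|day|days)\)$"
)

def is_valid_rate(expression):
    """Return True for well-formed rate expressions such as
    rate(3 days); rate(1 day) is valid but rate(1 days) is not."""
    m = RATE_PATTERN.match(expression)
    if not m:
        return False
    value, unit = int(m.group(1)), m.group(2)
    singular = not unit.endswith("s")
    # The unit must be singular exactly when the value is 1.
    return (value == 1) == singular

print(is_valid_rate("rate(3 days)"))  # True
print(is_valid_rate("rate(1 days)"))  # False
```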

## Generating patch compliance reports for all managed nodes


Use the following procedure to generate a patch summary report for all managed nodes in your AWS account. The report for all managed nodes indicates which nodes are out of compliance and the numbers of noncompliant patches. It doesn't provide the names or other identifiers of the patches. For these additional details, you can generate a patch compliance report for a single managed node. For information, see [Generating patch compliance reports for a single managed node](#patch-compliance-reports-to-s3-one-instance) earlier in this topic. 

**To generate patch compliance reports for all managed nodes**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Compliance reporting** tab.

1. Choose **Export to S3**. (Don't select a node ID first.)

1. For **Report name**, enter a name to help you identify the report later.

1. For **Reporting frequency**, choose one of the following:
   + **On demand** – Create a one-time report. Skip to Step 8.
   + **On a schedule** – Specify a recurring schedule for automatically generating reports. Continue to Step 7.

1. For **Schedule type**, specify either a rate expression, such as every 3 days, or provide a cron expression to set the report frequency.

   For information about cron expressions, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

1. For **Bucket name**, select the name of an S3 bucket where you want to store the .csv report files.
**Important**  
If you're working in an AWS Region that was launched after March 20, 2019, you must select an S3 bucket in that same Region. Regions launched after that date are disabled by default. For more information and a list of these Regions, see [Enabling a Region](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#rande-manage-enable) in the *Amazon Web Services General Reference*.

1. (Optional) To send notifications when the report is generated, expand the **SNS topic** section, and then choose an existing Amazon SNS topic from **SNS topic Amazon Resource Name (ARN)**.

1. Choose **Submit**.

For information about viewing a history of generated reports, see [Viewing patch compliance reporting history](#patch-compliance-reporting-history).

For information about viewing details of reporting schedules you have created, see [Viewing patch compliance reporting schedules](#patch-compliance-reporting-schedules).
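Because the all-nodes report contains only summary rows, a common follow-up is to count the noncompliant nodes. The following Python sketch assumes column headers matching the summary fields described earlier (the sample rows are invented):

```python
import csv
import io

sample = io.StringIO(
    "Instance ID,Compliance status\n"
    "i-0aaaEXAMPLE,COMPLIANT\n"
    "i-0bbbEXAMPLE,NON_COMPLIANT\n"
    "i-0cccEXAMPLE,NON_COMPLIANT\n"
)
# Collect the IDs of nodes that are out of patch compliance.
noncompliant = [
    row["Instance ID"]
    for row in csv.DictReader(sample)
    if row["Compliance status"] == "NON_COMPLIANT"
]
print(len(noncompliant))  # 2
```

For the patch-level details behind each noncompliant node, generate a single-node report as described in the previous section.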

## Viewing patch compliance reporting history


Use the information in this topic to help you view details about the patch compliance reports generated in your AWS account.

**To view patch compliance reporting history**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Compliance reporting** tab.

1. Choose **View all S3 exports**, and then choose the **Export history** tab.

## Viewing patch compliance reporting schedules


Use the information in this topic to help you view details about the patch compliance reporting schedules created in your AWS account.

**To view patch compliance reporting schedules**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Compliance reporting** tab.

1. Choose **View all S3 exports**, and then choose the **Report schedule rules** tab.

## Troubleshooting patch compliance report generation


Use the following information to help you troubleshoot problems with patch compliance report generation in Patch Manager, a tool in AWS Systems Manager.

**Topics**
+ [A message reports that the `AWS-SystemsManager-PatchManagerExportRolePolicy` policy is corrupted](#patch-compliance-reports-troubleshooting-1)
+ [After deleting patch compliance policies or roles, scheduled reports aren't generated successfully](#patch-compliance-reports-troubleshooting-2)

### A message reports that the `AWS-SystemsManager-PatchManagerExportRolePolicy` policy is corrupted


**Problem**: You receive an error message similar to the following, indicating the `AWS-SystemsManager-PatchManagerExportRolePolicy` is corrupted:

```
An error occurred while updating the AWS-SystemsManager-PatchManagerExportRolePolicy
policy. If you have edited the policy, you might need to delete the policy, and any 
role that uses it, then try again. Systems Manager recreates the roles and policies 
you have deleted.
```
+ **Solution**: Use the Patch Manager console or AWS CLI to delete the affected roles and policies before generating a new patch compliance report.

**To delete the corrupt policy using the console**

  1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

  1. Do one of the following:

     **On-demand reports** – If the problem occurred while generating a one-time on-demand report, in the left navigation, choose **Policies**, search for `AWS-SystemsManager-PatchManagerExportRolePolicy`, then delete the policy. Next, choose **Roles**, search for `AWS-SystemsManager-PatchSummaryExportRole`, then delete the role.

     **Scheduled reports** – If the problem occurred while generating a report on a schedule, in the left navigation, choose **Policies**, search one at a time for `AWS-EventBridge-Start-SSMAutomationRolePolicy` and `AWS-SystemsManager-PatchManagerExportRolePolicy`, and delete each policy. Next, choose **Roles**, search one at a time for `AWS-EventBridge-Start-SSMAutomationRole` and `AWS-SystemsManager-PatchSummaryExportRole`, and delete each role.

**To delete the corrupt policy using the AWS CLI**

  Replace the *placeholder values* with your account ID.
  + If the problem occurred while generating a one-time on-demand report, run the following commands:

    ```
    aws iam delete-policy --policy-arn arn:aws:iam::account-id:policy/AWS-SystemsManager-PatchManagerExportRolePolicy
    ```

    ```
    aws iam delete-role --role-name AWS-SystemsManager-PatchSummaryExportRole
    ```

    If the problem occurred while generating a report on a schedule, run the following commands:

    ```
    aws iam delete-policy --policy-arn arn:aws:iam::account-id:policy/AWS-EventBridge-Start-SSMAutomationRolePolicy
    ```

    ```
    aws iam delete-policy --policy-arn arn:aws:iam::account-id:policy/AWS-SystemsManager-PatchManagerExportRolePolicy
    ```

    ```
    aws iam delete-role --role-name AWS-EventBridge-Start-SSMAutomationRole
    ```

    ```
    aws iam delete-role --role-name AWS-SystemsManager-PatchSummaryExportRole
    ```

  After completing either procedure, follow the steps to generate or schedule a new patch compliance report.

### After deleting patch compliance policies or roles, scheduled reports aren't generated successfully


**Problem**: The first time you generate a report, Systems Manager creates a service role and a policy to use for the export process (`AWS-SystemsManager-PatchSummaryExportRole` and `AWS-SystemsManager-PatchManagerExportRolePolicy`). The first time you generate a report on a schedule, Systems Manager creates another service role and a policy (`AWS-EventBridge-Start-SSMAutomationRole` and `AWS-EventBridge-Start-SSMAutomationRolePolicy`). These let Amazon EventBridge start an automation using the runbook [AWS-ExportPatchReportToS3](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-aws-exportpatchreporttos3).

If you delete any of these policies or roles, the connections between your schedule and your specified S3 bucket and Amazon SNS topic might be lost. 
+ **Solution**: To work around this problem, we recommend deleting the previous schedule and creating a new schedule to replace the one that was experiencing issues.

# Remediating noncompliant managed nodes with Patch Manager

The topics in this section provide overviews of how to identify managed nodes that are out of patch compliance and how to bring nodes into compliance.

**Topics**
+ [Identifying noncompliant managed nodes](patch-manager-find-noncompliant-nodes.md)
+ [Patch compliance state values](patch-manager-compliance-states.md)
+ [Patching noncompliant managed nodes](patch-manager-compliance-remediation.md)

# Identifying noncompliant managed nodes


Out-of-compliance managed nodes are identified when either of two AWS Systems Manager documents (SSM documents) is run. These SSM documents reference the appropriate patch baseline for each managed node in Patch Manager, a tool in AWS Systems Manager. They then evaluate the patch state of the managed node and make the compliance results available to you.

There are two SSM documents that are used to identify or update noncompliant managed nodes: `AWS-RunPatchBaseline` and `AWS-RunPatchBaselineAssociation`. Each one is used by different processes, and their compliance results are available through different channels. The following table outlines the differences between these documents.

**Note**  
Patch compliance data from Patch Manager can be sent to AWS Security Hub CSPM. Security Hub CSPM gives you a comprehensive view of your high-priority security alerts and compliance status. It also monitors the patching status of your fleet. For more information, see [Integrating Patch Manager with AWS Security Hub CSPM](patch-manager-security-hub-integration.md). 


|  | `AWS-RunPatchBaseline` | `AWS-RunPatchBaselineAssociation` | 
| --- | --- | --- | 
| Processes that use the document |  **Patch on demand** - You can scan or patch managed nodes on demand using the **Patch now** option. For information, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md). **Systems Manager Quick Setup patch policies** – You can create a patching configuration in Quick Setup, a tool in AWS Systems Manager, that can scan for or install missing patches on separate schedules for an entire organization, a subset of organizational units, or a single AWS account. For information, see [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md). **Run a command** – You can manually run `AWS-RunPatchBaseline` in an operation in Run Command, a tool in AWS Systems Manager. For information, see [Running commands from the console](running-commands-console.md). **Maintenance window** – You can create a maintenance window that uses the SSM document `AWS-RunPatchBaseline` in a Run Command task type. For information, see [Tutorial: Create a maintenance window for patching using the console](maintenance-window-tutorial-patching.md).  |  **Systems Manager Quick Setup Host Management** – You can enable a Host Management configuration option in Quick Setup to scan your managed instances for patch compliance each day. For information, see [Set up Amazon EC2 host management using Quick Setup](quick-setup-host-management.md). **Systems Manager [Explorer](Explorer.md)** – When you allow Explorer, a tool in AWS Systems Manager, it regularly scans your managed instances for patch compliance and reports results in the Explorer dashboard.  | 
| Format of the patch scan result data |  After `AWS-RunPatchBaseline` runs, Patch Manager sends an `AWS:PatchSummary` object to Inventory, a tool in AWS Systems Manager. This report is generated only by successful patching operations and includes a capture time that identifies when the compliance status was calculated.  |  After `AWS-RunPatchBaselineAssociation` runs, Patch Manager sends an `AWS:ComplianceItem` object to Systems Manager Inventory.  | 
| Viewing patch compliance reports in the console |  You can view patch compliance information for processes that use `AWS-RunPatchBaseline` in [Systems Manager Configuration Compliance](systems-manager-compliance.md) and [Working with managed nodes](fleet-manager-managed-nodes.md). For more information, see [Viewing patch compliance results](patch-manager-view-compliance-results.md).  |  If you use Quick Setup to scan your managed instances for patch compliance, you can see the compliance report in [Systems Manager Fleet Manager](fleet-manager.md). In the Fleet Manager console, choose the node ID of your managed node. In the **General** menu, choose **Configuration compliance**. If you use Explorer to scan your managed instances for patch compliance, you can see the compliance report in both Explorer and [Systems Manager OpsCenter](OpsCenter.md).  | 
| AWS CLI commands for viewing patch compliance results |  For processes that use `AWS-RunPatchBaseline`, you can use the following AWS CLI commands to view summary information about patches on a managed node. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-find-noncompliant-nodes.html)  |  For processes that use `AWS-RunPatchBaselineAssociation`, you can use the following AWS CLI command to view summary information about patches on an instance. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-find-noncompliant-nodes.html)  | 
| Patching operations |  For processes that use `AWS-RunPatchBaseline`, you specify whether you want the operation to run a `Scan` operation only, or a `Scan and install` operation. If your goal is to identify noncompliant managed nodes and not remediate them, run only a `Scan` operation.  | Quick Setup and Explorer processes, which use `AWS-RunPatchBaselineAssociation`, run only a `Scan` operation. | 
| More info |  [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md)  |  [SSM Command document for patching: `AWS-RunPatchBaselineAssociation`](patch-manager-aws-runpatchbaselineassociation.md)  | 

For information about the various patch compliance states you might see reported, see [Patch compliance state values](patch-manager-compliance-states.md).

For information about remediating managed nodes that are out of patch compliance, see [Patching noncompliant managed nodes](patch-manager-compliance-remediation.md).

# Patch compliance state values


The information about patches for a managed node includes a report of the state, or status, of each individual patch.

**Tip**  
If you want to assign a specific patch compliance state to a managed node, you can use the [put-compliance-items](https://docs.aws.amazon.com/cli/latest/reference/ssm/put-compliance-items.html) AWS Command Line Interface (AWS CLI) command or the [PutComplianceItems](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutComplianceItems.html) API operation. Assigning compliance state isn't supported in the console.
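As a sketch of what such a call looks like, the following Python helper assembles the parameters you would pass to the `PutComplianceItems` operation (for example, through boto3's `ssm.put_compliance_items(**request)`). The instance ID, patch ID, and title below are hypothetical placeholders, not values from this guide.

```python
# Hypothetical sketch of a PutComplianceItems request payload.
# Field names follow the Systems Manager PutComplianceItems API;
# the resource and item values are placeholders.
from datetime import datetime, timezone

def build_compliance_request(instance_id, patch_title, status):
    """Assemble the parameters for ssm:PutComplianceItems."""
    return {
        "ResourceId": instance_id,
        "ResourceType": "ManagedInstance",
        "ComplianceType": "Patch",
        "ExecutionSummary": {
            "ExecutionTime": datetime.now(timezone.utc).isoformat()
        },
        "Items": [
            {
                "Id": "KB0000000",        # placeholder patch identifier
                "Title": patch_title,
                "Severity": "CRITICAL",
                "Status": status,         # COMPLIANT or NON_COMPLIANT
            }
        ],
    }

request = build_compliance_request(
    "i-0123456789abcdef0", "Example security update", "NON_COMPLIANT"
)
print(request["Items"][0]["Status"])
```

Passing this payload to a real client requires AWS credentials and an existing managed node; the sketch only shows the request shape.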

Use the information in the following tables to help you identify why a managed node might be out of patch compliance.

## Patch compliance values for Debian Server and Ubuntu Server


For Debian Server and Ubuntu Server, the rules for package classification into the different compliance states are described in the following table.

**Note**  
Keep the following in mind when you're evaluating the `INSTALLED`, `INSTALLED_OTHER`, and `MISSING` status values: If you don't select the **Include nonsecurity updates** check box when creating or updating a patch baseline, patch candidate versions are limited to patches in the following repositories:  
Debian Server: `debian-security`  
Ubuntu Server 16.04 LTS: `xenial-security`  
Ubuntu Server 18.04 LTS: `bionic-security`  
Ubuntu Server 20.04 LTS: `focal-security`  
Ubuntu Server 22.04 LTS: `jammy-security`  
Ubuntu Server 24.04 LTS: `noble-security`  
Ubuntu Server 25.04: `plucky-security`  
If you do select the **Include nonsecurity updates** check box, patches from other repositories are considered as well.


| Patch state | Description | Compliance status | 
| --- | --- | --- | 
|  **`INSTALLED`**  |  The patch is listed in the patch baseline and is installed on the managed node. It could have been installed either manually by an individual or automatically by Patch Manager when the `AWS-RunPatchBaseline` document was run on the managed node.  | Compliant | 
|  **`INSTALLED_OTHER`**  |  The patch isn't included in the baseline or isn't approved by the baseline but is installed on the managed node. The patch might have been installed manually, the package could be a required dependency of another approved patch, or the patch might have been included in an InstallOverrideList operation. If you don't specify `Block` as the **Rejected patches** action, `INSTALLED_OTHER` patches also includes installed but rejected patches.   | Compliant | 
|  **`INSTALLED_PENDING_REBOOT`**  |  `INSTALLED_PENDING_REBOOT` can mean either of two things: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-compliance-states.html) In neither case does it mean that a patch with this status *requires* a reboot, only that the node hasn't been rebooted since the patch was installed.  | Non-Compliant | 
|  **`INSTALLED_REJECTED`**  |  The patch is installed on the managed node but is specified in a **Rejected patches** list. This typically means the patch was installed before it was added to a list of rejected patches.  | Non-Compliant | 
|  **`MISSING`**  |  The patch is approved in the baseline, but it isn't installed on the managed node. If you configure the `AWS-RunPatchBaseline` document task to scan (instead of install), the system reports this status for patches that were located during the scan but haven't been installed.  | Non-Compliant | 
|  **`FAILED`**  |  The patch is approved in the baseline, but it couldn't be installed. To troubleshoot this situation, review the command output for information that might help you understand the problem.  | Non-Compliant | 

## Patch compliance values for other operating systems


For all operating systems besides Debian Server and Ubuntu Server, the rules for package classification into the different compliance states are described in the following table. 


|  Patch state | Description | Compliance value | 
| --- | --- | --- | 
|  **`INSTALLED`**  |  The patch is listed in the patch baseline and is installed on the managed node. It could have been installed either manually by an individual or automatically by Patch Manager when the `AWS-RunPatchBaseline` document was run on the node.  | Compliant | 
|  **`INSTALLED_OTHER`**¹  |  The patch isn't in the baseline, but it is installed on the managed node. There are two possible reasons for this: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-compliance-states.html)  | Compliant | 
|  **`INSTALLED_REJECTED`**  |  The patch is installed on the managed node but is specified in a rejected patches list. This typically means the patch was installed before it was added to a list of rejected patches.  | Non-Compliant | 
|  **`INSTALLED_PENDING_REBOOT`**  |  `INSTALLED_PENDING_REBOOT` can mean either of two things: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-compliance-states.html) In neither case does it mean that a patch with this status *requires* a reboot, only that the node hasn't been rebooted since the patch was installed.  | Non-Compliant | 
|  **`MISSING`**  |  The patch is approved in the baseline, but it isn't installed on the managed node. If you configure the `AWS-RunPatchBaseline` document task to scan (instead of install), the system reports this status for patches that were located during the scan but haven't been installed.  | Non-Compliant | 
|  **`FAILED`**  |  The patch is approved in the baseline, but it couldn't be installed. To troubleshoot this situation, review the command output for information that might help you understand the problem.  | Non-Compliant | 
|  **`NOT_APPLICABLE`**¹  |  *This compliance state is reported only for Windows Server operating systems.* The patch is approved in the baseline, but the service or feature that uses the patch isn't installed on the managed node. For example, a patch for a web server service such as Internet Information Services (IIS) would show `NOT_APPLICABLE` if it was approved in the baseline, but the web service isn't installed on the managed node. A patch can also be marked `NOT_APPLICABLE` if it has been superseded by a subsequent update. This means that the later update is installed and the `NOT_APPLICABLE` update is no longer required.  | Not applicable | 
|  **`AVAILABLE_SECURITY_UPDATES`**  |  *This compliance state is reported only for Windows Server operating systems.* An available security update patch that isn't approved by the patch baseline can have a compliance value of `Compliant` or `Non-Compliant`, as defined in a custom patch baseline. When you create or update a patch baseline, you choose the status to assign to security patches that are available but not approved because they don't meet the installation criteria specified in the patch baseline. For example, security patches that you might want installed can be skipped if you have specified a long period to wait after a patch is released before installation. If an update to the patch is released during your specified waiting period, the waiting period for installing the patch starts over. If the waiting period is too long, multiple versions of the patch could be released but never installed. For patch summary counts, a patch reported as `AvailableSecurityUpdate` is always included in `AvailableSecurityUpdateCount`. If the baseline is configured to report these patches as `NonCompliant`, the patch is also included in `SecurityNonCompliantCount`; if the baseline is configured to report them as `Compliant`, they aren't. These patches are always reported with an unspecified severity and are never included in `CriticalNonCompliantCount`.  |  Compliant or Non-Compliant, depending on the option selected for available security updates. Using the console to create or update a patch baseline, you specify this option in the **Available security updates compliance status** field. Using the AWS CLI to run the [create-patch-baseline](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-patch-baseline.html) or [update-patch-baseline](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-patch-baseline.html) command, you specify this option in the `available-security-updates-compliance-status` parameter.  | 

¹ For patches with the state `INSTALLED_OTHER` and `NOT_APPLICABLE`, Patch Manager omits some data, such as the values for `Classification` and `Severity`, from the query results of the [describe-instance-patches](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-instance-patches.html) command. This is done to help prevent exceeding the data limit for individual nodes in Inventory, a tool in AWS Systems Manager. To view all patch details, you can use the [describe-available-patches](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-available-patches.html) command. 
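The state-to-status mapping in the tables above can be sketched as a small lookup. This is a minimal illustration, assuming you already have the patch states for a node in hand (for example, parsed from `describe-instance-patches` output); the function and its parameter names are hypothetical, not part of any AWS API.

```python
# Compliance status for each patch state, per the tables above.
# AVAILABLE_SECURITY_UPDATES is baseline-dependent, so it is resolved
# from a flag rather than the static table.
STATE_STATUS = {
    "INSTALLED": "Compliant",
    "INSTALLED_OTHER": "Compliant",
    "INSTALLED_PENDING_REBOOT": "Non-Compliant",
    "INSTALLED_REJECTED": "Non-Compliant",
    "MISSING": "Non-Compliant",
    "FAILED": "Non-Compliant",
    "NOT_APPLICABLE": "Not applicable",
}

def summarize(patch_states, available_updates_noncompliant=True):
    """Count patches by compliance status for one managed node."""
    counts = {"Compliant": 0, "Non-Compliant": 0, "Not applicable": 0}
    for state in patch_states:
        if state == "AVAILABLE_SECURITY_UPDATES":
            status = ("Non-Compliant" if available_updates_noncompliant
                      else "Compliant")
        else:
            status = STATE_STATUS[state]
        counts[status] += 1
    return counts

print(summarize(["INSTALLED", "MISSING", "NOT_APPLICABLE",
                 "INSTALLED_PENDING_REBOOT"]))
# {'Compliant': 1, 'Non-Compliant': 2, 'Not applicable': 1}
```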

# Patching noncompliant managed nodes


Many of the same AWS Systems Manager tools and processes you can use to check managed nodes for patch compliance can be used to bring nodes into compliance with the patch rules that currently apply to them. To bring managed nodes into patch compliance, Patch Manager, a tool in AWS Systems Manager, must run a `Scan and install` operation. (If your goal is only to identify noncompliant managed nodes and not remediate them, run a `Scan` operation instead. For more information, see [Identifying noncompliant managed nodes](patch-manager-find-noncompliant-nodes.md).)

**Install patches using Systems Manager**  
You can choose from several tools to run a `Scan and install` operation:
+ (Recommended) Configure a patch policy in Quick Setup, a tool in Systems Manager, that lets you install missing patches on a schedule for an entire organization, a subset of organizational units, or a single AWS account. For more information, see [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md).
+ Create a maintenance window that uses the Systems Manager document (SSM document) `AWS-RunPatchBaseline` in a Run Command task type. For information, see [Tutorial: Create a maintenance window for patching using the console](maintenance-window-tutorial-patching.md).
+ Manually run `AWS-RunPatchBaseline` in a Run Command operation. For information, see [Running commands from the console](running-commands-console.md).
+ Install patches on demand using the **Patch now** option. For information, see [Patching managed nodes on demand](patch-manager-patch-now-on-demand.md).
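For the Run Command option above, a `Scan and install` operation comes down to one CLI call. The following is a sketch only: it requires AWS credentials and a real managed node ID (the instance ID shown is a placeholder), and the `Operation` and `RebootOption` parameters belong to the `AWS-RunPatchBaseline` SSM document.

```shell
# Sketch: install approved patches on one managed node, rebooting if needed.
# Replace the instance ID with a real managed node in your account.
aws ssm send-command \
    --document-name "AWS-RunPatchBaseline" \
    --targets "Key=InstanceIds,Values=i-0123456789abcdef0" \
    --parameters '{"Operation":["Install"],"RebootOption":["RebootIfNeeded"]}'
```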

# Identifying the execution that created patch compliance data


Patch compliance data represents a point-in-time snapshot from the latest successful patching operation. Each compliance report includes both an execution ID and a capture time that help you identify which operation created the compliance data and when it was generated.

If you have multiple types of operations in place to scan your instances for patch compliance, each scan overwrites the patch compliance data of previous scans. As a result, you might end up with unexpected results in your patch compliance data.

For example, suppose you create a patch policy that scans for patch compliance each day at 2 AM local time. That patch policy uses a patch baseline that targets patches with severity marked as `Critical`, `Important`, and `Moderate`. This patch baseline also specifies a few specifically rejected patches. 

Also suppose that you already had a maintenance window set up to scan the same set of managed nodes each day at 4 AM local time, which you don't delete or deactivate. That maintenance window’s task uses a different patch baseline, one that targets only patches with a `Critical` severity and doesn’t exclude any specific patches. 

When this second scan is performed by the maintenance window, the patch compliance data from the first scan is deleted and replaced with the patch compliance data from the second scan. 

Therefore, we strongly recommend using only one automated method for scanning and installing in your patching operations. If you're setting up patch policies, you should delete or deactivate other methods of scanning for patch compliance. For more information, see the following topics: 
+ To remove a patching operation task from a maintenance window – [Updating or deregistering maintenance window tasks using the console](sysman-maintenance-update.md#sysman-maintenance-update-tasks) 
+ To delete a State Manager association – [Deleting associations](systems-manager-state-manager-delete-association.md).

To deactivate daily patch compliance scans in a Host Management configuration, do the following in Quick Setup:

1. In the navigation pane, choose **Quick Setup**.

1. Select the Host Management configuration to update.

1. Choose **Actions, Edit configuration**.

1. Clear the **Scan instances for missing patches daily** check box.

1. Choose **Update**.

**Note**  
Using the **Patch now** option to scan a managed node for compliance also results in an overwrite of patch compliance data. 

# Patching managed nodes on demand

Using the **Patch now** option in Patch Manager, a tool in AWS Systems Manager, you can run on-demand patching operations from the Systems Manager console. This means you don't have to create a schedule to update the compliance status of your managed nodes or to install patches on noncompliant nodes. You also don't need to switch the console between Patch Manager and Maintenance Windows, a tool in AWS Systems Manager, to set up or modify a scheduled patching window.

**Patch now** is especially useful when you must apply zero-day updates or install other critical patches on your managed nodes as soon as possible.

**Note**  
Patching on demand is supported for a single AWS account-AWS Region pair at a time. It can't be used with patching operations that are based on *patch policies*. We recommend using patch policies for keeping all your managed nodes in compliance. For more information about working with patch policies, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).

**Topics**
+ [How 'Patch now' works](#patch-on-demand-how-it-works)
+ [Running 'Patch now'](#run-patch-now)

## How 'Patch now' works


To run **Patch now**, you specify just two required settings:
+ Whether to scan for missing patches only, or to scan *and* install patches on your managed nodes
+ Which managed nodes to run the operation on

When the **Patch now** operation runs, it determines which patch baseline to use in the same way one is selected for other patching operations. If a managed node is associated with a patch group, the patch baseline specified for that group is used. If the managed node isn't associated with a patch group, the operation uses the patch baseline that is currently set as the default for the operating system type of the managed node. This can be a predefined baseline, or the custom baseline you have set as the default. For more information about patch baseline selection, see [Patch groups](patch-manager-patch-groups.md). 
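The baseline-selection behavior just described can be sketched as a small helper. This is an illustration only; the patch group name and baseline IDs are hypothetical, and real selection happens inside Patch Manager, not in your code.

```python
def select_patch_baseline(node_patch_group, group_baselines, default_baseline):
    """Return the baseline a patching operation would use for a node:
    the baseline registered for the node's patch group if there is one,
    otherwise the current default baseline for the node's OS."""
    if node_patch_group and node_patch_group in group_baselines:
        return group_baselines[node_patch_group]
    return default_baseline

# Hypothetical patch group registration and default baseline ID.
baselines = {"web-servers": "pb-0123456789abcdef0"}
print(select_patch_baseline("web-servers", baselines, "pb-default"))  # pb-0123456789abcdef0
print(select_patch_baseline(None, baselines, "pb-default"))           # pb-default
```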

Options you can specify for **Patch now** include choosing when, or whether, to reboot managed nodes after patching, specifying an Amazon Simple Storage Service (Amazon S3) bucket to store log data for the patching operation, and running Systems Manager documents (SSM documents) as lifecycle hooks during patching.

### Concurrency and error thresholds for 'Patch now'


For **Patch now** operations, concurrency and error threshold options are handled by Patch Manager. You don't need to specify how many managed nodes to patch at once, nor how many errors are permitted before the operation fails. Patch Manager applies the concurrency and error threshold settings described in the following tables when you patch on demand.

**Important**  
The following thresholds apply to `Scan and install` operations only. For `Scan` operations, Patch Manager attempts to scan up to 1,000 nodes concurrently and continues scanning until it encounters up to 1,000 errors.


**Concurrency: Install operations**  

| Total number of managed nodes in the **Patch now** operation | Number of managed nodes scanned or patched at a time | 
| --- | --- | 
| Fewer than 25 | 1 | 
| 25 to 100 | 5% | 
| 101 to 1,000 | 8% | 
| More than 1,000 | 10% | 


**Error threshold: Install operations**  

| Total number of managed nodes in the **Patch now** operation | Number of errors permitted before the operation fails | 
| --- | --- | 
| Fewer than 25 | 1 | 
| 25 to 100 | 5 | 
| 101 to 1,000 | 10 | 
| More than 1,000 | 10 | 
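The two tables can be collapsed into one helper function, shown here as a minimal sketch. The tier boundaries come from the tables above; how Patch Manager rounds the fractional percentage tiers isn't stated in the tables, so rounding up with `ceil` is an assumption.

```python
import math

def patch_now_install_thresholds(total_nodes):
    """Return (concurrency, error_threshold) for a 'Patch now'
    Scan and install operation, per the tables above.
    Rounding of the percentage tiers is an assumption (ceil)."""
    if total_nodes < 25:
        return 1, 1
    if total_nodes <= 100:
        return math.ceil(total_nodes * 0.05), 5
    if total_nodes <= 1000:
        return math.ceil(total_nodes * 0.08), 10
    return math.ceil(total_nodes * 0.10), 10

print(patch_now_install_thresholds(10))    # (1, 1)
print(patch_now_install_thresholds(400))   # (32, 10)
print(patch_now_install_thresholds(5000))  # (500, 10)
```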

### Using 'Patch now' lifecycle hooks


**Patch now** provides you with the ability to run SSM Command documents as lifecycle hooks during an `Install` patching operation. You can use these hooks for tasks such as shutting down applications before patching or running health checks on your applications after patching or after a reboot. 

For more information about using lifecycle hooks, see [SSM Command document for patching: `AWS-RunPatchBaselineWithHooks`](patch-manager-aws-runpatchbaselinewithhooks.md).

The following table lists the lifecycle hooks available for each of the three **Patch now** reboot options, in addition to sample uses for each hook.


**Lifecycle hooks and sample uses**  

| Reboot option | Hook: Before installation | Hook: After installation | Hook: On exit | Hook: After scheduled reboot | 
| --- | --- | --- | --- | --- | 
| Reboot if needed |  Run an SSM document before patching begins. Example use: Safely shut down applications before the patching process begins.   |  Run an SSM document at the end of the patching operation and before managed node reboot. Example use: Run operations such as installing third-party applications before a potential reboot.  |  Run an SSM document after the patching operation is complete and instances are rebooted. Example use: Ensure that applications are running as expected after patching.  | Not available | 
| Do not reboot my instances | Same as above. |  Run an SSM document at the end of the patching operation. Example use: Ensure that applications are running as expected after patching.  |  *Not available*   |  *Not available*   | 
| Schedule a reboot time | Same as above. | Same as for Do not reboot my instances. | Not available |  Run an SSM document immediately after a scheduled reboot is complete. Example use: Ensure that applications are running as expected after the reboot.  | 

## Running 'Patch now'


Use the following procedure to patch your managed nodes on demand.

**To run 'Patch now'**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose **Patch now**.

1. For **Patching operation**, choose one of the following:
   + **Scan**: Patch Manager finds which patches are missing from your managed nodes but doesn't install them. You can view the results in the **Compliance** dashboard or in other tools you use for viewing patch compliance.
   + **Scan and install**: Patch Manager finds which patches are missing from your managed nodes and installs them.

1. Use this step only if you chose **Scan and install** in the previous step. For **Reboot option**, choose one of the following:
   + **Reboot if needed**: After installation, Patch Manager reboots managed nodes only if needed to complete a patch installation.
   + **Don't reboot my instances**: After installation, Patch Manager doesn't reboot managed nodes. You can reboot nodes manually at a time of your choosing, or manage reboots outside of Patch Manager.
   + **Schedule a reboot time**: Specify the date, time, and UTC time zone for Patch Manager to reboot your managed nodes. After you run the **Patch now** operation, the scheduled reboot is listed as an association in State Manager with the name `AWS-PatchRebootAssociation`.
**Important**  
If you cancel the main patching operation after it has started, the `AWS-PatchRebootAssociation` association in State Manager is *not* automatically canceled. To prevent unexpected reboots, you must manually delete `AWS-PatchRebootAssociation` from State Manager if you no longer want the scheduled reboot to occur. Failure to do so can result in unplanned system reboots that impact production workloads. You can find this association in the Systems Manager console under **State Manager** > **Associations**.

1. For **Instances to patch**, choose one of the following:
   + **Patch all instances**: Patch Manager runs the specified operation on all managed nodes in your AWS account in the current AWS Region.
   + **Patch only the target instances I specify**: You specify which managed nodes to target in the next step.

1. Use this step only if you chose **Patch only the target instances I specify** in the previous step. In the **Target selection** section, identify the nodes on which you want to run this operation by specifying tags, selecting nodes manually, or specifying a resource group.
**Note**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.  
If you choose to target a resource group, note that resource groups that are based on an AWS CloudFormation stack must still be tagged with the default `aws:cloudformation:stack-id` tag. If it has been removed, Patch Manager might be unable to determine which managed nodes belong to the resource group.

1. (Optional) For **Patching log storage**, if you want to create and save logs from this patching operation, select the S3 bucket for storing the logs.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create the IAM service role required for Systems Manager in hybrid and multicloud environments](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. (Optional) If you want to run SSM documents as lifecycle hooks during specific points of the patching operation, do the following:
   + Choose **Use lifecycle hooks**.
   + For each available hook, select the SSM document to run at the specified point of the operation:
     + Before installation
     + After installation
     + On exit
     + After scheduled reboot
**Note**  
The default document, `AWS-Noop`, runs no operations.

1. Choose **Patch now**.

   The **Association execution summary** page opens. (Patch now uses associations in State Manager, a tool in AWS Systems Manager, for its operations.) In the **Operation summary** area, you can monitor the status of scanning or patching on the managed nodes you specified.

# Working with patch baselines

A patch baseline in Patch Manager, a tool in AWS Systems Manager, defines which patches are approved for installation on your managed nodes. You can specify approved or rejected patches one by one. You can also create auto-approval rules to specify that certain types of updates (for example, critical updates) should be automatically approved. The rejected list overrides both the auto-approval rules and the approved list. To use a list of approved patches to install specific packages, you first remove all auto-approval rules. If you explicitly identify a patch as rejected, it won't be approved or installed, even if it matches all of the criteria in an auto-approval rule. Also, a patch is installed on a managed node only if it applies to the software on the node, even if the patch has otherwise been approved for the managed node.
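The precedence rules in this paragraph can be sketched as a tiny approval check: explicit rejection wins over both the approved list and any auto-approval rule. This is an illustration only; the patch names and the stand-in rule are hypothetical.

```python
def is_patch_approved(patch, approved, rejected, rule_matches):
    """Apply baseline precedence: explicit rejection overrides
    the approved list and any auto-approval rule match."""
    if patch in rejected:
        return False
    return patch in approved or rule_matches(patch)

# Stand-in for an auto-approval rule (e.g., "approve critical updates").
critical_rule = lambda p: p.startswith("critical-")

print(is_patch_approved("critical-ssl-fix", set(), {"critical-ssl-fix"},
                        critical_rule))  # False: rejected wins
print(is_patch_approved("critical-ssl-fix", set(), set(),
                        critical_rule))  # True: rule match
```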

**Topics**
+ [Viewing AWS predefined patch baselines](patch-manager-view-predefined-patch-baselines.md)
+ [Working with custom patch baselines](patch-manager-manage-patch-baselines.md)
+ [Setting an existing patch baseline as the default](patch-manager-default-patch-baseline.md)

**More info**  
+ [Patch baselines](patch-manager-patch-baselines.md)

# Viewing AWS predefined patch baselines

Patch Manager, a tool in AWS Systems Manager, includes a predefined patch baseline for each operating system supported by Patch Manager. You can use these patch baselines (you can't customize them), or you can create your own. The following procedure describes how to view a predefined patch baseline to see if it meets your needs. To learn more about patch baselines, see [Predefined and custom patch baselines](patch-manager-predefined-and-custom-patch-baselines.md).

**To view AWS predefined patch baselines**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. In the patch baselines list, choose the baseline ID of one of the predefined patch baselines.

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, choose the **Patch baselines** tab, and then choose the baseline ID of one of the predefined patch baselines.
**Note**  
For Windows Server, three predefined patch baselines are provided. The patch baselines `AWS-DefaultPatchBaseline` and `AWS-WindowsPredefinedPatchBaseline-OS` support only operating system updates on the Windows operating system itself. `AWS-DefaultPatchBaseline` is used as the default patch baseline for Windows Server managed nodes unless you specify a different patch baseline. The configuration settings in these two patch baselines are the same. The newer of the two, `AWS-WindowsPredefinedPatchBaseline-OS`, was created to distinguish it from the third predefined patch baseline for Windows Server. That patch baseline, `AWS-WindowsPredefinedPatchBaseline-OS-Applications`, can be used to apply patches to both the Windows Server operating system and supported applications released by Microsoft.  
For more information, see [Setting an existing patch baseline as the default](patch-manager-default-patch-baseline.md).

1. In the **Approval rules** section, review the patch baseline configuration.

1. If the configuration is acceptable for your managed nodes, you can skip ahead to the procedure [Creating and managing patch groups](patch-manager-tag-a-patch-group.md). 

   -or-

   To create your own default patch baseline, continue to the topic [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).

# Working with custom patch baselines

Patch Manager, a tool in AWS Systems Manager, includes a predefined patch baseline for each operating system supported by Patch Manager. You can use these patch baselines (you can't customize them), or you can create your own. 

The following procedures describe how to create, update, and delete your own custom patch baselines. To learn more about patch baselines, see [Predefined and custom patch baselines](patch-manager-predefined-and-custom-patch-baselines.md).

**Topics**
+ [Creating a custom patch baseline for Linux](patch-manager-create-a-patch-baseline-for-linux.md)
+ [Creating a custom patch baseline for macOS](patch-manager-create-a-patch-baseline-for-macos.md)
+ [Creating a custom patch baseline for Windows Server](patch-manager-create-a-patch-baseline-for-windows.md)
+ [Updating or deleting a custom patch baseline](patch-manager-update-or-delete-a-patch-baseline.md)

# Creating a custom patch baseline for Linux


Use the following procedure to create a custom patch baseline for Linux managed nodes in Patch Manager, a tool in AWS Systems Manager. 

For information about creating a patch baseline for macOS managed nodes, see [Creating a custom patch baseline for macOS](patch-manager-create-a-patch-baseline-for-macos.md). For information about creating a patch baseline for Windows managed nodes, see [Creating a custom patch baseline for Windows Server](patch-manager-create-a-patch-baseline-for-windows.md).

**To create a custom patch baseline for Linux managed nodes**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab, and then choose **Create patch baseline**.

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, choose the **Patch baselines** tab, and then choose **Create patch baseline**.

1. For **Name**, enter a name for your new patch baseline, for example, `MyRHELPatchBaseline`.

1. (Optional) For **Description**, enter a description for this patch baseline.

1. For **Operating system**, choose an operating system, for example, `Red Hat Enterprise Linux`.

1. If you want to begin using this patch baseline as the default for the selected operating system as soon as you create it, check the box next to **Set this patch baseline as the default patch baseline for *operating system name* instances**.
**Note**  
This option is available only if you first accessed Patch Manager before the [patch policies](patch-manager-policies.md) release on December 22, 2022.  
For information about setting an existing patch baseline as the default, see [Setting an existing patch baseline as the default](patch-manager-default-patch-baseline.md).

1. In the **Approval rules for operating systems** section, use the fields to create one or more auto-approval rules.
   + **Products**: The version of the operating systems the approval rule applies to, such as `RedhatEnterpriseLinux7.4`. The default selection is `All`.
   + **Classification**: The type of patches the approval rule applies to, such as `Security` or `Enhancement`. The default selection is `All`. 
**Tip**  
You can configure a patch baseline to control whether minor version upgrades for Linux are installed, such as RHEL 7.8. Minor version upgrades can be installed automatically by Patch Manager provided that the update is available in the appropriate repository.  
For Linux operating systems, minor version upgrades aren't classified consistently. They can be classified as bug fixes or security updates, or not classified, even within the same kernel version. Here are a few options for controlling whether a patch baseline installs them.   
**Option 1**: The broadest approval rule to ensure minor version upgrades are installed when available is to specify **Classification** as `All` and choose the **Include non-security updates** option.
**Option 2**: To ensure patches for an operating system version are installed, you can use a wildcard (`*`) to specify its kernel format in the **Patch exceptions** section of the baseline. For example, the kernel format for RHEL 7.x is `kernel-3.10.0-*.el7.x86_64`.  
Enter `kernel-3.10.0-*.el7.x86_64` in the **Approved patches** list in your patch baseline to ensure all patches, including minor version upgrades, are applied to your RHEL 7.x managed nodes. (If you know the exact package name of a minor version patch, you can enter that instead.)
**Option 3**: You can have the most control over which patches are applied to your managed nodes, including minor version upgrades, by using the [InstallOverrideList](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-installoverridelist) parameter in the `AWS-RunPatchBaseline` document. For more information, see [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).
   + **Severity**: The severity value of patches the rule is to apply to, such as `Critical`. The default selection is `All`. 
   + **Auto-approval**: The method for selecting patches for automatic approval.
**Note**  
Because it's not possible to reliably determine the release dates of update packages for Ubuntu Server, the auto-approval options aren't supported for this operating system.
     + **Approve patches after a specified number of days**: The number of days for Patch Manager to wait after a patch is released or last updated before a patch is automatically approved. You can enter any integer from zero (0) to 360. For most scenarios, we recommend waiting no more than 100 days.
     + **Approve patches released up to a specific date**: The patch release date for which Patch Manager automatically applies all patches released or updated on or before that date. For example, if you specify July 7, 2023, no patches released or last updated on or after July 8, 2023, are installed automatically.
   + (Optional) **Compliance reporting**: The severity level you want to assign to patches approved by the baseline, such as `Critical` or `High`.
**Note**  
If you specify a compliance reporting level and the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.
   + **Include non-security updates**: Select the check box to install nonsecurity Linux operating system patches made available in the source repository, in addition to the security-related patches. 

   For more information about working with approval rules in a custom patch baseline, see [Custom baselines](patch-manager-predefined-and-custom-patch-baselines.md#patch-manager-baselines-custom).

1. If you want to explicitly approve any patches in addition to those meeting your approval rules, do the following in the **Patch exceptions** section:
   + For **Approved patches**, enter a comma-separated list of the patches you want to approve.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + (Optional) For **Approved patches compliance level**, assign a compliance level to the patches in the list.
   + If any approved patches you specify aren't related to security, select the **Include non-security updates** check box for these patches to be installed on your Linux operating system as well.

1. If you want to explicitly reject any patches that otherwise meet your approval rules, do the following in the **Patch exceptions** section:
   + For **Rejected patches**, enter a comma-separated list of the patches you want to reject.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + For **Rejected patches action**, select the action for Patch Manager to take on patches included in the **Rejected patches** list.
     + **Allow as dependency**: A package in the **Rejected patches** list is installed only if it's a dependency of another package. It's considered compliant with the patch baseline and its status is reported as *InstalledOther*. This is the default action if no option is specified.
     + **Block**: Packages in the **Rejected patches** list, and packages that include them as dependencies, aren't installed by Patch Manager under any circumstances. If a package was installed before it was added to the **Rejected patches** list, or is installed outside of Patch Manager afterward, it's considered noncompliant with the patch baseline and its status is reported as *InstalledRejected*.
**Note**  
Patch Manager searches for patch dependencies recursively.

1. (Optional) If you want to specify alternative patch repositories for different versions of an operating system, such as *AmazonLinux2016.03* and *AmazonLinux2017.09*, do the following for each product in the **Patch sources** section:
   + In **Name**, enter a name to help you identify the source configuration.
   + In **Product**, select the version of the operating systems the patch source repository is for, such as `RedhatEnterpriseLinux7.4`.
   + In **Configuration**, enter the value of the repository configuration to use in the appropriate format:

------
#### [  Example for yum repositories  ]

     ```
     [main]
     name=MyCustomRepository
     baseurl=https://my-custom-repository
     enabled=1
     ```

**Tip**  
For information about other options available for your yum repository configuration, see [dnf.conf(5)](https://man7.org/linux/man-pages/man5/dnf.conf.5.html).

------
#### [  Examples for Ubuntu Server and Debian Server ]

      `deb http://security.ubuntu.com/ubuntu jammy main` 

      `deb https://site.example.com/debian distribution component1 component2 component3` 

     Repo information for Ubuntu Server repositories must be specified in a single line. For more examples and information, see [jammy (5) sources.list.5.gz](https://manpages.ubuntu.com/manpages/jammy/man5/sources.list.5.html) on the *Ubuntu Server Manuals* website and [sources.list format](https://wiki.debian.org/SourcesList#sources.list_format) on the *Debian Wiki*.

------

     Choose **Add another source** to specify a source repository for each additional operating system version, up to a maximum of 20.

     For more information about alternative source patch repositories, see [How to specify an alternative patch source repository (Linux)](patch-manager-alternative-source-repository.md).

1. (Optional) For **Manage tags**, apply one or more tag key name/value pairs to the patch baseline.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a patch baseline to identify the severity level of patches it specifies, the operating system family it applies to, and the environment type. In this case, you could specify tags similar to the following key name/value pairs:
   + `Key=PatchSeverity,Value=Critical`
   + `Key=OS,Value=RHEL`
   + `Key=Environment,Value=Production`

1. Choose **Create patch baseline**.
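The console settings above correspond to parameters of the `aws ssm create-patch-baseline` AWS CLI command. The following is a sketch only — the rule values and file name are illustrative, not recommendations. The JSON file holds the approval rules, and the commented command shows how it might be passed with `file://`:

```
# Illustrative approval rules for a RHEL baseline: approve Security patches
# rated Critical or Important 7 days after release, and include nonsecurity
# updates. All values are examples only -- adjust them for your own fleet.
cat > approval-rules.json <<'EOF'
{
  "PatchRules": [
    {
      "PatchFilterGroup": {
        "PatchFilters": [
          { "Key": "CLASSIFICATION", "Values": ["Security"] },
          { "Key": "SEVERITY", "Values": ["Critical", "Important"] }
        ]
      },
      "ApproveAfterDays": 7,
      "EnableNonSecurity": true
    }
  ]
}
EOF

# CLI equivalent of the console procedure (requires AWS credentials):
#   aws ssm create-patch-baseline \
#       --name "MyRHELPatchBaseline" \
#       --operating-system "REDHAT_ENTERPRISE_LINUX" \
#       --approval-rules file://approval-rules.json
```

Storing the approval rules in a file keeps the quoting manageable; the same structure is accepted inline as a JSON string if you prefer.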

# Creating a custom patch baseline for macOS


Use the following procedure to create a custom patch baseline for macOS managed nodes in Patch Manager, a tool in AWS Systems Manager. 

For information about creating a patch baseline for Windows Server managed nodes, see [Creating a custom patch baseline for Windows Server](patch-manager-create-a-patch-baseline-for-windows.md). For information about creating a patch baseline for Linux managed nodes, see [Creating a custom patch baseline for Linux](patch-manager-create-a-patch-baseline-for-linux.md). 

**Note**  
macOS is not supported in all AWS Regions. For more information about Amazon EC2 support for macOS, see [Amazon EC2 Mac instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html) in the *Amazon EC2 User Guide*.

**To create a custom patch baseline for macOS managed nodes**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab, and then choose **Create patch baseline**.

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, choose the **Patch baselines** tab, and then choose **Create patch baseline**.

1. For **Name**, enter a name for your new patch baseline, for example, `MymacOSPatchBaseline`.

1. (Optional) For **Description**, enter a description for this patch baseline.

1. For **Operating system**, choose macOS.

1. If you want to begin using this patch baseline as the default for macOS as soon as you create it, check the box next to **Set this patch baseline as the default patch baseline for macOS instances**.
**Note**  
This option is available only if you first accessed Patch Manager before the [patch policies](patch-manager-policies.md) release on December 22, 2022.  
For information about setting an existing patch baseline as the default, see [Setting an existing patch baseline as the default](patch-manager-default-patch-baseline.md).

1. In the **Approval rules for operating systems** section, use the fields to create one or more auto-approval rules.
   + **Products**: The version of the operating systems the approval rule applies to, such as `BigSur11.3.1` or `Ventura13.7`. The default selection is `All`.
   + **Classification**: The package manager or package managers that you want to use to apply packages during the patching process. You can choose from the following:
     + softwareupdate
     + installer
     + brew
     + brew cask

     The default selection is `All`. 
   + (Optional) **Compliance reporting**: The severity level you want to assign to patches approved by the baseline, such as `Critical` or `High`.
**Note**  
If you specify a compliance reporting level and the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.

   For more information about working with approval rules in a custom patch baseline, see [Custom baselines](patch-manager-predefined-and-custom-patch-baselines.md#patch-manager-baselines-custom).

1. If you want to explicitly approve any patches in addition to those meeting your approval rules, do the following in the **Patch exceptions** section:
   + For **Approved patches**, enter a comma-separated list of the patches you want to approve.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + (Optional) For **Approved patches compliance level**, assign a compliance level to the patches in the list.

1. If you want to explicitly reject any patches that otherwise meet your approval rules, do the following in the **Patch exceptions** section:
   + For **Rejected patches**, enter a comma-separated list of the patches you want to reject.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + For **Rejected patches action**, select the action for Patch Manager to take on patches included in the **Rejected patches** list.
     + **Allow as dependency**: A package in the **Rejected patches** list is installed only if it's a dependency of another package. It's considered compliant with the patch baseline and its status is reported as *InstalledOther*. This is the default action if no option is specified.
     + **Block**: Packages in the **Rejected patches** list, and packages that include them as dependencies, aren't installed by Patch Manager under any circumstances. If a package was installed before it was added to the **Rejected patches** list, or is installed outside of Patch Manager afterward, it's considered noncompliant with the patch baseline and its status is reported as *InstalledRejected*.

1. (Optional) For **Manage tags**, apply one or more tag key name/value pairs to the patch baseline.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a patch baseline to identify the severity level of patches it specifies, the package manager it applies to, and the environment type. In this case, you could specify tags similar to the following key name/value pairs:
   + `Key=PatchSeverity,Value=Critical`
   + `Key=PackageManager,Value=softwareupdate`
   + `Key=Environment,Value=Production`

1. Choose **Create patch baseline**.
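As with the other operating systems, the macOS console settings map to `aws ssm create-patch-baseline` parameters. This is a sketch under illustrative values — the product version, classification (which for macOS names a package manager), and file name are examples only:

```
# Illustrative macOS approval rule: approve softwareupdate packages for one
# product version after a 7-day wait. Values are examples only.
cat > macos-rules.json <<'EOF'
{
  "PatchRules": [
    {
      "PatchFilterGroup": {
        "PatchFilters": [
          { "Key": "PRODUCT", "Values": ["Ventura13.7"] },
          { "Key": "CLASSIFICATION", "Values": ["softwareupdate"] }
        ]
      },
      "ApproveAfterDays": 7
    }
  ]
}
EOF

# CLI equivalent of the console procedure (requires AWS credentials):
#   aws ssm create-patch-baseline \
#       --name "MymacOSPatchBaseline" \
#       --operating-system "MACOS" \
#       --approval-rules file://macos-rules.json
```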

# Creating a custom patch baseline for Windows Server


Use the following procedure to create a custom patch baseline for Windows managed nodes in Patch Manager, a tool in AWS Systems Manager. 

For information about creating a patch baseline for Linux managed nodes, see [Creating a custom patch baseline for Linux](patch-manager-create-a-patch-baseline-for-linux.md). For information about creating a patch baseline for macOS managed nodes, see [Creating a custom patch baseline for macOS](patch-manager-create-a-patch-baseline-for-macos.md).

For an example of creating a patch baseline that is limited to installing Windows Service Packs only, see [Tutorial: Create a patch baseline for installing Windows Service Packs using the console](patch-manager-windows-service-pack-patch-baseline-tutorial.md).

**To create a custom patch baseline (Windows)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab, and then choose **Create patch baseline**. 

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, choose the **Patch baselines** tab, and then choose **Create patch baseline**.

1. For **Name**, enter a name for your new patch baseline, for example, `MyWindowsPatchBaseline`.

1. (Optional) For **Description**, enter a description for this patch baseline.

1. For **Operating system**, choose `Windows`.

1. For **Available security updates compliance status**, choose the compliance status, **Non-Compliant** or **Compliant**, to assign to security patches that are available but not approved because they don't meet the installation criteria specified in the patch baseline.

   Example scenario: Security patches that you might want installed can be skipped if you have specified a long period to wait after a patch is released before installation. If an update to the patch is released during your specified waiting period, the waiting period for installing the patch starts over. If the waiting period is too long, multiple versions of the patch could be released but never installed.

1. If you want to begin using this patch baseline as the default for Windows as soon as you create it, select **Set this patch baseline as the default patch baseline for Windows Server instances**.
**Note**  
This option is available only if you first accessed Patch Manager before the [patch policies](patch-manager-policies.md) release on December 22, 2022.  
For information about setting an existing patch baseline as the default, see [Setting an existing patch baseline as the default](patch-manager-default-patch-baseline.md).

1. In the **Approval rules for operating systems** section, use the fields to create one or more auto-approval rules.
   + **Products**: The version of the operating systems the approval rule applies to, such as `WindowsServer2012`. The default selection is `All`.
   + **Classification**: The type of patches the approval rule applies to, such as `CriticalUpdates`, `Drivers`, and `Tools`. The default selection is `All`. 
**Tip**  
You can include Windows Service Pack installations in your approval rules by including `ServicePacks` or by choosing `All` in your **Classification** list. For an example, see [Tutorial: Create a patch baseline for installing Windows Service Packs using the console](patch-manager-windows-service-pack-patch-baseline-tutorial.md).
   + **Severity**: The severity value of patches the rule is to apply to, such as `Critical`. The default selection is `All`. 
   + **Auto-approval**: The method for selecting patches for automatic approval.
     + **Approve patches after a specified number of days**: The number of days for Patch Manager to wait after a patch is released or updated before a patch is automatically approved. You can enter any integer from zero (0) to 360. For most scenarios, we recommend waiting no more than 100 days.
     + **Approve patches released up to a specific date**: The patch release date for which Patch Manager automatically applies all patches released or updated on or before that date. For example, if you specify July 7, 2023, no patches released or last updated on or after July 8, 2023, are installed automatically.
   + (Optional) **Compliance reporting**: The severity level you want to assign to patches approved by the baseline, such as `High`.
**Note**  
If you specify a compliance reporting level and the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.

1. (Optional) In the **Approval rules for applications** section, use the fields to create one or more auto-approval rules.
**Note**  
Instead of specifying approval rules, you can specify lists of approved and rejected patches as patch exceptions. See steps 10 and 11. 
   + **Product family**: The general Microsoft product family for which you want to specify a rule, such as `Office` or `Exchange Server`.
   + **Products**: The version of the application the approval rule applies to, such as `Office 2016` or `Active Directory Rights Management Services Client 2.0 2016`. The default selection is `All`.
   + **Classification**: The type of patches the approval rule applies to, such as `CriticalUpdates`. The default selection is `All`. 
   + **Severity**: The severity value of patches the rule applies to, such as `Critical`. The default selection is `All`. 
   + **Auto-approval**: The method for selecting patches for automatic approval.
     + **Approve patches after a specified number of days**: The number of days for Patch Manager to wait after a patch is released or updated before a patch is automatically approved. You can enter any integer from zero (0) to 360. For most scenarios, we recommend waiting no more than 100 days.
     + **Approve patches released up to a specific date**: The patch release date for which Patch Manager automatically applies all patches released or updated on or before that date. For example, if you specify July 7, 2023, no patches released or last updated on or after July 8, 2023, are installed automatically.
   + (Optional) **Compliance reporting**: The severity level you want to assign to patches approved by the baseline, such as `Critical` or `High`.
**Note**  
If you specify a compliance reporting level and the patch state of any approved patch is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.

1. (Optional) If you want to explicitly approve any patches instead of letting patches be selected according to approval rules, do the following in the **Patch exceptions** section:
   + For **Approved patches**, enter a comma-separated list of the patches you want to approve.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + (Optional) For **Approved patches compliance level**, assign a compliance level to the patches in the list.

1. If you want to explicitly reject any patches that otherwise meet your approval rules, do the following in the **Patch exceptions** section:
   + For **Rejected patches**, enter a comma-separated list of the patches you want to reject.

     For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).
   + For **Rejected patches action**, select the action for Patch Manager to take on patches included in the **Rejected patches** list.
     + **Allow as dependency**: Windows Server doesn't support the concept of package dependencies. If a package in the **Rejected patches** list is already installed on the node, its status is reported as `INSTALLED_OTHER`. Any package not already installed on the node is skipped. 
     + **Block**: Packages in the **Rejected patches** list aren't installed by Patch Manager under any circumstances. If a package was installed before it was added to the **Rejected patches** list, or is installed outside of Patch Manager afterward, it's considered noncompliant with the patch baseline and its status is reported as `INSTALLED_REJECTED`.

     For more information about rejected package actions, see [Rejected patch list options in custom patch baselines](patch-manager-windows-and-linux-differences.md#rejected-patches-diff). 

1. (Optional) For **Manage tags**, apply one or more tag key name/value pairs to the patch baseline.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a patch baseline to identify the severity level of patches it specifies, the operating system family it applies to, and the environment type. In this case, you could specify tags similar to the following key name/value pairs:
   + `Key=PatchSeverity,Value=Critical`
   + `Key=OS,Value=WindowsServer`
   + `Key=Environment,Value=Production`

1. Choose **Create patch baseline**.
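Windows Server baselines can combine OS and application rules, which in the `create-patch-baseline` CLI command are distinguished by the `PATCH_SET` patch filter key. The following is a sketch with illustrative values only — adjust the classifications, severities, and product family for your environment:

```
# Illustrative Windows Server rules: one OS rule and one application rule
# (PATCH_SET distinguishes OS rules from APPLICATION rules). All values
# are examples only.
cat > windows-rules.json <<'EOF'
{
  "PatchRules": [
    {
      "PatchFilterGroup": {
        "PatchFilters": [
          { "Key": "PATCH_SET", "Values": ["OS"] },
          { "Key": "CLASSIFICATION", "Values": ["CriticalUpdates", "SecurityUpdates"] },
          { "Key": "MSRC_SEVERITY", "Values": ["Critical", "Important"] }
        ]
      },
      "ApproveAfterDays": 7
    },
    {
      "PatchFilterGroup": {
        "PatchFilters": [
          { "Key": "PATCH_SET", "Values": ["APPLICATION"] },
          { "Key": "PRODUCT_FAMILY", "Values": ["Office"] }
        ]
      },
      "ApproveAfterDays": 7
    }
  ]
}
EOF

# CLI equivalent of the console procedure (requires AWS credentials):
#   aws ssm create-patch-baseline \
#       --name "MyWindowsPatchBaseline" \
#       --operating-system "WINDOWS" \
#       --approval-rules file://windows-rules.json
```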

# Updating or deleting a custom patch baseline


You can update or delete a custom patch baseline that you have created in Patch Manager, a tool in AWS Systems Manager. When you update a patch baseline, you can change its name or description, its approval rules, and its exceptions for approved and rejected patches. You can also update the tags that are applied to the patch baseline. You can't change the operating system type that a patch baseline has been created for, and you can't make changes to a predefined patch baseline provided by AWS.

## Updating or deleting a patch baseline


Follow these steps to update or delete a patch baseline.

**Important**  
Use caution when deleting a custom patch baseline that might be used by a patch policy configuration in Quick Setup.  
If you are using a [patch policy configuration](patch-manager-policies.md) in Quick Setup, updates you make to custom patch baselines are synchronized with Quick Setup once an hour.  
If a custom patch baseline that was referenced in a patch policy is deleted, a banner displays on the Quick Setup **Configuration details** page for your patch policy. The banner informs you that the patch policy references a patch baseline that no longer exists, and that subsequent patching operations will fail. In this case, return to the Quick Setup **Configurations** page, select the Patch Manager configuration, and choose **Actions**, **Edit configuration**. The deleted patch baseline name is highlighted, and you must select a new patch baseline for the affected operating system.

**To update or delete a patch baseline**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the patch baseline that you want to update or delete, and then do one of the following:
   + To remove the patch baseline from your AWS account, choose **Delete**. The system prompts you to confirm your actions. 
   + To make changes to the patch baseline name or description, approval rules, or patch exceptions, choose **Edit**. On the **Edit patch baseline** page, change the values and options that you want, and then choose **Save changes**. 
   + To add, change, or delete tags applied to the patch baseline, choose the **Tags** tab, and then choose **Edit tags**. On the **Edit patch baseline tags** page, make updates to the patch baseline tags, and then choose **Save changes**. 

   For information about the configuration choices you can make, see [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).
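The same update, tag, and delete operations are available through the AWS CLI (`update-patch-baseline`, `add-tags-to-resource`, and `delete-patch-baseline`). The following is a sketch only — the baseline ID is a placeholder and the tag values are illustrative:

```
# Illustrative tag list for a patch baseline, kept in a JSON file so it can
# be passed with file://. Values are examples only.
cat > baseline-tags.json <<'EOF'
[
  { "Key": "PatchSeverity", "Value": "Critical" },
  { "Key": "Environment", "Value": "Production" }
]
EOF

# CLI equivalents of the console actions above (require AWS credentials;
# replace the placeholder baseline ID with your own):
#   aws ssm update-patch-baseline \
#       --baseline-id "pb-0123456789abcdef0" \
#       --description "Updated description"
#
#   aws ssm add-tags-to-resource \
#       --resource-type "PatchBaseline" \
#       --resource-id "pb-0123456789abcdef0" \
#       --tags file://baseline-tags.json
#
#   aws ssm delete-patch-baseline --baseline-id "pb-0123456789abcdef0"
```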

# Setting an existing patch baseline as the default

**Important**  
Any default patch baseline selections you make here do not apply to patching operations that are based on a patch policy. Patch policies use their own patch baseline specifications. For more information about patch policies, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).

When you create a custom patch baseline in Patch Manager, a tool in AWS Systems Manager, you can set the baseline as the default for the associated operating system type as soon as you create it. For information, see [Working with custom patch baselines](patch-manager-manage-patch-baselines.md).

You can also set an existing patch baseline as the default for an operating system type.

**Note**  
The steps you follow depend on whether you first accessed Patch Manager before or after the patch policies release on December 22, 2022. If you used Patch Manager before that date, you can use the console procedure. Otherwise, use the AWS CLI procedure. The **Actions** menu referenced in the console procedure is not displayed in Regions where Patch Manager wasn't used before the patch policies release.

**To set a patch baseline as the default**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab.

1. In the patch baselines list, choose the option button for a patch baseline that isn't currently set as the default for an operating system type.

   The **Default baseline** column indicates which baselines are currently set as the defaults.

1. In the **Actions** menu, choose **Set default patch baseline**.
**Important**  
The **Actions** menu is not available if you didn't work with Patch Manager in the current AWS account and Region before December 22, 2022. See the **Note** earlier in this topic for more information.

1. In the confirmation dialog box, choose **Set default**.

**To set a patch baseline as the default (AWS CLI)**

1. Run the [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-baselines.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-baselines.html) command to view a list of available patch baselines and their IDs and Amazon Resource Names (ARNs).

   ```
   aws ssm describe-patch-baselines
   ```

1. Run the [https://docs.aws.amazon.com/cli/latest/reference/ssm/register-default-patch-baseline.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/register-default-patch-baseline.html) command to set a baseline as the default for the operating system it's associated with. Replace *baseline-id-or-ARN* with the ID of the custom patch baseline or predefined baseline to use. 

------
#### [ Linux & macOS ]

   ```
   aws ssm register-default-patch-baseline \
       --baseline-id baseline-id-or-ARN
   ```

   The following is an example of setting a custom baseline as the default.

   ```
   aws ssm register-default-patch-baseline \
       --baseline-id pb-abc123cf9bEXAMPLE
   ```

   The following is an example of setting a predefined baseline managed by AWS as the default.

   ```
   aws ssm register-default-patch-baseline \
       --baseline-id arn:aws:ssm:us-east-2:733109147000:patchbaseline/pb-0574b43a65ea646e
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-default-patch-baseline ^
       --baseline-id baseline-id-or-ARN
   ```

   The following is an example of setting a custom baseline as the default.

   ```
   aws ssm register-default-patch-baseline ^
       --baseline-id pb-abc123cf9bEXAMPLE
   ```

   The following is an example of setting a predefined baseline managed by AWS as the default.

   ```
   aws ssm register-default-patch-baseline ^
       --baseline-id arn:aws:ssm:us-east-2:733109147000:patchbaseline/pb-071da192df1226b63
   ```

------
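The `register-default-patch-baseline` command accepts either a baseline ID or a full ARN. As a purely illustrative sketch (this is not part of the AWS CLI, and the ID and ARN patterns below are assumptions based on the examples in this topic), you could check which form a value takes before scripting the call:

```python
import re

def classify_baseline_identifier(value: str) -> str:
    """Return 'arn', 'id', or 'unknown' for a patch baseline identifier.

    Illustrative only: the patterns are inferred from the examples in this
    topic (IDs like pb-0574b43a65ea646e, ARNs ending in patchbaseline/pb-...).
    """
    if re.fullmatch(r"arn:aws[^:]*:ssm:[^:]+:\d{12}:patchbaseline/pb-[0-9a-zA-Z]+", value):
        return "arn"
    if re.fullmatch(r"pb-[0-9a-zA-Z]+", value):
        return "id"
    return "unknown"

print(classify_baseline_identifier("pb-abc123cf9bEXAMPLE"))  # id
```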

# Viewing available patches

With Patch Manager, a tool in AWS Systems Manager, you can view all available patches for a specified operating system and, optionally, a specific operating system version.

**Tip**  
To generate a list of available patches and save them to a file, you can use the [describe-available-patches](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-available-patches.html) command and specify your preferred [output format](https://docs.aws.amazon.com/cli/latest/reference/ssm/cli-usage-output.html).
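After saving the command's JSON output to a file, you can filter it locally. The following Python sketch is illustrative only; the inline sample mimics the shape of the `describe-available-patches` response (a top-level `Patches` list), and in practice you would load the file you saved:

```python
import json

# Sample shaped like `aws ssm describe-available-patches --output json` output;
# in practice, read this from the file you saved.
sample = json.loads("""
{
  "Patches": [
    {"KbNumber": "KB5005030", "Classification": "SecurityUpdates", "MsrcSeverity": "Critical"},
    {"KbNumber": "KB5004945", "Classification": "SecurityUpdates", "MsrcSeverity": "Important"},
    {"KbNumber": "KB5005101", "Classification": "Updates", "MsrcSeverity": "Unspecified"}
  ]
}
""")

# Keep only critical-severity security updates.
critical = [
    p["KbNumber"]
    for p in sample["Patches"]
    if p["Classification"] == "SecurityUpdates" and p["MsrcSeverity"] == "Critical"
]
print(critical)  # ['KB5005030']
```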

**To view available patches**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patches** tab.

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, and then choose the **Patches** tab.
**Note**  
For Windows Server, the **Patches** tab displays updates that are available from Windows Server Update Service (WSUS).

1. For **Operating system**, choose the operating system for which you want to view available patches, such as `Windows` or `Amazon Linux`.

1. (Optional) For **Product**, choose an OS version, such as `WindowsServer2019` or `AmazonLinux2018.03`.

1. (Optional) To add or remove information columns for your results, choose the configure button (![\[The icon to view configuration settings.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/configure-button.png)) at the top right of the **Patches** list. (By default, the **Patches** tab displays columns for only some of the available patch metadata.)

   For information about the types of metadata you can add to your view, see [Patch](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_Patch.html) in the *AWS Systems Manager API Reference*.

# Creating and managing patch groups


If you are *not* using patch policies in your operations, you can organize your patching efforts by adding managed nodes to patch groups by using tags.

**Note**  
Patch groups are not used in patching operations that are based on *patch policies*. For information about working with patch policies, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).  
Patch group functionality is not supported in the console for account-Region pairs that did not already use patch groups before patch policy support was released on December 22, 2022. Patch group functionality is still available in account-Region pairs that began using patch groups before this date.

To use tags in patching operations, you must apply the tag key `Patch Group` or `PatchGroup` to your managed nodes. You must also specify the name that you want to give the patch group as the value of the tag. You can specify any tag value, but the tag key must be `Patch Group` or `PatchGroup`.

`PatchGroup` (without a space) is required if you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS). 

After you group your managed nodes using tags, you add the patch group value to a patch baseline. By registering the patch group with a patch baseline, you ensure that the correct patches are installed during the patching operation. For more information about patch groups, see [Patch groups](patch-manager-patch-groups.md).
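Conceptually, the tag value partitions your fleet into named patch groups. The following Python sketch is illustrative only (the node IDs and tag values are made up); it shows the grouping that results from applying either accepted tag key:

```python
from collections import defaultdict

# Hypothetical fleet: node ID -> tags applied to the node.
fleet_tags = {
    "i-0a1b2c3d4eEXAMPLE": {"PatchGroup": "WebServers"},
    "i-0f9e8d7c6bEXAMPLE": {"Patch Group": "WebServers"},  # key with a space also works
    "mi-0123456789EXAMPLE": {"PatchGroup": "Databases"},
    "i-0aaaabbbbcEXAMPLE": {"Environment": "Test"},        # no patch group tag
}

def patch_group_of(tags):
    """Return the patch group name, honoring either accepted tag key."""
    return tags.get("PatchGroup") or tags.get("Patch Group")

groups = defaultdict(list)
for node_id, tags in fleet_tags.items():
    group = patch_group_of(tags)
    if group:
        groups[group].append(node_id)

print(dict(groups))
```

Nodes without either tag key simply fall outside every patch group, so they aren't matched to a registered patch baseline through this mechanism.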

Complete the tasks in this topic to prepare your managed nodes for patching using tags with your nodes and patch baseline. Task 1 is required only if you are patching Amazon EC2 instances. Task 2 is required only if you are patching non-EC2 instances in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. Task 3 is required for all managed nodes.

**Tip**  
You can also add tags to managed nodes using the AWS CLI command [add-tags-to-resource](https://docs.aws.amazon.com/cli/latest/reference/ssm/add-tags-to-resource.html) or the Systems Manager API operation [AddTagsToResource](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_AddTagsToResource.html).

**Topics**
+ [Task 1: Add EC2 instances to a patch group using tags](#sysman-patch-group-tagging-ec2)
+ [Task 2: Add managed nodes to a patch group using tags](#sysman-patch-group-tagging-managed)
+ [Task 3: Add a patch group to a patch baseline](#sysman-patch-group-patchbaseline)

## Task 1: Add EC2 instances to a patch group using tags

You can add tags to EC2 instances using the Systems Manager console or the Amazon EC2 console. This task is required only if you are patching Amazon EC2 instances.

**Important**  
You can't apply the `Patch Group` tag (with a space) to an Amazon EC2 instance if the **Allow tags in instance metadata** option is enabled on the instance. Allowing tags in instance metadata prevents tag key names from containing spaces. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use the tag key `PatchGroup` (without a space).
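The rule in this note can be summed up in a small check. The following Python sketch is illustrative only (it is not an AWS API):

```python
def valid_patch_group_key(tag_key: str, imds_tags_allowed: bool) -> bool:
    """Check whether a patch group tag key is usable on an EC2 instance.

    Illustrative logic only: when tags in instance metadata are allowed,
    tag key names can't contain spaces, so only 'PatchGroup' is valid.
    """
    if tag_key not in ("Patch Group", "PatchGroup"):
        return False
    if imds_tags_allowed and " " in tag_key:
        return False
    return True

print(valid_patch_group_key("Patch Group", imds_tags_allowed=True))  # False
```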

**Option 1: To add EC2 instances to a patch group (Systems Manager console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. In the **Managed nodes** list, choose the ID of a managed EC2 instance that you want to configure for patching. Node IDs for EC2 instances begin with `i-`.
**Note**  
When using the Amazon EC2 console and AWS CLI, it's possible to apply `Key = Patch Group` or `Key = PatchGroup` tags to instances that aren't yet configured for use with Systems Manager.  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. Choose the **Tags** tab, then choose **Edit**.

1. In the left column, enter **Patch Group** or **PatchGroup**. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use `PatchGroup` (without a space).

1. In the right column, enter a tag value to serve as the name for the patch group.

1. Choose **Save**.

1. Repeat this procedure to add other EC2 instances to the same patch group.

**Option 2: To add EC2 instances to a patch group (Amazon EC2 console)**

1. Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/), and then choose **Instances** in the navigation pane. 

1. In the list of instances, choose an instance that you want to configure for patching.

1. In the **Actions** menu, choose **Instance settings**, **Manage tags**.

1. Choose **Add new tag**.

1. For **Key**, enter **Patch Group** or **PatchGroup**. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use `PatchGroup` (without a space).

1. For **Value**, enter a value to serve as the name for the patch group.

1. Choose **Save**.

1. Repeat this procedure to add other instances to the same patch group.

## Task 2: Add managed nodes to a patch group using tags

Follow the steps in this topic to add tags to AWS IoT Greengrass core devices and non-EC2 hybrid-activated managed nodes (node IDs for these machines begin with `mi-`). This task is required only if you are patching non-EC2 instances in a hybrid and multicloud environment.

**Note**  
You can't add tags for non-EC2 managed nodes using the Amazon EC2 console.

**To add non-EC2 managed nodes to a patch group (Systems Manager console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Fleet Manager**.

1. In the **Managed nodes** list, choose the name of the managed node that you want to configure for patching.
**Note**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. Choose the **Tags** tab, then choose **Edit**.

1. In the left column, enter **Patch Group** or **PatchGroup**. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use `PatchGroup` (without a space).

1. In the right column, enter a tag value to serve as the name for the patch group.

1. Choose **Save**.

1. Repeat this procedure to add other managed nodes to the same patch group.

## Task 3: Add a patch group to a patch baseline

To associate a specific patch baseline with your managed nodes, you must add the patch group value to the patch baseline. By registering the patch group with a patch baseline, you can ensure that the correct patches are installed during a patching operation. This task is required whether you are patching EC2 instances, non-EC2 managed nodes, or both.

For more information about patch groups, see [Patch groups](patch-manager-patch-groups.md).

**Note**  
The steps you follow depend on whether you first accessed Patch Manager before or after the [patch policies](patch-manager-policies.md) release on December 22, 2022.

**To add a patch group to a patch baseline (Systems Manager console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. If you're accessing Patch Manager for the first time in the current AWS Region and the Patch Manager start page opens, choose **Start with an overview**.

1. Choose the **Patch baselines** tab, and then in the **Patch baselines** list, choose the name of the patch baseline that you want to configure for your patch group.

   If you first accessed Patch Manager after the patch policies release, you must choose a custom baseline that you have created.

1. If the **Baseline ID** details page includes an **Actions** menu, do the following: 
   + Choose **Actions**, then **Modify patch groups**.
   + Enter the tag *value* you added to your managed nodes in [Task 2: Add managed nodes to a patch group using tags](#sysman-patch-group-tagging-managed), then choose **Add**.

   If the **Baseline ID** details page does *not* include an **Actions** menu, patch groups can't be configured in the console. Instead, you can do either of the following:
   + (Recommended) Set up a patch policy in Quick Setup, a tool in AWS Systems Manager, to map a patch baseline to one or more EC2 instances.

     For more information, see [Using Quick Setup patch policies](https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-policies.html) and [Automate organization-wide patching using a Quick Setup patch policy](https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-patch-manager.html).
   + Use the [https://docs.aws.amazon.com/cli/latest/reference/ssm/register-patch-baseline-for-patch-group.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/register-patch-baseline-for-patch-group.html) command in the AWS Command Line Interface (AWS CLI) to configure a patch group.

# Integrating Patch Manager with AWS Security Hub CSPM

[AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides you with a comprehensive view of your security state in AWS. Security Hub CSPM collects security data from across AWS accounts, AWS services, and supported third-party partner products. With Security Hub CSPM, you can check your environment against security industry standards and best practices. Security Hub CSPM helps you to analyze your security trends and identify the highest priority security issues.

By using the integration between Patch Manager, a tool in AWS Systems Manager, and Security Hub CSPM, you can send findings about noncompliant nodes from Patch Manager to Security Hub CSPM. A finding is the observable record of a security check or security-related detection. Security Hub CSPM can then include those patch-related findings in its analysis of your security posture.

The information in the following topics applies no matter which method or type of configuration you are using for your patching operations:
+ A patch policy configured in Quick Setup
+ A Host Management option configured in Quick Setup
+ A maintenance window to run a patch `Scan` or `Install` task
+ An on-demand **Patch now** operation

**Contents**
+ [How Patch Manager sends findings to Security Hub CSPM](#securityhub-integration-sending-findings)
  + [Types of findings that Patch Manager sends](#securityhub-integration-finding-types)
  + [Latency for sending findings](#securityhub-integration-finding-latency)
  + [Retrying when Security Hub CSPM isn't available](#securityhub-integration-retry-send)
  + [Viewing findings in Security Hub CSPM](#securityhub-integration-view-findings)
+ [Typical finding from Patch Manager](#securityhub-integration-finding-example)
+ [Turning on and configuring the integration](#securityhub-integration-enable)
+ [How to stop sending findings](#securityhub-integration-disable)

## How Patch Manager sends findings to Security Hub CSPM


In Security Hub CSPM, security issues are tracked as findings. Some findings come from issues that are detected by other AWS services or by third-party partners. Security Hub CSPM also has a set of rules that it uses to detect security issues and generate findings.

Patch Manager is one of the Systems Manager tools that sends findings to Security Hub CSPM. After you perform a patching operation by running an SSM document (`AWS-RunPatchBaseline`, `AWS-RunPatchBaselineAssociation`, or `AWS-RunPatchBaselineWithHooks`), the patching information is sent to Inventory, Compliance, or both (tools in AWS Systems Manager). After Inventory or Compliance receives the data, Patch Manager receives a notification. Patch Manager then evaluates the data for accuracy, formatting, and compliance. If all conditions are met, Patch Manager forwards the data to Security Hub CSPM.

Security Hub CSPM provides tools to manage findings from across all of these sources. You can view and filter lists of findings and view details for a finding. For more information, see [Viewing findings](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-viewing.html) in the *AWS Security Hub User Guide*. You can also track the status of an investigation into a finding. For more information, see [Taking action on findings](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-taking-action.html) in the *AWS Security Hub User Guide*.

All findings in Security Hub CSPM use a standard JSON format called the AWS Security Finding Format (ASFF). The ASFF includes details about the source of the issue, the affected resources, and the current status of the finding. For more information, see [AWS Security Finding Format (ASFF)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format.html) in the *AWS Security Hub User Guide*.

### Types of findings that Patch Manager sends


Patch Manager sends the findings to Security Hub CSPM using the [AWS Security Finding Format (ASFF)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format.html). In ASFF, the `Types` field provides the finding type. Findings from Patch Manager have the following value for `Types`:
+ Software and Configuration Checks/Patch Management

Patch Manager sends one finding per noncompliant managed node. The finding is reported with the resource type [AwsEc2Instance](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-attributes.html#asff-resourcedetails-awsec2instance) so that findings can be correlated with other Security Hub CSPM integrations that report `AwsEc2Instance` resource types. Patch Manager forwards a finding to Security Hub CSPM only if the operation found the managed node to be noncompliant. The finding includes the patch summary results.

**Note**  
After reporting a noncompliant node, Patch Manager doesn't send an update to Security Hub CSPM when the node is later made compliant. You can manually resolve findings in Security Hub CSPM after the required patches have been applied to the managed node.

For more information about compliance definitions, see [Patch compliance state values](patch-manager-compliance-states.md). For more information about `PatchSummary`, see [PatchSummary](https://docs.aws.amazon.com/securityhub/1.0/APIReference/API_PatchSummary.html) in the *AWS Security Hub API Reference*.
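The forwarding behavior described above — one finding per managed node, and only for noncompliant nodes — can be sketched as follows. This Python example is illustrative only; the sample data is made up, and real Patch Manager compliance evaluation involves more states than this simple check:

```python
# Hypothetical scan results: node ID -> patch compliance summary.
scan_results = {
    "i-02573cafcfEXAMPLE": {"MissingCount": 2, "FailedCount": 0},
    "i-0b1c2d3e4fEXAMPLE": {"MissingCount": 0, "FailedCount": 0},
    "i-0c9d8e7f6aEXAMPLE": {"MissingCount": 0, "FailedCount": 3},
}

def is_noncompliant(summary):
    """Treat a node as noncompliant if any patches are missing or failed."""
    return summary["MissingCount"] > 0 or summary["FailedCount"] > 0

# One finding per noncompliant node; compliant nodes produce nothing.
findings = [
    {"Resource": {"Type": "AwsEc2Instance", "Id": node_id}, "PatchSummary": summary}
    for node_id, summary in scan_results.items()
    if is_noncompliant(summary)
]

print([f["Resource"]["Id"] for f in findings])
```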

### Latency for sending findings


When Patch Manager creates a new finding, it's usually sent to Security Hub CSPM within a few seconds to 2 hours, depending on the volume of traffic being processed in the AWS Region at that time.

### Retrying when Security Hub CSPM isn't available


If Security Hub CSPM isn't available, Patch Manager retries sending the findings until they're received.

If there is a service outage, an AWS Lambda function runs to put the messages back into the main queue after the service is running again. After the messages are in the main queue, the retry is automatic.

### Viewing findings in Security Hub CSPM


This procedure describes how to view findings in Security Hub CSPM about managed nodes in your fleet that are out of patch compliance.

**To review Security Hub CSPM findings for patch compliance**

1. Sign in to the AWS Management Console and open the AWS Security Hub CSPM console at [https://console.aws.amazon.com/securityhub/](https://console.aws.amazon.com/securityhub/).

1. In the navigation pane, choose **Findings**.

1. Choose the **Add filters** (![\[The Search icon\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/search-icon.png)) box.

1. In the menu, under **Filters**, choose **Product name**.

1. In the dialog box that opens, choose **is** in the first field and then enter **Systems Manager Patch Manager** in the second field.

1. Choose **Apply**.

1. Add any additional filters you want to help narrow down your results.

1. In the list of results, choose the title of a finding you want more information about.

   A pane opens on the right side of the screen with more details about the resource, the issue discovered, and a recommended remediation.
**Important**  
At this time, Security Hub CSPM reports the resource type of all managed nodes as `EC2 Instance`. This includes on-premises servers and virtual machines (VMs) that you have registered for use with Systems Manager.

**Severity classifications**  
The list of findings for **Systems Manager Patch Manager** includes a report of the severity of the finding. **Severity** levels include the following, from lowest to highest:
+ **INFORMATIONAL** – No issue was found.
+ **LOW** – The issue does not require remediation.
+ **MEDIUM** – The issue must be addressed but is not urgent.
+ **HIGH** – The issue must be addressed as a priority.
+ **CRITICAL** – The issue must be remediated immediately to avoid escalation.

Severity is determined by the most severe noncompliant package on an instance. Because you can have multiple patch baselines with multiple severity levels, the highest severity is reported out of all the noncompliant packages. For example, suppose you have two noncompliant packages where the severity of package A is "Critical" and the severity of package B is "Low". "Critical" will be reported as the severity.

Note that the severity field correlates directly with the Patch Manager `Compliance` field, which you assign to individual patches that match a rule in your patch baseline. Because the `Compliance` field is assigned to individual patches, it isn't reflected at the patch summary level.
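The "highest severity wins" rule can be expressed directly, using the severity levels listed above. This Python sketch is illustrative only:

```python
# Severity levels from lowest to highest, as listed above.
SEVERITY_ORDER = ["INFORMATIONAL", "LOW", "MEDIUM", "HIGH", "CRITICAL"]

def reported_severity(noncompliant_package_severities):
    """Return the single severity reported for the instance: the highest
    severity among all of its noncompliant packages."""
    if not noncompliant_package_severities:
        return "INFORMATIONAL"
    return max(noncompliant_package_severities, key=SEVERITY_ORDER.index)

# Package A is Critical, package B is Low -> "CRITICAL" is reported.
print(reported_severity(["CRITICAL", "LOW"]))  # CRITICAL
```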

**Related content**
+ [Findings](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings.html) in the *AWS Security Hub User Guide*
+ [Multi-Account patch compliance with Patch Manager and Security Hub](https://aws.amazon.com/blogs/mt/multi-account-patch-compliance-with-patch-manager-and-security-hub/) in the *AWS Management & Governance Blog*

## Typical finding from Patch Manager


Patch Manager sends findings to Security Hub CSPM using the [AWS Security Finding Format (ASFF)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format.html).

Here is an example of a typical finding from Patch Manager.

```
{
  "SchemaVersion": "2018-10-08",
  "Id": "arn:aws:patchmanager:us-east-2:111122223333:instance/i-02573cafcfEXAMPLE/document/AWS-RunPatchBaseline/run-command/d710f5bd-04e3-47b4-82f6-df4e0EXAMPLE",
  "ProductArn": "arn:aws:securityhub:us-east-1::product/aws/ssm-patch-manager",
  "GeneratorId": "d710f5bd-04e3-47b4-82f6-df4e0EXAMPLE",
  "AwsAccountId": "111122223333",
  "Types": [
    "Software & Configuration Checks/Patch Management/Compliance"
  ],
  "CreatedAt": "2021-11-11T22:05:25Z",
  "UpdatedAt": "2021-11-11T22:05:25Z",
  "Severity": {
    "Label": "INFORMATIONAL",
    "Normalized": 0
  },
  "Title": "Systems Manager Patch Summary - Managed Instance Non-Compliant",
  "Description": "This AWS control checks whether each instance that is managed by AWS Systems Manager is in compliance with the rules of the patch baseline that applies to that instance when a compliance Scan runs.",
  "Remediation": {
    "Recommendation": {
      "Text": "For information about bringing instances into patch compliance, see 'Remediating out-of-compliance instances (Patch Manager)'.",
      "Url": "https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-compliance-remediation.html"
    }
  },
  "SourceUrl": "https://us-east-2.console.aws.amazon.com/systems-manager/fleet-manager/i-02573cafcfEXAMPLE/patch?region=us-east-2",
  "ProductFields": {
    "aws/securityhub/FindingId": "arn:aws:securityhub:us-east-2::product/aws/ssm-patch-manager/arn:aws:patchmanager:us-east-2:111122223333:instance/i-02573cafcfEXAMPLE/document/AWS-RunPatchBaseline/run-command/d710f5bd-04e3-47b4-82f6-df4e0EXAMPLE",
    "aws/securityhub/ProductName": "Systems Manager Patch Manager",
    "aws/securityhub/CompanyName": "AWS"
  },
  "Resources": [
    {
      "Type": "AwsEc2Instance",
      "Id": "i-02573cafcfEXAMPLE",
      "Partition": "aws",
      "Region": "us-east-2"
    }
  ],
  "WorkflowState": "NEW",
  "Workflow": {
    "Status": "NEW"
  },
  "RecordState": "ACTIVE",
  "PatchSummary": {
    "Id": "pb-0c10e65780EXAMPLE",
    "InstalledCount": 45,
    "MissingCount": 2,
    "FailedCount": 0,
    "InstalledOtherCount": 396,
    "InstalledRejectedCount": 0,
    "InstalledPendingReboot": 0,
    "OperationStartTime": "2021-11-11T22:05:06Z",
    "OperationEndTime": "2021-11-11T22:05:25Z",
    "RebootOption": "NoReboot",
    "Operation": "SCAN"
  }
}
```
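If you retrieve findings programmatically (for example, with the Security Hub CSPM `GetFindings` API operation), you can extract the patch summary from the ASFF document. The following Python sketch parses a trimmed-down copy of the example finding above; only the fields it uses are included:

```python
import json

# A trimmed copy of the example finding; in practice this would come from
# a GetFindings response.
finding = json.loads("""
{
  "Severity": {"Label": "INFORMATIONAL", "Normalized": 0},
  "Resources": [{"Type": "AwsEc2Instance", "Id": "i-02573cafcfEXAMPLE"}],
  "PatchSummary": {
    "Id": "pb-0c10e65780EXAMPLE",
    "InstalledCount": 45,
    "MissingCount": 2,
    "FailedCount": 0,
    "Operation": "SCAN"
  }
}
""")

summary = finding["PatchSummary"]
node_id = finding["Resources"][0]["Id"]
outstanding = summary["MissingCount"] + summary["FailedCount"]
print(f"{node_id}: {outstanding} patches missing or failed "
      f"(baseline {summary['Id']}, operation {summary['Operation']})")
```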

## Turning on and configuring the integration


To use the Patch Manager integration with Security Hub CSPM, you must turn on Security Hub CSPM. For information about how to turn on Security Hub CSPM, see [Setting up Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-settingup.html) in the *AWS Security Hub User Guide*.

The following procedure describes how to integrate Patch Manager and Security Hub CSPM when Security Hub CSPM is already active but Patch Manager integration is turned off. You only need to complete this procedure if integration was manually turned off.

**To add Patch Manager to Security Hub CSPM integration**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Settings** tab.

   -or-

   If you are accessing Patch Manager for the first time in the current AWS Region, choose **Start with an overview**, and then choose the **Settings** tab.

1. Under the **Export to Security Hub CSPM** section, to the right of **Patch compliance findings aren't being exported to Security Hub**, choose **Enable**.

## How to stop sending findings


To stop sending findings to Security Hub CSPM, you can use either the Security Hub CSPM console or the API.

For more information, see the following topics in the *AWS Security Hub User Guide*:
+ [Disabling and enabling the flow of findings from an integration (console)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-integrations-managing.html#securityhub-integration-findings-flow-console)
+ [Disabling the flow of findings from an integration (Security Hub CSPM API, AWS CLI)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-integrations-managing.html#securityhub-integration-findings-flow-disable-api)

# Working with Patch Manager resources using the AWS CLI


This section includes examples of AWS Command Line Interface (AWS CLI) commands that you can use to perform configuration tasks for Patch Manager, a tool in AWS Systems Manager.

For an illustration of using the AWS CLI to patch a server environment by using a custom patch baseline, see [Tutorial: Patch a server environment using the AWS CLI](patch-manager-patch-servers-using-the-aws-cli.md).

For more information about using the AWS CLI for AWS Systems Manager tasks, see the [AWS Systems Manager section of the AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/ssm/index.html).

**Topics**
+ [AWS CLI commands for patch baselines](#patch-baseline-cli-commands)
+ [AWS CLI commands for patch groups](#patch-group-cli-commands)
+ [AWS CLI commands for viewing patch summaries and details](#patch-details-cli-commands)
+ [AWS CLI commands for scanning and patching managed nodes](#patch-operations-cli-commands)

## AWS CLI commands for patch baselines


**Topics**
+ [Create a patch baseline](#patch-manager-cli-commands-create-patch-baseline)
+ [Create a patch baseline with custom repositories for different OS versions](#patch-manager-cli-commands-create-patch-baseline-mult-sources)
+ [Update a patch baseline](#patch-manager-cli-commands-update-patch-baseline)
+ [Rename a patch baseline](#patch-manager-cli-commands-rename-patch-baseline)
+ [Delete a patch baseline](#patch-manager-cli-commands-delete-patch-baseline)
+ [List all patch baselines](#patch-manager-cli-commands-describe-patch-baselines)
+ [List all AWS-provided patch baselines](#patch-manager-cli-commands-describe-patch-baselines-aws)
+ [List my patch baselines](#patch-manager-cli-commands-describe-patch-baselines-custom)
+ [Display a patch baseline](#patch-manager-cli-commands-get-patch-baseline)
+ [Get the default patch baseline](#patch-manager-cli-commands-get-default-patch-baseline)
+ [Set a custom patch baseline as the default](#patch-manager-cli-commands-register-default-patch-baseline)
+ [Reset an AWS patch baseline as the default](#patch-manager-cli-commands-register-aws-patch-baseline)
+ [Tag a patch baseline](#patch-manager-cli-commands-add-tags-to-resource)
+ [List the tags for a patch baseline](#patch-manager-cli-commands-list-tags-for-resource)
+ [Remove a tag from a patch baseline](#patch-manager-cli-commands-remove-tags-from-resource)

### Create a patch baseline


The following command creates a patch baseline that approves all critical and important security updates for Windows Server 2012 R2 5 days after they're released. Patches have also been specified for the Approved and Rejected patch lists. In addition, the patch baseline has been tagged to indicate that it's for a production environment.

------
#### [ Linux & macOS ]

```
aws ssm create-patch-baseline \
    --name "Windows-Server-2012R2" \
    --tags "Key=Environment,Value=Production" \
    --description "Windows Server 2012 R2, Important and Critical security updates" \
    --approved-patches "KB2032276,MS10-048" \
    --rejected-patches "KB2124261" \
    --rejected-patches-action "ALLOW_AS_DEPENDENCY" \
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=MSRC_SEVERITY,Values=[Important,Critical]},{Key=CLASSIFICATION,Values=SecurityUpdates},{Key=PRODUCT,Values=WindowsServer2012R2}]},ApproveAfterDays=5}]"
```

------
#### [ Windows Server ]

```
aws ssm create-patch-baseline ^
    --name "Windows-Server-2012R2" ^
    --tags "Key=Environment,Value=Production" ^
    --description "Windows Server 2012 R2, Important and Critical security updates" ^
    --approved-patches "KB2032276,MS10-048" ^
    --rejected-patches "KB2124261" ^
    --rejected-patches-action "ALLOW_AS_DEPENDENCY" ^
    --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=MSRC_SEVERITY,Values=[Important,Critical]},{Key=CLASSIFICATION,Values=SecurityUpdates},{Key=PRODUCT,Values=WindowsServer2012R2}]},ApproveAfterDays=5}]"
```

------

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE"
}
```
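In scripts, you would typically capture the returned `BaselineId` for use in later commands, such as `register-default-patch-baseline`. A minimal Python sketch of parsing the response shown above:

```python
import json

# Response shape returned by `aws ssm create-patch-baseline`.
response = json.loads('{"BaselineId": "pb-0c10e65780EXAMPLE"}')

baseline_id = response["BaselineId"]
assert baseline_id.startswith("pb-"), "unexpected baseline ID format"

# You could now pass baseline_id to a follow-up call, for example:
# aws ssm register-default-patch-baseline --baseline-id <baseline_id>
print(baseline_id)  # pb-0c10e65780EXAMPLE
```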

### Create a patch baseline with custom repositories for different OS versions


Applies to Linux managed nodes only. The following command shows how to specify the patch repository to use for a particular version of the Amazon Linux operating system. This sample uses a source repository allowed by default on Amazon Linux 2017.09, but it could be adapted to a different source repository that you have configured for a managed node.

**Note**  
To better demonstrate this more complex command, we're using the `--cli-input-json` option with additional options stored in an external JSON file.

1. Create a JSON file with a name like `my-patch-repository.json` and add the following content to it.

   ```
   {
       "Description": "My patch repository for Amazon Linux 2",
       "Name": "Amazon-Linux-2",
       "OperatingSystem": "AMAZON_LINUX_2",
       "ApprovalRules": {
           "PatchRules": [
               {
                   "ApproveAfterDays": 7,
                   "EnableNonSecurity": true,
                   "PatchFilterGroup": {
                       "PatchFilters": [
                           {
                               "Key": "SEVERITY",
                               "Values": [
                                   "Important",
                                   "Critical"
                               ]
                           },
                           {
                               "Key": "CLASSIFICATION",
                               "Values": [
                                   "Security",
                                   "Bugfix"
                               ]
                           },
                           {
                               "Key": "PRODUCT",
                               "Values": [
                                   "AmazonLinux2"
                               ]
                           }
                       ]
                   }
               }
           ]
       },
       "Sources": [
           {
               "Name": "My-AL2",
               "Products": [
                   "AmazonLinux2"
               ],
               "Configuration": "[amzn-main]\nname=amzn-main-Base\nmirrorlist=http://repo.$awsregion.$awsdomain/$releasever/main/mirror.list\nmirrorlist_expire=300\nmetadata_expire=300\npriority=10\nfailovermethod=priority\nfastestmirror_enabled=0\ngpgcheck=1\ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-amazon-ga\nenabled=1\nretries=3\ntimeout=5\nreport_instanceid=yes"
           }
       ]
   }
   ```

1. In the directory where you saved the file, run the following command.

   ```
   aws ssm create-patch-baseline --cli-input-json file://my-patch-repository.json
   ```

   The system returns information like the following.

   ```
   {
       "BaselineId": "pb-0c10e65780EXAMPLE"
   }
   ```
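If you maintain baselines for several products, you can generate the input file programmatically rather than editing JSON by hand. The following Python sketch is an illustration only (not part of the AWS CLI); it uses the standard library to write a document suitable for `--cli-input-json`, mirroring the approval rules from the example above. The `Sources` section is omitted for brevity, and the function and file names are our own.

```python
import json

def make_baseline_input(name, product, approve_after_days=7):
    """Build a create-patch-baseline input document for an Amazon Linux 2 product."""
    return {
        "Description": f"My patch repository for {product}",
        "Name": name,
        "OperatingSystem": "AMAZON_LINUX_2",
        "ApprovalRules": {
            "PatchRules": [
                {
                    "ApproveAfterDays": approve_after_days,
                    "EnableNonSecurity": True,
                    "PatchFilterGroup": {
                        "PatchFilters": [
                            {"Key": "SEVERITY", "Values": ["Important", "Critical"]},
                            {"Key": "CLASSIFICATION", "Values": ["Security", "Bugfix"]},
                            {"Key": "PRODUCT", "Values": [product]},
                        ]
                    },
                }
            ]
        },
    }

# Write the document that --cli-input-json file://my-patch-repository.json expects
with open("my-patch-repository.json", "w") as f:
    json.dump(make_baseline_input("Amazon-Linux-2", "AmazonLinux2"), f, indent=4)
```

You would then run the same `aws ssm create-patch-baseline --cli-input-json` command shown in step 2.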

### Update a patch baseline


The following command adds two patches as rejected and one patch as approved to an existing patch baseline.

For information about accepted formats for lists of approved patches and rejected patches, see [Package name formats for approved and rejected patch lists](patch-manager-approved-rejected-package-name-formats.md).

------
#### [ Linux & macOS ]

```
aws ssm update-patch-baseline \
    --baseline-id pb-0c10e65780EXAMPLE \
    --rejected-patches "KB2032276" "MS10-048" \
    --approved-patches "KB2124261"
```

------
#### [ Windows Server ]

```
aws ssm update-patch-baseline ^
    --baseline-id pb-0c10e65780EXAMPLE ^
    --rejected-patches "KB2032276" "MS10-048" ^
    --approved-patches "KB2124261"
```

------

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE",
   "Name":"Windows-Server-2012R2",
   "RejectedPatches":[
      "KB2032276",
      "MS10-048"
   ],
   "GlobalFilters":{
      "PatchFilters":[

      ]
   },
   "ApprovalRules":{
      "PatchRules":[
         {
            "PatchFilterGroup":{
               "PatchFilters":[
                  {
                     "Values":[
                        "Important",
                        "Critical"
                     ],
                     "Key":"MSRC_SEVERITY"
                  },
                  {
                     "Values":[
                        "SecurityUpdates"
                     ],
                     "Key":"CLASSIFICATION"
                  },
                  {
                     "Values":[
                        "WindowsServer2012R2"
                     ],
                     "Key":"PRODUCT"
                  }
               ]
            },
            "ApproveAfterDays":5
         }
      ]
   },
   "ModifiedDate":1481001494.035,
   "CreatedDate":1480997823.81,
   "ApprovedPatches":[
      "KB2124261"
   ],
   "Description":"Windows Server 2012 R2, Important and Critical security updates"
}
```

### Rename a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm update-patch-baseline \
    --baseline-id pb-0c10e65780EXAMPLE \
    --name "Windows-Server-2012-R2-Important-and-Critical-Security-Updates"
```

------
#### [ Windows Server ]

```
aws ssm update-patch-baseline ^
    --baseline-id pb-0c10e65780EXAMPLE ^
    --name "Windows-Server-2012-R2-Important-and-Critical-Security-Updates"
```

------

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE",
   "Name":"Windows-Server-2012-R2-Important-and-Critical-Security-Updates",
   "RejectedPatches":[
      "KB2032276",
      "MS10-048"
   ],
   "GlobalFilters":{
      "PatchFilters":[

      ]
   },
   "ApprovalRules":{
      "PatchRules":[
         {
            "PatchFilterGroup":{
               "PatchFilters":[
                  {
                     "Values":[
                        "Important",
                        "Critical"
                     ],
                     "Key":"MSRC_SEVERITY"
                  },
                  {
                     "Values":[
                        "SecurityUpdates"
                     ],
                     "Key":"CLASSIFICATION"
                  },
                  {
                     "Values":[
                        "WindowsServer2012R2"
                     ],
                     "Key":"PRODUCT"
                  }
               ]
            },
            "ApproveAfterDays":5
         }
      ]
   },
   "ModifiedDate":1481001795.287,
   "CreatedDate":1480997823.81,
   "ApprovedPatches":[
      "KB2124261"
   ],
   "Description":"Windows Server 2012 R2, Important and Critical security updates"
}
```

### Delete a patch baseline


```
aws ssm delete-patch-baseline --baseline-id "pb-0c10e65780EXAMPLE"
```

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE"
}
```

### List all patch baselines


```
aws ssm describe-patch-baselines
```

The system returns information like the following.

```
{
   "BaselineIdentities":[
      {
         "BaselineName":"AWS-DefaultPatchBaseline",
         "DefaultBaseline":true,
         "BaselineDescription":"Default Patch Baseline Provided by AWS.",
         "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
      },
      {
         "BaselineName":"Windows-Server-2012R2",
         "DefaultBaseline":false,
         "BaselineDescription":"Windows Server 2012 R2, Important and Critical security updates",
         "BaselineId":"pb-0c10e65780EXAMPLE"
      }
   ]
}
```

Here is another command that lists all patch baselines in an AWS Region.

------
#### [ Linux & macOS ]

```
aws ssm describe-patch-baselines \
    --region us-east-2 \
    --filters "Key=OWNER,Values=[All]"
```

------
#### [ Windows Server ]

```
aws ssm describe-patch-baselines ^
    --region us-east-2 ^
    --filters "Key=OWNER,Values=[All]"
```

------

The system returns information like the following.

```
{
   "BaselineIdentities":[
      {
         "BaselineName":"AWS-DefaultPatchBaseline",
         "DefaultBaseline":true,
         "BaselineDescription":"Default Patch Baseline Provided by AWS.",
         "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
      },
      {
         "BaselineName":"Windows-Server-2012R2",
         "DefaultBaseline":false,
         "BaselineDescription":"Windows Server 2012 R2, Important and Critical security updates",
         "BaselineId":"pb-0c10e65780EXAMPLE"
      }
   ]
}
```

### List all AWS-provided patch baselines


------
#### [ Linux & macOS ]

```
aws ssm describe-patch-baselines \
    --region us-east-2 \
    --filters "Key=OWNER,Values=[AWS]"
```

------
#### [ Windows Server ]

```
aws ssm describe-patch-baselines ^
    --region us-east-2 ^
    --filters "Key=OWNER,Values=[AWS]"
```

------

The system returns information like the following.

```
{
   "BaselineIdentities":[
      {
         "BaselineName":"AWS-DefaultPatchBaseline",
         "DefaultBaseline":true,
         "BaselineDescription":"Default Patch Baseline Provided by AWS.",
         "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
      }
   ]
}
```

### List my patch baselines


------
#### [ Linux & macOS ]

```
aws ssm describe-patch-baselines \
    --region us-east-2 \
    --filters "Key=OWNER,Values=[Self]"
```

------
#### [ Windows Server ]

```
aws ssm describe-patch-baselines ^
    --region us-east-2 ^
    --filters "Key=OWNER,Values=[Self]"
```

------

The system returns information like the following.

```
{
   "BaselineIdentities":[
      {
         "BaselineName":"Windows-Server-2012R2",
         "DefaultBaseline":false,
         "BaselineDescription":"Windows Server 2012 R2, Important and Critical security updates",
         "BaselineId":"pb-0c10e65780EXAMPLE"
      }
   ]
}
```

### Display a patch baseline


```
aws ssm get-patch-baseline --baseline-id pb-0c10e65780EXAMPLE
```

**Note**  
For custom patch baselines, you can specify either the patch baseline ID or the full Amazon Resource Name (ARN). For an AWS-provided patch baseline, you must specify the full ARN. For example, `arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE`.

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE",
   "Name":"Windows-Server-2012R2",
   "PatchGroups":[
      "Web Servers"
   ],
   "RejectedPatches":[

   ],
   "GlobalFilters":{
      "PatchFilters":[

      ]
   },
   "ApprovalRules":{
      "PatchRules":[
         {
            "PatchFilterGroup":{
               "PatchFilters":[
                  {
                     "Values":[
                        "Important",
                        "Critical"
                     ],
                     "Key":"MSRC_SEVERITY"
                  },
                  {
                     "Values":[
                        "SecurityUpdates"
                     ],
                     "Key":"CLASSIFICATION"
                  },
                  {
                     "Values":[
                        "WindowsServer2012R2"
                     ],
                     "Key":"PRODUCT"
                  }
               ]
            },
            "ApproveAfterDays":5
         }
      ]
   },
   "ModifiedDate":1480997823.81,
   "CreatedDate":1480997823.81,
   "ApprovedPatches":[

   ],
   "Description":"Windows Server 2012 R2, Important and Critical security updates"
}
```

### Get the default patch baseline


```
aws ssm get-default-patch-baseline --region us-east-2
```

The system returns information like the following.

```
{
   "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
}
```

### Set a custom patch baseline as the default


------
#### [ Linux & macOS ]

```
aws ssm register-default-patch-baseline \
    --region us-east-2 \
    --baseline-id "pb-0c10e65780EXAMPLE"
```

------
#### [ Windows Server ]

```
aws ssm register-default-patch-baseline ^
    --region us-east-2 ^
    --baseline-id "pb-0c10e65780EXAMPLE"
```

------

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE"
}
```

### Reset an AWS patch baseline as the default


------
#### [ Linux & macOS ]

```
aws ssm register-default-patch-baseline \
    --region us-east-2 \
    --baseline-id "arn:aws:ssm:us-east-2:123456789012:patchbaseline/pb-0c10e65780EXAMPLE"
```

------
#### [ Windows Server ]

```
aws ssm register-default-patch-baseline ^
    --region us-east-2 ^
    --baseline-id "arn:aws:ssm:us-east-2:123456789012:patchbaseline/pb-0c10e65780EXAMPLE"
```

------

The system returns information like the following.

```
{
   "BaselineId":"pb-0c10e65780EXAMPLE"
}
```

### Tag a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm add-tags-to-resource \
    --resource-type "PatchBaseline" \
    --resource-id "pb-0c10e65780EXAMPLE" \
    --tags "Key=Project,Value=Testing"
```

------
#### [ Windows Server ]

```
aws ssm add-tags-to-resource ^
    --resource-type "PatchBaseline" ^
    --resource-id "pb-0c10e65780EXAMPLE" ^
    --tags "Key=Project,Value=Testing"
```

------

### List the tags for a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm list-tags-for-resource \
    --resource-type "PatchBaseline" \
    --resource-id "pb-0c10e65780EXAMPLE"
```

------
#### [ Windows Server ]

```
aws ssm list-tags-for-resource ^
    --resource-type "PatchBaseline" ^
    --resource-id "pb-0c10e65780EXAMPLE"
```

------

### Remove a tag from a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm remove-tags-from-resource \
    --resource-type "PatchBaseline" \
    --resource-id "pb-0c10e65780EXAMPLE" \
    --tag-keys "Project"
```

------
#### [ Windows Server ]

```
aws ssm remove-tags-from-resource ^
    --resource-type "PatchBaseline" ^
    --resource-id "pb-0c10e65780EXAMPLE" ^
    --tag-keys "Project"
```

------

## AWS CLI commands for patch groups


**Topics**
+ [Create a patch group](#patch-manager-cli-commands-create-patch-group)
+ [Register a patch group "web servers" with a patch baseline](#patch-manager-cli-commands-register-patch-baseline-for-patch-group-web-servers)
+ [Register a patch group "Backend" with the AWS-provided patch baseline](#patch-manager-cli-commands-register-patch-baseline-for-patch-group-backend)
+ [Display patch group registrations](#patch-manager-cli-commands-describe-patch-groups)
+ [Deregister a patch group from a patch baseline](#patch-manager-cli-commands-deregister-patch-baseline-for-patch-group)

### Create a patch group


**Note**  
Patch groups are not used in patching operations that are based on *patch policies*. For information about working with patch policies, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).

To help you organize your patching efforts, we recommend that you add managed nodes to patch groups by using tags. Patch groups require use of the tag key `Patch Group` or `PatchGroup`. If you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS), you must use `PatchGroup` (without a space). You can specify any tag value, but the tag key must be `Patch Group` or `PatchGroup`. For more information about patch groups, see [Patch groups](patch-manager-patch-groups.md).

After you group your managed nodes using tags, you add the patch group value to a patch baseline. By registering the patch group with a patch baseline, you ensure that the correct patches are installed during the patching operation.

#### Task 1: Add EC2 instances to a patch group using tags


**Note**  
When using the Amazon Elastic Compute Cloud (Amazon EC2) console and AWS CLI, it's possible to apply `Key = Patch Group` or `Key = PatchGroup` tags to instances that aren't yet configured for use with Systems Manager. If an EC2 instance you expect to see in Patch Manager isn't listed after applying the `Patch Group` or `PatchGroup` tag, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

Run the following command to add the `PatchGroup` tag to an EC2 instance.

```
aws ec2 create-tags --resources "i-1234567890abcdef0" --tags "Key=PatchGroup,Value=GroupValue"
```

#### Task 2: Add managed nodes to a patch group using tags


Run the following command to add the `PatchGroup` tag to a managed node.

------
#### [ Linux & macOS ]

```
aws ssm add-tags-to-resource \
    --resource-type "ManagedInstance" \
    --resource-id "mi-0123456789abcdefg" \
    --tags "Key=PatchGroup,Value=GroupValue"
```

------
#### [ Windows Server ]

```
aws ssm add-tags-to-resource ^
    --resource-type "ManagedInstance" ^
    --resource-id "mi-0123456789abcdefg" ^
    --tags "Key=PatchGroup,Value=GroupValue"
```

------

#### Task 3: Add a patch group to a patch baseline


Run the following command to associate a `PatchGroup` tag value with the specified patch baseline.

------
#### [ Linux & macOS ]

```
aws ssm register-patch-baseline-for-patch-group \
    --baseline-id "pb-0c10e65780EXAMPLE" \
    --patch-group "Development"
```

------
#### [ Windows Server ]

```
aws ssm register-patch-baseline-for-patch-group ^
    --baseline-id "pb-0c10e65780EXAMPLE" ^
    --patch-group "Development"
```

------

The system returns information like the following.

```
{
  "PatchGroup": "Development",
  "BaselineId": "pb-0c10e65780EXAMPLE"
}
```

### Register a patch group "web servers" with a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm register-patch-baseline-for-patch-group \
    --baseline-id "pb-0c10e65780EXAMPLE" \
    --patch-group "Web Servers"
```

------
#### [ Windows Server ]

```
aws ssm register-patch-baseline-for-patch-group ^
    --baseline-id "pb-0c10e65780EXAMPLE" ^
    --patch-group "Web Servers"
```

------

The system returns information like the following.

```
{
   "PatchGroup":"Web Servers",
   "BaselineId":"pb-0c10e65780EXAMPLE"
}
```

### Register a patch group "Backend" with the AWS-provided patch baseline


------
#### [ Linux & macOS ]

```
aws ssm register-patch-baseline-for-patch-group \
    --region us-east-2 \
    --baseline-id "arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE" \
    --patch-group "Backend"
```

------
#### [ Windows Server ]

```
aws ssm register-patch-baseline-for-patch-group ^
    --region us-east-2 ^
    --baseline-id "arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE" ^
    --patch-group "Backend"
```

------

The system returns information like the following.

```
{
   "PatchGroup":"Backend",
   "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
}
```

### Display patch group registrations


```
aws ssm describe-patch-groups --region us-east-2
```

The system returns information like the following.

```
{
   "PatchGroupPatchBaselineMappings":[
      {
         "PatchGroup":"Backend",
         "BaselineIdentity":{
            "BaselineName":"AWS-DefaultPatchBaseline",
            "DefaultBaseline":false,
            "BaselineDescription":"Default Patch Baseline Provided by AWS.",
            "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
         }
      },
      {
         "PatchGroup":"Web Servers",
         "BaselineIdentity":{
            "BaselineName":"Windows-Server-2012R2",
            "DefaultBaseline":true,
            "BaselineDescription":"Windows Server 2012 R2, Important and Critical updates",
            "BaselineId":"pb-0c10e65780EXAMPLE"
         }
      }
   ]
}
```
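When many patch groups are registered, it can help to reduce the `describe-patch-groups` response to a simple group-to-baseline map. The following Python sketch is an illustration only; the field names match the response shown above, and the sample data is abbreviated from it.

```python
def map_groups_to_baselines(describe_patch_groups_output):
    """Reduce describe-patch-groups output to {patch group: baseline ID}."""
    mappings = describe_patch_groups_output["PatchGroupPatchBaselineMappings"]
    return {m["PatchGroup"]: m["BaselineIdentity"]["BaselineId"] for m in mappings}

# Sample data shaped like the response above (non-essential fields omitted)
output = {
    "PatchGroupPatchBaselineMappings": [
        {"PatchGroup": "Backend",
         "BaselineIdentity": {"BaselineId": "arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"}},
        {"PatchGroup": "Web Servers",
         "BaselineIdentity": {"BaselineId": "pb-0c10e65780EXAMPLE"}},
    ]
}
print(map_groups_to_baselines(output)["Web Servers"])  # pb-0c10e65780EXAMPLE
```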

### Deregister a patch group from a patch baseline


------
#### [ Linux & macOS ]

```
aws ssm deregister-patch-baseline-for-patch-group \
    --region us-east-2 \
    --patch-group "Production" \
    --baseline-id "arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
```

------
#### [ Windows Server ]

```
aws ssm deregister-patch-baseline-for-patch-group ^
    --region us-east-2 ^
    --patch-group "Production" ^
    --baseline-id "arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
```

------

The system returns information like the following.

```
{
   "PatchGroup":"Production",
   "BaselineId":"arn:aws:ssm:us-east-2:111122223333:patchbaseline/pb-0c10e65780EXAMPLE"
}
```

## AWS CLI commands for viewing patch summaries and details


**Topics**
+ [Get all patches defined by a patch baseline](#patch-manager-cli-commands-describe-effective-patches-for-patch-baseline)
+ [Get all patches for AmazonLinux2018.03 with a classification of `SECURITY` and a severity of `Critical`](#patch-manager-cli-commands-describe-available-patches-linux)
+ [Get all patches for Windows Server 2012 with an MSRC severity of `Critical`](#patch-manager-cli-commands-describe-available-patches)
+ [Get all available patches](#patch-manager-cli-commands-describe-available-patches)
+ [Get patch summary states per managed node](#patch-manager-cli-commands-describe-instance-patch-states)
+ [Get patch compliance details for a managed node](#patch-manager-cli-commands-describe-instance-patches)
+ [View patching compliance results (AWS CLI)](#viewing-patch-compliance-results-cli)

### Get all patches defined by a patch baseline


**Note**  
This command is supported for Windows Server patch baselines only.

------
#### [ Linux & macOS ]

```
aws ssm describe-effective-patches-for-patch-baseline \
    --region us-east-2 \
    --baseline-id "pb-0c10e65780EXAMPLE"
```

------
#### [ Windows Server ]

```
aws ssm describe-effective-patches-for-patch-baseline ^
    --region us-east-2 ^
    --baseline-id "pb-0c10e65780EXAMPLE"
```

------

The system returns information like the following.

```
{
   "NextToken":"--token string truncated--",
   "EffectivePatches":[
      {
         "PatchStatus":{
            "ApprovalDate":1384711200.0,
            "DeploymentStatus":"APPROVED"
         },
         "Patch":{
            "ContentUrl":"https://support.microsoft.com/en-us/kb/2876331",
            "ProductFamily":"Windows",
            "Product":"WindowsServer2012R2",
            "Vendor":"Microsoft",
            "Description":"A security issue has been identified in a Microsoft software 
               product that could affect your system. You can help protect your system 
               by installing this update from Microsoft. For a complete listing of the 
               issues that are included in this update, see the associated Microsoft 
               Knowledge Base article. After you install this update, you may have to 
               restart your system.",
            "Classification":"SecurityUpdates",
            "Title":"Security Update for Windows Server 2012 R2 Preview (KB2876331)",
            "ReleaseDate":1384279200.0,
            "MsrcClassification":"Critical",
            "Language":"All",
            "KbNumber":"KB2876331",
            "MsrcNumber":"MS13-089",
            "Id":"e74ccc76-85f0-4881-a738-59e9fc9a336d"
         }
      },
      {
         "PatchStatus":{
            "ApprovalDate":1428858000.0,
            "DeploymentStatus":"APPROVED"
         },
         "Patch":{
            "ContentUrl":"https://support.microsoft.com/en-us/kb/2919355",
            "ProductFamily":"Windows",
            "Product":"WindowsServer2012R2",
            "Vendor":"Microsoft",
            "Description":"Windows Server 2012 R2 Update is a cumulative 
               set of security updates, critical updates and updates. You 
               must install Windows Server 2012 R2 Update to ensure that 
               your computer can continue to receive future Windows Updates, 
               including security updates. For a complete listing of the 
               issues that are included in this update, see the associated 
               Microsoft Knowledge Base article for more information. After 
               you install this item, you may have to restart your computer.",
            "Classification":"SecurityUpdates",
            "Title":"Windows Server 2012 R2 Update (KB2919355)",
            "ReleaseDate":1428426000.0,
            "MsrcClassification":"Critical",
            "Language":"All",
            "KbNumber":"KB2919355",
            "MsrcNumber":"MS14-018",
            "Id":"8452bac0-bf53-4fbd-915d-499de08c338b"
         }
      }
     ---output truncated---
```

### Get all patches for AmazonLinux2018.03 with a classification of `SECURITY` and a severity of `Critical`


------
#### [ Linux & macOS ]

```
aws ssm describe-available-patches \
    --region us-east-2 \
    --filters Key=PRODUCT,Values=AmazonLinux2018.03 Key=SEVERITY,Values=Critical
```

------
#### [ Windows Server ]

```
aws ssm describe-available-patches ^
    --region us-east-2 ^
    --filters Key=PRODUCT,Values=AmazonLinux2018.03 Key=SEVERITY,Values=Critical
```

------

The system returns information like the following.

```
{
    "Patches": [
        {
            "AdvisoryIds": ["ALAS-2011-1"],
            "BugzillaIds": [ "1234567" ],
            "Classification": "SECURITY",
            "CVEIds": [ "CVE-2011-3192"],
            "Name": "zziplib",
            "Epoch": "0",
            "Version": "2.71",
            "Release": "1.3.amzn1",
            "Arch": "i686",
            "Product": "AmazonLinux2018.03",
            "ReleaseDate": 1590519815,
            "Severity": "CRITICAL"
        }
    ]
}     
---output truncated---
```

### Get all patches for Windows Server 2012 with an MSRC severity of `Critical`


------
#### [ Linux & macOS ]

```
aws ssm describe-available-patches \
    --region us-east-2 \
    --filters Key=PRODUCT,Values=WindowsServer2012 Key=MSRC_SEVERITY,Values=Critical
```

------
#### [ Windows Server ]

```
aws ssm describe-available-patches ^
    --region us-east-2 ^
    --filters Key=PRODUCT,Values=WindowsServer2012 Key=MSRC_SEVERITY,Values=Critical
```

------

The system returns information like the following.

```
{
   "Patches":[
      {
         "ContentUrl":"https://support.microsoft.com/en-us/kb/2727528",
         "ProductFamily":"Windows",
         "Product":"WindowsServer2012",
         "Vendor":"Microsoft",
         "Description":"A security issue has been identified that could 
           allow an unauthenticated remote attacker to compromise your 
           system and gain control over it. You can help protect your 
           system by installing this update from Microsoft. After you 
           install this update, you may have to restart your system.",
         "Classification":"SecurityUpdates",
         "Title":"Security Update for Windows Server 2012 (KB2727528)",
         "ReleaseDate":1352829600.0,
         "MsrcClassification":"Critical",
         "Language":"All",
         "KbNumber":"KB2727528",
         "MsrcNumber":"MS12-072",
         "Id":"1eb507be-2040-4eeb-803d-abc55700b715"
      },
      {
         "ContentUrl":"https://support.microsoft.com/en-us/kb/2729462",
         "ProductFamily":"Windows",
         "Product":"WindowsServer2012",
         "Vendor":"Microsoft",
         "Description":"A security issue has been identified that could 
           allow an unauthenticated remote attacker to compromise your 
           system and gain control over it. You can help protect your 
           system by installing this update from Microsoft. After you 
           install this update, you may have to restart your system.",
         "Classification":"SecurityUpdates",
         "Title":"Security Update for Microsoft .NET Framework 3.5 on 
           Windows 8 and Windows Server 2012 for x64-based Systems (KB2729462)",
         "ReleaseDate":1352829600.0,
         "MsrcClassification":"Critical",
         "Language":"All",
         "KbNumber":"KB2729462",
         "MsrcNumber":"MS12-074",
         "Id":"af873760-c97c-4088-ab7e-5219e120eab4"
      }
     
---output truncated---
```

### Get all available patches


```
aws ssm describe-available-patches --region us-east-2
```

The system returns information like the following.

```
{
   "NextToken":"--token string truncated--",
   "Patches":[
      {
            "Classification": "SecurityUpdates",
            "ContentUrl": "https://support.microsoft.com/en-us/kb/4074588",
            "Description": "A security issue has been identified in a Microsoft software 
            product that could affect your system. You can help protect your system by 
            installing this update from Microsoft. For a complete listing of the issues 
            that are included in this update, see the associated Microsoft Knowledge Base 
            article. After you install this update, you may have to restart your system.",
            "Id": "11adea10-0701-430e-954f-9471595ae246",
            "KbNumber": "KB4074588",
            "Language": "All",
            "MsrcNumber": "",
            "MsrcSeverity": "Critical",
            "Product": "WindowsServer2016",
            "ProductFamily": "Windows",
            "ReleaseDate": 1518548400,
            "Title": "2018-02 Cumulative Update for Windows Server 2016 (1709) for x64-based 
            Systems (KB4074588)",
            "Vendor": "Microsoft"
        },
        {
            "Classification": "SecurityUpdates",
            "ContentUrl": "https://support.microsoft.com/en-us/kb/4074590",
            "Description": "A security issue has been identified in a Microsoft software 
            product that could affect your system. You can help protect your system by 
            installing this update from Microsoft. For a complete listing of the issues that are included in this update, see the associated Microsoft Knowledge Base article. After you install this update, you may have to restart your system.",
            "Id": "f5f58231-ac5d-4640-ab1b-9dc8d857c265",
            "KbNumber": "KB4074590",
            "Language": "All",
            "MsrcNumber": "",
            "MsrcSeverity": "Critical",
            "Product": "WindowsServer2016",
            "ProductFamily": "Windows",
            "ReleaseDate": 1518544805,
            "Title": "2018-02 Cumulative Update for Windows Server 2016 for x64-based 
            Systems (KB4074590)",
            "Vendor": "Microsoft"
        }
      ---output truncated---
```

### Get patch summary states per managed node


The per-node summary gives you the number of patches in each of the following states for each managed node: "NotApplicable", "Missing", "Failed", "InstalledOther", and "Installed". 

------
#### [ Linux & macOS ]

```
aws ssm describe-instance-patch-states \
    --instance-ids i-08ee91c0b17045407 i-09a618aec652973a9
```

------
#### [ Windows Server ]

```
aws ssm describe-instance-patch-states ^
    --instance-ids i-08ee91c0b17045407 i-09a618aec652973a9
```

------

The system returns information like the following.

```
{
   "InstancePatchStates":[
      {
            "InstanceId": "i-08ee91c0b17045407",
            "PatchGroup": "",
            "BaselineId": "pb-0c10e65780EXAMPLE",
            "SnapshotId": "6d03d6c5-f79d-41d0-8d0e-00a9aEXAMPLE",
            "InstalledCount": 50,
            "InstalledOtherCount": 353,
            "InstalledPendingRebootCount": 0,
            "InstalledRejectedCount": 0,
            "MissingCount": 0,
            "FailedCount": 0,
            "UnreportedNotApplicableCount": -1,
            "NotApplicableCount": 671,
            "OperationStartTime": "2020-01-24T12:37:56-08:00",
            "OperationEndTime": "2020-01-24T12:37:59-08:00",
            "Operation": "Scan",
            "RebootOption": "NoReboot"
        },
        {
            "InstanceId": "i-09a618aec652973a9",
            "PatchGroup": "",
            "BaselineId": "pb-0c10e65780EXAMPLE",
            "SnapshotId": "c7e0441b-1eae-411b-8aa7-973e6EXAMPLE",
            "InstalledCount": 36,
            "InstalledOtherCount": 396,
            "InstalledPendingRebootCount": 0,
            "InstalledRejectedCount": 0,
            "MissingCount": 3,
            "FailedCount": 0,
            "UnreportedNotApplicableCount": -1,
            "NotApplicableCount": 420,
            "OperationStartTime": "2020-01-24T12:37:34-08:00",
            "OperationEndTime": "2020-01-24T12:37:37-08:00",
            "Operation": "Scan",
            "RebootOption": "NoReboot"
        }
     ---output truncated---
```
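You can post-process this output to flag nodes that need attention. The following Python sketch is an illustration only; the count field names match the response shown above, and the sample data is abbreviated from it.

```python
def noncompliant_nodes(patch_states_output):
    """Return IDs of nodes reporting missing or failed patches."""
    return [
        state["InstanceId"]
        for state in patch_states_output["InstancePatchStates"]
        if state["MissingCount"] > 0 or state["FailedCount"] > 0
    ]

# Sample data shaped like the response above (non-essential fields omitted)
states = {
    "InstancePatchStates": [
        {"InstanceId": "i-08ee91c0b17045407", "MissingCount": 0, "FailedCount": 0},
        {"InstanceId": "i-09a618aec652973a9", "MissingCount": 3, "FailedCount": 0},
    ]
}
print(noncompliant_nodes(states))  # ['i-09a618aec652973a9']
```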

### Get patch compliance details for a managed node


```
aws ssm describe-instance-patches --instance-id i-08ee91c0b17045407
```

The system returns information like the following.

```
{
   "NextToken":"--token string truncated--",
   "Patches":[
      {
            "Title": "bind-libs.x86_64:32:9.8.2-0.68.rc1.60.amzn1",
            "KBId": "bind-libs.x86_64",
            "Classification": "Security",
            "Severity": "Important",
            "State": "Installed",
            "InstalledTime": "2019-08-26T11:05:24-07:00"
        },
        {
            "Title": "bind-utils.x86_64:32:9.8.2-0.68.rc1.60.amzn1",
            "KBId": "bind-utils.x86_64",
            "Classification": "Security",
            "Severity": "Important",
            "State": "Installed",
            "InstalledTime": "2019-08-26T11:05:32-07:00"
        },
        {
            "Title": "dhclient.x86_64:12:4.1.1-53.P1.28.amzn1",
            "KBId": "dhclient.x86_64",
            "Classification": "Security",
            "Severity": "Important",
            "State": "Installed",
            "InstalledTime": "2019-08-26T11:05:31-07:00"
        },
    ---output truncated---
```

### View patching compliance results (AWS CLI)


**To view patch compliance results for a single managed node**

Run the following command in the AWS Command Line Interface (AWS CLI) to view patch compliance results for a single managed node.

```
aws ssm describe-instance-patch-states --instance-id instance-id
```

Replace *instance-id* with the ID of the managed node for which you want to view results, in the format `i-02573cafcfEXAMPLE` or `mi-0282f7c436EXAMPLE`.

The system returns information like the following.

```
{
    "InstancePatchStates": [
        {
            "InstanceId": "i-02573cafcfEXAMPLE",
            "PatchGroup": "mypatchgroup",
            "BaselineId": "pb-0c10e65780EXAMPLE",            
            "SnapshotId": "a3f5ff34-9bc4-4d2c-a665-4d1c1EXAMPLE",
            "CriticalNonCompliantCount": 2,
            "SecurityNonCompliantCount": 2,
            "OtherNonCompliantCount": 1,
            "InstalledCount": 123,
            "InstalledOtherCount": 334,
            "InstalledPendingRebootCount": 0,
            "InstalledRejectedCount": 0,
            "MissingCount": 1,
            "FailedCount": 2,
            "UnreportedNotApplicableCount": 11,
            "NotApplicableCount": 2063,
            "OperationStartTime": "2021-05-03T11:00:56-07:00",
            "OperationEndTime": "2021-05-03T11:01:09-07:00",
            "Operation": "Scan",
            "LastNoRebootInstallOperationTime": "2020-06-14T12:17:41-07:00",
            "RebootOption": "RebootIfNeeded"
        }
    ]
}
```

**To view a patch count summary for all EC2 instances in a Region**

The `describe-instance-patch-states` command supports retrieving results for just one managed node at a time. However, by using a custom script with the `describe-instance-patch-states` command, you can generate a more granular report.

For example, if the [jq filter tool](https://stedolan.github.io/jq/download/) is installed on your local machine, you can run the following command to identify which of your EC2 instances in a particular AWS Region have patches in the `InstalledPendingReboot` state.

```
aws ssm describe-instance-patch-states \
    --instance-ids $(aws ec2 describe-instances --region region | jq '.Reservations[].Instances[] | .InstanceId' | tr '\n|"' ' ') \
    --output text --query 'InstancePatchStates[*].{Instance:InstanceId, InstalledPendingRebootCount:InstalledPendingRebootCount}'
```

*region* represents the identifier for an AWS Region supported by AWS Systems Manager, such as `us-east-2` for the US East (Ohio) Region. For a list of supported *region* values, see the **Region** column in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

For example:

```
aws ssm describe-instance-patch-states \
    --instance-ids $(aws ec2 describe-instances --region us-east-2 | jq '.Reservations[].Instances[] | .InstanceId' | tr '\n|"' ' ') \
    --output text --query 'InstancePatchStates[*].{Instance:InstanceId, InstalledPendingRebootCount:InstalledPendingRebootCount}'
```

The system returns information like the following.

```
1       i-02573cafcfEXAMPLE
0       i-0471e04240EXAMPLE
3       i-07782c72faEXAMPLE
6       i-083b678d37EXAMPLE
0       i-03a530a2d4EXAMPLE
1       i-01f68df0d0EXAMPLE
0       i-0a39c0f214EXAMPLE
7       i-0903a5101eEXAMPLE
7       i-03823c2fedEXAMPLE
```
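
If jq isn't available, you can apply the same filter in a few lines of Python to the JSON that `describe-instance-patch-states` returns. The following sketch uses inline sample data for illustration; in practice, you would parse the AWS CLI output instead.

```python
import json

# Sample data in the shape returned by:
#   aws ssm describe-instance-patch-states --instance-ids ...
sample = json.loads("""
{
    "InstancePatchStates": [
        {"InstanceId": "i-02573cafcfEXAMPLE", "InstalledPendingRebootCount": 1},
        {"InstanceId": "i-0471e04240EXAMPLE", "InstalledPendingRebootCount": 0},
        {"InstanceId": "i-07782c72faEXAMPLE", "InstalledPendingRebootCount": 3}
    ]
}
""")

# Keep only the instances that still need a reboot to finish patching.
pending = {
    state["InstanceId"]: state["InstalledPendingRebootCount"]
    for state in sample["InstancePatchStates"]
    if state["InstalledPendingRebootCount"] > 0
}

for instance_id, count in pending.items():
    print(f"{count}\t{instance_id}")
```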

In addition to `InstalledPendingRebootCount`, the count types you can search for include the following:
+ `CriticalNonCompliantCount`
+ `SecurityNonCompliantCount`
+ `OtherNonCompliantCount`
+ `UnreportedNotApplicableCount`
+ `InstalledPendingRebootCount`
+ `FailedCount`
+ `NotApplicableCount`
+ `InstalledRejectedCount`
+ `InstalledOtherCount`
+ `MissingCount`
+ `InstalledCount`
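
These count types can also be combined into a simple fleet-level compliance check. The sketch below uses hypothetical sample data and treats an instance as noncompliant if any of `MissingCount`, `FailedCount`, or `InstalledRejectedCount` is nonzero; adjust the set of counts to match your own compliance policy.

```python
# Count types that indicate noncompliance under this example policy.
NONCOMPLIANT_COUNTS = ("MissingCount", "FailedCount", "InstalledRejectedCount")

def is_noncompliant(state: dict) -> bool:
    """Return True if any of the selected count types is nonzero."""
    return any(state.get(count, 0) > 0 for count in NONCOMPLIANT_COUNTS)

# Hypothetical patch states, shaped like describe-instance-patch-states output.
states = [
    {"InstanceId": "i-02573cafcfEXAMPLE", "MissingCount": 1, "FailedCount": 2},
    {"InstanceId": "i-0471e04240EXAMPLE", "MissingCount": 0, "FailedCount": 0},
]

noncompliant = [s["InstanceId"] for s in states if is_noncompliant(s)]
print(f"{len(noncompliant)} of {len(states)} instances noncompliant: {noncompliant}")
```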

## AWS CLI commands for scanning and patching managed nodes


After running the following commands to scan for patch compliance or install patches, you can use commands in the [AWS CLI commands for viewing patch summaries and details](#patch-details-cli-commands) section to view information about patch status and compliance.

**Topics**
+ [Scan managed nodes for patch compliance (AWS CLI)](#patch-operations-scan)
+ [Install patches on managed nodes (AWS CLI)](#patch-operations-install-cli)

### Scan managed nodes for patch compliance (AWS CLI)


**To scan specific managed nodes for patch compliance**

Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name 'AWS-RunPatchBaseline' \
    --targets Key=InstanceIds,Values='i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE' \
    --parameters 'Operation=Scan' \
    --timeout-seconds 600
```

------
#### [ Windows Server ]

```
aws ssm send-command ^
    --document-name "AWS-RunPatchBaseline" ^
    --targets Key=InstanceIds,Values="i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE" ^
    --parameters "Operation=Scan" ^
    --timeout-seconds 600
```

------

The system returns information like the following.

```
{
    "Command": {
        "CommandId": "a04ed06c-8545-40f4-87c2-a0babEXAMPLE",
        "DocumentName": "AWS-RunPatchBaseline",
        "DocumentVersion": "$DEFAULT",
        "Comment": "",
        "ExpiresAfter": 1621974475.267,
        "Parameters": {
            "Operation": [
                "Scan"
            ]
        },
        "InstanceIds": [],
        "Targets": [
            {
                "Key": "InstanceIds",
                "Values": [
                    "i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE"
                ]
            }
        ],
        "RequestedDateTime": 1621952275.267,
        "Status": "Pending",
        "StatusDetails": "Pending",
        "TimeoutSeconds": 600,

    ---output truncated---

    }
}
```
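
Because `send-command` returns immediately with a `Pending` status, scripts typically poll until the command reaches a terminal state, for example by running `aws ssm list-command-invocations --command-id command-id --details` on an interval. The following sketch shows the shape of such a polling loop; it uses an injected status function so it can run without AWS credentials, but in practice the function would call the AWS CLI or an SDK.

```python
import itertools
import time

# Terminal states for a Run Command command.
TERMINAL_STATES = {"Success", "Cancelled", "TimedOut", "Failed"}

def wait_for_command(get_status, poll_seconds=0, max_polls=60):
    """Poll get_status() until it returns a terminal state or we give up."""
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("command did not reach a terminal state")

# Stand-in for a real status lookup: Pending -> InProgress -> Success.
# A real implementation would use a poll_seconds value such as 10.
fake_statuses = itertools.chain(["Pending", "InProgress"], itertools.repeat("Success"))
result = wait_for_command(lambda: next(fake_statuses))
print(result)  # Success
```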

**To scan managed nodes for patch compliance by patch group tag**

Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name 'AWS-RunPatchBaseline' \
    --targets Key='tag:PatchGroup',Values='Web servers' \
    --parameters 'Operation=Scan' \
    --timeout-seconds 600
```

------
#### [ Windows Server ]

```
aws ssm send-command ^
    --document-name "AWS-RunPatchBaseline" ^
    --targets Key="tag:PatchGroup",Values="Web servers" ^
    --parameters "Operation=Scan" ^
    --timeout-seconds 600
```

------

The system returns information like the following.

```
{
    "Command": {
        "CommandId": "87a448ee-8adc-44e0-b4d1-6b429EXAMPLE",
        "DocumentName": "AWS-RunPatchBaseline",
        "DocumentVersion": "$DEFAULT",
        "Comment": "",
        "ExpiresAfter": 1621974983.128,
        "Parameters": {
            "Operation": [
                "Scan"
            ]
        },
        "InstanceIds": [],
        "Targets": [
            {
                "Key": "tag:PatchGroup",
                "Values": [
                    "Web servers"
                ]
            }
        ],
        "RequestedDateTime": 1621952783.128,
        "Status": "Pending",
        "StatusDetails": "Pending",
        "TimeoutSeconds": 600,

    ---output truncated---

    }
}
```

### Install patches on managed nodes (AWS CLI)


**To install patches on specific managed nodes**

Run the following command. 

**Note**  
The target managed nodes reboot as needed to complete patch installation. For more information, see [SSM Command document for patching: `AWS-RunPatchBaseline`](patch-manager-aws-runpatchbaseline.md).

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name 'AWS-RunPatchBaseline' \
    --targets Key=InstanceIds,Values='i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE' \
    --parameters 'Operation=Install' \
    --timeout-seconds 600
```

------
#### [ Windows Server ]

```
aws ssm send-command ^
    --document-name "AWS-RunPatchBaseline" ^
    --targets Key=InstanceIds,Values="i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE" ^
    --parameters "Operation=Install" ^
    --timeout-seconds 600
```

------

The system returns information like the following.

```
{
    "Command": {
        "CommandId": "5f403234-38c4-439f-a570-93623EXAMPLE",
        "DocumentName": "AWS-RunPatchBaseline",
        "DocumentVersion": "$DEFAULT",
        "Comment": "",
        "ExpiresAfter": 1621975301.791,
        "Parameters": {
            "Operation": [
                "Install"
            ]
        },
        "InstanceIds": [],
        "Targets": [
            {
                "Key": "InstanceIds",
                "Values": [
                    "i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE"
                ]
            }
        ],
        "RequestedDateTime": 1621953101.791,
        "Status": "Pending",
        "StatusDetails": "Pending",
        "TimeoutSeconds": 600,

    ---output truncated---

    }
}
```

**To install patches on managed nodes in a specific patch group**

Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name 'AWS-RunPatchBaseline' \
    --targets Key='tag:PatchGroup',Values='Web servers' \
    --parameters 'Operation=Install' \
    --timeout-seconds 600
```

------
#### [ Windows Server ]

```
aws ssm send-command ^
    --document-name "AWS-RunPatchBaseline" ^
    --targets Key="tag:PatchGroup",Values="Web servers" ^
    --parameters "Operation=Install" ^
    --timeout-seconds 600
```

------

The system returns information like the following.

```
{
    "Command": {
        "CommandId": "fa44b086-7d36-4ad5-ac8d-627ecEXAMPLE",
        "DocumentName": "AWS-RunPatchBaseline",
        "DocumentVersion": "$DEFAULT",
        "Comment": "",
        "ExpiresAfter": 1621975407.865,
        "Parameters": {
            "Operation": [
                "Install"
            ]
        },
        "InstanceIds": [],
        "Targets": [
            {
                "Key": "tag:PatchGroup",
                "Values": [
                    "Web servers"
                ]
            }
        ],
        "RequestedDateTime": 1621953207.865,
        "Status": "Pending",
        "StatusDetails": "Pending",
        "TimeoutSeconds": 600,

    ---output truncated---

    }
}
```

# AWS Systems Manager Patch Manager tutorials

The tutorials in this section demonstrate how to use Patch Manager, a tool in AWS Systems Manager, for several patching scenarios.

**Topics**
+ [Tutorial: Patching a server in an IPv6 only environment](patch-manager-server-patching-iPv6-tutorial.md)
+ [Tutorial: Create a patch baseline for installing Windows Service Packs using the console](patch-manager-windows-service-pack-patch-baseline-tutorial.md)
+ [Tutorial: Update application dependencies, patch a managed node, and perform an application-specific health check using the console](aws-runpatchbaselinewithhooks-tutorial.md)
+ [Tutorial: Patch a server environment using the AWS CLI](patch-manager-patch-servers-using-the-aws-cli.md)

# Tutorial: Patching a server in an IPv6 only environment


Patch Manager supports patching nodes in IPv6-only environments. By updating the SSM Agent configuration, you can configure patching operations to make calls only to IPv6 service endpoints.

**To patch a server in an IPv6 only environment**

1. Ensure that SSM Agent version 3.3270.0 or later is installed on the managed node.

1. On the managed node, navigate to the SSM Agent configuration file. You can find the `amazon-ssm-agent.json` file in the following directories:
   + Linux: `/etc/amazon/ssm/`
   + macOS: `/opt/aws/ssm/`
   + Windows Server: `C:\Program Files\Amazon\SSM`

   If `amazon-ssm-agent.json` doesn't exist yet, copy the contents of `amazon-ssm-agent.json.template` under the same directory to `amazon-ssm-agent.json`.

1. Update the following entry to set the correct Region and set `UseDualStackEndpoint` to `true`:

   ```
   {
    --------
       "Agent": {
           "Region": "region",
           "UseDualStackEndpoint": true
       },
   --------
   }
   ```

1. Restart the SSM Agent service using the appropriate command for your operating system:
   + Linux: `sudo systemctl restart amazon-ssm-agent`
   + Ubuntu Server using Snap: `sudo snap restart amazon-ssm-agent`
   + macOS: `sudo launchctl stop com.amazon.aws.ssm` followed by `sudo launchctl start com.amazon.aws.ssm`
   + Windows Server: `Stop-Service AmazonSSMAgent` followed by `Start-Service AmazonSSMAgent`

   For the full list of commands per operating system, see [Checking SSM Agent status and starting the agent](ssm-agent-status-and-restart.md).

1. Run any patching operation to verify that patching succeeds in your IPv6-only environment. Ensure that the nodes being patched have connectivity to the patch source, and check the Run Command output from the patching operation for warnings about inaccessible repositories. For DNF-based operating systems, unavailable repositories can be skipped during patching if the `skip_if_unavailable` option is set to `True` in `/etc/dnf/dnf.conf`. DNF-based operating systems include Amazon Linux 2023, Red Hat Enterprise Linux 8 and later, Oracle Linux 8 and later, Rocky Linux, AlmaLinux, and CentOS 8 and later. On Amazon Linux 2023, the `skip_if_unavailable` option is set to `True` by default.
**Note**  
 When using the Install Override List or Baseline Override features, ensure that the provided URL is reachable from the node. If the SSM Agent config option `UseDualStackEndpoint` is set to `true`, then a dualstack S3 client is used when an S3 URL is provided.
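
One way to sanity-check the agent configuration before restarting the service is a short script that parses `amazon-ssm-agent.json` and verifies the two values from step 3. This sketch reads an inline sample for illustration rather than the real file path.

```python
import json

def check_dualstack(config: dict, expected_region: str) -> list:
    """Return a list of problems found in the Agent section, empty if OK."""
    problems = []
    agent = config.get("Agent", {})
    if agent.get("Region") != expected_region:
        problems.append(f"Region is {agent.get('Region')!r}, expected {expected_region!r}")
    if agent.get("UseDualStackEndpoint") is not True:
        problems.append("UseDualStackEndpoint is not true")
    return problems

# In practice, read the real file instead, for example:
#   with open("/etc/amazon/ssm/amazon-ssm-agent.json") as f:
#       config = json.load(f)
config = json.loads('{"Agent": {"Region": "us-east-2", "UseDualStackEndpoint": true}}')
print(check_dualstack(config, "us-east-2"))  # [] means the config looks correct
```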

# Tutorial: Create a patch baseline for installing Windows Service Packs using the console


When you create a custom patch baseline, you can specify that all, some, or only one type of supported patch is installed.

In patch baselines for Windows, you can select `ServicePacks` as the only **Classification** option in order to limit patching updates to Service Packs only. Service Packs can be installed automatically by Patch Manager, a tool in AWS Systems Manager, provided that the update is available in Windows Update or Windows Server Update Services (WSUS).

You can configure a patch baseline to control whether Service Packs for all Windows versions are installed, or just those for specific versions, such as Windows 7 or Windows Server 2016. 

Use the following procedure to create a custom patch baseline to be used exclusively for installing all Service Packs on your Windows managed nodes. 

**To create a patch baseline for installing Windows Service Packs (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab, and then choose **Create patch baseline**. 

1. For **Name**, enter a name for your new patch baseline, for example, `MyWindowsServicePackPatchBaseline`.

1. (Optional) For **Description**, enter a description for this patch baseline.

1. For **Operating system**, choose `Windows`.

1. If you want to begin using this patch baseline as the default for Windows as soon as you create it, select **Set this patch baseline as the default patch baseline for Windows Server instances**.
**Note**  
This option is available only if you first accessed Patch Manager before the [patch policies](patch-manager-policies.md) release on December 22, 2022.  
For information about setting an existing patch baseline as the default, see [Setting an existing patch baseline as the default](patch-manager-default-patch-baseline.md).

1. In the **Approval rules for operating systems** section, use the fields to create one or more auto-approval rules.
   + **Products**: The operating system versions that the approval rule applies to, such as `WindowsServer2012`. You can choose one, more than one, or all supported versions of Windows. The default selection is `All`.
   + **Classification**: Choose `ServicePacks`. 
   + **Severity**: The severity value of patches the rule is to apply to. To ensure that all Service Packs are included by the rule, choose `All`. 
   + **Auto-approval**: The method for selecting patches for automatic approval.
     + **Approve patches after a specified number of days**: The number of days for Patch Manager to wait after a patch is released or updated before a patch is automatically approved. You can enter any integer from zero (0) to 360. For most scenarios, we recommend waiting no more than 100 days.
     + **Approve patches released up to a specific date**: The patch release date for which Patch Manager automatically applies all patches released or updated on or before that date. For example, if you specify July 7, 2023, no patches released or last updated on or after July 8, 2023, are installed automatically.
   + (Optional) **Compliance reporting**: The severity level you want to assign to Service Packs approved by the baseline, such as `High`.
**Note**  
If you specify a compliance reporting level and the patch state of any approved Service Pack is reported as `Missing`, then the patch baseline's overall reported compliance severity is the severity level you specified.
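
The two auto-approval methods amount to a small date calculation. The following sketch, using hypothetical dates, shows how a patch released on a given day is treated under a 7-day **Approve patches after a specified number of days** rule versus an **Approve patches released up to a specific date** cutoff.

```python
from datetime import date, timedelta

def approved_after_days(release_date: date, approve_after_days: int, today: date) -> bool:
    """Approve once approve_after_days have elapsed since release."""
    return today >= release_date + timedelta(days=approve_after_days)

def approved_until_date(release_date: date, cutoff: date) -> bool:
    """Approve patches released or updated on or before the cutoff date."""
    return release_date <= cutoff

release = date(2023, 7, 1)

# With a 7-day rule, the patch is approved starting July 8, 2023.
print(approved_after_days(release, 7, today=date(2023, 7, 7)))  # False
print(approved_after_days(release, 7, today=date(2023, 7, 8)))  # True

# With a cutoff of July 7, 2023, a patch released July 1 is approved,
# but one released July 8 is not.
print(approved_until_date(release, cutoff=date(2023, 7, 7)))            # True
print(approved_until_date(date(2023, 7, 8), cutoff=date(2023, 7, 7)))  # False
```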

1. (Optional) For **Manage tags**, apply one or more tag key name/value pairs to the patch baseline.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For this patch baseline dedicated to updating Service Packs, you could specify key-value pairs such as the following:
   + `Key=OS,Value=Windows`
   + `Key=Classification,Value=ServicePacks`

1. Choose **Create patch baseline**.

# Tutorial: Update application dependencies, patch a managed node, and perform an application-specific health check using the console


In many cases, a managed node must be rebooted after it has been patched with the latest software update. However, rebooting a node in production without safeguards in place can cause several problems, such as invoking alarms, recording incorrect metric data, and interrupting data synchronizations.

This tutorial demonstrates how to avoid problems like these by using the AWS Systems Manager document (SSM document) `AWS-RunPatchBaselineWithHooks` to achieve a complex, multi-step patching operation that accomplishes the following:

1. Prevent new connections to the application

1. Install operating system updates

1. Update the package dependencies of the application

1. Restart the system

1. Perform an application-specific health check

For this example, we have set up our infrastructure this way:
+ The virtual machines targeted are registered as managed nodes with Systems Manager.
+ `Iptables` is used as a local firewall.
+ The application hosted on the managed nodes is running on port 443.
+ The application hosted on the managed nodes is a `nodeJS` application.
+ The application hosted on the managed nodes is managed by the pm2 process manager.
+ The application already has a specified health check endpoint.
+ The application’s health check endpoint requires no end user authentication. The endpoint allows for a health check that meets the organization’s requirements in establishing availability. (In your environments, it might be enough to simply ascertain that the `nodeJS` application is running and able to listen for requests. In other cases, you might want to also verify that a connection to the caching layer or database layer has already been established.)

The examples in this tutorial are for demonstration purposes only and not meant to be implemented as-is into production environments. Also, keep in mind that the lifecycle hooks feature of Patch Manager, a tool in Systems Manager, with the `AWS-RunPatchBaselineWithHooks` document can support numerous other scenarios. Here are several examples.
+ Stop a metrics reporting agent before patching and restarting it after the managed node reboots.
+ Detach the managed node from a CRM or PCS cluster before patching and reattach after the node reboots.
+ Update third-party software (for example, Java, Tomcat, Adobe applications, and so on) on Windows Server machines after operating system (OS) updates are applied, but before the managed node reboots.

**To update application dependencies, patch a managed node, and perform an application-specific health check**

1. Create an SSM document for your preinstallation script with the following contents and name it `NodeJSAppPrePatch`. Replace *your\_application* with the name of your application.

   This script immediately blocks new incoming requests and provides five seconds for already active ones to complete before beginning the patching operation. For the `sleep` option, specify a number of seconds greater than it usually takes for incoming requests to complete.

   ```
   # exit on error
   set -e
   # set up rule to block incoming traffic
   iptables -I INPUT -j DROP -p tcp --syn --destination-port 443 || exit 1
   # wait for current connections to end. Set timeout appropriate to your application's latency
   sleep 5 
   # Stop your application
   pm2 stop your_application
   ```

   For information about creating SSM documents, see [Creating SSM document content](documents-creating-content.md).

1. Create another SSM document with the following content for your postinstall script to update your application dependencies and name it `NodeJSAppPostPatch`. Replace */your/application/path* with the path to your application.

   ```
   cd /your/application/path
   npm update 
   # you can use npm-check-updates if you want to upgrade major versions
   ```

1. Create another SSM document with the following content for your `onExit` script to bring your application back up and perform a health check. Name this SSM document `NodeJSAppOnExitPatch`. Replace *your\_application* with the name of your application.

   ```
   # exit on error
   set -e
   # restart nodeJs application
   pm2 start your_application
   # sleep while your application starts and to allow for a crash
   sleep 10
   # check with pm2 to see if your application is running
   pm2 pid your_application
   # re-enable incoming connections
   iptables -D INPUT -j DROP -p tcp --syn --destination-port 443
   # perform health check
   /usr/bin/curl -m 10 -vk -A "" https://localhost:443/health-check || exit 1
   ```

1. Create an association in State Manager, a tool in AWS Systems Manager, to issue the operation by performing the following steps:

   1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

   1. In the navigation pane, choose **State Manager**, and then choose **Create association**.

   1. For **Name**, provide a name to help identify the purpose of the association.

   1. In the **Document** list, choose `AWS-RunPatchBaselineWithHooks`.

   1. For **Operation**, choose **Install**.

   1. (Optional) For **Snapshot Id**, provide a GUID that you generate to help speed up the operation and ensure consistency. The GUID value can be as simple as `00000000-0000-0000-0000-111122223333`.

   1. For **Pre Install Hook Doc Name**, enter `NodeJSAppPrePatch`. 

   1. For **Post Install Hook Doc Name**, enter `NodeJSAppPostPatch`. 

   1. For **On Exit Hook Doc Name**, enter `NodeJSAppOnExitPatch`.

1. For **Targets**, identify your managed nodes by specifying tags, choosing nodes manually, choosing a resource group, or choosing all managed nodes.

1. For **Specify schedule**, specify how often to run the association. For managed node patching, once per week is a common cadence.

1. In the **Rate control** section, choose options to control how the association runs on multiple managed nodes. Ensure that only a portion of managed nodes are updated at a time. Otherwise, all or most of your fleet could be taken offline at once. For more information about using rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. Choose **Create Association**.

# Tutorial: Patch a server environment using the AWS CLI


The following procedure describes how to patch a server environment by using a custom patch baseline, patch groups, and a maintenance window.

**Before you begin**
+ Install or update the SSM Agent on your managed nodes. To patch Linux managed nodes, your nodes must be running SSM Agent version 2.0.834.0 or later. For more information, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample).
+ Configure roles and permissions for Maintenance Windows, a tool in AWS Systems Manager. For more information, see [Setting up Maintenance Windows](setting-up-maintenance-windows.md).
+ Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

  For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

**To configure Patch Manager and patch managed nodes (command line)**

1. Run the following command to create a patch baseline for Windows named `Production-Baseline`. This patch baseline approves patches for a production environment 7 days after they're released or last updated. In addition, the patch baseline is tagged to indicate that it's for a production environment.
**Note**  
The `OperatingSystem` parameter and `PatchFilters` vary depending on the operating system of the target managed nodes the patch baseline applies to. For more information, see [OperatingSystem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreatePatchBaseline.html#systemsmanager-CreatePatchBaseline-request-OperatingSystem) and [PatchFilter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PatchFilter.html).

------
#### [ Linux & macOS ]

   ```
   aws ssm create-patch-baseline \
       --name "Production-Baseline" \
       --operating-system "WINDOWS" \
       --tags "Key=Environment,Value=Production" \
       --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=MSRC_SEVERITY,Values=[Critical,Important]},{Key=CLASSIFICATION,Values=[SecurityUpdates,Updates,ServicePacks,UpdateRollups,CriticalUpdates]}]},ApproveAfterDays=7}]" \
       --description "Baseline containing all updates approved for production systems"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm create-patch-baseline ^
       --name "Production-Baseline" ^
       --operating-system "WINDOWS" ^
       --tags "Key=Environment,Value=Production" ^
       --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=MSRC_SEVERITY,Values=[Critical,Important]},{Key=CLASSIFICATION,Values=[SecurityUpdates,Updates,ServicePacks,UpdateRollups,CriticalUpdates]}]},ApproveAfterDays=7}]" ^
       --description "Baseline containing all updates approved for production systems"
   ```

------

   The system returns information like the following.

   ```
   {
      "BaselineId":"pb-0c10e65780EXAMPLE"
   }
   ```

1. Run the following commands to register the "Production-Baseline" patch baseline for two patch groups. The groups are named "Database Servers" and "Front-End Servers".

------
#### [ Linux & macOS ]

   ```
   aws ssm register-patch-baseline-for-patch-group \
       --baseline-id pb-0c10e65780EXAMPLE \
       --patch-group "Database Servers"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-patch-baseline-for-patch-group ^
       --baseline-id pb-0c10e65780EXAMPLE ^
       --patch-group "Database Servers"
   ```

------

   The system returns information like the following.

   ```
   {
      "PatchGroup":"Database Servers",
      "BaselineId":"pb-0c10e65780EXAMPLE"
   }
   ```

------
#### [ Linux & macOS ]

   ```
   aws ssm register-patch-baseline-for-patch-group \
       --baseline-id pb-0c10e65780EXAMPLE \
       --patch-group "Front-End Servers"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-patch-baseline-for-patch-group ^
       --baseline-id pb-0c10e65780EXAMPLE ^
       --patch-group "Front-End Servers"
   ```

------

   The system returns information like the following.

   ```
   {
      "PatchGroup":"Front-End Servers",
      "BaselineId":"pb-0c10e65780EXAMPLE"
   }
   ```

1. Run the following commands to create two maintenance windows for the production servers. The first window runs every Tuesday at 10 PM. The second window runs every Saturday at 10 PM. In addition, the maintenance windows are tagged to indicate that they're for a production environment.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-maintenance-window \
       --name "Production-Tuesdays" \
       --tags "Key=Environment,Value=Production" \
       --schedule "cron(0 0 22 ? * TUE *)" \
       --duration 1 \
       --cutoff 0 \
       --no-allow-unassociated-targets
   ```

------
#### [ Windows Server ]

   ```
   aws ssm create-maintenance-window ^
       --name "Production-Tuesdays" ^
       --tags "Key=Environment,Value=Production" ^
       --schedule "cron(0 0 22 ? * TUE *)" ^
       --duration 1 ^
       --cutoff 0 ^
       --no-allow-unassociated-targets
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowId":"mw-0c50858d01EXAMPLE"
   }
   ```

------
#### [ Linux & macOS ]

   ```
   aws ssm create-maintenance-window \
       --name "Production-Saturdays" \
       --tags "Key=Environment,Value=Production" \
       --schedule "cron(0 0 22 ? * SAT *)" \
       --duration 2 \
       --cutoff 0 \
       --no-allow-unassociated-targets
   ```

------
#### [ Windows Server ]

   ```
   aws ssm create-maintenance-window ^
       --name "Production-Saturdays" ^
       --tags "Key=Environment,Value=Production" ^
       --schedule "cron(0 0 22 ? * SAT *)" ^
       --duration 2 ^
       --cutoff 0 ^
       --no-allow-unassociated-targets
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowId":"mw-9a8b7c6d5eEXAMPLE"
   }
   ```

1. Run the following commands to register the `Database` and `Front-End` servers patch groups with their respective maintenance windows.

------
#### [ Linux & macOS ]

   ```
   aws ssm register-target-with-maintenance-window \
       --window-id mw-0c50858d01EXAMPLE \
       --targets "Key=tag:PatchGroup,Values=Database Servers" \
       --owner-information "Database Servers" \
       --resource-type "INSTANCE"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-target-with-maintenance-window ^
       --window-id mw-0c50858d01EXAMPLE ^
       --targets "Key=tag:PatchGroup,Values=Database Servers" ^
       --owner-information "Database Servers" ^
       --resource-type "INSTANCE"
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowTargetId":"e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"
   }
   ```

------
#### [ Linux & macOS ]

   ```
   aws ssm register-target-with-maintenance-window \
   --window-id mw-9a8b7c6d5eEXAMPLE \
   --targets "Key=tag:PatchGroup,Values=Front-End Servers" \
   --owner-information "Front-End Servers" \
   --resource-type "INSTANCE"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-target-with-maintenance-window ^
       --window-id mw-9a8b7c6d5eEXAMPLE ^
       --targets "Key=tag:PatchGroup,Values=Front-End Servers" ^
       --owner-information "Front-End Servers" ^
       --resource-type "INSTANCE"
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowTargetId":"faa01c41-1d57-496c-ba77-ff9caEXAMPLE"
   }
   ```

1. Run the following commands to register a patch task that installs missing updates on the `Database` and `Front-End` servers during their respective maintenance windows.

------
#### [ Linux & macOS ]

   ```
   aws ssm register-task-with-maintenance-window \
       --window-id mw-0c50858d01EXAMPLE \
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
       --task-arn "AWS-RunPatchBaseline" \
       --service-role-arn "arn:aws:iam::123456789012:role/MW-Role" \
       --task-type "RUN_COMMAND" \
       --max-concurrency 2 \
       --max-errors 1 \
       --priority 1 \
       --task-invocation-parameters "RunCommand={Parameters={Operation=Install}}"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-task-with-maintenance-window ^
       --window-id mw-0c50858d01EXAMPLE ^
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" ^
       --task-arn "AWS-RunPatchBaseline" ^
       --service-role-arn "arn:aws:iam::123456789012:role/MW-Role" ^
       --task-type "RUN_COMMAND" ^
       --max-concurrency 2 ^
       --max-errors 1 ^
       --priority 1 ^
       --task-invocation-parameters "RunCommand={Parameters={Operation=Install}}"
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowTaskId":"4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE"
   }
   ```

------
#### [ Linux & macOS ]

   ```
   aws ssm register-task-with-maintenance-window \
       --window-id mw-9a8b7c6d5eEXAMPLE \
       --targets "Key=WindowTargetIds,Values=faa01c41-1d57-496c-ba77-ff9caEXAMPLE" \
       --task-arn "AWS-RunPatchBaseline" \
       --service-role-arn "arn:aws:iam::123456789012:role/MW-Role" \
       --task-type "RUN_COMMAND" \
       --max-concurrency 2 \
       --max-errors 1 \
       --priority 1 \
       --task-invocation-parameters "RunCommand={Parameters={Operation=Install}}"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm register-task-with-maintenance-window ^
       --window-id mw-9a8b7c6d5eEXAMPLE ^
       --targets "Key=WindowTargetIds,Values=faa01c41-1d57-496c-ba77-ff9caEXAMPLE" ^
       --task-arn "AWS-RunPatchBaseline" ^
       --service-role-arn "arn:aws:iam::123456789012:role/MW-Role" ^
       --task-type "RUN_COMMAND" ^
       --max-concurrency 2 ^
       --max-errors 1 ^
       --priority 1 ^
       --task-invocation-parameters "RunCommand={Parameters={Operation=Install}}"
   ```

------

   The system returns information like the following.

   ```
   {
      "WindowTaskId":"8a5c4629-31b0-4edd-8aea-33698EXAMPLE"
   }
   ```

1. Run the following command to get the high-level patch compliance summary for a patch group. The high-level patch compliance summary includes the number of managed nodes with patches in the respective patch states.
**Note**  
It's expected to see zeroes for the number of managed nodes in the summary until the patch task runs during the first maintenance window.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-patch-group-state \
       --patch-group "Database Servers"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm describe-patch-group-state ^
       --patch-group "Database Servers"
   ```

------

   The system returns information like the following.

   ```
   {
      "Instances": number,
      "InstancesWithFailedPatches": number,
      "InstancesWithInstalledOtherPatches": number,
      "InstancesWithInstalledPatches": number,
      "InstancesWithInstalledPendingRebootPatches": number,
      "InstancesWithInstalledRejectedPatches": number,
      "InstancesWithMissingPatches": number,
      "InstancesWithNotApplicablePatches": number,
      "InstancesWithUnreportedNotApplicablePatches": number
   }
   ```
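
   The summary response is plain JSON, so you can save it to a file and inspect individual counts with a short script. The following sketch parses a sample response saved locally; the file name and counts are illustrative, not output from a real patch group:

   ```
   # Save a sample of the describe-patch-group-state response (values are
   # illustrative), then print the count of nodes with missing patches.
   printf '{"Instances": 4, "InstancesWithMissingPatches": 1}\n' > /tmp/patch-group-state.json
   python3 -c "import json; print(json.load(open('/tmp/patch-group-state.json'))['InstancesWithMissingPatches'])"
   ```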

1. Run the following command to get patch summary states per managed node for a patch group. The per-node summary includes the number of patches in the respective patch states for each managed node in the patch group.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-instance-patch-states-for-patch-group \
       --patch-group "Database Servers"
   ```

------
#### [ Windows Server ]

   ```
   aws ssm describe-instance-patch-states-for-patch-group ^
       --patch-group "Database Servers"
   ```

------

   The system returns information like the following.

   ```
   {
      "InstancePatchStates": [ 
         { 
            "BaselineId": "string",
            "FailedCount": number,
            "InstalledCount": number,
            "InstalledOtherCount": number,
            "InstalledPendingRebootCount": number,
            "InstalledRejectedCount": number,
            "InstallOverrideList": "string",
            "InstanceId": "string",
            "LastNoRebootInstallOperationTime": number,
            "MissingCount": number,
            "NotApplicableCount": number,
            "Operation": "string",
            "OperationEndTime": number,
            "OperationStartTime": number,
            "OwnerInformation": "string",
            "PatchGroup": "string",
            "RebootOption": "string",
            "SnapshotId": "string",
            "UnreportedNotApplicableCount": number
         }
      ]
   }
   ```

For examples of other AWS CLI commands you can use for your Patch Manager configuration tasks, see [Working with Patch Manager resources using the AWS CLI](patch-manager-cli-commands.md).

# Troubleshooting Patch Manager


Use the following information to help you troubleshoot problems with Patch Manager, a tool in AWS Systems Manager.

**Topics**
+ [Issue: "Invoke-PatchBaselineOperation : Access Denied" error or "Unable to download file from S3" error for `baseline_overrides.json`](#patch-manager-troubleshooting-patch-policy-baseline-overrides)
+ [Issue: Patching fails without an apparent cause or error message](#race-condition-conflict)
+ [Issue: Unexpected patch compliance results](#patch-manager-troubleshooting-compliance)
+ [Errors when running `AWS-RunPatchBaseline` on Linux](#patch-manager-troubleshooting-linux)
+ [Errors when running `AWS-RunPatchBaseline` on Windows Server](#patch-manager-troubleshooting-windows)
+ [Errors when running `AWS-RunPatchBaseline` on macOS](#patch-manager-troubleshooting-macos)
+ [Using AWS Support Automation runbooks](#patch-manager-troubleshooting-using-support-runbooks)
+ [Contacting AWS Support](#patch-manager-troubleshooting-contact-support)

## Issue: "Invoke-PatchBaselineOperation : Access Denied" error or "Unable to download file from S3" error for `baseline_overrides.json`


**Problem**: When the patching operations specified by your patch policy run, you receive an error similar to the following example. 

------
#### [ Example error on Windows Server ]

```
----------ERROR-------
Invoke-PatchBaselineOperation : Access Denied
At C:\ProgramData\Amazon\SSM\InstanceData\i-02573cafcfEXAMPLE\document\orchestr
ation\792dd5bd-2ad3-4f1e-931d-abEXAMPLE\PatchWindows\_script.ps1:219 char:13
+ $response = Invoke-PatchBaselineOperation -Operation Install -Snapsho ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (Amazon.Patch.Ba...UpdateOpera
tion:InstallWindowsUpdateOperation) [Invoke-PatchBaselineOperation], Amazo
nS3Exception
+ FullyQualifiedErrorId : PatchBaselineOperations,Amazon.Patch.Baseline.Op
erations.PowerShellCmdlets.InvokePatchBaselineOperation
failed to run commands: exit status 0xffffffff
```

------
#### [ Example error on Linux ]

```
[INFO]: Downloading Baseline Override from s3://aws-quicksetup-patchpolicy-123456789012-abcde/baseline_overrides.json
[ERROR]: Unable to download file from S3: s3://aws-quicksetup-patchpolicy-123456789012-abcde/baseline_overrides.json.
[ERROR]: Error loading entrance module.
```

------

**Cause**: You created a patch policy in Quick Setup, and some of your managed nodes already had an instance profile attached (for EC2 instances) or a service role attached (for non-EC2 machines). 

However, as shown in the following image, you didn't select the **Add required IAM policies to existing instance profiles attached to your instances** check box.

![\[\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/QS-instance-profile-option.png)


When you create a patch policy, an Amazon S3 bucket is also created to store the policy's configuration `baseline_overrides.json` file. If you don't select the **Add required IAM policies to existing instance profiles attached to your instances** check box when creating the policy, the IAM policies and resource tags that are needed to access `baseline_overrides.json` in the S3 bucket are not automatically added to your existing IAM instance profiles and service roles.

**Solution 1**: Delete the existing patch policy configuration, then create a replacement, making sure to select the **Add required IAM policies to existing instance profiles attached to your instances** check box. This selection applies the IAM policies created by this Quick Setup configuration to nodes that already have an instance profile or service role attached. (By default, Quick Setup adds the required policies to instances and nodes that do *not* already have instance profiles or service roles.) For more information, see [Automate organization-wide patching using a Quick Setup patch policy](https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-patch-manager.html). 

**Solution 2**: Manually add the required permissions and tags to each IAM instance profile and IAM service role that you use with Quick Setup. For instructions, see [Permissions for the patch policy S3 bucket](quick-setup-patch-manager.md#patch-policy-s3-bucket-permissions).

## Issue: Patching fails without an apparent cause or error message


**Problem**: A patching operation fails without returning an error message.

**Possible cause**: When patching managed nodes, the document execution may be interrupted and marked as failed even though patches were successfully installed. This can occur if the system initiates an unexpected reboot during the patching operation (for example, to apply updates to firmware or features like SecureBoot). The SSM Agent cannot persist and resume the document execution state across external reboots, resulting in the execution being reported as failed. This can occur with the `AWS-RunPatchBaseline`, `AWS-RunPatchBaselineAssociation`, `AWS-RunPatchBaselineWithHooks`, and `AWS-InstallWindowsUpdates` documents.

**Solution**: To verify patch installation status after a failed execution, run a `Scan` patching operation, then check the patch compliance data in Patch Manager to assess the current compliance state.

If you determine that external reboots weren't the cause of the failure in this scenario, we recommend contacting [AWS Support](#patch-manager-troubleshooting-contact-support).

## Issue: Unexpected patch compliance results


**Problem**: When reviewing the patching compliance details generated after a `Scan` operation, the results include information that doesn't reflect the rules set up in your patch baseline. For example, an exception you added to the **Rejected patches** list in a patch baseline is listed as `Missing`. Or patches classified as `Important` are listed as missing even though your patch baseline specifies `Critical` patches only.

**Cause**: Patch Manager currently supports multiple methods of running `Scan` operations:
+ A patch policy configured in Quick Setup
+ A Host Management option configured in Quick Setup
+ A maintenance window to run a patch `Scan` or `Install` task
+ An on-demand **Patch now** operation

When a `Scan` operation runs, it overwrites the compliance details from the most recent scan. If you have more than one method set up to run a `Scan` operation, and those methods use different patch baselines with different rules, they produce differing patch compliance results.

**Solution**: To avoid unexpected patch compliance results, we recommend using only one method at a time for running the Patch Manager `Scan` operation. For more information, see [Identifying the execution that created patch compliance data](patch-manager-compliance-data-overwrites.md).

## Errors when running `AWS-RunPatchBaseline` on Linux


**Topics**
+ [Issue: 'No such file or directory' error](#patch-manager-troubleshooting-linux-1)
+ [Issue: 'another process has acquired yum lock' error](#patch-manager-troubleshooting-linux-2)
+ [Issue: 'Permission denied / failed to run commands' error](#patch-manager-troubleshooting-linux-3)
+ [Issue: 'Unable to download payload' error](#patch-manager-troubleshooting-linux-4)
+ [Issue: 'unsupported package manager and python version combination' error](#patch-manager-troubleshooting-linux-5)
+ [Issue: Patch Manager isn't applying rules specified to exclude certain packages](#patch-manager-troubleshooting-linux-6)
+ [Issue: Patching fails and Patch Manager reports that the Server Name Indication extension to TLS is not available](#patch-manager-troubleshooting-linux-7)
+ [Issue: Patch Manager reports 'No more mirrors to try'](#patch-manager-troubleshooting-linux-8)
+ [Issue: Patching fails with 'Error code returned from curl is 23'](#patch-manager-troubleshooting-linux-9)
+ [Issue: Patching fails with ‘Error unpacking rpm package…’ message](#error-unpacking-rpm)
+ [Issue: Patching fails with 'Encounter service side error when uploading the inventory'](#inventory-upload-error)
+ [Issue: Patching fails with ‘Errors were encountered while downloading packages’ message](#errors-while-downloading)
+ [Issue: Patching fails with an out of memory (OOM) error](#patch-manager-troubleshooting-linux-oom)
+ [Issue: Patching fails with a message that 'The following signatures couldn't be verified because the public key is not available'](#public-key-unavailable)
+ [Issue: Patching fails with a 'NoMoreMirrorsRepoError' message](#no-more-mirrors-repo-error)
+ [Issue: Patching fails with an 'Unable to download payload' message](#payload-download-error)
+ [Issue: Patching fails with a message 'install errors: dpkg: error: dpkg frontend is locked by another process'](#dpkg-frontend-locked)
+ [Issue: Patching on Ubuntu Server fails with a 'dpkg was interrupted' error](#dpkg-interrupted)
+ [Issue: The package manager utility can't resolve a package dependency](#unresolved-dependency)
+ [Issue: Zypper package lock dependency failures on SLES managed nodes](#patch-manager-troubleshooting-linux-zypper-locks)
+ [Issue: Cannot acquire lock. Another patching operation is in progress.](#patch-manager-troubleshooting-linux-concurrent-lock)

### Issue: 'No such file or directory' error


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with one of the following errors.

```
IOError: [Errno 2] No such file or directory: 'patch-baseline-operations-X.XX.tar.gz'
```

```
Unable to extract tar file: /var/log/amazon/ssm/patch-baseline-operations/patch-baseline-operations-1.75.tar.gz.failed to run commands: exit status 155
```

```
Unable to load and extract the content of payload, abort.failed to run commands: exit status 152
```

**Cause 1**: Two commands to run `AWS-RunPatchBaseline` were running at the same time on the same managed node. This creates a race condition in which the temporary file `patch-baseline-operations*` isn't created or accessed properly.

**Cause 2**: Insufficient storage space remains under the `/var` directory. 

**Solution 1**: Ensure that no maintenance window has two or more Run Command tasks that run `AWS-RunPatchBaseline` with the same Priority level and that run on the same target IDs. If this is the case, reorder the priority. Run Command is a tool in AWS Systems Manager.

**Solution 2**: Ensure that only one maintenance window at a time is running Run Command tasks that use `AWS-RunPatchBaseline` on the same targets and on the same schedule. If this is the case, change the schedule.

**Solution 3**: Ensure that only one State Manager association is running `AWS-RunPatchBaseline` on the same schedule and targeting the same managed nodes. State Manager is a tool in AWS Systems Manager.

**Solution 4**: Free up sufficient storage space under the `/var` directory for the update packages.
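
Before retrying, you can check how much space is available under `/var`. For example:

```
# Show free space on the filesystem that contains /var.
df -h /var
```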

### Issue: 'another process has acquired yum lock' error


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with the following error.

```
12/20/2019 21:41:48 root [INFO]: another process has acquired yum lock, waiting 2 s and retry.
```

**Cause**: The `AWS-RunPatchBaseline` document has started running on a managed node where another operation is already running it and has acquired a lock on the `yum` package manager.

**Solution**: Ensure that no State Manager association, maintenance window tasks, or other configurations that run `AWS-RunPatchBaseline` on a schedule are targeting the same managed node around the same time.

### Issue: 'Permission denied / failed to run commands' error


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with the following error.

```
sh: 
/var/lib/amazon/ssm/instanceid/document/orchestration/commandid/PatchLinux/_script.sh: Permission denied
failed to run commands: exit status 126
```

**Cause**: `/var/lib/amazon/` might be mounted with `noexec` permissions. This is an issue because SSM Agent downloads payload scripts to `/var/lib/amazon/ssm` and runs them from that location.

**Solution**: Ensure that you have configured exclusive partitions for `/var/log/amazon` and `/var/lib/amazon`, and that they're mounted with `exec` permissions.

### Issue: 'Unable to download payload' error


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with the following error.

```
Unable to download payload: https://s3.amzn-s3-demo-bucket.region.amazonaws.com/aws-ssm-region/patchbaselineoperations/linux/payloads/patch-baseline-operations-X.XX.tar.gz.failed to run commands: exit status 156
```

**Cause**: The managed node can't reach the specified Amazon Simple Storage Service (Amazon S3) bucket.

**Solution**: Update your network configuration so that S3 endpoints are reachable. For more details, see information about required access to S3 buckets for Patch Manager in [SSM Agent communications with AWS managed S3 buckets](ssm-agent-technical-details.md#ssm-agent-minimum-s3-permissions).

### Issue: 'unsupported package manager and python version combination' error


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with the following error.

```
An unsupported package manager and python version combination was found. Apt requires Python3 to be installed.
failed to run commands: exit status 1
```

**Cause**: A supported version of python3 isn't installed on the Debian Server or Ubuntu Server instance.

**Solution**: Install a supported version of python3 (3.0 - 3.12) on the server, which is required for Debian Server and Ubuntu Server managed nodes.
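
To confirm which version of Python 3, if any, is present on the node, you can run the following command:

```
# Print the installed python3 version; Debian Server and Ubuntu Server
# managed nodes require a version in the 3.0 through 3.12 range.
python3 --version
```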

### Issue: Patch Manager isn't applying rules specified to exclude certain packages


**Problem**: You have attempted to exclude certain packages by specifying them in the `/etc/yum.conf` file, in the format `exclude=package-name`, but they aren't excluded during the Patch Manager `Install` operation.

**Cause**: Patch Manager doesn't incorporate exclusions specified in the `/etc/yum.conf` file.

**Solution**: To exclude specific packages, create a custom patch baseline and create a rule to exclude the packages you don't want installed.

### Issue: Patching fails and Patch Manager reports that the Server Name Indication extension to TLS is not available


**Problem**: The patching operation issues the following message.

```
/var/log/amazon/ssm/patch-baseline-operations/urllib3/util/ssl_.py:369: 
SNIMissingWarning: An HTTPS request has been made, but the SNI (Server Name Indication) extension
to TLS is not available on this platform. This might cause the server to present an incorrect TLS 
certificate, which can cause validation failures. You can upgrade to a newer version of Python 
to solve this. 
For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
```

**Cause**: This message doesn't indicate an error. Instead, it's a warning that the older version of Python distributed with the operating system doesn't support TLS Server Name Indication. The Systems Manager patch payload script issues this warning when connecting to AWS APIs that support SNI.

**Solution**: To troubleshoot any patching failures when this message is reported, review the contents of the `stdout` and `stderr` files. If you haven't configured the patch baseline to store these files in an S3 bucket or in Amazon CloudWatch Logs, you can locate the files in the following location on your Linux managed node. 

`/var/lib/amazon/ssm/instance-id/document/orchestration/Run-Command-execution-id/awsrunShellScript/PatchLinux`

### Issue: Patch Manager reports 'No more mirrors to try'


**Problem**: The patching operation issues the following message.

```
[Errno 256] No more mirrors to try.
```

**Cause**: The repositories configured on the managed node are not working correctly. Possible causes for this include:
+ The `yum` cache is corrupted.
+ A repository URL can't be reached due to network-related issues.

**Solution**: Patch Manager uses the managed node’s default package manager to perform patching operations. Verify that the repositories are configured and operating correctly.

### Issue: Patching fails with 'Error code returned from curl is 23'


**Problem**: A patching operation that uses `AWS-RunPatchBaseline` fails with an error similar to the following:

```
05/01/2025 17:04:30 root [ERROR]: Error code returned from curl is 23
```

**Cause**: The curl tool in use on your system lacks the permissions needed to write to the filesystem. This can occur if the package manager's default curl tool was replaced by a different version, such as one installed with snap.

**Solution**: If the curl version provided by the package manager was uninstalled when a different version was installed, reinstall it.

If you need to keep multiple curl versions installed, ensure that the version associated with the package manager is in the first directory listed in the `PATH` variable. You can check this by running the command `echo $PATH` to see the current order of directories that are checked for executable files on your system.
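
For example, the following commands show the `PATH` search order and every curl executable found along it; the first match is the one that runs:

```
# Show the directories searched for executables, one per line, in search order.
echo "$PATH" | tr ':' '\n'

# List every curl found on the PATH; the first result is the one that runs.
# ('|| true' keeps the command from failing if curl isn't installed.)
which -a curl || true
```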

### Issue: Patching fails with ‘Error unpacking rpm package…’ message


**Problem**: A patching operation fails with an error similar to the following:

```
Error : Error unpacking rpm package python-urllib3-1.25.9-1.amzn2.0.2.noarch
python-urllib3-1.25.9-1.amzn2.0.1.noarch was supposed to be removed but is not!
failed to run commands: exit status 1
```

**Cause 1**: When a particular package is present in multiple package installers, such as both `pip` and `yum` or `dnf`, conflicts can occur when using the default package manager.

A common example occurs with the `urllib3` package, which is found in `pip`, `yum`, and `dnf`.

**Cause 2**: The `python-urllib3` package is corrupted. This can happen if the package files were installed or updated by `pip` after the `rpm` package was previously installed by `yum` or `dnf`.

**Solution**: Remove the duplicate `urllib3` package from `pip` by running the command `sudo pip uninstall urllib3`, keeping the package only in the default package manager (`yum` or `dnf`).

### Issue: Patching fails with 'Encounter service side error when uploading the inventory'


**Problem**: When running the `AWS-RunPatchBaseline` document, you receive the following error message:

```
Encounter service side error when uploading the inventory
```

**Cause**: Two commands to run `AWS-RunPatchBaseline` were running at the same time on the same managed node. This creates a race condition when initializing the boto3 client during patching operations.

**Solution**: Ensure that no State Manager association, maintenance window tasks, or other configurations that run `AWS-RunPatchBaseline` on a schedule are targeting the same managed node around the same time.

### Issue: Patching fails with ‘Errors were encountered while downloading packages’ message


**Problem**: During patching, you receive an error similar to the following:

```
YumDownloadError: [u'Errors were encountered while downloading 
packages.', u'libxml2-2.9.1-6.el7_9.6.x86_64: [Errno 5] [Errno 12] 
Cannot allocate memory', u'libxslt-1.1.28-6.el7.x86_64: [Errno 5] 
[Errno 12] Cannot allocate memory', u'libcroco-0.6.12-6.el7_9.x86_64: 
[Errno 5] [Errno 12] Cannot allocate memory', u'openldap-2.4.44-25.el7_9.x86_64: 
[Errno 5] [Errno 12] Cannot allocate memory',
```

**Cause**: This error can occur when insufficient memory is available on a managed node.

**Solution**: Configure the swap memory, or upgrade the instance to a different type to increase the memory support. Then start a new patching operation.

### Issue: Patching fails with an out of memory (OOM) error


**Problem**: When you run `AWS-RunPatchBaseline`, the patching operation fails due to insufficient memory on the managed node. You might see errors such as `Cannot allocate memory`, `Killed` (from the Linux OOM killer), or the operation fails unexpectedly. This error is more likely to occur on instances with less than 1 GB of RAM, but can also affect instances with more memory when a large number of updates are available.

**Cause**: Patch Manager runs patching operations using the native package manager on the managed node. The memory required during a patching operation depends on several factors, including:
+ The number of packages installed and available updates on the managed node.
+ The package manager in use and its memory characteristics.
+ Other processes running on the managed node at the time of the patching operation.

Managed nodes with a large number of installed packages or a large number of available updates require more memory during patching operations. When available memory is insufficient, the patching process will fail and exit with an error. The operating system can also terminate the patching process.

**Solution**: Try one or more of the following:
+ Schedule patching operations during periods of low workload activity on the managed node, such as by using maintenance windows.
+ Upgrade the instance to a type with more memory.
+ Configure swap memory on the managed node. Note that on instances with limited EBS throughput, heavy swap usage may cause performance degradation.
+ Review and reduce the number of processes running on the managed node during patching operations.
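
Before retrying, you can check the node's current memory and swap. For example:

```
# Report memory and swap usage in mebibytes. Low available memory combined
# with no configured swap makes OOM failures during patching more likely.
free -m
```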

### Issue: Patching fails with a message that 'The following signatures couldn't be verified because the public key is not available'


**Problem**: Patching fails on Ubuntu Server with an error similar to the following:

```
02/17/2022 21:08:43 root [ERROR]: W:GPG error: 
http://repo.mysql.com/apt/ubuntu  bionic InRelease: The following 
signatures couldn't be verified because the public key is not available: 
NO_PUBKEY 467B942D3A79BD29, E:The repository ' http://repo.mysql.com/apt/ubuntu bionic
```

**Cause**: The GNU Privacy Guard (GPG) key has expired or is missing.

**Solution**: Refresh the GPG key, or add the key again.

For example, using the error shown previously, we see that the `467B942D3A79BD29` key is missing and must be added. To do so, run either of the following commands:

```
sudo apt-key adv --keyserver hkps://keyserver.ubuntu.com --recv-keys 467B942D3A79BD29
```

```
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 467B942D3A79BD29
```

Or, to refresh all keys, run the following command:

```
sudo apt-key adv --keyserver hkps://keyserver.ubuntu.com --refresh-keys
```

If the error recurs after this, we recommend reporting the issue to the organization that maintains the repository. Until a fix is available, you can edit the `/etc/apt/sources.list` file to omit the repository during the patching process.

To do so, open the `sources.list` file for editing, locate the line for the repository, and insert a `#` character at the beginning of the line to comment it out. Then save and close the file.
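
The edit can also be scripted with `sed`. The following sketch demonstrates the change on a sample copy of the file; the repository line is illustrative, and applying the same expression to the real `/etc/apt/sources.list` requires root privileges:

```
# Create a sample sources.list entry to demonstrate the edit
# (the repository line shown is illustrative).
printf 'deb http://repo.mysql.com/apt/ubuntu bionic mysql-8.0\n' > /tmp/sources.list.demo

# Prefix the matching repository line with '# ' to comment it out,
# writing the result to a new file.
sed 's|^deb http://repo.mysql.com/apt/ubuntu|# &|' /tmp/sources.list.demo > /tmp/sources.list.fixed

cat /tmp/sources.list.fixed
```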

### Issue: Patching fails with a 'NoMoreMirrorsRepoError' message


**Problem**: You receive an error similar to the following:

```
NoMoreMirrorsRepoError: failure: repodata/repomd.xml from pgdg94: [Errno 256] No more mirrors to try.
```

**Cause**: There is an error in the source repository.

**Solution**: We recommend reporting the issue to the organization that maintains the repository. Until the error is fixed, you can disable the repository at the operating system level. To do so, run the following command, replacing the value for *repo-name* with your repository name:

```
yum-config-manager --disable repo-name
```

Following is an example.

```
yum-config-manager --disable pgdg94
```

After you run this command, run another patching operation.

### Issue: Patching fails with an 'Unable to download payload' message


**Problem**: You receive an error similar to the following:

```
Unable to download payload: 
https://s3.dualstack.eu-west-1.amazonaws.com/aws-ssm-eu-west-1/patchbaselineoperations/linux/payloads/patch-baseline-operations-1.83.tar.gz.
failed  to run commands: exit status 156
```

**Cause**: The managed node configuration contains errors or is incomplete.

**Solution**: Make sure that the managed node is configured with the following:
+ Outbound TCP 443 rule in security group.
+ Egress TCP 443 rule in NACL.
+ Ingress TCP 1024-65535 rule in NACL.
+ NAT/IGW in route table to provide connectivity to an S3 endpoint. If the instance doesn't have internet access, provide it connectivity with the S3 endpoint. To do that, add an S3 gateway endpoint in the VPC and integrate it with the route table of the managed node.

### Issue: Patching fails with a message 'install errors: dpkg: error: dpkg frontend is locked by another process'


**Problem**: Patching fails with an error similar to the following:

```
install errors: dpkg: error: dpkg frontend is locked by another process
failed to run commands: exit status 2
Failed to install package; install status Failed
```

**Cause**: The package manager is already running another process on a managed node at the operating system level. If that other process takes a long time to complete, the Patch Manager patching operation can time out and fail.

**Solution**: After the other process that’s using the package manager completes, run a new patching operation.
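One way to avoid retrying while the lock is still held is to wait on it with `flock`. This is a sketch, not part of Patch Manager; the path in the usage comment is the standard dpkg frontend lock file on Debian and Ubuntu.

```shell
# Wait (up to a timeout, default 300 seconds) for a lock file to be released.
# flock exits 0 once it can briefly acquire the lock, meaning no other
# process holds it.
wait_for_lock() {
    flock --wait "${2:-300}" "$1" true
}

# Example: run as root against the dpkg frontend lock before retrying patching
# wait_for_lock /var/lib/dpkg/lock-frontend 300 && echo "lock free; retry patching"
```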

### Issue: Patching on Ubuntu Server fails with a 'dpkg was interrupted' error


**Problem**: On Ubuntu Server, patching fails with an error similar to the following:

```
E: dpkg was interrupted, you must manually run
'dpkg --configure -a' to correct the problem.
```

**Cause**: One or more packages are misconfigured.

**Solution**: Perform the following steps:

1. Check to see which packages are affected, and what the issues are with each package by running the following commands, one at a time:

   ```
   sudo apt-get check
   ```

   ```
   sudo dpkg -C
   ```

   ```
   dpkg-query -W -f='${db:Status-Abbrev} ${binary:Package}\n' | grep -E ^.[^nci]
   ```

1. Correct the packages with issues by running the following command:

   ```
   sudo dpkg --configure -a
   ```

1. If the previous command didn't fully resolve the issue, run the following command:

   ```
   sudo apt --fix-broken install
   ```

### Issue: The package manager utility can't resolve a package dependency


**Problem**: The native package manager on the managed node is unable to resolve a package dependency and patching fails. The following error message example indicates this type of failure on an operating system that uses `yum` as the package manager.

```
09/22/2020 08:56:09 root [ERROR]: yum update failed with result code: 1, 
message: [u'rpm-python-4.11.3-25.amzn2.0.3.x86_64 requires rpm = 4.11.3-25.amzn2.0.3', 
u'awscli-1.18.107-1.amzn2.0.1.noarch requires python2-botocore = 1.17.31']
```

**Cause**: On Linux operating systems, Patch Manager uses the native package manager on the machine, such as `yum`, `dnf`, `apt`, or `zypper`, to run patching operations. These package managers automatically detect, install, update, or remove dependent packages as required. However, some conditions can prevent the package manager from completing a dependency operation, such as the following:
+ Multiple conflicting repositories are configured on the operating system.
+ A remote repository URL is inaccessible due to network-related issues.
+ A package for the wrong architecture is found in the repository.

**Solution**: Patching might fail because of a dependency issue for a wide variety of reasons. Therefore, we recommend that you contact AWS Support to assist with troubleshooting.

### Issue: Zypper package lock dependency failures on SLES managed nodes


**Problem**: When you run `AWS-RunPatchBaseline` with the `Install` operation on SUSE Linux Enterprise Server instances, patching fails with dependency check errors related to package locks. You might see error messages similar to the following:

```
Problem: mock-pkg-has-dependencies-0.2.0-21.adistro.noarch requires mock-pkg-standalone = 0.2.0, but this requirement cannot be provided
  uninstallable providers: mock-pkg-standalone-0.2.0-21.adistro.noarch[local-repo]
 Solution 1: remove lock to allow installation of mock-pkg-standalone-0.2.0-21.adistro.noarch[local-repo]
 Solution 2: do not install mock-pkg-has-dependencies-0.2.0-21.adistro.noarch
 Solution 3: break mock-pkg-has-dependencies-0.2.0-21.adistro.noarch by ignoring some of its dependencies

Choose from above solutions by number or cancel [1/2/3/c] (c): c
```

In this example, the package `mock-pkg-standalone` is locked, which you could verify by running `sudo zypper locks` and looking for this package name in the output.

Or you might see log entries indicating dependency check failures:

```
Encountered a known exception in the CLI Invoker: CLIInvokerError(error_message='Dependency check failure during commit process', error_code='4')
```

**Note**  
This issue only occurs during `Install` operations. `Scan` operations don't apply package locks and aren't affected by existing locks.

**Cause**: This error occurs when zypper package locks prevent the installation or update of packages due to dependency conflicts. Package locks can be present for several reasons:
+ **Customer-applied locks**: You or your system administrator manually locked packages using zypper commands such as `zypper addlock`.
+ **Patch Manager rejected patches**: Patch Manager automatically applies package locks when you specify packages in the **Rejected patches** list of your patch baseline to prevent their installation.
+ **Residual locks from interrupted operations**: In rare cases, if a patch operation was interrupted (such as by a system reboot) before Patch Manager could clean up temporary locks, residual package locks might remain on your managed node.

**Solution**: To resolve zypper package lock issues, follow these steps based on the cause:

**Step 1: Identify locked packages**

Connect to your SLES managed node and run the following command to list all currently locked packages:

```
sudo zypper locks
```
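If you capture that output in a script, you can pull the locked package names out of the table. The following is a sketch using illustrative output; in practice the input comes from `sudo zypper locks`.

```shell
# Illustrative `zypper locks` output; in practice: locks_output=$(sudo zypper locks)
locks_output='# | Name                | Type    | Repository
--+---------------------+---------+-----------
1 | mock-pkg-standalone | package | (any)'

# Skip the two header lines, take the Name column, and strip padding spaces
locked_pkgs=$(printf '%s\n' "$locks_output" | awk -F'|' 'NR>2 {gsub(/ /,"",$2); print $2}')
echo "$locked_pkgs"
```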

**Step 2: Determine the source of locks**
+ If the locked packages are ones you intentionally locked for system stability, consider whether they need to remain locked or if they can be temporarily unlocked for patching.
+ If the locked packages match entries in your patch baseline's **Rejected patches** list, these are likely residual locks from an interrupted patch operation. During normal operations, Patch Manager applies these locks temporarily and removes them automatically when the operation completes. You can either remove the packages from the rejected list or modify your patch baseline rules.
+ If you don't recognize the locked packages and they weren't intentionally locked, they might be residual locks from a previous interrupted patch operation.
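To make the comparison in the second case concrete, the following sketch cross-checks locked packages against a rejected-patches list. Both lists are illustrative; the real ones come from `sudo zypper locks` and from your patch baseline (for example, via `aws ssm get-patch-baseline`).

```shell
# Illustrative inputs; replace with real data from zypper and your baseline
rejected="mock-pkg-standalone kernel-livepatch"
locked="mock-pkg-standalone some-other-pkg"

report=$(for pkg in $locked; do
    case " $rejected " in
        *" $pkg "*) echo "$pkg: likely residual rejected-patch lock" ;;
        *)          echo "$pkg: lock not from rejected patches" ;;
    esac
done)
printf '%s\n' "$report"
```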

**Step 3: Remove locks as appropriate**

To remove specific package locks, use the following command:

```
sudo zypper removelock package-name
```

To remove all package locks (use with caution), run:

```
sudo zypper cleanlocks
```

**Step 4: Update your patch baseline (if applicable)**

If the locks were caused by rejected patches in your patch baseline:

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Patch Manager**.

1. Choose the **Patch baselines** tab, and then choose your custom patch baseline.

1. Choose **Actions**, **Modify patch baseline**.

1. In the **Rejected patches** section, review the listed packages and remove any that should be allowed to install.

1. Choose **Save changes**.

**Step 5: Retry the patch operation**

After removing the problematic locks and updating your patch baseline if necessary, run the `AWS-RunPatchBaseline` document again.

**Note**  
When Patch Manager applies locks for rejected patches during `Install` operations, it removes those locks automatically after the patch operation completes. If you still see these locks when running `sudo zypper locks`, a previous patch operation was likely interrupted before cleanup could occur, and manual cleanup might be required as described in this procedure.

**Prevention**: To avoid future zypper lock conflicts:
+ Carefully review your patch baseline's rejected patches list to ensure it only includes packages you truly want to exclude.
+ Avoid manually locking packages that might be required as dependencies for security updates.
+ If you must lock packages manually, document the reasons and review the locks periodically.
+ Monitor patch operations to completion, and avoid interrupting them with system reboots or other actions that could prevent proper cleanup of temporary locks.

### Issue: Cannot acquire lock. Another patching operation is in progress.


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with error code 4 and the following error message.

```
[ERROR]: Cannot acquire lock on /var/log/amazon/ssm/patch-baseline-concurrent.lock. Another patching operation is in progress.
```

**Cause**: This error occurs when multiple patching operations are attempting to run on the same managed node at the same time. The lock file prevents concurrent patching operations to avoid conflicts and ensure system stability.

**Solution**: Ensure that patching operations are not scheduled to run at the same time on the same managed node. Review the following configurations to identify and resolve scheduling conflicts:
+ **Patch policies**: Check your Quick Setup patch policy configurations to ensure they don't overlap with other patching schedules.
+ **Maintenance windows**: Review your maintenance window associations to verify that multiple windows aren't targeting the same managed nodes with patching tasks at overlapping times.
+ **Manual Patch now operations**: Avoid initiating manual **Patch now** operations while scheduled patching is in progress.
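Before starting a manual operation, you can also check for the lock file directly on the node. The following is a sketch using the Linux path from the error message above; the function parameter exists only so the check can be exercised against other paths.

```shell
# Check whether a Patch Manager operation appears to be in progress on this
# node, based on the presence of the concurrency lock file.
patching_in_progress() {
    [ -e "${1:-/var/log/amazon/ssm/patch-baseline-concurrent.lock}" ]
}

if patching_in_progress; then
    echo "patching appears to be in progress; wait before starting another operation"
else
    echo "no concurrent patching operation detected"
fi
```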

## Errors when running `AWS-RunPatchBaseline` on Windows Server


**Topics**
+ [Issue: mismatched product family/product pairs](#patch-manager-troubleshooting-product-family-mismatch)
+ [Issue: `AWS-RunPatchBaseline` output returns an `HRESULT` (Windows Server)](#patch-manager-troubleshooting-hresult)
+ [Issue: managed node doesn't have access to Windows Update Catalog or WSUS](#patch-manager-troubleshooting-instance-access)
+ [Issue: PatchBaselineOperations PowerShell module is not downloadable](#patch-manager-troubleshooting-module-not-downloadable)
+ [Issue: missing patches](#patch-manager-troubleshooting-missing-patches)
+ [Issue: Cannot acquire lock. Another patching operation is in progress.](#patch-manager-troubleshooting-windows-concurrent-lock)

### Issue: mismatched product family/product pairs


**Problem**: When you create a patch baseline in the Systems Manager console, you specify a product family and a product. For example, you might choose:
+ **Product family**: `Office`

  **Product**: `Office 2016`

**Cause**: If you attempt to create a patch baseline with a mismatched product family/product pair, an error message is displayed. The following are reasons this can occur:
+ You selected a valid product family and product pair but then removed the product family selection.
+ You chose a product from the **Obsolete or mismatched options** sublist instead of the **Available and matching options** sublist. 

  Items in the product **Obsolete or mismatched options** sublist might have been entered in error through an SDK or AWS Command Line Interface (AWS CLI) `create-patch-baseline` command. This could mean a typo was introduced or a product was assigned to the wrong product family. A product is also included in the **Obsolete or mismatched options** sublist if it was specified for a previous patch baseline but has no patches available from Microsoft. 

**Solution**: To avoid this issue in the console, always choose options from the **Currently available options** sublists.

You can also view the products that have available patches by using the [describe-patch-properties](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-patch-properties.html) command in the AWS CLI or the [DescribePatchProperties](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchProperties.html) API operation.
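For example, to list Windows products that currently have patches available, a sketch of the CLI call follows (requires AWS credentials and the AWS CLI):

```shell
aws ssm describe-patch-properties \
    --operating-system WINDOWS \
    --property PRODUCT \
    --output text
```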

### Issue: `AWS-RunPatchBaseline` output returns an `HRESULT` (Windows Server)


**Problem**: You received an error like the following.

```
----------ERROR-------
Invoke-PatchBaselineOperation : Exception Details: An error occurred when 
attempting to search Windows Update.
Exception Level 1:
 Error Message: Exception from HRESULT: 0x80240437
 Stack Trace: at WUApiLib.IUpdateSearcher.Search(String criteria)..
(Windows updates)
11/22/2020 09:17:30 UTC | Info | Searching for Windows Updates.
11/22/2020 09:18:59 UTC | Error | Searching for updates resulted in error: Exception from HRESULT: 0x80240437
----------ERROR-------
failed to run commands: exit status 4294967295
```

**Cause**: This output indicates that the native Windows Update APIs were unable to run the patching operations.

**Solution**: Check the `HResult` code in the following microsoft.com topics to identify troubleshooting steps for resolving the error:
+ [Windows Update error codes by component](https://learn.microsoft.com/en-us/windows/deployment/update/windows-update-error-reference) 
+ [Windows Update common errors and mitigation](https://learn.microsoft.com/en-us/troubleshoot/windows-client/deployment/common-windows-update-errors) 

### Issue: managed node doesn't have access to Windows Update Catalog or WSUS


**Problem**: You received an error like the following.

```
Downloading PatchBaselineOperations PowerShell module from https://s3.aws-api-domain/path_to_module.zip to C:\Windows\TEMP\Amazon.PatchBaselineOperations-1.29.zip.

Extracting PatchBaselineOperations zip file contents to temporary folder.

Verifying SHA 256 of the PatchBaselineOperations PowerShell module files.

Successfully downloaded and installed the PatchBaselineOperations PowerShell module.

Patch Summary for

PatchGroup :

BaselineId :

Baseline : null

SnapshotId :

RebootOption : RebootIfNeeded

OwnerInformation :

OperationType : Scan

OperationStartTime : 1970-01-01T00:00:00.0000000Z

OperationEndTime : 1970-01-01T00:00:00.0000000Z

InstalledCount : -1

InstalledRejectedCount : -1

InstalledPendingRebootCount : -1

InstalledOtherCount : -1

FailedCount : -1

MissingCount : -1

NotApplicableCount : -1

UnreportedNotApplicableCount : -1

EC2AMAZ-VL3099P - PatchBaselineOperations Assessment Results - 2020-12-30T20:59:46.169

----------ERROR-------

Invoke-PatchBaselineOperation : Exception Details: An error occurred when attempting to search Windows Update.

Exception Level 1:

Error Message: Exception from HRESULT: 0x80072EE2

Stack Trace: at WUApiLib.IUpdateSearcher.Search(String criteria)

at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.WindowsUpdateAgent.SearchForUpdates(String

searchCriteria)

At C:\ProgramData\Amazon\SSM\InstanceData\i-02573cafcfEXAMPLE\document\orchestration\3d2d4864-04b7-4316-84fe-eafff1ea58

e3\PatchWindows\_script.ps1:230 char:13

+ $response = Invoke-PatchBaselineOperation -Operation Install -Snapsho ...

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : OperationStopped: (Amazon.Patch.Ba...UpdateOperation:InstallWindowsUpdateOperation) [Inv

oke-PatchBaselineOperation], Exception

+ FullyQualifiedErrorId : Exception Level 1:

Error Message: Exception Details: An error occurred when attempting to search Windows Update.

Exception Level 1:

Error Message: Exception from HRESULT: 0x80072EE2

Stack Trace: at WUApiLib.IUpdateSearcher.Search(String criteria)

at Amazon.Patch.Baseline.Operations.PatchNow.Implementations.WindowsUpdateAgent.SearchForUpdates(String searc

---Error truncated----
```

**Cause**: This error could be related to the Windows Update components, or to a lack of connectivity to the Windows Update Catalog or Windows Server Update Services (WSUS).

**Solution**: Confirm that the managed node has connectivity to the [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/home.aspx) through an internet gateway, NAT gateway, or NAT instance. If you're using WSUS, confirm that the managed node has connectivity to the WSUS server in your environment. If connectivity is available to the intended destination, check the Microsoft documentation for other potential causes of `HResult 0x80072EE2`. This might indicate an operating system level issue. 

### Issue: PatchBaselineOperations PowerShell module is not downloadable


**Problem**: You received an error like the following.

```
Preparing to download PatchBaselineOperations PowerShell module from S3.
                    
Downloading PatchBaselineOperations PowerShell module from https://s3.aws-api-domain/path_to_module.zip to C:\Windows\TEMP\Amazon.PatchBaselineOperations-1.29.zip.
----------ERROR-------

C:\ProgramData\Amazon\SSM\InstanceData\i-02573cafcfEXAMPLE\document\orchestration\aaaaaaaa-bbbb-cccc-dddd-4f6ed6bd5514\

PatchWindows\_script.ps1 : An error occurred when executing PatchBaselineOperations: Unable to connect to the remote server

+ CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException

+ FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,_script.ps1

failed to run commands: exit status 4294967295
```

**Solution**: Check the managed node connectivity and permissions to Amazon Simple Storage Service (Amazon S3). The managed node's AWS Identity and Access Management (IAM) role must use the minimum permissions cited in [SSM Agent communications with AWS managed S3 buckets](ssm-agent-technical-details.md#ssm-agent-minimum-s3-permissions). The node must communicate with the Amazon S3 endpoint through the Amazon S3 gateway endpoint, NAT gateway, or internet gateway. For more information about the VPC Endpoint requirements for AWS Systems Manager SSM Agent (SSM Agent), see [Improve the security of EC2 instances by using VPC endpoints for Systems Manager](setup-create-vpc.md). 

### Issue: missing patches


**Problem**: `AWS-RunPatchBaseline` completed successfully, but there are some missing patches.

The following are some common causes and their solutions.

**Cause 1**: The baseline isn't effective.

**Solution 1**: To check if this is the cause, use the following procedure.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Select the **Command history** tab and then select the command whose baseline you want to check.

1. Select the managed node that has missing patches.

1. Select **Step 1 - Output** and find the `BaselineId` value.

1. Check the assigned [patch baseline configuration](patch-manager-predefined-and-custom-patch-baselines.md#patch-manager-baselines-custom), that is, the operating system, product name, classification, and severity for the patch baseline.

1. Go to the [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/home.aspx).

1. Search the Microsoft Knowledge Base (KB) article IDs (for example, KB3216916).

1. Verify that the value under **Product** matches that of your managed node and select the corresponding **Title**. A new **Update Details** window will open.

1. In the **Overview** tab, the **classification** and **MSRC severity** must match the patch baseline configuration you found earlier.

**Cause 2**: The patch was replaced.

**Solution 2**: To check if this is true, use the following procedure.

1. Go to the [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/home.aspx).

1. Search the Microsoft Knowledge Base (KB) article IDs (for example, KB3216916).

1. Verify that the value under **Product** matches that of your managed node and select the corresponding **Title**. A new **Update Details** window will open.

1. Go to the **Package Details** tab. Look for an entry under the **This update has been replaced by the following updates:** header.

**Cause 3**: The same patch might have different KB numbers because Microsoft handles WSUS and Windows Update online updates as independent release channels.

**Solution 3**: Check the patch eligibility. If the package isn't available under WSUS, install [OS Build 14393.3115](https://support.microsoft.com/en-us/topic/july-16-2019-kb4507459-os-build-14393-3115-511a3df6-c07e-14e3-dc95-b9898a7a7a57). If the package is available for all operating system builds, install [OS Builds 18362.1256 and 18363.1256](https://support.microsoft.com/en-us/topic/december-8-2020-kb4592449-os-builds-18362-1256-and-18363-1256-c448f3df-a5f1-1d55-aa31-0e1cf7a440a9).

### Issue: Cannot acquire lock. Another patching operation is in progress.


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with error code 4 and the following error message.

```
Cannot acquire lock on C:\ProgramData\Amazon\SSM\patch-baseline-concurrent.lock. Another patching operation is in progress.
```

**Cause**: This error occurs when multiple patching operations are attempting to run on the same managed node at the same time. The lock file prevents concurrent patching operations to avoid conflicts and ensure system stability.

**Solution**: Ensure that patching operations are not scheduled to run at the same time on the same managed node. Review the following configurations to identify and resolve scheduling conflicts:
+ **Patch policies**: Check your Quick Setup patch policy configurations to ensure they don't overlap with other patching schedules.
+ **Maintenance windows**: Review your maintenance window associations to verify that multiple windows aren't targeting the same managed nodes with patching tasks at overlapping times.
+ **Manual Patch now operations**: Avoid initiating manual **Patch now** operations while scheduled patching is in progress.

## Errors when running `AWS-RunPatchBaseline` on macOS


**Topics**
+ [Issue: Cannot acquire lock. Another patching operation is in progress.](#patch-manager-troubleshooting-macos-concurrent-lock)

### Issue: Cannot acquire lock. Another patching operation is in progress.


**Problem**: When you run `AWS-RunPatchBaseline`, patching fails with error code 4 and the following error message.

```
[ERROR]: Cannot acquire lock on /var/log/amazon/ssm/patch-baseline-concurrent.lock. Another patching operation is in progress.
```

**Cause**: This error occurs when multiple patching operations are attempting to run on the same managed node at the same time. The lock file prevents concurrent patching operations to avoid conflicts and ensure system stability.

**Solution**: Ensure that patching operations are not scheduled to run at the same time on the same managed node. Review the following configurations to identify and resolve scheduling conflicts:
+ **Patch policies**: Check your Quick Setup patch policy configurations to ensure they don't overlap with other patching schedules.
+ **Maintenance windows**: Review your maintenance window associations to verify that multiple windows aren't targeting the same managed nodes with patching tasks at overlapping times.
+ **Manual Patch now operations**: Avoid initiating manual **Patch now** operations while scheduled patching is in progress.

## Using AWS Support Automation runbooks


AWS Support provides two Automation runbooks you can use to troubleshoot certain issues related to patching.
+ `AWSSupport-TroubleshootWindowsUpdate` – The [AWSSupport-TroubleshootWindowsUpdate](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/awssupport-troubleshoot-windows-update.html) runbook identifies issues that can cause Windows Server update failures on Amazon Elastic Compute Cloud (Amazon EC2) Windows Server instances.
+ `AWSSupport-TroubleshootPatchManagerLinux` – The [AWSSupport-TroubleshootPatchManagerLinux](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-troubleshoot-patch-manager-linux.html) runbook troubleshoots common issues that can cause patching to fail on Linux-based managed nodes using Patch Manager. The runbook identifies the root cause of a patch command failure and suggests a remediation plan.

**Note**  
There is a charge to run Automation runbooks. For information, see [AWS Systems Manager Pricing for Automation](https://aws.amazon.com/systems-manager/pricing/#Automation).

## Contacting AWS Support


If you can't find troubleshooting solutions in this section or in the Systems Manager issues in [AWS re:Post](https://repost.aws/tags/TA-UbbRGVYRWCDaCvae6itYg/aws-systems-manager), and you have a [Developer, Business, or Enterprise Support plan](https://aws.amazon.com/premiumsupport/plans), you can create a technical support case at [AWS Support](https://aws.amazon.com/premiumsupport/).

Before you contact Support, collect the following items:
+ [SSM agent logs](ssm-agent-logs.md)
+ Run Command command ID, maintenance window ID, or Automation execution ID
+ For Windows Server managed nodes, also collect the following:
  + `%PROGRAMDATA%\Amazon\PatchBaselineOperations\Logs` as described on the **Windows** tab of [How patches are installed](patch-manager-installing-patches.md)
  + Windows update logs: For Windows Server 2012 R2 and earlier, use `%windir%/WindowsUpdate.log`. For Windows Server 2016 and later, first run the PowerShell command [Get-WindowsUpdateLog](https://docs.microsoft.com/en-us/powershell/module/windowsupdate/get-windowsupdatelog?view=win10-ps) before using `%windir%/WindowsUpdate.log`
+ For Linux managed nodes, also collect the following:
  + The contents of the directory `/var/lib/amazon/ssm/instance-id/document/orchestration/Run-Command-execution-id/awsrunShellScript/PatchLinux`

# AWS Systems Manager Run Command

Using Run Command, a tool in AWS Systems Manager, you can remotely and securely manage the configuration of your managed nodes. A *managed node* is any Amazon Elastic Compute Cloud (Amazon EC2) instance or non-EC2 machine in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment that has been configured for Systems Manager. Run Command allows you to automate common administrative tasks and perform one-time configuration changes at scale. You can use Run Command from the AWS Management Console, the AWS Command Line Interface (AWS CLI), AWS Tools for Windows PowerShell, or the AWS SDKs. Run Command is offered at no additional cost. To get started with Run Command, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/run-command). In the navigation pane, choose **Run Command**.

Administrators use Run Command to install or bootstrap applications, build a deployment pipeline, capture log files when an instance is removed from an Auto Scaling group, join instances to a Windows domain, and more.

The Run Command API follows an eventual consistency model, due to the distributed nature of the system supporting the API. This means that the result of an API command you run that affects your resources might not be immediately visible to all subsequent commands you run. You should keep this in mind when you carry out an API command that immediately follows a previous API command.
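A common pattern for working with eventually consistent reads is to retry until the expected result appears. The following is a generic shell sketch, not part of any Systems Manager API; the commented `aws` invocation is just one hypothetical use.

```shell
# Retry a command until it succeeds, up to a maximum number of attempts,
# sleeping a fixed delay between tries. Returns non-zero if it never succeeds.
retry() {
    attempts=$1
    delay=$2
    shift 2
    i=1
    while ! "$@"; do
        if [ "$i" -ge "$attempts" ]; then
            return 1
        fi
        sleep "$delay"
        i=$((i + 1))
    done
}

# Hypothetical usage: wait until a just-sent command is visible to a read API
# retry 5 2 sh -c 'aws ssm list-command-invocations --command-id "$CMD_ID" \
#     --query "length(CommandInvocations)" --output text | grep -qv "^0$"'
```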

**Getting Started**  
The following table includes information to help you get started with Run Command.


****  

| Topic | Details | 
| --- | --- | 
|  [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md)  |  Verify that you have completed the setup requirements for your Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 machines in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment.  | 
|  [Managing nodes in hybrid and multicloud environments with Systems Manager](systems-manager-hybrid-multicloud.md)  |  (Optional) Register on-premises servers and VMs with AWS so you can manage them using Run Command.  | 
|  [Managing edge devices with Systems Manager](systems-manager-setting-up-edge-devices.md)  |  (Optional) Configure edge devices so you can manage them using Run Command.  | 
|  [Running commands on managed nodes](running-commands.md)  |  Learn how to run a command that targets one or more managed nodes by using the AWS Management Console.  | 
|  [Run Command walkthroughs](run-command-walkthroughs.md)  |  Learn how to run commands using either Tools for Windows PowerShell or the AWS CLI.  | 

**EventBridge support**  
This Systems Manager tool is supported as both an *event* type and a *target* type in Amazon EventBridge rules. For information, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md).

**More info**  
+ [Remotely Run Command on an EC2 Instance (10 minute tutorial)](https://aws.amazon.com/getting-started/hands-on/remotely-run-commands-ec2-instance-systems-manager/)
+ [Systems Manager service quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#limits_ssm) in the *Amazon Web Services General Reference*
+ [AWS Systems Manager API Reference](https://docs.aws.amazon.com/systems-manager/latest/APIReference/) 

**Topics**
+ [Setting up Run Command](run-command-setting-up.md)
+ [Running commands on managed nodes](running-commands.md)
+ [Using exit codes in commands](run-command-handle-exit-status.md)
+ [Understanding command statuses](monitor-commands.md)
+ [Run Command walkthroughs](run-command-walkthroughs.md)
+ [Troubleshooting Systems Manager Run Command](troubleshooting-remote-commands.md)

# Setting up Run Command


Before you can manage nodes by using Run Command, a tool in AWS Systems Manager, configure an AWS Identity and Access Management (IAM) policy for any user who will run commands. If you use any global condition keys for the `SendCommand` action in your IAM policies, you must include the `aws:ViaAWSService` condition key and set the boolean value to `true`. The following is an example.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/YourDocument"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:SourceVpce": [
                        "vpce-1234567890abcdef0"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/YourDocument"
            ],
            "Condition": {
                "Bool": {
                    "aws:ViaAWSService": "true"
                }
            }
        }
    ]
}
```

------

You must also configure your nodes for Systems Manager. For more information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).

We recommend completing the following optional setup tasks to help improve the security posture and simplify the day-to-day management of your managed nodes.

Monitor command executions using Amazon EventBridge  
You can use EventBridge to log command execution status changes. You can create a rule that runs whenever there is a state transition, or when there is a transition to one or more states that are of interest. You can also specify Run Command as a target action when an EventBridge event occurs. For more information, see [Configuring EventBridge for Systems Manager events](monitoring-systems-manager-events.md).

Monitor command executions using Amazon CloudWatch Logs  
You can configure Run Command to periodically send all command output and error logs to an Amazon CloudWatch log group. You can monitor these output logs in near real-time, search for specific phrases, values, or patterns, and create alarms based on the search. For more information, see [Configuring Amazon CloudWatch Logs for Run Command](sysman-rc-setting-up-cwlogs.md).

Restrict Run Command access to specific managed nodes  
You can restrict a user's ability to run commands on managed nodes by using AWS Identity and Access Management (IAM). Specifically, you can create an IAM policy with a condition that the user can only run commands on managed nodes that are tagged with specific tags. For more information, see [Restricting Run Command access based on tags](#tag-based-access).

## Restricting Run Command access based on tags


This section describes how to restrict a user's ability to run commands on managed nodes by specifying a tag condition in an IAM policy. Managed nodes include Amazon EC2 instances and non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment that are configured for Systems Manager. Although not shown explicitly in the following examples, you can also restrict access to managed AWS IoT Greengrass core devices. To get started, you must tag your AWS IoT Greengrass devices. For more information, see [Tag your AWS IoT Greengrass Version 2 resources](https://docs.aws.amazon.com/greengrass/v2/developerguide/tag-resources.html) in the *AWS IoT Greengrass Version 2 Developer Guide*.

You can restrict command execution to specific managed nodes by creating an IAM policy that includes a condition that the user can only run commands on nodes with specific tags. In the following example, the user is allowed to use Run Command (`Effect: Allow, Action: ssm:SendCommand`) with any SSM document (`Resource: arn:aws:ssm:*:*:document/*`) on any node (`Resource: arn:aws:ec2:*:*:instance/*`), with the condition that the node is tagged `Finance: WebServers` (`ssm:resourceTag/Finance: WebServers`). If the user sends a command to a node that isn't tagged or that has any tag other than `Finance: WebServers`, the execution results show `AccessDenied`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":[
            "arn:aws:ssm:*:*:document/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":[
            "arn:aws:ec2:*:*:instance/*"
         ],
         "Condition":{
            "StringLike":{
               "ssm:resourceTag/Finance":[
                  "WebServers"
               ]
            }
         }
      }
   ]
}
```

------

You can create IAM policies that allow a user to run commands on managed nodes that are tagged with multiple tags. The following policy allows the user to run commands on managed nodes that have two tags. If a user sends a command to a node that isn't tagged with both of these tags, the execution results show `AccessDenied`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":"*",
         "Condition":{
            "StringLike":{
               "ssm:resourceTag/tag_key1":[
                  "tag_value1"
               ],
               "ssm:resourceTag/tag_key2":[
                  "tag_value2"
               ]
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":[
            "arn:aws:ssm:us-west-1::document/AWS-*",
            "arn:aws:ssm:us-east-2::document/AWS-*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:UpdateInstanceInformation",
            "ssm:ListCommands",
            "ssm:ListCommandInvocations",
            "ssm:GetDocument"
         ],
         "Resource":"*"
      }
   ]
}
```

------

You can also create IAM policies that allow a user to run commands on multiple groups of tagged managed nodes. The following example policy allows the user to run commands on either group of tagged nodes, or on both groups.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":"*",
         "Condition":{
            "StringLike":{
               "ssm:resourceTag/tag_key1":[
                  "tag_value1"
               ]
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":"*",
         "Condition":{
            "StringLike":{
               "ssm:resourceTag/tag_key2":[
                  "tag_value2"
               ]
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:SendCommand"
         ],
         "Resource":[
            "arn:aws:ssm:us-west-1::document/AWS-*",
            "arn:aws:ssm:us-east-2::document/AWS-*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ssm:UpdateInstanceInformation",
            "ssm:ListCommands",
            "ssm:ListCommandInvocations",
            "ssm:GetDocument"
         ],
         "Resource":"*"
      }
   ]
}
```

------

For more information about creating IAM policies, see [Managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html) in the *IAM User Guide*. For more information about tagging managed nodes, see [Tag Editor](https://docs.aws.amazon.com/ARG/latest/userguide/tag-editor.html) in the *AWS Resource Groups User Guide*. 

# Running commands on managed nodes


This section describes how to send commands from the AWS Systems Manager console to managed nodes, and how to cancel a command that is running.

Note that if your node is configured with the `noexec` mount option for the `/var` directory, Run Command can't successfully run commands.

**Important**  
When you send a command using Run Command, don't include sensitive information formatted as plaintext, such as passwords, configuration data, or other secrets. All Systems Manager API activity in your account is logged in an S3 bucket for AWS CloudTrail logs. This means that any user with access to the S3 bucket can view the plaintext values of those secrets. For this reason, we recommend creating and using `SecureString` parameters to encrypt sensitive data that you use in your Systems Manager operations.  
For more information, see [Restricting access to Parameter Store parameters using IAM policies](sysman-paramstore-access.md).

**Execution history retention**  
The history of each command is available for up to 30 days. In addition, you can store a copy of all log files in Amazon Simple Storage Service (Amazon S3) or keep an audit trail of all API calls in AWS CloudTrail.

**Related information**  
For information about sending commands using other tools, see the following topics: 
+ [Walkthrough: Use the AWS Tools for Windows PowerShell with Run Command](walkthrough-powershell.md) or the examples in the [AWS Systems Manager section of the AWS Tools for PowerShell Cmdlet Reference](https://docs.aws.amazon.com/powershell/latest/reference/items/AWS_Systems_Manager_cmdlets.html).
+ [Walkthrough: Use the AWS CLI with Run Command](walkthrough-cli.md) or the examples in the [SSM CLI Reference](https://docs.aws.amazon.com/cli/latest/reference/ssm/)

**Topics**
+ [Running commands from the console](running-commands-console.md)
+ [Running commands using a specific document version](run-command-version.md)
+ [Run commands at scale](send-commands-multiple.md)
+ [Canceling a command](cancel-run-command.md)

# Running commands from the console


You can use Run Command, a tool in AWS Systems Manager, from the AWS Management Console to configure managed nodes without having to log in to them. This topic includes an example that shows how to [update SSM Agent](run-command-tutorial-update-software.md#rc-console-agentexample) on a managed node by using Run Command.

**Before you begin**  
Before you send a command using Run Command, verify that your managed nodes meet all Systems Manager [setup requirements](systems-manager-setting-up-nodes.md).

**To send a command using Run Command**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose a Systems Manager document.

1. In the **Command parameters** section, specify values for required parameters.

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or edge devices manually, or specifying a resource group.
**Tip**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. For **Other parameters**:
   + For **Comment**, enter information about this command.
   + For **Timeout (seconds)**, specify the number of seconds for the system to wait before failing the overall command execution. 

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) Choose a CloudWatch alarm to apply to your command for monitoring. To attach a CloudWatch alarm to your command, the IAM principal that runs the command must have permission for the `iam:createServiceLinkedRole` action. For more information about CloudWatch alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html). Note that if your alarm activates, any pending command invocations do not run.

1. (Optional) For **Output options**, to save the command output to a file, select the **Write command output to an S3 bucket** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. Choose **Run**.

For information about canceling a command, see [Canceling a command](cancel-run-command.md). 

## Rerunning commands


Systems Manager includes two options to help you rerun a command from the **Run Command** page in the Systems Manager console. 
+ **Rerun**: This button allows you to run the same command without making changes to it.
+ **Copy to new**: This button copies the settings of one command to a new command and gives you the option to edit those settings before you run it.

**To rerun a command**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose a command to rerun. You can rerun a command immediately after executing it from the command details page. Or, you can choose a command that you previously ran from the **Command history** tab.

1. Choose either **Rerun** to run the same command without changes, or choose **Copy to new** to edit the command settings before you run it.

# Running commands using a specific document version


You can use the document version parameter to specify which version of an AWS Systems Manager document to use when the command runs. You can specify one of the following options for this parameter:
+ `$DEFAULT`
+ `$LATEST`
+ A specific version number

Run the following procedure to run a command using the document version parameter. 

------
#### [ Linux ]

**To run commands using the AWS CLI on local Linux machines**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. List all available documents

   This command lists all of the documents available for your account based on AWS Identity and Access Management (IAM) permissions.

   ```
   aws ssm list-documents
   ```

1. Run the following command to view the different versions of a document. Replace *document name* with your own information.

   ```
   aws ssm list-document-versions \
       --name "document name"
   ```

1. Run the following command to run a command that uses an SSM document version. Replace each *example resource placeholder* with your own information.

   ```
   aws ssm send-command \
       --document-name "AWS-RunShellScript" \
       --parameters commands="echo Hello" \
       --instance-ids instance-ID \
       --document-version '$LATEST'
   ```

------
#### [ Windows ]

**To run commands using the AWS CLI on local Windows machines**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. List all available documents

   This command lists all of the documents available for your account based on AWS Identity and Access Management (IAM) permissions.

   ```
   aws ssm list-documents
   ```

1. Run the following command to view the different versions of a document. Replace *document name* with your own information.

   ```
   aws ssm list-document-versions ^
       --name "document name"
   ```

1. Run the following command to run a command that uses an SSM document version. Replace each *example resource placeholder* with your own information.

   ```
   aws ssm send-command ^
       --document-name "AWS-RunShellScript" ^
       --parameters commands="echo Hello" ^
       --instance-ids instance-ID ^
       --document-version "$LATEST"
   ```

------
#### [ PowerShell ]

**To run commands using the Tools for PowerShell**

1. Install and configure the AWS Tools for PowerShell (Tools for Windows PowerShell), if you haven't already.

   For information, see [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. List all available documents

   This command lists all of the documents available for your account based on AWS Identity and Access Management (IAM) permissions.

   ```
   Get-SSMDocumentList
   ```

1. Run the following command to view the different versions of a document. Replace *document name* with your own information.

   ```
   Get-SSMDocumentVersionList `
       -Name "document name"
   ```

1. Run the following command to run a command that uses an SSM document version. Replace each *example resource placeholder* with your own information.

   ```
   Send-SSMCommand `
       -DocumentName "AWS-RunShellScript" `
       -Parameter @{commands = "echo helloWorld"} `
       -InstanceIds "instance-ID" `
       -DocumentVersion '$LATEST'
   ```

------

# Run commands at scale


You can use Run Command, a tool in AWS Systems Manager, to run commands on a fleet of managed nodes by using the `targets` parameter. The `targets` parameter accepts a `Key,Value` combination based on tags that you specified for your managed nodes. When you run the command, the system locates and attempts to run the command on all managed nodes that match the specified tags. For more information about tagging managed instances, see [Tagging your AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tag-editor.html) in the *Tagging AWS Resources User Guide*. For information about tagging your managed IoT devices, see [Tag your AWS IoT Greengrass Version 2 resources](https://docs.aws.amazon.com/greengrass/v2/developerguide/tag-resources.html) in the *AWS IoT Greengrass Version 2 Developer Guide*. 

You can also use the `targets` parameter to target a list of specific managed node IDs, as described in the next section.

To control how commands run across hundreds or thousands of managed nodes, Run Command also includes parameters for restricting how many nodes can simultaneously process a request and how many errors can be thrown by a command before the command is canceled.

**Topics**
+ [Targeting multiple managed nodes](#send-commands-targeting)
+ [Using rate controls](#send-commands-rate)

## Targeting multiple managed nodes


You can run a command and target managed nodes by specifying tags, AWS resource group names, or managed node IDs. 

The following examples show the command format when using Run Command from the AWS Command Line Interface (AWS CLI). Replace each *example resource placeholder* with your own information. Sample commands in this section are truncated using `[...]`.

**Example 1: Targeting tags**

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:tag-name,Values=tag-value \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:tag-name,Values=tag-value ^
    [...]
```

------

**Example 2: Targeting an AWS resource group by name**

You can specify a maximum of one resource group name per command. When you create a resource group, we recommend including `AWS::SSM::ManagedInstance` and `AWS::EC2::Instance` as resource types in your grouping criteria. 

**Note**  
In order to send commands that target a resource group, you must have been granted AWS Identity and Access Management (IAM) permissions to list or view the resources that belong to that group. For more information, see [Set up permissions](https://docs.aws.amazon.com/ARG/latest/userguide/gettingstarted-prereqs.html#gettingstarted-prereqs-permissions) in the *AWS Resource Groups User Guide*. 

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=resource-groups:Name,Values=resource-group-name \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=resource-groups:Name,Values=resource-group-name ^
    [...]
```

------

**Example 3: Targeting an AWS resource group by resource type**

You can specify a maximum of five resource group types per command. When you create a resource group, we recommend including `AWS::SSM::ManagedInstance` and `AWS::EC2::Instance` as resource types in your grouping criteria.

**Note**  
In order to send commands that target a resource group, you must have been granted IAM permissions to list or view the resources that belong to that group. For more information, see [Set up permissions](https://docs.aws.amazon.com/ARG/latest/userguide/gettingstarted-prereqs.html#gettingstarted-prereqs-permissions) in the *AWS Resource Groups User Guide*. 

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=resource-groups:ResourceTypeFilters,Values=resource-type-1,resource-type-2 \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=resource-groups:ResourceTypeFilters,Values=resource-type-1,resource-type-2 ^
    [...]
```

------

**Example 4: Targeting instance IDs**

The following examples show how to target managed nodes by using the `instanceids` key with the `targets` parameter. You can use this key to target managed AWS IoT Greengrass core devices because each device is assigned an ID in the format mi-*ID_number*. You can view device IDs in Fleet Manager, a tool in AWS Systems Manager.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=instanceids,Values=instance-ID-1,instance-ID-2,instance-ID-3 \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=instanceids,Values=instance-ID-1,instance-ID-2,instance-ID-3 ^
    [...]
```

------

If you tagged managed nodes for different environments using a `Key` named `Environment` and `Values` of `Development`, `Test`, `Pre-production` and `Production`, then you could send a command to all managed nodes in *one* of these environments by using the `targets` parameter with the following syntax.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Environment,Values=Development \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Environment,Values=Development ^
    [...]
```

------

You could target additional managed nodes in other environments by adding to the `Values` list. Separate items using commas.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Environment,Values=Development,Test,Pre-production \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Environment,Values=Development,Test,Pre-production ^
    [...]
```

------

**Variation**: Refining your targets using multiple `Key` criteria

You can refine the number of targets for your command by including multiple `Key` criteria. If you include more than one `Key` criteria, the system targets managed nodes that meet *all* of the criteria. The following command targets all managed nodes tagged for the Finance Department *and* tagged for the database server role.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Department,Values=Finance Key=tag:ServerRole,Values=Database \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Department,Values=Finance Key=tag:ServerRole,Values=Database ^
    [...]
```

------

**Variation**: Using multiple `Key` and `Value` criteria

Expanding on the previous example, you can target multiple departments and multiple server roles by including additional items in the `Values` criteria.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Department,Values=Finance,Marketing Key=tag:ServerRole,Values=WebServer,Database \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Department,Values=Finance,Marketing Key=tag:ServerRole,Values=WebServer,Database ^
    [...]
```

------

**Variation**: Targeting tagged managed nodes using multiple `Values` criteria

If you tagged managed nodes for different environments using a `Key` named `Department` and `Values` of `Sales` and `Finance`, then you could send a command to all of the nodes in these environments by using the `targets` parameter with the following syntax.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Department,Values=Sales,Finance \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Department,Values=Sales,Finance ^
    [...]
```

------

You can specify a maximum of five keys, and five values for each key.

If either a tag key (the tag name) or a tag value includes spaces, enclose the tag key or the value in quotation marks, as shown in the following examples.

**Example**: Spaces in `Value` tag

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:OS,Values="Windows Server 2016" \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:OS,Values="Windows Server 2016" ^
    [...]
```

------

**Example**: Spaces in `tag` key and `Value`

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key="tag:Operating System",Values="Windows Server 2016" \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key="tag:Operating System",Values="Windows Server 2016" ^
    [...]
```

------

**Example**: Spaces in one item in a list of `Values`

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --targets Key=tag:Department,Values="Sales","Finance","Systems Mgmt" \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --targets Key=tag:Department,Values="Sales","Finance","Systems Mgmt" ^
    [...]
```

------

## Using rate controls


You can control the rate at which commands are sent to managed nodes in a group by using *concurrency controls* and *error controls*.

**Topics**
+ [Using concurrency controls](#send-commands-velocity)
+ [Using error controls](#send-commands-maxerrors)

### Using concurrency controls


You can control the number of managed nodes that run a command simultaneously by using the `max-concurrency` parameter (the **Concurrency** option on the **Run a command** page). You can specify either an absolute number of managed nodes, for example **10**, or a percentage of the target set, for example **10%**. The queueing system delivers the command to a single node and waits for that initial invocation to be acknowledged before sending the command to two more nodes. The system continues sending the command to exponentially more nodes until it reaches the `max-concurrency` value. The default value for `max-concurrency` is **50**. The following examples show you how to specify values for the `max-concurrency` parameter.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --max-concurrency 10 \
    --targets Key=tag:Environment,Values=Development \
    [...]
```

```
aws ssm send-command \
    --document-name document-name \
    --max-concurrency 10% \
    --targets Key=tag:Department,Values=Finance,Marketing Key=tag:ServerRole,Values=WebServer,Database \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --max-concurrency 10 ^
    --targets Key=tag:Environment,Values=Development ^
    [...]
```

```
aws ssm send-command ^
    --document-name document-name ^
    --max-concurrency 10% ^
    --targets Key=tag:Department,Values=Finance,Marketing Key=tag:ServerRole,Values=WebServer,Database ^
    [...]
```

------
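As a rough illustration of the ramp-up described above, the following sketch simulates the growing batch sizes for 50 targets and a `max-concurrency` of 10. The doubling schedule is an assumption; Systems Manager only documents that delivery grows exponentially up to the `max-concurrency` value.

```
# Simulated ramp-up: batch size starts at 1 and doubles each wave,
# capped at max-concurrency and at the number of remaining targets.
total=50
max_concurrency=10
sent=0
batch=1
while [ "$sent" -lt "$total" ]; do
    wave=$batch
    if [ "$wave" -gt "$max_concurrency" ]; then wave=$max_concurrency; fi
    remaining=$((total - sent))
    if [ "$wave" -gt "$remaining" ]; then wave=$remaining; fi
    echo "wave of $wave node(s)"
    sent=$((sent + wave))
    batch=$((batch * 2))
done
echo "sent=$sent"
```

With these values the simulated waves are 1, 2, 4, 8, 10, 10, 10, and 5 nodes.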

### Using error controls


You can also control the execution of a command across hundreds or thousands of managed nodes by setting an error limit using the `max-errors` parameter (the **Error threshold** field on the **Run a command** page). The parameter specifies how many errors are allowed before the system stops sending the command to additional managed nodes. You can specify either an absolute number of errors, for example **10**, or a percentage of the target set, for example **10%**. If you specify **3**, for example, the system stops sending the command when the fourth error is received. If you specify **0**, the system stops sending the command to additional managed nodes after the first error result is returned. If you send a command to 50 managed nodes and set `max-errors` to **10%**, then the system stops sending the command to additional nodes when the sixth error is received.

Invocations that are already running a command when `max-errors` is reached are allowed to complete, but some of these invocations might fail as well. If you need to ensure that there won't be more than `max-errors` failed invocations, set `max-concurrency` to **1** so the invocations proceed one at a time. The default value for `max-errors` is **0**. The following examples show you how to specify values for the `max-errors` parameter.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name document-name \
    --max-errors 10 \
    --targets Key=tag:Database,Values=Development \
    [...]
```

```
aws ssm send-command \
    --document-name document-name \
    --max-errors 10% \
    --targets Key=tag:Environment,Values=Development \
    [...]
```

```
aws ssm send-command \
    --document-name document-name \
    --max-concurrency 1 \
    --max-errors 1 \
    --targets Key=tag:Environment,Values=Production \
    [...]
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name document-name ^
    --max-errors 10 ^
    --targets Key=tag:Database,Values=Development ^
    [...]
```

```
aws ssm send-command ^
    --document-name document-name ^
    --max-errors 10% ^
    --targets Key=tag:Environment,Values=Development ^
    [...]
```

```
aws ssm send-command ^
    --document-name document-name ^
    --max-concurrency 1 ^
    --max-errors 1 ^
    --targets Key=tag:Environment,Values=Production ^
    [...]
```

------
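The percentage-to-count conversion described above can be sketched as follows. The rounding-down behavior is inferred from the 50-node, `--max-errors 10%` example earlier in this section.

```
# For 50 targeted nodes and --max-errors 10%, the allowed error count
# is 50 * 10 / 100 = 5 (integer division), so Systems Manager stops
# sending the command when the sixth error is received.
targets=50
max_errors_pct=10
allowed=$((targets * max_errors_pct / 100))
stops_on=$((allowed + 1))
echo "allowed=$allowed stops_on=$stops_on"
```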

# Canceling a command


You can attempt to cancel a command as long as the service shows that it's in either a Pending or Executing state. However, even if a command is still in one of these states, we can't guarantee that the command will be canceled and the underlying process stopped. 

**To cancel a command using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Select the command invocation that you want to cancel.

1. Choose **Cancel command**.

**To cancel a command using the AWS CLI**  
Run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm cancel-command \
    --command-id "command-ID" \
    --instance-ids "instance-ID"
```

------
#### [ Windows ]

```
aws ssm cancel-command ^
    --command-id "command-ID" ^
    --instance-ids "instance-ID"
```

------

For information about the status of a canceled command, see [Understanding command statuses](monitor-commands.md).

# Using exit codes in commands


In some cases, you might need to manage how your commands are handled by using exit codes.

## Specify exit codes in commands


Using Run Command, a tool in AWS Systems Manager, you can specify exit codes to determine how commands are handled. By default, the exit code of the last command run in a script is reported as the exit code for the entire script. For example, suppose you have a script that contains three commands. The first command fails, but the following two succeed. Because the final command succeeded, the status of the execution is reported as `succeeded`.
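
This default behavior can be demonstrated with a short shell snippet; the following is an illustrative example run outside of Run Command, not tied to any particular SSM document:

```shell
#!/bin/bash
# Illustrates the default behavior: a script's exit code is the exit
# code of its *last* command, even though an earlier command failed.
bash -c 'false          # command 1: fails (exit code 1)
         true           # command 2: succeeds
         echo "done"'   # command 3: succeeds, so the script exits 0
echo "script exit code: $?"   # prints: script exit code: 0
```

Run locally, the inner script reports exit code 0 even though its first command failed, which is why Run Command would report the execution as succeeded.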

**Shell scripts**  
To fail the entire script at the first command failure, you can include a shell conditional statement to exit the script if any command before the final one fails. Use the following approach.

```
<command 1>
if [ $? != 0 ]
then
    exit <N>
fi
<command 2>
<command 3>
```

In the following example, the entire script fails if the first command fails.

```
cd /test
if [ $? != 0 ]
then
    echo "Failed"
    exit 1
fi
date
```
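
An equivalent, more compact pattern uses the shell's `||` operator. The following is an illustrative sketch of the same fail-fast check; the `/tmp` directory is used only so the example runs cleanly outside Run Command:

```shell
#!/bin/bash
# The right-hand side of '||' runs only when the left-hand command
# exits with a nonzero code, so this fails fast like the 'if' block
# above. /tmp is used here only so the example succeeds when run.
cd /tmp || { echo "Failed"; exit 1; }
date
```

If `cd` fails, the script exits with code 1 and Run Command reports the invocation as failed.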

**PowerShell scripts**  
PowerShell requires that you call `exit` explicitly in your scripts for Run Command to successfully capture the exit code.

```
<command 1>
if ($?) {<do something>}
else {exit <N>}
<command 2>
<command 3>
exit <N>
```

Here is an example:

```
cd C:\
if ($?) {echo "Success"}
else {exit 1}
date
```

# Handling reboots when running commands


If you use Run Command, a tool in AWS Systems Manager, to run scripts that reboot managed nodes, we recommend that you specify an exit code in your script. If you attempt to reboot a node from a script by using some other mechanism, the script execution status might not be updated correctly, even if the reboot is the last step in your script. For Windows managed nodes, specify `exit 3010` in your script. For Linux and macOS managed nodes, specify `exit 194`. The exit code instructs AWS Systems Manager Agent (SSM Agent) to reboot the managed node, and then restart the script after the reboot is complete. Before starting the reboot, SSM Agent informs the Systems Manager service in the cloud that communication will be disrupted during the server reboot.

**Note**  
The reboot script can't be part of an `aws:runDocument` plugin. If a document contains the reboot script and another document tries to run that document through the `aws:runDocument` plugin, SSM Agent returns an error.

**Create idempotent scripts**

When developing scripts that reboot managed nodes, make the scripts idempotent so that script execution continues where it left off after the reboot. Idempotent scripts manage state and validate whether an action has already been performed. This prevents a step from running multiple times when it's intended to run only once.

Here is an outline example of an idempotent script that reboots a managed node multiple times.

```
$name = Get current computer name
If ($name -ne $desiredName)
    {
        Rename computer
        exit 3010
    }

$domain = Get current domain name
If ($domain -ne $desiredDomain)
    {
        Join domain
        exit 3010
    }

If (desired package not installed)
    {
        Install package
        exit 3010
    }
```
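
The same check-before-act pattern can be sketched for Linux nodes. The following is a hedged illustration using a marker file; the marker path is illustrative, and the `exit 194` line is commented out so the sketch can run outside Run Command (on a real managed node you would uncomment it):

```shell
#!/bin/bash
# Idempotency via a marker file: the one-time step runs only if its
# marker is absent, so re-running the script after a reboot skips it.
MARKER=/tmp/step1.done            # illustrative path
if [ ! -f "$MARKER" ]; then
    echo "performing one-time step"   # e.g. install or configure here
    touch "$MARKER"
    # exit 194   # on a managed node, ask SSM Agent to reboot and re-run
fi
echo "step complete"
```

On the second pass after the reboot, the marker file exists, so the one-time step is skipped and the script continues with the next step.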

**Examples**

The following script samples use exit codes to restart managed nodes. The Linux example installs package updates on Amazon Linux, and then restarts the node. The Windows Server example installs the Telnet-Client on the node, and then restarts it. 

------
#### [ Amazon Linux 2 ]

```
#!/bin/bash
yum -y update
needs-restarting -r
if [ $? -eq 1 ]
then
        exit 194
else
        exit 0
fi
```

------
#### [ Windows ]

```
$telnet = Get-WindowsFeature -Name Telnet-Client
if (-not $telnet.Installed)
    { 
        # Install Telnet and then send a reboot request to SSM Agent.
        Install-WindowsFeature -Name "Telnet-Client"
        exit 3010 
    }
```

------

# Understanding command statuses


Run Command, a tool in AWS Systems Manager, reports detailed status information about the different states a command experiences during processing and for each managed node that processed the command. You can monitor command statuses using the following methods:
+ Choose the **Refresh** icon on the **Commands** tab in the Run Command console interface.
+ Call [list-commands](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-commands.html) or [list-command-invocations](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-command-invocations.html) using the AWS Command Line Interface (AWS CLI). Or call [Get-SSMCommand](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-SSMCommand.html) or [Get-SSMCommandInvocation](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-SSMCommandInvocation.html) using AWS Tools for Windows PowerShell.
+ Configure Amazon EventBridge to respond to state or status changes.
+ Configure Amazon Simple Notification Service (Amazon SNS) to send notifications for all status changes or specific statuses such as `Failed` or `TimedOut`.

## Run Command status


Run Command reports status details for three areas: plugins, invocations, and an overall command status. A *plugin* is a code-execution block that is defined in your command's SSM document. For more information about plugins, see [Command document plugin reference](documents-command-ssm-plugin-reference.md).

When you send a command to multiple managed nodes at the same time, each copy of the command targeting each node is a *command invocation*. For example, if you use the `AWS-RunShellScript` document and send an `ifconfig` command to 20 Linux instances, that command has 20 invocations. Each command invocation individually reports status. The plugins for a given command invocation individually report status as well. 

Lastly, Run Command includes an aggregated command status for all plugins and invocations. The aggregated command status can be different than the status reported by plugins or invocations, as noted in the following tables.

**Note**  
If you run commands to large numbers of managed nodes using the `max-concurrency` or `max-errors` parameters, command status reflects the limits imposed by those parameters, as described in the following tables. For more information about these parameters, see [Run commands at scale](send-commands-multiple.md).


**Detailed status for command plugins and invocations**  

| Status | Details | 
| --- | --- | 
| Pending | The command hasn't yet been sent to the managed node or hasn't been received by SSM Agent. If the command isn't received by the agent within the total timeout (the sum of the **Timeout (seconds)** parameter and the **Execution timeout** parameter), the status changes to Delivery Timed Out. | 
| InProgress | Systems Manager is attempting to send the command to the managed node, or the command was received by SSM Agent and has started running on the instance. Depending on the result of all command plugins, the status changes to Success, Failed, Delivery Timed Out, or Execution Timed Out. Exception: If the agent isn't running or available on the node, the command status remains at In Progress until the agent is available again, or until the execution timeout limit is reached. The status then changes to a terminal state. | 
| Delayed | The system attempted to send the command to the managed node but wasn't successful. The system retries again. | 
| Success | This status is returned under a variety of conditions. This status doesn't mean the command was processed on the node. For example, the command can be received by SSM Agent on the managed node and return an exit code of zero as a result of your PowerShell ExecutionPolicy preventing the command from running. This is a terminal state. Conditions that result in a command returning a Success status are: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/monitor-commands.html)  The same conditions apply when targeting resource groups. To troubleshoot errors or get more information about the command execution, send a command that handles errors or exceptions by returning appropriate exit codes (non-zero exit codes for command failure).  | 
| DeliveryTimedOut | The command wasn't delivered to the managed node before the total timeout expired. Total timeouts don't count against the parent command’s max-errors limit, but they do contribute to whether the parent command status is Success, Incomplete, or Delivery Timed Out. This is a terminal state. | 
| ExecutionTimedOut | The command started running on the managed node, but it wasn't completed before the execution timeout expired. Execution timeouts count as failures: SSM Agent sends a non-zero reply, and Systems Manager stops the attempt to run the command and reports a failure status. | 
| Failed |  The command wasn't successful on the managed node. For a plugin, this indicates that the result code wasn't zero. For a command invocation, this indicates that the result code for one or more plugins wasn't zero. Invocation failures count against the max-errors limit of the parent command. This is a terminal state. | 
| Cancelled | The command was canceled before it was completed. This is a terminal state. | 
| Undeliverable | The command can't be delivered to the managed node. The node might not exist or it might not be responding. Undeliverable invocations don't count against the parent command’s max-errors limit, but they do contribute to whether the parent command status is Success or Incomplete. For example, if all invocations in a command have the status Undeliverable, then the command status returned is Failed. However, if a command has five invocations, four of which return the status Undeliverable and one of which returns the status Success, then the parent command's status is Success. This is a terminal state. | 
| Terminated | The parent command exceeded its max-errors limit and subsequent command invocations were canceled by the system. This is a terminal state. | 
| InvalidPlatform | The command was sent to a managed node that didn't match the required platforms specified by the chosen document. Invalid Platform doesn't count against the parent command’s max-errors limit, but it does contribute to whether the parent command status is Success or Failed. For example, if all invocations in a command have the status Invalid Platform, then the command status returned is Failed. However, if a command has five invocations, four of which return the status Invalid Platform and one of which returns the status Success, then the parent command's status is Success. This is a terminal state. | 
| AccessDenied | The AWS Identity and Access Management (IAM) user or role initiating the command doesn't have access to the targeted managed node. Access Denied doesn't count against the parent command’s max-errors limit, but it does contribute to whether the parent command status is Success or Failed. For example, if all invocations in a command have the status Access Denied, then the command status returned is Failed. However, if a command has five invocations, four of which return the status Access Denied and one of which returns the status Success, then the parent command's status is Success. This is a terminal state. | 


**Detailed status for a command**  

| Status | Details | 
| --- | --- | 
| Pending | The command wasn't yet received by an agent on any managed nodes. | 
| InProgress | The command has been sent to at least one managed node but hasn't reached a final state on all nodes.  | 
| Delayed | The system attempted to send the command to the node but wasn't successful. The system retries again. | 
| Success | The command was received by SSM Agent on all specified or targeted managed nodes and returned an exit code of zero. All command invocations have reached a terminal state, and the value of max-errors wasn't reached. This status doesn't mean the command was successfully processed on all specified or targeted managed nodes. This is a terminal state.  To troubleshoot errors or get more information about the command execution, send a command that handles errors or exceptions by returning appropriate exit codes (non-zero exit codes for command failure).  | 
| DeliveryTimedOut | The command wasn't delivered to the managed node before the total timeout expired. The value of max-errors or more command invocations shows a status of Delivery Timed Out. This is a terminal state. | 
| Failed |  The command wasn't successful on the managed node. The value of `max-errors` or more command invocations shows a status of `Failed`. This is a terminal state.  | 
| Incomplete | The command was attempted on all managed nodes and one or more of the invocations doesn't have a value of Success. However, not enough invocations failed for the status to be Failed. This is a terminal state. | 
| Cancelled | The command was canceled before it was completed. This is a terminal state. | 
| RateExceeded | The number of managed nodes targeted by the command exceeded the account quota for pending invocations. The system has canceled the command before executing it on any node. This is a terminal state. | 
| AccessDenied | The user or role initiating the command doesn't have access to the targeted resource group. AccessDenied doesn't count against the parent command’s max-errors limit, but does contribute to whether the parent command status is Success or Failed. (For example, if all invocations in a command have the status AccessDenied, then the command status returned is Failed. However, if a command has 5 invocations, 4 of which return the status AccessDenied and 1 of which returns the status Success, then the parent command's status is Success.) This is a terminal state. | 
| No Instances In Tag | The tag key-pair value or resource group targeted by the command doesn't match any managed nodes. This is a terminal state. | 

## Understanding command timeout values


Systems Manager enforces the following timeout values when running commands.

**Total Timeout**  
In the Systems Manager console, you specify the timeout value in the **Timeout (seconds)** field. After a command is sent, Run Command checks whether the command has expired. If a command reaches its expiration limit (total timeout), the status changes to `DeliveryTimedOut` for all invocations that have the status `InProgress`, `Pending`, or `Delayed`.

![\[The Timeout (seconds) field in the Systems Manager console\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/run-command-delivery-time-out-time-out-seconds.png)


On a more technical level, total timeout (**Timeout (seconds)**) is a combination of two timeout values, as shown here: 

`Total timeout = "Timeout(seconds)" from the console + "timeoutSeconds": "{{ executionTimeout }}" from your SSM document`

For example, the default value of **Timeout (seconds)** in the Systems Manager console is 600 seconds. If you run a command by using the `AWS-RunShellScript` SSM document, the default value of `"timeoutSeconds": "{{ executionTimeout }}"` is 3600 seconds, as shown in the following document sample:

```
  "executionTimeout": {
      "type": "String",
      "default": "3600",

  "runtimeConfig": {
    "aws:runShellScript": {
      "properties": [
        {
          "timeoutSeconds": "{{ executionTimeout }}"
```

This means the command runs for 4,200 seconds (70 minutes) before the system sets the command status to `DeliveryTimedOut`.
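
The arithmetic can be sketched directly; the values below are the defaults quoted above (the console default of 600 seconds and the `AWS-RunShellScript` default of 3600 seconds):

```shell
#!/bin/bash
# Total timeout = console "Timeout (seconds)" + the document's
# executionTimeout, using the defaults described above.
console_timeout=600        # Systems Manager console default
execution_timeout=3600     # AWS-RunShellScript default
echo "total=$((console_timeout + execution_timeout))"   # prints: total=4200
```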

**Execution Timeout**  
In the Systems Manager console, you specify the execution timeout value in the **Execution Timeout** field, if available. Not all SSM documents require that you specify an execution timeout. The **Execution Timeout** field is only displayed when a corresponding input parameter has been defined in the SSM document. If specified, the command must complete within this time period.

**Note**  
Run Command relies on the SSM Agent document terminal response to determine whether or not the command was delivered to the agent. SSM Agent must send an `ExecutionTimedOut` signal for an invocation or command to be marked as `ExecutionTimedOut`.

![\[The Execution Timeout field in the Systems Manager console\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/run-command-execution-timeout-console.png)


**Default Execution Timeout**  
If an SSM document doesn't require that you explicitly specify an execution timeout value, then Systems Manager enforces the hard-coded default execution timeout.

**How Systems Manager reports timeouts**  
If Systems Manager receives an `execution timeout` reply from SSM Agent on a target, then Systems Manager marks the command invocation as `executionTimeout`.

If Run Command doesn't receive a document terminal response from SSM Agent, the command invocation is marked as `deliveryTimeout`.

To determine timeout status on a target, SSM Agent combines all parameters and the content of the SSM document to calculate the `executionTimeout` value. When SSM Agent determines that a command has timed out, it sends `executionTimeout` to the service.

The default for **Timeout (seconds)** is 3600 seconds. The default for **Execution Timeout** is also 3600 seconds. Therefore, the total default timeout for a command is 7200 seconds.

**Note**  
SSM Agent processes `executionTimeout` differently depending on the type of SSM document and the document version. 

# Run Command walkthroughs


The walkthroughs in this section show you how to run commands with Run Command, a tool in AWS Systems Manager, using either the AWS Command Line Interface (AWS CLI) or AWS Tools for Windows PowerShell.

**Topics**
+ [

# Updating software using Run Command
](run-command-tutorial-update-software.md)
+ [

# Walkthrough: Use the AWS CLI with Run Command
](walkthrough-cli.md)
+ [

# Walkthrough: Use the AWS Tools for Windows PowerShell with Run Command
](walkthrough-powershell.md)

You can also view sample commands in the following references.
+ [Systems Manager AWS CLI Reference](https://docs.aws.amazon.com/cli/latest/reference/ssm/)
+ [AWS Tools for Windows PowerShell - AWS Systems Manager](https://docs.aws.amazon.com/powershell/latest/reference/items/SimpleSystemsManagement_cmdlets.html)

# Updating software using Run Command


The following procedures describe how to update software on your managed nodes.

## Updating the SSM Agent using Run Command


The following procedure describes how to update the SSM Agent running on your managed nodes. You can either update to the latest version of SSM Agent or downgrade to an older version. When you run the command, the system downloads the version from AWS, installs it, and then uninstalls the version that existed before the command was run. If an error occurs during this process, the system rolls back to the version that was on the server before the command was run, and the command status shows that the command failed.

**Note**  
If an instance is running macOS version 13.0 (Ventura) or later, the instance must have SSM Agent version 3.1.941.0 or later to run the AWS-UpdateSSMAgent document. If the instance is running a version of SSM Agent released before 3.1.941.0, you can update SSM Agent so that it can run the AWS-UpdateSSMAgent document by running the `brew update` and `brew upgrade amazon-ssm-agent` commands.

To be notified about SSM Agent updates, subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub.

**To update SSM Agent using Run Command**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose **`AWS-UpdateSSMAgent`**.

1. In the **Command parameters** section, optionally specify values for the following parameters:

   1. (Optional) For **Version**, enter the version of SSM Agent to install. You can install [older versions](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) of the agent. If you don't specify a version, the service installs the latest version.

   1. (Optional) For **Allow Downgrade**, choose **true** to install an earlier version of SSM Agent. If you choose this option, specify the [earlier](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) version number. Choose **false** to install only the newest version of SSM Agent.

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or edge devices manually, or specifying a resource group.
**Tip**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. For **Other parameters**:
   + For **Comment**, enter information about this command.
   + For **Timeout (seconds)**, specify the number of seconds for the system to wait before failing the overall command execution. 

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) For **Output options**, to save the command output to a file, select the **Write command output to an S3 bucket** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. Choose **Run**.

## Updating PowerShell using Run Command


The following procedure describes how to update PowerShell to version 5.1 on your Windows Server 2012 and 2012 R2 managed nodes. The script provided in this procedure downloads the Windows Management Framework (WMF) version 5.1 update and starts its installation. The node reboots during this process because a restart is required when installing WMF 5.1. The download and installation of the update takes approximately five minutes to complete.

**To update PowerShell using Run Command**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose **`AWS-RunPowerShellScript`**.

1. In the **Commands** section, paste the following commands for your operating system.

------
#### [ Windows Server 2012 R2 ]

   ```
   Set-Location -Path "C:\Windows\Temp"
   
   Invoke-WebRequest "https://go.microsoft.com/fwlink/?linkid=839516" -OutFile "Win8.1AndW2K12R2-KB3191564-x64.msu"
   
   Start-Process -FilePath "$env:systemroot\system32\wusa.exe" -Verb RunAs -ArgumentList ('Win8.1AndW2K12R2-KB3191564-x64.msu', '/quiet')
   ```

------
#### [ Windows Server 2012 ]

   ```
   Set-Location -Path "C:\Windows\Temp"
   
   Invoke-WebRequest "https://go.microsoft.com/fwlink/?linkid=839513" -OutFile "W2K12-KB3191565-x64.msu"
   
   Start-Process -FilePath "$env:systemroot\system32\wusa.exe" -Verb RunAs -ArgumentList ('W2K12-KB3191565-x64.msu', '/quiet')
   ```

------

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or edge devices manually, or specifying a resource group.
**Tip**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. For **Other parameters**:
   + For **Comment**, enter information about this command.
   + For **Timeout (seconds)**, specify the number of seconds for the system to wait before failing the overall command execution. 

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) For **Output options**, to save the command output to a file, select the **Write command output to an S3 bucket** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. Choose **Run**.

After the managed node reboots and the installation of the update is complete, connect to your node to confirm that PowerShell successfully upgraded to version 5.1. To check the version of PowerShell on your node, open PowerShell and enter `$PSVersionTable`. The `PSVersion` value in the output table shows 5.1 if the upgrade was successful.

If the `PSVersion` value is different than 5.1, for example 3.0 or 4.0, review the **Setup** logs in Event Viewer under **Windows Logs**. These logs indicate why the update installation failed.

# Walkthrough: Use the AWS CLI with Run Command

The following sample walkthrough shows you how to use the AWS Command Line Interface (AWS CLI) to view information about commands and command parameters, how to run commands, and how to view the status of those commands. 

**Important**  
Only trusted administrators should be allowed to use AWS Systems Manager pre-configured documents shown in this topic. The commands or scripts specified in Systems Manager documents run with administrative permissions on your managed nodes. If a user has permission to run any of the pre-defined Systems Manager documents (any document that begins with `AWS-`), then that user also has administrator access to the node. For all other users, you should create restrictive documents and share them with specific users.

**Topics**
+ [

## Step 1: Getting started
](#walkthrough-cli-settings)
+ [

## Step 2: Run shell scripts to view resource details
](#walkthrough-cli-run-scripts)
+ [

## Step 3: Send simple commands using the `AWS-RunShellScript` document
](#walkthrough-cli-example-1)
+ [

## Step 4: Run a simple Python script using Run Command
](#walkthrough-cli-example-2)
+ [

## Step 5: Run a Bash script using Run Command
](#walkthrough-cli-example-3)

## Step 1: Getting started


You must either have administrator permissions on the managed node you want to configure, or you must have been granted the appropriate permissions in AWS Identity and Access Management (IAM). Also note that this example uses the US East (Ohio) Region (us-east-2). Run Command is available in the AWS Regions listed in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*. For more information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).

**To run commands using the AWS CLI**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. List all available documents.

   This command lists all of the documents available for your account based on IAM permissions. 

   ```
   aws ssm list-documents
   ```

1. Verify that a managed node is ready to receive commands.

   The output of the following command shows if managed nodes are online.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-instance-information \
       --output text --query "InstanceInformationList[*]"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-instance-information ^
       --output text --query "InstanceInformationList[*]"
   ```

------

1. Run the following command to view details about a particular managed node.
**Note**  
To run the commands in this walkthrough, replace the instance and command IDs. For managed AWS IoT Greengrass core devices, use the mi-*ID_number* for the instance ID. The command ID is returned as a response to **send-command**. Instance IDs are available from Fleet Manager, a tool in AWS Systems Manager.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-instance-information \
       --instance-information-filter-list key=InstanceIds,valueSet=instance-ID
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-instance-information ^
       --instance-information-filter-list key=InstanceIds,valueSet=instance-ID
   ```

------

## Step 2: Run shell scripts to view resource details


Using Run Command and the `AWS-RunShellScript` document, you can run any command or script on a managed node as if you were logged on locally.

**View the description and available parameters**

Run the following command to view a description of the Systems Manager JSON document.

------
#### [ Linux & macOS ]

```
aws ssm describe-document \
    --name "AWS-RunShellScript" \
    --query "[Document.Name,Document.Description]"
```

------
#### [ Windows ]

```
aws ssm describe-document ^
    --name "AWS-RunShellScript" ^
    --query "[Document.Name,Document.Description]"
```

------

Run the following command to view the available parameters and details about those parameters.

------
#### [ Linux & macOS ]

```
aws ssm describe-document \
    --name "AWS-RunShellScript" \
    --query "Document.Parameters[*]"
```

------
#### [ Windows ]

```
aws ssm describe-document ^
    --name "AWS-RunShellScript" ^
    --query "Document.Parameters[*]"
```

------

## Step 3: Send simple commands using the `AWS-RunShellScript` document


Run the following command to get IP information for a Linux managed node.

If you're targeting a Windows Server managed node, change the `document-name` to `AWS-RunPowerShellScript` and change the `command` from `ifconfig` to `ipconfig`.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --instance-ids "instance-ID" \
    --document-name "AWS-RunShellScript" \
    --comment "IP config" \
    --parameters commands=ifconfig \
    --output text
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --instance-ids "instance-ID" ^
    --document-name "AWS-RunShellScript" ^
    --comment "IP config" ^
    --parameters commands=ifconfig ^
    --output text
```

------

**Get command information with response data**  
The following command uses the command ID that was returned from the previous command to get the details and response data of the command execution. The system returns the response data after the command completes. If the command execution shows `"Pending"` or `"InProgress"`, run this command again to see the response data.

------
#### [ Linux & macOS ]

```
aws ssm list-command-invocations \
    --command-id "command-ID" \
    --details
```

------
#### [ Windows ]

```
aws ssm list-command-invocations ^
    --command-id "command-ID" ^
    --details
```

------
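If you need to wait for a command to finish before reading its output, you can wrap the call above in a small polling loop. The following is a minimal sketch, not part of the walkthrough: `command-ID` is a placeholder, and the terminal states listed in the comment aren't exhaustive.

```shell
#!/bin/bash
# Sketch: poll an invocation until it leaves the Pending/InProgress states.

is_terminal() {
  # Returns success (0) once the invocation can no longer change state.
  case "$1" in
    Pending|InProgress|Delayed) return 1 ;;
    *) return 0 ;;   # Success, Failed, Cancelled, TimedOut, ...
  esac
}

poll_invocation() {
  local command_id="$1" status
  while true; do
    status=$(aws ssm list-command-invocations \
        --command-id "$command_id" \
        --query "CommandInvocations[0].Status" \
        --output text)
    echo "Status: $status"
    is_terminal "$status" && break
    sleep 5
  done
}

# Usage: poll_invocation "command-ID"
```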

**Identify user**

The following command displays the default user running the commands. 

------
#### [ Linux & macOS ]

```
sh_command_id=$(aws ssm send-command \
    --instance-ids "instance-ID" \
    --document-name "AWS-RunShellScript" \
    --comment "Demo run shell script on Linux managed node" \
    --parameters commands=whoami \
    --output text \
    --query "Command.CommandId")
```

------

**Get command status**  
The following command uses the Command ID to get the status of the command execution on the managed node. This example uses the Command ID that was returned in the previous command. 

------
#### [ Linux & macOS ]

```
aws ssm list-commands \
    --command-id "command-ID"
```

------
#### [ Windows ]

```
aws ssm list-commands ^
    --command-id "command-ID"
```

------

**Get command details**  
The following command uses the Command ID from the previous command to get the status of the command execution on a per managed node basis.

------
#### [ Linux & macOS ]

```
aws ssm list-command-invocations \
    --command-id "command-ID" \
    --details
```

------
#### [ Windows ]

```
aws ssm list-command-invocations ^
    --command-id "command-ID" ^
    --details
```

------

**Get command information with response data for a specific managed node**  
The following command returns the output of the original `aws ssm send-command` request for a specific managed node. 

------
#### [ Linux & macOS ]

```
aws ssm list-command-invocations \
    --instance-id instance-ID \
    --command-id "command-ID" \
    --details
```

------
#### [ Windows ]

```
aws ssm list-command-invocations ^
    --instance-id instance-ID ^
    --command-id "command-ID" ^
    --details
```

------

**Display Python version**

The following command returns the version of Python running on a node.

------
#### [ Linux & macOS ]

```
sh_command_id=$(aws ssm send-command \
    --instance-ids "instance-ID" \
    --document-name "AWS-RunShellScript" \
    --comment "Demo run shell script on Linux Instances" \
    --parameters commands='python -V' \
    --output text --query "Command.CommandId") \
    sh -c 'aws ssm list-command-invocations \
    --command-id "$sh_command_id" \
    --details \
    --query "CommandInvocations[].CommandPlugins[].{Status:Status,Output:Output}"'
```

------

## Step 4: Run a simple Python script using Run Command


The following command runs a simple Python "Hello World" script using Run Command.

------
#### [ Linux & macOS ]

```
sh_command_id=$(aws ssm send-command \
    --instance-ids "instance-ID" \
    --document-name "AWS-RunShellScript" \
    --comment "Demo run shell script on Linux Instances" \
    --parameters '{"commands":["#!/usr/bin/python","print \"Hello World from python\""]}' \
    --output text \
    --query "Command.CommandId") \
    sh -c 'aws ssm list-command-invocations \
    --command-id "$sh_command_id" \
    --details \
    --query "CommandInvocations[].CommandPlugins[].{Status:Status,Output:Output}"'
```

------

## Step 5: Run a Bash script using Run Command


The examples in this section demonstrate how to run the following bash script using Run Command.

For examples of using Run Command to run scripts stored in remote locations, see [Running scripts from Amazon S3](integration-s3.md) and [Running scripts from GitHub](integration-remote-scripts.md).

```
#!/bin/bash
yum -y update
yum install -y ruby
cd /home/ec2-user
curl -O https://aws-codedeploy-us-east-2.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto
```

This script installs the AWS CodeDeploy agent on Amazon Linux and Red Hat Enterprise Linux (RHEL) instances, as described in [Create an Amazon EC2 instance for CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/instances-ec2-create.html) in the *AWS CodeDeploy User Guide*.

The script installs the CodeDeploy agent from an AWS managed S3 bucket in the US East (Ohio) Region (us-east-2), `aws-codedeploy-us-east-2`.

**Run a bash script in an AWS CLI command**

The following sample demonstrates how to include the bash script in a CLI command using the `--parameters` option.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets '[{"Key":"InstanceIds","Values":["instance-id"]}]' \
    --parameters '{"commands":["#!/bin/bash","yum -y update","yum install -y ruby","cd /home/ec2-user","curl -O https://aws-codedeploy-us-east-2.s3.amazonaws.com/latest/install","chmod +x ./install","./install auto"]}'
```

------

**Run a bash script in a JSON file**

In the following example, the content of the bash script is stored in a JSON file, and the file is included in the command using the `--cli-input-json` option.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets "Key=InstanceIds,Values=instance-id" \
    --cli-input-json file://installCodeDeployAgent.json
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name "AWS-RunShellScript" ^
    --targets "Key=InstanceIds,Values=instance-id" ^
    --cli-input-json file://installCodeDeployAgent.json
```

------

The contents of the referenced `installCodeDeployAgent.json` file are shown in the following example.

```
{
    "Parameters": {
        "commands": [
            "#!/bin/bash",
            "yum -y update",
            "yum install -y ruby",
            "cd /home/ec2-user",
            "curl -O https://aws-codedeploy-us-east-2.s3.amazonaws.com/latest/install",
            "chmod +x ./install",
            "./install auto"
        ]
    }
}
```

# Walkthrough: Use the AWS Tools for Windows PowerShell with Run Command

The following examples show how to use the AWS Tools for Windows PowerShell to view information about commands and command parameters, how to run commands, and how to view the status of those commands. This walkthrough includes an example for each of the pre-defined AWS Systems Manager documents.

**Important**  
Only trusted administrators should be allowed to use Systems Manager pre-configured documents shown in this topic. The commands or scripts specified in Systems Manager documents run with administrative permission on your managed nodes. If a user has permission to run any of the predefined Systems Manager documents (any document that begins with AWS), then that user also has administrator access to the node. For all other users, you should create restrictive documents and share them with specific users.

**Topics**
+ [Configure AWS Tools for Windows PowerShell session settings](#walkthrough-powershell-settings)
+ [List all available documents](#walkthrough-powershell-all-documents)
+ [Run PowerShell commands or scripts](#walkthrough-powershell-run-script)
+ [Install an application using the `AWS-InstallApplication` document](#walkthrough-powershell-install-application)
+ [Install a PowerShell module using the `AWS-InstallPowerShellModule` JSON document](#walkthrough-powershell-install-module)
+ [Join a managed node to a Domain using the `AWS-JoinDirectoryServiceDomain` JSON document](#walkthrough-powershell-domain-join)
+ [Send Windows metrics to Amazon CloudWatch Logs using the `AWS-ConfigureCloudWatch` document](#walkthrough-powershell-windows-metrics)
+ [Turn on or turn off Windows automatic update using the `AWS-ConfigureWindowsUpdate` document](#walkthrough-powershell-enable-windows-update)
+ [Manage Windows updates using Run Command](#walkthough-powershell-windows-updates)

## Configure AWS Tools for Windows PowerShell session settings


**Specify your credentials**  
Open **Tools for Windows PowerShell** on your local computer and run the following command to specify your credentials. You must either have administrator permissions on the managed nodes you want to configure, or you must have been granted the appropriate permissions in AWS Identity and Access Management (IAM). For more information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md).

```
Set-AWSCredential -AccessKey access-key -SecretKey secret-key
```

**Set a default AWS Region**  
Run the following command to set the region for your PowerShell session. The example uses the US East (Ohio) Region (us-east-2). Run Command is available in the AWS Regions listed in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

```
Set-DefaultAWSRegion `
    -Region us-east-2
```

## List all available documents


This command lists all documents available for your account.

```
Get-SSMDocumentList
```

## Run PowerShell commands or scripts


Using Run Command and the `AWS-RunPowerShell` document, you can run any command or script on a managed node as if you were logged on locally. You can issue commands or enter a path to a local script to run the command. 

**Note**  
For information about rebooting managed nodes when using Run Command to call scripts, see [Handling reboots when running commands](send-commands-reboot.md).

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-RunPowerShellScript"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-RunPowerShellScript" | Select -ExpandProperty Parameters
```

### Send a command using the `AWS-RunPowerShellScript` document


The following command shows the contents of the `"C:\Users"` directory and the contents of the `"C:\"` directory on two managed nodes. 

```
$runPSCommand = Send-SSMCommand `
    -InstanceIds @("instance-ID-1", "instance-ID-2") `
    -DocumentName "AWS-RunPowerShellScript" `
    -Comment "Demo AWS-RunPowerShellScript with two instances" `
    -Parameter @{'commands'=@('dir C:\Users', 'dir C:\')}
```

**Get command request details**  
The following command uses the `CommandId` to get the status of the command execution on both managed nodes. This example uses the `CommandId` that was returned in the previous command. 

```
Get-SSMCommand `
    -CommandId $runPSCommand.CommandId
```

The status of the command in this example can be Success, Pending, or InProgress.

**Get command information per managed node**  
The following command uses the `CommandId` from the previous command to get the status of the command execution on a per managed node basis.

```
Get-SSMCommandInvocation `
    -CommandId $runPSCommand.CommandId
```

**Get command information with response data for a specific managed node**  
The following command returns the output of the original `Send-SSMCommand` for a specific managed node. 

```
Get-SSMCommandInvocation `
    -CommandId $runPSCommand.CommandId `
    -Details $true `
    -InstanceId instance-ID | Select -ExpandProperty CommandPlugins
```

### Cancel a command


The following command cancels the `Send-SSMCommand` for the `AWS-RunPowerShellScript` document.

```
$cancelCommand = Send-SSMCommand `
    -InstanceIds @("instance-ID-1","instance-ID-2") `
    -DocumentName "AWS-RunPowerShellScript" `
    -Comment "Demo AWS-RunPowerShellScript with two instances" `
    -Parameter @{'commands'='Start-Sleep -Seconds 120; dir C:\'}

Stop-SSMCommand -CommandId $cancelCommand.CommandId
```

**Check the command status**  
The following command checks the status of the `Cancel` command.

```
Get-SSMCommand `
    -CommandId $cancelCommand.CommandId
```

## Install an application using the `AWS-InstallApplication` document


Using Run Command and the `AWS-InstallApplication` document, you can install, repair, or uninstall applications on managed nodes. The command requires the path or address to an MSI.

**Note**  
For information about rebooting managed nodes when using Run Command to call scripts, see [Handling reboots when running commands](send-commands-reboot.md).

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-InstallApplication"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-InstallApplication" | Select -ExpandProperty Parameters
```

### Send a command using the `AWS-InstallApplication` document


The following command installs a version of Python on your managed node in unattended mode, and logs the output to a local text file on your `C:` drive.

```
$installAppCommand = Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallApplication" `
    -Parameter @{'source'='https://www.python.org/ftp/python/2.7.9/python-2.7.9.msi'; 'parameters'='/norestart /quiet /log c:\pythoninstall.txt'}
```

**Get command information per managed node**  
The following command uses the `CommandId` to get the status of the command execution.

```
Get-SSMCommandInvocation `
    -CommandId $installAppCommand.CommandId `
    -Details $true
```

**Get command information with response data for a specific managed node**  
The following command returns the results of the Python installation.

```
Get-SSMCommandInvocation `
    -CommandId $installAppCommand.CommandId `
    -Details $true `
    -InstanceId instance-ID | Select -ExpandProperty CommandPlugins
```

## Install a PowerShell module using the `AWS-InstallPowerShellModule` JSON document


You can use Run Command to install PowerShell modules on managed nodes. For more information about PowerShell modules, see [Windows PowerShell Modules](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_modules?view=powershell-6).

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-InstallPowerShellModule"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-InstallPowerShellModule" | Select -ExpandProperty Parameters
```

### Install a PowerShell module


The following command downloads the EZOut.zip file, installs it, and then runs an additional command to install XPS viewer. Lastly, the output of this command is uploaded to an S3 bucket named "amzn-s3-demo-bucket". 

```
$installPSCommand = Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallPowerShellModule" `
    -Parameter @{'source'='https://gallery.technet.microsoft.com/EZOut-33ae0fb7/file/110351/1/EZOut.zip';'commands'=@('Add-WindowsFeature -name XPS-Viewer -restart')} `
    -OutputS3BucketName amzn-s3-demo-bucket
```

**Get command information per managed node**  
The following command uses the `CommandId` to get the status of the command execution. 

```
Get-SSMCommandInvocation `
    -CommandId $installPSCommand.CommandId `
    -Details $true
```

**Get command information with response data for the managed node**  
The following command returns the output of the original `Send-SSMCommand` for the specific `CommandId`. 

```
Get-SSMCommandInvocation `
    -CommandId $installPSCommand.CommandId `
    -Details $true | Select -ExpandProperty CommandPlugins
```

## Join a managed node to a Domain using the `AWS-JoinDirectoryServiceDomain` JSON document


Using Run Command, you can quickly join a managed node to an AWS Directory Service domain. Before running this command, [create a directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started_create_directory.html). We also recommend that you learn more about AWS Directory Service. For more information, see the [AWS Directory Service Administration Guide](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/).

You can only join a managed node to a domain. You can't remove a node from a domain.

**Note**  
For information about rebooting managed nodes when using Run Command to call scripts, see [Handling reboots when running commands](send-commands-reboot.md).

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-JoinDirectoryServiceDomain"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-JoinDirectoryServiceDomain" | Select -ExpandProperty Parameters
```

### Join a managed node to a domain


The following command joins a managed node to the given Directory Service domain and uploads any generated output to the example Amazon Simple Storage Service (Amazon S3) bucket. 

```
$domainJoinCommand = Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-JoinDirectoryServiceDomain" `
    -Parameter @{'directoryId'='d-example01'; 'directoryName'='ssm.example.com'; 'dnsIpAddresses'=@('192.168.10.195', '192.168.20.97')} `
    -OutputS3BucketName amzn-s3-demo-bucket
```

**Get command information per managed node**  
The following command uses the `CommandId` to get the status of the command execution. 

```
Get-SSMCommandInvocation `
    -CommandId $domainJoinCommand.CommandId `
    -Details $true
```

**Get command information with response data for the managed node**  
This command returns the output of the original `Send-SSMCommand` for the specific `CommandId`.

```
Get-SSMCommandInvocation `
    -CommandId $domainJoinCommand.CommandId `
    -Details $true | Select -ExpandProperty CommandPlugins
```

## Send Windows metrics to Amazon CloudWatch Logs using the `AWS-ConfigureCloudWatch` document


You can send Windows Server messages in the application, system, security, and Event Tracing for Windows (ETW) logs to Amazon CloudWatch Logs. When you turn on logging for the first time, Systems Manager sends all logs generated within one (1) minute from the time that you start uploading logs for the application, system, security, and ETW logs. Logs that occurred before this time aren't included. If you turn off logging and then later turn logging back on, Systems Manager sends logs from the time it left off. For any custom log files and Internet Information Services (IIS) logs, Systems Manager reads the log files from the beginning. In addition, Systems Manager can also send performance counter data to CloudWatch Logs.

If you previously turned on CloudWatch integration in EC2Config, the Systems Manager settings override any settings stored locally on the managed node in the `C:\Program Files\Amazon\EC2ConfigService\Settings\AWS.EC2.Windows.CloudWatch.json` file. For more information about using EC2Config to manage performance counters and logs on a single managed node, see [Collecting metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html) in the *Amazon CloudWatch User Guide*.

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-ConfigureCloudWatch"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-ConfigureCloudWatch" | Select -ExpandProperty Parameters
```

### Send application logs to CloudWatch


The following command configures the managed node and moves Windows Applications logs to CloudWatch.

```
$cloudWatchCommand = Send-SSMCommand `
    -InstanceID instance-ID `
    -DocumentName "AWS-ConfigureCloudWatch" `
    -Parameter @{'properties'='{"engineConfiguration": {"PollInterval":"00:00:15", "Components":[{"Id":"ApplicationEventLog", "FullName":"AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch", "Parameters":{"LogName":"Application", "Levels":"7"}},{"Id":"CloudWatch", "FullName":"AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch", "Parameters":{"Region":"region", "LogGroup":"my-log-group", "LogStream":"instance-id"}}], "Flows":{"Flows":["ApplicationEventLog,CloudWatch"]}}}'}
```

**Get command information per managed node**  
The following command uses the `CommandId` to get the status of the command execution. 

```
Get-SSMCommandInvocation `
    -CommandId $cloudWatchCommand.CommandId `
    -Details $true
```

**Get command information with response data for a specific managed node**  
The following command returns the results of the Amazon CloudWatch configuration.

```
Get-SSMCommandInvocation `
    -CommandId $cloudWatchCommand.CommandId `
    -Details $true `
    -InstanceId instance-ID | Select -ExpandProperty CommandPlugins
```

### Send performance counters to CloudWatch using the `AWS-ConfigureCloudWatch` document


The following demonstration command uploads performance counters to CloudWatch. For more information, see the *[Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/)*.

```
$cloudWatchMetricsCommand = Send-SSMCommand `
    -InstanceID instance-ID `
    -DocumentName "AWS-ConfigureCloudWatch" `
    -Parameter @{'properties'='{"engineConfiguration": {"PollInterval":"00:00:15", "Components":[{"Id":"PerformanceCounter", "FullName":"AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch", "Parameters":{"CategoryName":"Memory", "CounterName":"Available MBytes", "InstanceName":"", "MetricName":"AvailableMemory", "Unit":"Megabytes","DimensionName":"", "DimensionValue":""}},{"Id":"CloudWatch", "FullName":"AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch", "Parameters":{"AccessKey":"", "SecretKey":"","Region":"region", "NameSpace":"Windows-Default"}}], "Flows":{"Flows":["PerformanceCounter,CloudWatch"]}}}'}
```

## Turn on or turn off Windows automatic update using the `AWS-ConfigureWindowsUpdate` document


Using Run Command and the `AWS-ConfigureWindowsUpdate` document, you can turn on or turn off automatic Windows updates on your Windows Server managed nodes. This command configures the Windows Update Agent to download and install Windows updates on the day and hour that you specify. If an update requires a reboot, the managed node reboots automatically 15 minutes after updates have been installed. With this command you can also configure Windows Update to check for updates but not install them. The `AWS-ConfigureWindowsUpdate` document is officially supported on Windows Server 2012 and later versions.

**View the description and available parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-ConfigureWindowsUpdate"
```

**View more information about parameters**

```
Get-SSMDocumentDescription `
    -Name "AWS-ConfigureWindowsUpdate" | Select -ExpandProperty Parameters
```

### Turn on Windows automatic update


The following command configures Windows Update to automatically download and install updates daily at 10:00 PM. 

```
$configureWindowsUpdateCommand = Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-ConfigureWindowsUpdate" `
    -Parameters @{'updateLevel'='InstallUpdatesAutomatically'; 'scheduledInstallDay'='Daily'; 'scheduledInstallTime'='22:00'}
```

**View command status for allowing Windows automatic update**  
The following command uses the `CommandId` to get the status of the command execution for allowing Windows automatic update.

```
Get-SSMCommandInvocation `
    -Details $true `
    -CommandId $configureWindowsUpdateCommand.CommandId | Select -ExpandProperty CommandPlugins
```

### Turn off Windows automatic update


The following command lowers the Windows Update notification level so the system checks for updates but doesn't automatically update the managed node.

```
$configureWindowsUpdateCommand = Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-ConfigureWindowsUpdate" `
    -Parameters @{'updateLevel'='NeverCheckForUpdates'}
```

**View command status for turning off Windows automatic update**  
The following command uses the `CommandId` to get the status of the command execution for turning off Windows automatic update.

```
Get-SSMCommandInvocation `
    -Details $true `
    -CommandId $configureWindowsUpdateCommand.CommandId | Select -ExpandProperty CommandPlugins
```

## Manage Windows updates using Run Command


Using Run Command and the `AWS-InstallWindowsUpdates` document, you can manage updates for Windows Server managed nodes. This command scans for or installs missing updates on your managed nodes and optionally reboots following installation. You can also specify the appropriate classifications and severity levels for updates to install in your environment.

**Note**  
For information about rebooting managed nodes when using Run Command to call scripts, see [Handling reboots when running commands](send-commands-reboot.md).

The following examples demonstrate how to perform the specified Windows Update management tasks.

### Search for all missing Windows updates


```
Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallWindowsUpdates" `
    -Parameters @{'Action'='Scan'}
```

### Install specific Windows updates


```
Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallWindowsUpdates" `
    -Parameters @{'Action'='Install';'IncludeKbs'='kb-ID-1,kb-ID-2,kb-ID-3';'AllowReboot'='True'}
```

### Install important missing Windows updates


```
Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallWindowsUpdates" `
    -Parameters @{'Action'='Install';'SeverityLevels'='Important';'AllowReboot'='True'}
```

### Install missing Windows updates with specific exclusions


```
Send-SSMCommand `
    -InstanceId instance-ID `
    -DocumentName "AWS-InstallWindowsUpdates" `
    -Parameters @{'Action'='Install';'ExcludeKbs'='kb-ID-1,kb-ID-2';'AllowReboot'='True'}
```

# Troubleshooting Systems Manager Run Command

Run Command, a tool in AWS Systems Manager, provides status details with each command execution. For more information about the details of command statuses, see [Understanding command statuses](monitor-commands.md). You can also use the information in this topic to help troubleshoot problems with Run Command.

**Topics**
+ [Some of my managed nodes are missing](#where-are-instances)
+ [A step in my script failed, but the overall status is 'succeeded'](#ts-exit-codes)
+ [SSM Agent isn't running properly](#ts-ssmagent-linux)

## Some of my managed nodes are missing


On the **Run a command** page, after you choose an SSM document to run and select **Manually selecting instances** in the **Targets** section, a list of the managed nodes you can run the command on is displayed.

If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

After you create, activate, reboot, or restart a managed node, install SSM Agent on a node, or attach an AWS Identity and Access Management (IAM) instance profile to a node, it can take a few minutes for the managed node to be added to the list.

## A step in my script failed, but the overall status is 'succeeded'


Using Run Command, you can define how your scripts handle exit codes. By default, the exit code of the last command run in a script is reported as the exit code for the entire script. You can, however, include a conditional statement to exit the script if any command before the final one fails. For information and examples, see [Specify exit codes in commands](run-command-handle-exit-status.md#command-exit-codes). 
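As a sketch of that pattern, each step in the following script is followed by a conditional exit, so the script stops and reports a nonzero exit code at the first failing step instead of reporting the exit code of the last command. The step names are hypothetical stand-ins for real commands.

```shell
#!/bin/bash
# Sketch: stop at the first failing step so Run Command reports that
# step's exit code. step_one and step_two are stand-ins for real commands.
step_one() { true; }
step_two() { false; }   # this step fails

run_steps() {
  step_one || exit 1    # conditional exit: stop if the step failed
  step_two || exit 1    # the script exits here with exit code 1
  echo "all steps succeeded"
}

# Run the steps in a subshell so the early exit is easy to observe:
if ( run_steps ); then
  echo "script succeeded"
else
  echo "script failed with exit code $?"   # prints: script failed with exit code 1
fi
```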

## SSM Agent isn't running properly


If you experience problems running commands using Run Command, there might be a problem with the SSM Agent. For information about investigating issues with SSM Agent, see [Troubleshooting SSM Agent](troubleshooting-ssm-agent.md). 

# AWS Systems Manager Session Manager

Session Manager is a fully managed AWS Systems Manager tool. With Session Manager, you can manage your Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, on-premises servers, and virtual machines (VMs). You can use either an interactive one-click browser-based shell or the AWS Command Line Interface (AWS CLI). Session Manager provides secure node management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager also allows you to comply with corporate policies that require controlled access to managed nodes, strict security practices, and logs with node access details, while providing end users with simple one-click cross-platform access to your managed nodes. To get started with Session Manager, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/session-manager). In the navigation pane, choose **Session Manager**.
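For example, assuming the AWS CLI and the Session Manager plugin for the AWS CLI are installed, you can start an interactive session from the command line. The following minimal sketch wraps the call in a helper function; `instance-ID` is a placeholder for your node's ID.

```shell
# Sketch: start an interactive Session Manager session from the AWS CLI.
# Requires the Session Manager plugin for the AWS CLI to be installed.
start_session() {
  aws ssm start-session --target "$1"   # e.g. an EC2 instance ID
}

# Usage: start_session instance-ID
```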

## How can Session Manager benefit my organization?


Session Manager offers these benefits:
+  **Centralized access control to managed nodes using IAM policies** 

  Administrators have a single place to grant and revoke access to managed nodes. Using only AWS Identity and Access Management (IAM) policies, you can control which individual users or groups in your organization can use Session Manager and which managed nodes they can access. 
+  **No open inbound ports and no need to manage bastion hosts or SSH keys** 

  Leaving inbound SSH ports and remote PowerShell ports open on your managed nodes greatly increases the risk of entities running unauthorized or malicious commands on the managed nodes. Session Manager helps you improve your security posture by letting you close these inbound ports, freeing you from managing SSH keys and certificates, bastion hosts, and jump boxes.
+  **One-click access to managed nodes from the console and CLI** 

  Using the AWS Systems Manager console or Amazon EC2 console, you can start a session with a single click. Using the AWS CLI, you can also start a session that runs a single command or a sequence of commands. Because permissions to managed nodes are provided through IAM policies instead of SSH keys or other mechanisms, the connection time is greatly reduced.
+  **Connect to both Amazon EC2 instances and non-EC2 managed nodes in [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environments** 

  You can connect to both Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 nodes in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. 

  To connect to non-EC2 nodes using Session Manager, you must first activate the advanced-instances tier. **There is a charge to use the advanced-instances tier.** However, there is no additional charge to connect to EC2 instances using Session Manager. For information, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md).
+  **Port forwarding** 

  Redirect any port inside your managed node to a local port on a client. After that, connect to the local port and access the server application that is running inside the node.
+  **Cross-platform support for Windows, Linux, and macOS** 

  Session Manager provides support for Windows, Linux, and macOS from a single tool. For example, you don't need to use an SSH client for Linux and macOS managed nodes or an RDP connection for Windows Server managed nodes.
+  **Logging session activity** 

  To meet operational or security requirements in your organization, you might need to provide a record of the connections made to your managed nodes and the commands that were run on them. You can also receive notifications when a user in your organization starts or ends session activity. 

  Logging capabilities are provided through integration with the following AWS services:
  + **AWS CloudTrail** – AWS CloudTrail captures information about Session Manager API calls made in your AWS account and writes it to log files that are stored in an Amazon Simple Storage Service (Amazon S3) bucket you specify. One bucket is used for all CloudTrail logs for your account. For more information, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md). 
  + **Amazon Simple Storage Service** – You can choose to store session log data in an Amazon S3 bucket of your choice for debugging and troubleshooting purposes. Log data can be sent to your Amazon S3 bucket with or without encryption using your AWS KMS key. For more information, see [Logging session data using Amazon S3 (console)](session-manager-logging-s3.md).
  + **Amazon CloudWatch Logs** – CloudWatch Logs allows you to monitor, store, and access log files from various AWS services. You can send session log data to a CloudWatch Logs log group for debugging and troubleshooting purposes. Log data can be sent to your log group with or without AWS KMS encryption using your KMS key. For more information, see [Logging session data using Amazon CloudWatch Logs (console)](session-manager-logging-cloudwatch-logs.md).
  + **Amazon EventBridge** and **Amazon Simple Notification Service** – EventBridge allows you to set up rules to detect when changes happen to AWS resources that you specify. You can create a rule to detect when a user in your organization starts or stops a session, and then receive a notification through Amazon SNS (for example, a text or email message) about the event. You can also configure a CloudWatch event to initiate other responses. For more information, see [Monitoring session activity using Amazon EventBridge (console)](session-manager-auditing.md#session-manager-auditing-eventbridge-events).
**Note**  
Logging isn't available for Session Manager sessions that connect through port forwarding or SSH. This is because SSH encrypts all session data within the secure TLS connection established between the AWS CLI and Session Manager endpoints, and Session Manager only serves as a tunnel for SSH connections.
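The one-click CLI access and port forwarding benefits described above look like the following from the AWS CLI (with the Session Manager plugin installed). The instance ID and port numbers are placeholders:

```shell
# Start an interactive session with a managed node (placeholder instance ID)
aws ssm start-session --target i-0123456789abcdef0

# Forward remote port 80 on the node to local port 8080 (port forwarding)
aws ssm start-session \
    --target i-0123456789abcdef0 \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["80"],"localPortNumber":["8080"]}'
```

These commands require AWS credentials with the appropriate `ssm:StartSession` permissions.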

## Who should use Session Manager?

+ Any AWS customer who wants to improve their security posture, reduce operational overhead by centralizing access control on managed nodes, and reduce inbound node access. 
+ Information Security experts who want to monitor and track managed node access and activity, close down inbound ports on managed nodes, or allow connections to managed nodes that don't have a public IP address. 
+ Administrators who want to grant and revoke access from a single location, and who want to provide one solution to users for Linux, macOS, and Windows Server managed nodes.
+ Users who want to connect to a managed node with just one click from the browser or AWS CLI without having to provide SSH keys.

## What are the main features of Session Manager?

+ **Support for Windows Server, Linux, and macOS managed nodes**

  Session Manager enables you to establish secure connections to your Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, on-premises servers, and virtual machines (VMs). For a list of supported operating system types, see [Setting up Session Manager](session-manager-getting-started.md).
**Note**  
Session Manager support for on-premises machines is provided for the advanced-instances tier only. For information, see [Turning on the advanced-instances tier](fleet-manager-enable-advanced-instances-tier.md).
+  **Console, CLI, and SDK access to Session Manager capabilities** 

  You can work with Session Manager in the following ways:

  The **AWS Systems Manager console** includes access to all the Session Manager capabilities for both administrators and end users. You can perform any task that is related to your sessions by using the Systems Manager console. 

  The **Amazon EC2 console** lets end users connect to EC2 instances for which they have been granted session permissions.

  The **AWS CLI** includes access to Session Manager capabilities for end users. You can start a session, view a list of sessions, and permanently end a session by using the AWS CLI. 
**Note**  
To use the AWS CLI to run session commands, you must be using version 1.16.12 of the CLI (or later), and you must have installed the Session Manager plugin on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md). To view the plugin on GitHub, see [session-manager-plugin](https://github.com/aws/session-manager-plugin).
+  **IAM access control** 

  Through the use of IAM policies, you can control which members of your organization can initiate sessions to managed nodes and which nodes they can access. You can also provide temporary access to your managed nodes. For example, you might want to give an on-call engineer (or a group of on-call engineers) access to production servers only for the duration of their rotation.
+  **Logging support** 

  Session Manager provides you with options for logging session histories in your AWS account through integration with a number of other AWS services. For more information, see [Logging session activity](session-manager-auditing.md) and [Enabling and disabling session logging](session-manager-logging.md).
+  **Configurable shell profiles** 

  Session Manager provides you with options to configure preferences within sessions. These customizable profiles allow you to define shell preferences, environment variables, working directories, and multiple commands to run when a session is started.
+  **Customer key data encryption support** 

  You can configure Session Manager to encrypt the session data logs that you send to an Amazon Simple Storage Service (Amazon S3) bucket or stream to a CloudWatch Logs log group. You can also configure Session Manager to further encrypt the data transmitted between client machines and your managed nodes during your sessions. For information, see [Enabling and disabling session logging](session-manager-logging.md) and [Configure session preferences](session-manager-getting-started-configure-preferences.md).
+  **AWS PrivateLink support for managed nodes without public IP addresses** 

  You can also set up VPC Endpoints for Systems Manager using AWS PrivateLink to further secure your sessions. AWS PrivateLink limits all network traffic between your managed nodes, Systems Manager, and Amazon EC2 to the Amazon network. For more information, see [Improve the security of EC2 instances by using VPC endpoints for Systems Manager](setup-create-vpc.md).
+  **Tunneling** 

  In a session, use a Session-type AWS Systems Manager (SSM) document to tunnel traffic, such as HTTP or a custom protocol, between a local port on a client machine and a remote port on a managed node.
+  **Interactive commands** 

  Create a Session-type SSM document that uses a session to interactively run a single command, giving you a way to manage what users can do on a managed node.
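As an example of the IAM access control described above, a policy like the following sketch limits users to starting sessions only on instances tagged `Environment: Dev` (the tag key and value are placeholders), while letting them end only sessions they started:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:StartSession",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {
                    "ssm:resourceTag/Environment": "Dev"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "ssm:TerminateSession",
            "Resource": "arn:aws:ssm:*:*:session/${aws:username}-*"
        }
    ]
}
```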

## What is a session?


A session is a connection made to a managed node using Session Manager. Sessions are based on a secure bi-directional communication channel between the client (you) and the remote managed node that streams inputs and outputs for commands. Traffic between a client and a managed node is encrypted using TLS 1.2, and requests to create the connection are signed using AWS Signature Version 4 (SigV4). This two-way communication allows interactive bash and PowerShell access to managed nodes. You can also use an AWS Key Management Service (AWS KMS) key to further encrypt data beyond the default TLS encryption.

For example, say that John is an on-call engineer in your IT department. He receives notification of an issue that requires him to remotely connect to a managed node, such as a failure that requires troubleshooting or a directive to change a simple configuration option on a node. Using the AWS Systems Manager console, the Amazon EC2 console, or the AWS CLI, John starts a session connecting him to the managed node, runs commands on the node needed to complete the task, and then ends the session.

When John sends that first command to start the session, the Session Manager service authenticates his ID, verifies the permissions granted to him by an IAM policy, checks configuration settings (such as verifying allowed limits for the sessions), and sends a message to SSM Agent to open the two-way connection. After the connection is established and John types the next command, the command output from SSM Agent is uploaded to this communication channel and sent back to his local machine.

**Topics**
+ [How can Session Manager benefit my organization?](#session-manager-benefits)
+ [Who should use Session Manager?](#session-manager-who)
+ [What are the main features of Session Manager?](#session-manager-features)
+ [What is a session?](#what-is-a-session)
+ [Setting up Session Manager](session-manager-getting-started.md)
+ [Working with Session Manager](session-manager-working-with.md)
+ [Logging session activity](session-manager-auditing.md)
+ [Enabling and disabling session logging](session-manager-logging.md)
+ [Session document schema](session-manager-schema.md)
+ [Troubleshooting Session Manager](session-manager-troubleshooting.md)

# Setting up Session Manager


Before you use AWS Systems Manager Session Manager to connect to the managed nodes in your account, complete the steps in the following topics.

**Topics**
+ [Step 1: Complete Session Manager prerequisites](session-manager-prerequisites.md)
+ [Step 2: Verify or add instance permissions for Session Manager](session-manager-getting-started-instance-profile.md)
+ [Step 3: Control session access to managed nodes](session-manager-getting-started-restrict-access.md)
+ [Step 4: Configure session preferences](session-manager-getting-started-configure-preferences.md)
+ [Step 5: (Optional) Restrict access to commands in a session](session-manager-restrict-command-access.md)
+ [Step 6: (Optional) Use AWS PrivateLink to set up a VPC endpoint for Session Manager](session-manager-getting-started-privatelink.md)
+ [Step 7: (Optional) Turn on or turn off ssm-user account administrative permissions](session-manager-getting-started-ssm-user-permissions.md)
+ [Step 8: (Optional) Allow and control permissions for SSH connections through Session Manager](session-manager-getting-started-enable-ssh-connections.md)

# Step 1: Complete Session Manager prerequisites


Before using Session Manager, make sure your environment meets the following requirements.


**Session Manager prerequisites**  

| Requirement | Description | 
| --- | --- | 
|  Supported operating systems  |  Session Manager supports connecting to Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, and on-premises servers and virtual machines (VMs) in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment that use the *advanced-instances* tier. For more information about advanced instances, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md). Session Manager supports the following operating systems:  **Linux and macOS**  Session Manager supports all the versions of Linux and macOS that are supported by AWS Systems Manager. For information, see [Supported operating systems and machine types](operating-systems-and-machine-types.md).  **Windows**  Session Manager supports Windows Server 2012 and later versions. Microsoft Windows Server 2016 Nano isn't supported.  | 
|  SSM Agent  |  At minimum, AWS Systems Manager SSM Agent version 2.3.68.0 or later must be installed on the managed nodes you want to connect to through sessions.  To use the option to encrypt session data using a key created in AWS Key Management Service (AWS KMS), version 2.3.539.0 or later of SSM Agent must be installed on the managed node.  To use shell profiles in a session, SSM Agent version 3.0.161.0 or later must be installed on the managed node. To start a Session Manager port forwarding or SSH session, SSM Agent version 3.0.222.0 or later must be installed on the managed node. To stream session data using Amazon CloudWatch Logs, SSM Agent version 3.0.284.0 or later must be installed on the managed node. For information about how to determine the version number running on an instance, see [Checking the SSM Agent version number](ssm-agent-get-version.md). For information about manually installing or automatically updating SSM Agent, see [Working with SSM Agent](ssm-agent.md). An updated version of SSM Agent is released whenever new tools are added to Systems Manager or updates are made to existing tools. Failing to use the latest version of the agent can prevent your managed node from using various Systems Manager tools and features. For that reason, we recommend that you automate the process of keeping SSM Agent up to date on your machines. For information, see [Automating updates to SSM Agent](ssm-agent-automatic-updates.md). Subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/mainline/RELEASENOTES.md) page on GitHub to get notifications about SSM Agent updates.  **About the ssm-user account**  Starting with version 2.3.50.0 of SSM Agent, the agent creates a user account on the managed node, with root or administrator permissions, called `ssm-user`. (On versions before 2.3.612.0, the account is created when SSM Agent starts or restarts. On version 2.3.612.0 and later, `ssm-user` is created the first time a session starts on the managed node.) Sessions are launched using the administrative credentials of this user account. For information about restricting administrative control for this account, see [Turn off or turn on ssm-user account administrative permissions](session-manager-getting-started-ssm-user-permissions.md).  **ssm-user on Windows Server domain controllers**  Beginning with SSM Agent version 2.3.612.0, the `ssm-user` account isn't created automatically on managed nodes that are used as Windows Server domain controllers. To use Session Manager on a Windows Server machine being used as a domain controller, you must create the `ssm-user` account manually if it isn't already present, and assign Domain Administrator permissions to the user. On Windows Server, SSM Agent sets a new password for the `ssm-user` account each time a session starts, so you don't need to specify a password when you create the account.  | 
|  Connectivity to endpoints  |  The managed nodes you connect to must also allow HTTPS (port 443) outbound traffic to the following endpoints: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-prerequisites.html) Alternatively, you can connect to the required endpoints by using interface endpoints. For more information, see [Step 6: (Optional) Use AWS PrivateLink to set up a VPC endpoint for Session Manager](session-manager-getting-started-privatelink.md).  | 
|  AWS CLI  |  (Optional) If you use the AWS Command Line Interface (AWS CLI) to start your sessions (instead of using the AWS Systems Manager console or Amazon EC2 console), version 1.16.12 or later of the CLI must be installed on your local machine. You can call `aws --version` to check the version. If you need to install or upgrade the CLI, see [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) in the *AWS Command Line Interface User Guide*. In addition, to use the CLI to manage your nodes with Session Manager, you must first install the Session Manager plugin on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).  | 
|  Turn on advanced-instances tier ([hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environments)  |  To connect to non-EC2 machines using Session Manager, you must turn on the advanced-instances tier in the AWS account and AWS Region where you create hybrid activations to register non-EC2 machines as managed nodes. There is a charge to use the advanced-instances tier. For more information about the advanced-instances tier, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md).  | 
|  Verify IAM service role permissions ([hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environments)  |  Hybrid-activated nodes use the AWS Identity and Access Management (IAM) service role specified in the hybrid activation to communicate with Systems Manager API operations. This service role must contain the permissions required to connect to your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) machines using Session Manager. If your service role contains the AWS managed policy `AmazonSSMManagedInstanceCore` , the required permissions for Session Manager are already provided. If you find that the service role does not contain the required permissions, you must deregister the managed instance and register it with a new hybrid activation that uses an IAM service role with the required permissions. For more information about deregistering managed instances, see [Deregistering managed nodes in a hybrid and multicloud environment](fleet-manager-deregister-hybrid-nodes.md). For more information about creating IAM policies with Session Manager permissions, see [Step 2: Verify or add instance permissions for Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-instance-profile.html).  | 
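The AWS CLI version requirement in the table above can be checked in a script. This sketch parses the `aws-cli/x.y.z ...` output format; the sample string stands in for a real `aws --version` call:

```shell
# Compare an AWS CLI version string against the 1.16.12 minimum required
# by Session Manager. The sample output below is a stand-in for:
#   version_output=$(aws --version 2>&1)
version_output="aws-cli/1.16.12 Python/3.9.1 Linux/5.10 botocore/1.12.2"
version=$(echo "$version_output" | cut -d' ' -f1 | cut -d'/' -f2)
minimum="1.16.12"

# sort -V orders version strings numerically; if the minimum sorts first
# (or the two are equal), the installed version satisfies the requirement.
lowest=$(printf '%s\n%s\n' "$minimum" "$version" | sort -V | head -n1)
if [ "$lowest" = "$minimum" ]; then
    echo "AWS CLI $version meets the minimum of $minimum"
else
    echo "AWS CLI $version is older than $minimum - upgrade required" >&2
fi
```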

# Step 2: Verify or add instance permissions for Session Manager


By default, AWS Systems Manager doesn't have permission to perform actions on your instances. You can provide instance permissions at the account level using an AWS Identity and Access Management (IAM) role, or at the instance level using an instance profile. If your use case allows, we recommend granting access at the account level using the Default Host Management Configuration. If you've already set up the Default Host Management Configuration for your account using the `AmazonSSMManagedEC2InstanceDefaultPolicy` policy, you can proceed to the next step. For more information about the Default Host Management Configuration, see [Managing EC2 instances automatically with Default Host Management Configuration](fleet-manager-default-host-management-configuration.md).

Alternatively, you can use instance profiles to provide the required permissions to your instances. An instance profile passes an IAM role to an Amazon EC2 instance. You can attach an IAM instance profile to an Amazon EC2 instance as you launch it or to a previously launched instance. For more information, see [Using instance profiles](https://docs.aws.amazon.com/IAM/latest/UserGuide/roles-usingrole-instanceprofile.html).

For on-premises servers or virtual machines (VMs), permissions are provided by the IAM service role associated with the hybrid activation used to register your on-premises servers and VMs with Systems Manager. On-premises servers and VMs do not use instance profiles.

If you already use other Systems Manager tools, such as Run Command or Parameter Store, an instance profile with the required basic permissions for Session Manager might already be attached to your Amazon EC2 instances. If an instance profile that contains the AWS managed policy `AmazonSSMManagedInstanceCore` is already attached to your instances, the required permissions for Session Manager are already provided. This is also true if the IAM service role used in your hybrid activation contains the `AmazonSSMManagedInstanceCore` managed policy.

However, in some cases, you might need to modify the permissions attached to your instance profile. For example, you might want to provide a narrower set of instance permissions, you might have created a custom policy for your instance profile, or you might want to use Amazon Simple Storage Service (Amazon S3) encryption or AWS Key Management Service (AWS KMS) encryption options for securing session data. For these cases, do one of the following to allow Session Manager actions to be performed on your instances:
+  **Embed permissions for Session Manager actions in a custom IAM role** 

  To add permissions for Session Manager actions to an existing IAM role that doesn't rely on the AWS-provided default policy `AmazonSSMManagedInstanceCore`, follow the steps in [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md).
+  **Create a custom IAM role with Session Manager permissions only** 

  To create an IAM role that contains permissions only for Session Manager actions, follow the steps in [Create a custom IAM role for Session Manager](getting-started-create-iam-instance-profile.md).
+  **Create and use a new IAM role with permissions for all Systems Manager actions** 

  To create an IAM role for Systems Manager managed instances that uses a default policy supplied by AWS to grant all Systems Manager permissions, follow the steps in [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).
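Whichever option you choose, the role must also trust Amazon EC2 to assume it. The role's trust policy (this is the standard EC2 trust relationship) looks like the following:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```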

**Topics**
+ [Add Session Manager permissions to an existing IAM role](getting-started-add-permissions-to-existing-profile.md)
+ [Create a custom IAM role for Session Manager](getting-started-create-iam-instance-profile.md)

# Add Session Manager permissions to an existing IAM role


Use the following procedure to add Session Manager permissions to an existing AWS Identity and Access Management (IAM) role. By adding permissions to an existing role, you can enhance the security of your computing environment without having to use the AWS `AmazonSSMManagedInstanceCore` policy for instance permissions.

**Note**  
Note the following information:  
This procedure assumes that your existing role already includes other Systems Manager `ssm` permissions for actions you want to allow access to. The policy in this procedure alone isn't enough to use Session Manager.
The following policy example includes an `s3:GetEncryptionConfiguration` action. This action is required if you chose the **Enforce S3 log encryption** option in Session Manager logging preferences.
If the `ssmmessages:OpenControlChannel` permission is removed from policies attached to your IAM instance profile or IAM service role, SSM Agent on the managed node loses connectivity to the Systems Manager service in the cloud. However, it can take up to 1 hour for a connection to be terminated after the permission is removed. This is the same behavior as when the IAM instance role or IAM service role is deleted.

**To add Session Manager permissions to an existing role (console)**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. Select the name of the role that you are adding the permissions to.

1. Choose the **Permissions** tab.

1. Choose **Add permissions**, and then select **Create inline policy**.

1. Choose the **JSON** tab.

1. Replace the default policy content with the following content. Replace the example KMS key ARN `arn:aws:kms:us-east-1:111122223333:key/key-name` with the Amazon Resource Name (ARN) of the AWS Key Management Service key (AWS KMS key) that you want to use.

------
#### [ JSON ]

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssmmessages:CreateControlChannel",
                   "ssmmessages:CreateDataChannel",
                   "ssmmessages:OpenControlChannel",
                   "ssmmessages:OpenDataChannel"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetEncryptionConfiguration"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
           }
       ]
   }
   ```

------

   For information about using a KMS key to encrypt session data, see [Turn on KMS key encryption of session data (console)](session-preferences-enable-encryption.md).

   If you won't use AWS KMS encryption for your session data, you can remove the following content from the policy.

   ```
   ,
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
           }
   ```

1. Choose **Next: Tags**.

1. (Optional) Add tags by choosing **Add tag**, and entering the preferred tags for the policy.

1. Choose **Next: Review**.

1. On the **Review policy** page, for **Name**, enter a name for the inline policy, such as **SessionManagerPermissions**.

1. (Optional) For **Description**, enter a description for the policy. 

1. Choose **Create policy**.

For information about the `ssmmessages` actions, see [Reference: ec2messages, ssmmessages, and other API operations](systems-manager-setting-up-messageAPIs.md).

# Create a custom IAM role for Session Manager


You can create an AWS Identity and Access Management (IAM) role that grants Session Manager the permission to perform actions on your Amazon EC2 managed instances. You can also include a policy to grant the permissions needed for session logs to be sent to Amazon Simple Storage Service (Amazon S3) and Amazon CloudWatch Logs.

After you create the IAM role, for information about how to attach the role to an instance, see [Attach or Replace an Instance Profile](https://aws.amazon.com/premiumsupport/knowledge-center/attach-replace-ec2-instance-profile/) at the AWS re:Post website. For more information about IAM instance profiles and roles, see [Using instance profiles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) in the *IAM User Guide* and [IAM roles for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) in the *Amazon Elastic Compute Cloud User Guide for Linux Instances*. For more information about creating an IAM service role for on-premises machines, see [Create the IAM service role required for Systems Manager in hybrid and multicloud environments](https://docs.aws.amazon.com/systems-manager/latest/userguide/hybrid-multicloud-service-role.html).

**Topics**
+ [Creating an IAM role with minimal Session Manager permissions (console)](#create-iam-instance-profile-ssn-only)
+ [Creating an IAM role with permissions for Session Manager and Amazon S3 and CloudWatch Logs (console)](#create-iam-instance-profile-ssn-logging)

## Creating an IAM role with minimal Session Manager permissions (console)


Use the following procedure to create a custom IAM role with a policy that provides permissions for only Session Manager actions on your instances.

**To create an IAM role with minimal Session Manager permissions (console)**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**, and then choose **Create policy**. (If a **Get Started** button is displayed, choose it, and then choose **Create Policy**.)

1. Choose the **JSON** tab.

1. Replace the default content with the following policy. To encrypt session data using AWS Key Management Service (AWS KMS), replace the example KMS key ARN `arn:aws:kms:us-east-1:111122223333:key/key-name` with the ARN of the AWS KMS key that you want to use.
**Note**  
If the `ssmmessages:OpenControlChannel` permission is removed from policies attached to your IAM instance profile or IAM service role, SSM Agent on the managed node loses connectivity to the Systems Manager service in the cloud. However, it can take up to 1 hour for a connection to be terminated after the permission is removed. This is the same behavior as when the IAM instance role or IAM service role is deleted.

------
#### [ JSON ]

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssm:UpdateInstanceInformation",
                   "ssmmessages:CreateControlChannel",
                   "ssmmessages:CreateDataChannel",
                   "ssmmessages:OpenControlChannel",
                   "ssmmessages:OpenDataChannel"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
           }
       ]
   }
   ```

------

   For information about using a KMS key to encrypt session data, see [Turn on KMS key encryption of session data (console)](session-preferences-enable-encryption.md).

   If you won't use AWS KMS encryption for your session data, you can remove the following content from the policy.

   ```
   ,
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": "key-name"
           }
   ```

1. Choose **Next: Tags**.

1. (Optional) Add tags by choosing **Add tag**, and entering the preferred tags for the policy.

1. Choose **Next: Review**.

1. On the **Review policy** page, for **Name**, enter a name for the policy, such as **SessionManagerPermissions**.

1. (Optional) For **Description**, enter a description for the policy. 

1. Choose **Create policy**.

1. In the navigation pane, choose **Roles**, and then choose **Create role**.

1. On the **Create role** page, choose **AWS service**, and for **Use case**, choose **EC2**.

1. Choose **Next**.

1. On the **Add permissions** page, select the check box to the left of the name of the policy you just created, such as **SessionManagerPermissions**.

1. Choose **Next**.

1. On the **Name, review, and create** page, for **Role name**, enter a name for the IAM role, such as **MySessionManagerRole**.

1. (Optional) For **Role description**, enter a description for the role. 

1. (Optional) Add tags by choosing **Add tag**, and entering the preferred tags for the role.

1. Choose **Create role**.

For information about `ssmmessages` actions, see [Reference: ec2messages, ssmmessages, and other API operations](systems-manager-setting-up-messageAPIs.md).
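If you manage IAM as code rather than through the console, the choice described in this procedure — include or omit the `kms:Decrypt` statement — can be captured in a small generator. This is a sketch, not AWS-provided tooling; the function name is illustrative and the key ARN is a placeholder:

```python
import json

def minimal_session_manager_policy(kms_key_arn=None):
    """Build the minimal Session Manager instance policy from this procedure.

    kms_key_arn is a placeholder such as
    arn:aws:kms:us-east-1:111122223333:key/1a2b3c4d. When it is None,
    the kms:Decrypt statement is omitted entirely.
    """
    statements = [{
        "Effect": "Allow",
        "Action": [
            "ssm:UpdateInstanceInformation",
            "ssmmessages:CreateControlChannel",
            "ssmmessages:CreateDataChannel",
            "ssmmessages:OpenControlChannel",
            "ssmmessages:OpenDataChannel",
        ],
        "Resource": "*",
    }]
    if kms_key_arn is not None:
        statements.append({
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": kms_key_arn,
        })
    return {"Version": "2012-10-17", "Statement": statements}

# Without AWS KMS encryption, the policy has a single statement.
print(json.dumps(minimal_session_manager_policy(), indent=4))
```

Omitting the key ARN produces the single-statement policy described earlier for sessions that don't use AWS KMS encryption.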

## Creating an IAM role with permissions for Session Manager and Amazon S3 and CloudWatch Logs (console)


Use the following procedure to create a custom IAM role with a policy that provides permissions for Session Manager actions on your instances. The policy also provides the permissions needed for session logs to be stored in Amazon Simple Storage Service (Amazon S3) buckets and Amazon CloudWatch Logs log groups.

**Important**  
To output session logs to an Amazon S3 bucket owned by a different AWS account, you must add the `s3:PutObjectAcl` permission to the IAM role policy. Additionally, you must ensure that the bucket policy grants cross-account access to the IAM role used by the owning account to grant Systems Manager permissions for managed instances. If the bucket uses Key Management Service (KMS) encryption, then the bucket's KMS policy must also grant this cross-account access. For more information about configuring cross-account bucket permissions in Amazon S3, see [Granting cross-account bucket permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html) in the *Amazon Simple Storage Service User Guide*. If the cross-account permissions aren't added, the account that owns the Amazon S3 bucket can't access the session output logs.

For information about specifying preferences for storing session logs, see [Enabling and disabling session logging](session-manager-logging.md).

**To create an IAM role with permissions for Session Manager and Amazon S3 and CloudWatch Logs (console)**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**, and then choose **Create policy**. (If a **Get Started** button is displayed, choose it, and then choose **Create Policy**.)

1. Choose the **JSON** tab.

1. Replace the default content with the following policy. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssmmessages:CreateControlChannel",
                   "ssmmessages:CreateDataChannel",
                   "ssmmessages:OpenControlChannel",
                   "ssmmessages:OpenDataChannel",
                   "ssm:UpdateInstanceInformation"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "logs:CreateLogStream",
                   "logs:PutLogEvents",
                   "logs:DescribeLogGroups",
                   "logs:DescribeLogStreams"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/s3-prefix/*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetEncryptionConfiguration"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
           },
           {
               "Effect": "Allow",
               "Action": "kms:GenerateDataKey",
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Choose **Next: Tags**.

1. (Optional) Add tags by choosing **Add tag**, and entering the preferred tags for the policy.

1. Choose **Next: Review**.

1. On the **Review policy** page, for **Name**, enter a name for the policy, such as **SessionManagerPermissions**.

1. (Optional) For **Description**, enter a description for the policy. 

1. Choose **Create policy**.

1. In the navigation pane, choose **Roles**, and then choose **Create role**.

1. On the **Create role** page, choose **AWS service**, and for **Use case**, choose **EC2**.

1. Choose **Next**.

1. On the **Add permissions** page, select the check box to the left of the name of the policy you just created, such as **SessionManagerPermissions**.

1. Choose **Next**.

1. On the **Name, review, and create** page, for **Role name**, enter a name for the IAM role, such as **MySessionManagerRole**.

1. (Optional) For **Role description**, enter a description for the role. 

1. (Optional) Add tags by choosing **Add tag**, and entering the preferred tags for the role.

1. Choose **Create role**.
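Because the policy in this procedure contains several example resource placeholders, it's easy to attach it with a sample value still in place. As a sketch (the placeholder list reflects this guide's examples, not an official AWS check), you can scan a policy document for leftovers before creating it:

```python
import re

# Placeholder values used in this guide's sample policies (illustrative list).
PLACEHOLDERS = re.compile(r"amzn-s3-demo-bucket|111122223333|key-name|s3-prefix")

def leftover_placeholders(policy):
    """Return Resource entries that still contain sample placeholder values."""
    leftovers = []
    for stmt in policy.get("Statement", []):
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        leftovers.extend(r for r in resources if PLACEHOLDERS.search(r))
    return leftovers
```

Running `leftover_placeholders` on the unmodified sample policy returns its S3 object ARN and KMS key ARN, signaling that both still need your own values.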

# Step 3: Control session access to managed nodes


You grant or revoke Session Manager access to managed nodes by using AWS Identity and Access Management (IAM) policies. You can create a policy and attach it to an IAM user or group that specifies which managed nodes the user or group can connect to. You can also specify the Session Manager API operations the user or groups can perform on those managed nodes. 

To help you get started with IAM permission policies for Session Manager, we've created sample policies for an end user and an administrator user. You can use these policies with only minor changes. Or, use them as a guide to create custom IAM policies. For more information, see [Sample IAM policies for Session Manager](getting-started-restrict-access-quickstart.md). For information about how to create IAM policies and attach them to users or groups, see [Creating IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) and [Adding and Removing IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

**About session ID ARN formats**  
When you create an IAM policy for Session Manager access, you specify a session ID as part of the Amazon Resource Name (ARN). The session ID includes the user name as a variable. To help illustrate this, here's the format of a Session Manager ARN and an example: 

```
arn:aws:ssm:region-id:account-id:session/session-id
```

For example:

```
arn:aws:ssm:us-east-2:123456789012:session/JohnDoe-1a2b3c4d5eEXAMPLE
```

For more information about using variables in IAM policies, see [IAM Policy Elements: Variables](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html). 
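The `${aws:userid}-*` resource patterns in the sample policies later in this guide work because session IDs begin with the name of the user who started the session. The following Python sketch emulates that resource match locally; IAM performs the real evaluation, and the exact value of `aws:userid` varies by principal type:

```python
from fnmatch import fnmatchcase

def owns_session(session_arn, user_name):
    """Emulate matching a session ARN against
    arn:aws:ssm:*:*:session/${aws:userid}-* after IAM substitutes the
    caller's variable. Local illustration only; wildcard semantics are
    approximated with fnmatch."""
    return fnmatchcase(session_arn, f"arn:aws:ssm:*:*:session/{user_name}-*")

arn = "arn:aws:ssm:us-east-2:123456789012:session/JohnDoe-1a2b3c4d5eEXAMPLE"
print(owns_session(arn, "JohnDoe"))    # True
print(owns_session(arn, "MaryMajor"))  # False
```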

**Topics**
+ [

# Start a default shell session by specifying the default session document in IAM policies
](getting-started-default-session-document.md)
+ [

# Start a session with a document by specifying the session documents in IAM policies
](getting-started-specify-session-document.md)
+ [

# Sample IAM policies for Session Manager
](getting-started-restrict-access-quickstart.md)
+ [

# Additional sample IAM policies for Session Manager
](getting-started-restrict-access-examples.md)

# Start a default shell session by specifying the default session document in IAM policies

When you configure Session Manager for your AWS account or when you change session preferences in the Systems Manager console, the system creates an SSM session document called `SSM-SessionManagerRunShell`. This is the default session document. Session Manager uses this document to store your session preferences, which include information like the following:
+ A location where you want to save session data, such as an Amazon Simple Storage Service (Amazon S3) bucket or an Amazon CloudWatch Logs log group.
+ An AWS Key Management Service (AWS KMS) key ID for encrypting session data.
+ Whether Run As support is allowed for your sessions.

Here is an example of the information contained in the `SSM-SessionManagerRunShell` session preferences document.

```
{
  "schemaVersion": "1.0",
  "description": "Document to hold regional settings for Session Manager",
  "sessionType": "Standard_Stream",
  "inputs": {
    "s3BucketName": "amzn-s3-demo-bucket",
    "s3KeyPrefix": "MyS3Prefix",
    "s3EncryptionEnabled": true,
    "cloudWatchLogGroupName": "MyCWLogGroup",
    "cloudWatchEncryptionEnabled": false,
    "kmsKeyId": "1a2b3c4d",
    "runAsEnabled": true,
    "runAsDefaultUser": "RunAsUser"
  }
}
```
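One way to change these preferences outside the console is to edit the document content locally and then upload it again with the `update-document` AWS CLI command. The following Python sketch writes a modified preferences file; the specific changes shown (disabling Run As, requiring CloudWatch Logs encryption) are examples only:

```python
import json

# Sketch: edit the preference inputs locally, then upload the result with
#   aws ssm update-document --name "SSM-SessionManagerRunShell" \
#       --content file://prefs.json --document-version '$LATEST'
# The values below are this guide's examples, not recommendations.
prefs = {
    "schemaVersion": "1.0",
    "description": "Document to hold regional settings for Session Manager",
    "sessionType": "Standard_Stream",
    "inputs": {
        "s3BucketName": "amzn-s3-demo-bucket",
        "s3KeyPrefix": "MyS3Prefix",
        "s3EncryptionEnabled": True,
        "cloudWatchLogGroupName": "MyCWLogGroup",
        "cloudWatchEncryptionEnabled": False,
        "kmsKeyId": "1a2b3c4d",
        "runAsEnabled": True,
        "runAsDefaultUser": "RunAsUser",
    },
}

# Example changes: turn off Run As support and require CloudWatch Logs encryption.
prefs["inputs"]["runAsEnabled"] = False
prefs["inputs"]["cloudWatchEncryptionEnabled"] = True

with open("prefs.json", "w") as f:
    json.dump(prefs, f, indent=2)
```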

Session Manager uses the default session document when a user starts a session from the AWS Management Console, whether from Fleet Manager or Session Manager in the Systems Manager console, or from the connect option in the Amazon EC2 console. Session Manager also uses the default session document when a user starts a session by using an AWS CLI command like the following example:

```
aws ssm start-session \
    --target i-02573cafcfEXAMPLE
```

To start a default shell session, you must specify the default session document in the IAM policy, as shown in the following example.

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "EnableSSMSession",
      "Effect": "Allow",
      "Action": [
        "ssm:StartSession"
      ],
      "Resource": [
        "arn:aws:ec2:us-east-1:111122223333:instance/instance-id",
        "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
```

------

# Start a session with a document by specifying the session documents in IAM policies

If you use the [start-session](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) AWS CLI command using the default session document, you can omit the document name. The system automatically calls the `SSM-SessionManagerRunShell` session document.

In all other cases, you must specify a value for the `document-name` parameter. When a user specifies the name of a session document in a command, the system checks their IAM policy to verify that they have permission to access the document. If they don't have permission, the connection request fails. The following example includes the `document-name` parameter with the `AWS-StartPortForwardingSession` session document.

```
aws ssm start-session \
    --target i-02573cafcfEXAMPLE \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["80"], "localPortNumber":["56789"]}'
```

For an example of how to specify a Session Manager session document in an IAM policy, see [Quickstart end user policies for Session Manager](getting-started-restrict-access-quickstart.md#restrict-access-quickstart-end-user).

**Note**  
To start a session using SSH, you must complete configuration steps on the target managed node *and* the user's local machine. For information, see [(Optional) Allow and control permissions for SSH connections through Session Manager](session-manager-getting-started-enable-ssh-connections.md).

# Sample IAM policies for Session Manager


Use the samples in this section to help you create AWS Identity and Access Management (IAM) policies that provide the most commonly needed permissions for Session Manager access. 

**Note**  
You can also use an AWS KMS key policy to control which IAM entities (users or roles) and AWS accounts are given access to your KMS key. For information, see [Overview of Managing Access to Your AWS KMS Resources](https://docs.aws.amazon.com/kms/latest/developerguide/control-access-overview.html) and [Using Key Policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the *AWS Key Management Service Developer Guide*.

**Topics**
+ [

## Quickstart end user policies for Session Manager
](#restrict-access-quickstart-end-user)
+ [

## Quickstart administrator policy for Session Manager
](#restrict-access-quickstart-admin)

## Quickstart end user policies for Session Manager


Use the following examples to create IAM end user policies for Session Manager. 

You can create a policy that allows users to start sessions from only the Session Manager console and AWS Command Line Interface (AWS CLI), from only the Amazon Elastic Compute Cloud (Amazon EC2) console, or from all three.

These policies provide end users the ability to start a session to a particular managed node and the ability to end only their own sessions. Refer to [Additional sample IAM policies for Session Manager](getting-started-restrict-access-examples.md) for examples of customizations you might want to make to the policy.

In the following sample policies, replace each *example resource placeholder* with your own information. 

Choose from the following tabs to view the sample policy for the range of session access you want to provide.

------
#### [ Session Manager and Fleet Manager ]

Use this sample policy to give users the ability to start and resume sessions from only the Session Manager and Fleet Manager consoles. 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/i-02573cafcfEXAMPLE",
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeSessions",
                "ssm:GetConnectionStatus",
                "ssm:DescribeInstanceProperties",
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
        }
    ]
}
```

------

------
#### [ Amazon EC2 ]

Use this sample policy to give users the ability to start and resume sessions from only the Amazon EC2 console. This policy doesn't provide all the permissions needed to start sessions from the Session Manager console and the AWS CLI.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession",
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/i-02573cafcfEXAMPLE",
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetConnectionStatus",
                "ssm:DescribeInstanceInformation"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:username}-*"
            ]
        }
    ]
}
```

------

------
#### [ AWS CLI ]

Use this sample policy to give users the ability to start and resume sessions from the AWS CLI.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession",
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/i-02573cafcfEXAMPLE",
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-name"
        }
    ]
}
```

------

------

**Note**  
`SSM-SessionManagerRunShell` is the default name of the SSM document that Session Manager creates to store your session configuration preferences. You can create a custom Session document and specify it in this policy instead. You can also specify the AWS-provided document `AWS-StartSSHSession` for users who are starting sessions using SSH. For information about configuration steps needed to support sessions using SSH, see [(Optional) Allow and control permissions for SSH connections through Session Manager](session-manager-getting-started-enable-ssh-connections.md).  
The `kms:GenerateDataKey` permission enables the creation of a data encryption key that will be used to encrypt session data. If you will use AWS Key Management Service (AWS KMS) encryption for your session data, replace *key-name* with the Amazon Resource Name (ARN) of the KMS key you want to use, in the format `arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-12345EXAMPLE`. If you won't use KMS key encryption for your session data, remove the following content from the policy.  

```
{
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey"
            ],
            "Resource": "key-name"
        }
```
For information about using AWS KMS for encrypting session data, see [Turn on KMS key encryption of session data (console)](session-preferences-enable-encryption.md).  
The permission for [SendCommand](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html) is needed for cases where a user attempts to start a session from the Amazon EC2 console, but the SSM Agent must be updated to the minimum required version for Session Manager first. Run Command is used to send a command to the instance to update the agent.

## Quickstart administrator policy for Session Manager


Use the following examples to create IAM administrator policies for Session Manager. 

These policies provide administrators the ability to start a session to managed nodes that are tagged with `Key=Finance,Value=WebServers`, permission to create, update, and delete preferences, and permission to end only their own sessions. Refer to [Additional sample IAM policies for Session Manager](getting-started-restrict-access-examples.md) for examples of customizations you might want to make to the policy.

You can create a policy that allows administrators to perform these tasks from only the Session Manager console and AWS CLI, from only the Amazon EC2 console, or from all three.

In the following sample policies, replace each *example resource placeholder* with your own information. 

Choose from the following tabs to view the sample policy for the access scenario you want to support.

------
#### [ Session Manager and CLI ]

Use this sample policy to give administrators the ability to perform session-related tasks from only the Session Manager console and the AWS CLI. This policy doesn't provide all the permissions needed to perform session-related tasks from the Amazon EC2 console.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:*:111122223333:instance/*"
            ],
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/Finance": [
                        "WebServers"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeSessions",
                "ssm:GetConnectionStatus",
                "ssm:DescribeInstanceProperties",
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:CreateDocument",
                "ssm:UpdateDocument",
                "ssm:GetDocument",
                "ssm:StartSession"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        }
    ]
}
```

------

------
#### [ Amazon EC2 ]

Use this sample policy to give administrators the ability to perform session-related tasks from only the Amazon EC2 console. This policy doesn't provide all the permissions needed to perform session-related tasks from the Session Manager console and the AWS CLI.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession",
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/*"
            ],
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/tag-key": [
                        "tag-value"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetConnectionStatus",
                "ssm:DescribeInstanceInformation"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        }
    ]
}
```

------

------
#### [ Session Manager, CLI, and Amazon EC2 ]

Use this sample policy to give administrators the ability to perform session-related tasks from the Session Manager console, the AWS CLI, and the Amazon EC2 console.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession",
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/*"
            ],
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/tag-key": [
                        "tag-value"
                    ]
                }
            }
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeSessions",
                "ssm:GetConnectionStatus",
                "ssm:DescribeInstanceInformation",
                "ssm:DescribeInstanceProperties",
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:CreateDocument",
                "ssm:UpdateDocument",
                "ssm:GetDocument",
                "ssm:StartSession"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        }
    ]
}
```

------

------

**Note**  
The permission for [SendCommand](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html) is needed for cases where a user attempts to start a session from the Amazon EC2 console, but a command must be sent to update SSM Agent first.

# Additional sample IAM policies for Session Manager


Refer to the following example policies to help you create a custom AWS Identity and Access Management (IAM) policy for any Session Manager user access scenarios you want to support.

**Topics**
+ [

## Example 1: Grant access to documents in the console
](#grant-access-documents-console-example)
+ [

## Example 2: Restrict access to specific managed nodes
](#restrict-access-example-instances)
+ [

## Example 3: Restrict access based on tags
](#restrict-access-example-instance-tags)
+ [

## Example 4: Allow a user to end only sessions they started
](#restrict-access-example-user-sessions)
+ [

## Example 5: Allow full (administrative) access to all sessions
](#restrict-access-example-full-access)

## Example 1: Grant access to documents in the console

You can allow users to specify a custom document when they launch a session using the Session Manager console. The following example IAM policy grants permission to access documents with names that begin with **SessionDocument-** in the specified AWS Region and AWS account.

To use this policy, replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetDocument",
                "ssm:ListDocuments"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SessionDocument-*"
            ]
        }
    ]
}
```

------

**Note**  
The Session Manager console supports only Session documents that have a `sessionType` of `Standard_Stream`, which are used to define session preferences. For more information, see [Session document schema](session-manager-schema.md).
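The `SessionDocument-*` resource in this policy is a simple prefix wildcard. As a local illustration (the document names here are invented), this is which names it would cover:

```python
from fnmatch import fnmatchcase

# Sketch of which document names fall under the SessionDocument-* resource.
# IAM performs the real matching; fnmatch approximates its wildcard semantics.
documents = [
    "SessionDocument-Prod",
    "SessionDocument-Dev",
    "SSM-SessionManagerRunShell",
    "AWS-StartPortForwardingSession",
]
visible = [d for d in documents if fnmatchcase(d, "SessionDocument-*")]
print(visible)  # ['SessionDocument-Prod', 'SessionDocument-Dev']
```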

## Example 2: Restrict access to specific managed nodes


You can create an IAM policy that defines which managed nodes a user is allowed to connect to using Session Manager. For example, the following policy grants a user permission to start, end, and resume sessions on three specific nodes. The policy prevents the user from connecting to nodes other than those specified.

**Note**  
For federated users, see [Example 4: Allow a user to end only sessions they started](#restrict-access-example-user-sessions).

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/i-1234567890EXAMPLE",
                "arn:aws:ec2:us-east-1:111122223333:instance/i-abcdefghijEXAMPLE",
                "arn:aws:ec2:us-east-1:111122223333:instance/i-0e9d8c7b6aEXAMPLE",
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        }
    ]
}
```

------

## Example 3: Restrict access based on tags


You can restrict access to managed nodes based on specific tags. In the following example, the user is allowed to start and resume sessions (`Effect: Allow, Action: ssm:StartSession, ssm:ResumeSession`) on any managed node (`Resource: arn:aws:ec2:region:987654321098:instance/*`) with the condition that the node is a Finance WebServer (`ssm:resourceTag/Finance: WebServer`). If the user sends a command to a managed node that isn't tagged or that has any tag other than `Finance: WebServer`, the command result will include `AccessDenied`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/*"
            ],
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/Finance": [
                        "WebServers"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:TerminateSession",
                "ssm:ResumeSession"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        }
    ]
}
```

------
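The `StringLike` condition operator in the preceding policy performs a case-sensitive comparison that supports the `*` and `?` wildcards in the policy value. As a rough local illustration (this is shell-style pattern matching with Python's `fnmatch`, not the real IAM evaluation engine):

```python
from fnmatch import fnmatchcase

# StringLike comparisons are case sensitive and support * and ? wildcards.
# Here the policy value is the literal "WebServers", so only an exact match passes.
policy_value = "WebServers"

assert fnmatchcase("WebServers", policy_value)        # tagged node: allowed
assert not fnmatchcase("DbServers", policy_value)     # different tag value: denied
assert not fnmatchcase("webservers", policy_value)    # case mismatch: denied

# A wildcard policy value such as "Web*" would also match "WebServers":
assert fnmatchcase("WebServers", "Web*")
```

Because the comparison is exact unless you add wildcards, a node tagged `Finance: WebServer` (singular) would not satisfy a policy that specifies `WebServers`.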

You can create IAM policies that allow a user to start sessions only on managed nodes that are tagged with multiple specific tags. The following policy allows the user to start sessions on managed nodes that have both of the specified tags applied to them. If a user attempts to start a session on a managed node that isn't tagged with both of these tags, the result will include `AccessDenied`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ssm:resourceTag/tag-key1": [
                        "tag-value1"
                    ],
                    "ssm:resourceTag/tag-key2": [
                        "tag-value2"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:userid}-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        }
    ]
}
```

------
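When multiple condition keys appear under a single condition operator, all of them must match for the statement to apply (a logical AND). A minimal local sketch of that AND semantics, using hypothetical tag names in place of `tag-key1`/`tag-key2` (again with `fnmatch` standing in for `StringLike`):

```python
from fnmatch import fnmatchcase

# All keys listed under one condition operator must match (logical AND).
# The tag names and values here are placeholders, as in the policy above.
required = {"tag-key1": "tag-value1", "tag-key2": "tag-value2"}

def condition_matches(node_tags: dict) -> bool:
    """Return True only if every required tag is present and matches."""
    return all(
        key in node_tags and fnmatchcase(node_tags[key], pattern)
        for key, pattern in required.items()
    )

assert condition_matches({"tag-key1": "tag-value1", "tag-key2": "tag-value2"})
assert not condition_matches({"tag-key1": "tag-value1"})  # missing second tag
```

By contrast, to allow a session when *either* tag matches, you would place the keys in separate statements rather than under one condition block.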

For more information about creating IAM policies, see [Managed Policies and Inline Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html) in the *IAM User Guide*. For more information about tagging managed nodes, see [Tagging your Amazon EC2 resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) in the *Amazon EC2 User Guide* (the content applies to both Windows and Linux managed nodes). For more information about strengthening your security posture against unauthorized root-level commands on your managed nodes, see [Restricting access to root-level commands through SSM Agent](ssm-agent-restrict-root-level-commands.md).

## Example 4: Allow a user to end only sessions they started


Session Manager provides two methods to control which sessions a user in your AWS account is allowed to end.
+ Use the variable `{aws:username}` in an AWS Identity and Access Management (IAM) permissions policy. With this method, users can end only sessions they started. This method doesn't work for accounts that grant access to AWS using federated IDs; for federated users, use Method 2.
+ Use resource tags supplied by AWS in an IAM permissions policy. In the policy, you include a condition that allows users to end only sessions that are tagged with specific tags that AWS has applied. This method works for all accounts, including those that use federated IDs to grant access to AWS.

### Method 1: Grant TerminateSession privileges using the variable `{aws:username}`


The following IAM policy allows a user to view the IDs of all sessions in your account. However, users can interact with managed nodes only through sessions they started. A user who is assigned the following policy can't connect to or end other users' sessions. The policy uses the variable `{aws:username}` to achieve this.

**Note**  
This method doesn't work for accounts that grant access to AWS using federated IDs.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ssm:DescribeSessions"
            ],
            "Effect": "Allow",
            "Resource": [
                "*"
            ]
        },
        {
            "Action": [
                "ssm:TerminateSession"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:ssm:*:*:session/${aws:username}-*"
            ]
        }
    ]
}
```

------
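This policy works because session IDs begin with the name of the user who started the session, followed by a suffix, so the `${aws:username}-*` resource pattern resolves to an ARN pattern that matches only that user's own sessions. As a hedged local illustration (the session ID and ARNs below are made up for the example, and Python's `fnmatch` only approximates IAM's ARN wildcard matching):

```python
from fnmatch import fnmatchcase

# For the user "john", arn:aws:ssm:*:*:session/${aws:username}-* resolves
# to a pattern that matches only sessions whose IDs start with "john-".
pattern = "arn:aws:ssm:*:*:session/john-*"

own = "arn:aws:ssm:us-east-1:111122223333:session/john-07a16060613c408b5"
other = "arn:aws:ssm:us-east-1:111122223333:session/mary-0f3a2b1c9d8e7f6a5"

assert fnmatchcase(own, pattern)        # john's own session: TerminateSession allowed
assert not fnmatchcase(other, pattern)  # another user's session: denied
```

The same naming convention is what makes the `${aws:userid}-*` resource patterns in the earlier examples scope `TerminateSession` and `ResumeSession` to the caller's own sessions.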

### Method 2: Grant TerminateSession privileges using tags supplied by AWS


You can control which sessions a user can end by including condition tag key variables in an IAM policy. The condition specifies that the user can end only sessions that are tagged with one or both of these specific tag keys and a specified value.

When a user in your AWS account starts a session, Session Manager applies two resource tags to the session. The first resource tag is `aws:ssmmessages:target-id`, whose value is the ID of the target node for the session. The other resource tag is `aws:ssmmessages:session-id`, whose value identifies who started the session; for assumed roles, the value has the format `role-id:caller-specified-role-name`.

**Note**  
Session Manager doesn’t support custom tags for this IAM access control policy. You must use the resource tags supplied by AWS, described below. 

**`aws:ssmmessages:target-id`**  
With this tag key, you include the managed node ID as the value in the policy. In the following policy block, the condition statement allows the user to end only sessions on the node `i-02573cafcfEXAMPLE`.    
****  

```
{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                "ssm:TerminateSession"
             ],
             "Resource": "*",
             "Condition": {
                 "StringLike": {
                     "ssm:resourceTag/aws:ssmmessages:target-id": [
                        "i-02573cafcfEXAMPLE"
                     ]
                 }
             }
         }
     ]
}
```
If the user tries to end a session for which they haven’t been granted this `TerminateSession` permission, they receive an `AccessDeniedException` error.

**`aws:ssmmessages:session-id`**  
This tag key includes a variable for the session ID as the value in the request to start a session.  
The following example demonstrates a policy for cases where the caller type is `User`. The value you supply for `aws:ssmmessages:session-id` is the ID of the user. In this example, `AIDIODR4TAW7CSEXAMPLE` represents the ID of a user in your AWS account. To retrieve the ID for a user in your AWS account, use the `get-user` command. For information, see [get-user](https://docs.aws.amazon.com/IAM/latest/UserGuide/get-user.html) in the *IAM User Guide*.     
****  

```
{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                "ssm:TerminateSession"
             ],
             "Resource": "*",
             "Condition": {
                 "StringLike": {
                     "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "AIDIODR4TAW7CSEXAMPLE"
                     ]
                 }
             }
         }
     ]
}
```
The following example demonstrates a policy for cases where the caller type is `AssumedRole`. You can use the `{aws:userid}` variable for the value you supply for `aws:ssmmessages:session-id`. Alternatively, you can hardcode a role ID for the value you supply for `aws:ssmmessages:session-id`. If you hardcode a role ID, you must provide the value in the format `role-id:caller-specified-role-name`. For example, `AIDIODR4TAW7CSEXAMPLE:MyRole`.  
In order for system tags to be applied, the role ID you supply can contain only the following characters: Unicode letters, `0-9`, space, `_`, `.`, `:`, `/`, `=`, `+`, `-`, `@`, and `\`.
To retrieve the role ID for a role in your AWS account, use the AWS STS `get-caller-identity` command. For information, see [get-caller-identity](https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html) in the *AWS CLI Command Reference*.     
****  

```
{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                "ssm:TerminateSession"
             ],
             "Resource": "*",
             "Condition": {
                 "StringLike": {
                     "ssm:resourceTag/aws:ssmmessages:session-id": [
                        "${aws:userid}*"
                     ]
                 }
             }
         }
     ]
}
```
If a user tries to end a session for which they haven’t been granted this `TerminateSession` permission, they receive an `AccessDeniedException` error.

**`aws:ssmmessages:target-id`** and **`aws:ssmmessages:session-id`**  
You can also create IAM policies that allow a user to end sessions that are tagged with both system tags, as shown in this example.    
****  

```
{
   "Version": "2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "ssm:TerminateSession"
         ],
         "Resource":"*",
         "Condition":{
            "StringLike":{
               "ssm:resourceTag/aws:ssmmessages:target-id":[
                  "i-02573cafcfEXAMPLE"
               ],
               "ssm:resourceTag/aws:ssmmessages:session-id":[
                  "${aws:userid}*"
               ]
            }
         }
      }
   ]
}
```

## Example 5: Allow full (administrative) access to all sessions


The following IAM policy allows a user to interact fully with all managed nodes and with all sessions created by any user on any node. Grant it only to an administrator who needs full control over your organization's Session Manager activities.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ssm:StartSession",
                "ssm:TerminateSession",
                "ssm:ResumeSession",
                "ssm:DescribeSessions",
                "ssm:GetConnectionStatus"
            ],
            "Effect": "Allow",
            "Resource": [
                "*"
            ]
        },
        {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
       }
    ]
}
```

------

# Step 4: Configure session preferences


Users who have been granted administrative permissions in their AWS Identity and Access Management (IAM) policy can configure session preferences, including the following:
+ Turn on Run As support for Linux managed nodes. This makes it possible to start sessions using the credentials of a specified operating system user instead of the credentials of a system-generated `ssm-user` account that AWS Systems Manager Session Manager can create on a managed node.
+ Configure Session Manager to use AWS KMS key encryption to provide additional protection to the data transmitted between client machines and managed nodes.
+ Configure Session Manager to create and send session history logs to an Amazon Simple Storage Service (Amazon S3) bucket or an Amazon CloudWatch Logs log group. The stored log data can then be used to report on the session connections made to your managed nodes and the commands run on them during the sessions.
+ Configure session timeouts. You can use this setting to specify when to end a session after a period of inactivity.
+ Configure Session Manager to use configurable shell profiles. These customizable profiles allow you to define preferences within sessions such as shell preferences, environment variables, working directories, and running multiple commands when a session is started.

For more information about the permissions needed to configure Session Manager preferences, see [Grant or deny a user permissions to update Session Manager preferences](preference-setting-permissions.md).

**Topics**
+ [Grant or deny a user permissions to update Session Manager preferences](preference-setting-permissions.md)
+ [Specify an idle session timeout value](session-preferences-timeout.md)
+ [Specify maximum session duration](session-preferences-max-timeout.md)
+ [Allow configurable shell profiles](session-preferences-shell-config.md)
+ [Turn on Run As support for Linux and macOS managed nodes](session-preferences-run-as.md)
+ [Turn on KMS key encryption of session data (console)](session-preferences-enable-encryption.md)
+ [Create a Session Manager preferences document (command line)](getting-started-create-preferences-cli.md)
+ [Update Session Manager preferences (command line)](getting-started-configure-preferences-cli.md)

For information about using the Systems Manager console to configure options for logging session data, see the following topics:
+  [Logging session data using Amazon S3 (console)](session-manager-logging-s3.md) 
+  [Streaming session data using Amazon CloudWatch Logs (console)](session-manager-logging-cwl-streaming.md) 
+  [Logging session data using Amazon CloudWatch Logs (console)](session-manager-logging-cloudwatch-logs.md) 

# Grant or deny a user permissions to update Session Manager preferences


Account preferences are stored as AWS Systems Manager (SSM) documents for each AWS Region. Before a user can update account preferences for sessions in your account, they must be granted the necessary permissions to access the type of SSM document where these preferences are stored. These permissions are granted through an AWS Identity and Access Management (IAM) policy.

**Administrator policy to allow preferences to be created and updated**  
An administrator can use the following policy to create and update Session Manager preferences at any time. The policy grants permission to access and update the `SSM-SessionManagerRunShell` document in the us-east-1 Region for account 111122223333.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ssm:CreateDocument",
                "ssm:GetDocument",
                "ssm:UpdateDocument",
                "ssm:DeleteDocument"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        }
    ]
}
```

------

**User policy to prevent preferences from being updated**  
Use the following policy to prevent end users in your account from updating or overriding any Session Manager preferences. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ssm:CreateDocument",
                "ssm:GetDocument",
                "ssm:UpdateDocument",
                "ssm:DeleteDocument"
            ],
            "Effect": "Deny",
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
            ]
        }
    ]
}
```

------
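This policy works because in IAM evaluation an explicit `Deny` always overrides any `Allow`, so it locks preferences even for users who hold a broad allow elsewhere. A greatly simplified local sketch of that deny-overrides logic (not the real IAM engine, which also evaluates conditions, resource policies, SCPs, and more):

```python
# Greatly simplified deny-overrides evaluation: an explicit Deny wins over
# any Allow; with neither, the request is implicitly denied.
def evaluate(statements, action, resource):
    decision = "ImplicitDeny"
    for effect, actions, resources in statements:
        if action in actions and resource in resources:
            if effect == "Deny":
                return "Deny"          # explicit deny short-circuits
            decision = "Allow"
    return decision

doc = "arn:aws:ssm:us-east-1:111122223333:document/SSM-SessionManagerRunShell"
statements = [
    ("Allow", {"ssm:UpdateDocument"}, {doc}),   # broad allow granted elsewhere
    ("Deny",  {"ssm:UpdateDocument"}, {doc}),   # this preference-lock policy
]
assert evaluate(statements, "ssm:UpdateDocument", doc) == "Deny"
```

Because the deny is evaluated regardless of statement order, attaching this policy is sufficient to prevent preference changes without auditing the user's other permissions.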

# Specify an idle session timeout value


Session Manager, a tool in AWS Systems Manager, allows you to specify how long a user can be inactive before the system ends a session. By default, sessions time out after 20 minutes of inactivity. You can modify this setting to specify a timeout of between 1 and 60 minutes of inactivity. Some professional computing security agencies recommend setting idle session timeouts to a maximum of 15 minutes.

The idle session timeout timer resets when Session Manager receives client-side inputs. These inputs include, but are not limited to:
+ Keyboard input in the terminal
+ Terminal or browser window resize events
+ Session reconnection (ResumeSession), which can occur due to network interruptions, browser tab management, or WebSocket disconnections

Because these events reset the idle timer, a session might remain active longer than the configured timeout period even without direct terminal commands.

If your security requirements mandate strict session duration limits regardless of activity, use the *Maximum session duration* setting in addition to idle timeout. For more information, see [Specify maximum session duration](session-preferences-max-timeout.md).
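The interplay between the two settings can be sketched as arithmetic: a session ends at whichever comes first, the idle timeout after the last client-side input or the maximum duration after the session started. The helper and timestamps below are purely illustrative:

```python
from datetime import datetime, timedelta

# Illustrative arithmetic only: a session ends at whichever comes first,
# idle timeout after the last client-side input or the maximum duration
# after the session started.
def session_end(start, last_activity, idle_timeout_min, max_duration_min):
    idle_deadline = last_activity + timedelta(minutes=idle_timeout_min)
    hard_deadline = start + timedelta(minutes=max_duration_min)
    return min(idle_deadline, hard_deadline)

start = datetime(2025, 1, 1, 9, 0)
# Activity keeps resetting the idle timer, but a 60-minute maximum
# duration still caps this session at 10:00.
end = session_end(start, last_activity=datetime(2025, 1, 1, 9, 55),
                  idle_timeout_min=20, max_duration_min=60)
assert end == datetime(2025, 1, 1, 10, 0)
```

In other words, the maximum duration provides the hard stop that the idle timeout alone cannot guarantee.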

**To allow idle session timeout (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. In the **minutes** field under **Idle session timeout**, specify the amount of time a user can be inactive before the session ends.

1. Choose **Save**.

# Specify maximum session duration


Session Manager, a tool in AWS Systems Manager, allows you to specify the maximum duration of a session before it ends. By default, sessions do not have a maximum duration. The value you specify for maximum session duration must be between 1 and 1,440 minutes.

**To specify maximum session duration (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable maximum session duration**.

1. In the **minutes** field under **Maximum session duration**, specify the maximum duration of a session before it ends.

1. Choose **Save**.

# Allow configurable shell profiles


By default, sessions on EC2 instances for Linux start using the Bourne shell (sh). However, you might prefer to use another shell, such as bash. By allowing configurable shell profiles, you can customize preferences within sessions, such as shell preferences, environment variables, working directories, and running multiple commands when a session is started.

**Important**  
Systems Manager doesn't check the commands or scripts in your shell profile to see what changes they would make to an instance before they're run. To restrict a user’s ability to modify commands or scripts entered in their shell profile, we recommend the following:  
Create a customized Session-type document for your AWS Identity and Access Management (IAM) users and roles. Then modify the IAM policy for these users and roles so the `StartSession` API operation can only use the Session-type document you have created for them. For information, see [Create a Session Manager preferences document (command line)](getting-started-create-preferences-cli.md) and [Quickstart end user policies for Session Manager](getting-started-restrict-access-quickstart.md#restrict-access-quickstart-end-user).
Modify the IAM policy for your IAM users and roles to deny access to the `UpdateDocument` API operation for the Session-type document resource you create. This allows your users and roles to use the document you created for their session preferences without allowing them to modify any of the settings.

**To turn on configurable shell profiles**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Specify the environment variables, shell preferences, or commands you want to run when your session starts in the fields for the applicable operating systems.

1. Choose **Save**.

The following are some example commands that can be added to your shell profile.

Change to the bash shell and to the `/usr` directory on Linux instances.

```
exec /bin/bash
cd /usr
```

Output a timestamp and welcome message at the start of a session.

------
#### [ Linux & macOS ]

```
timestamp=$(date '+%Y-%m-%dT%H:%M:%SZ')
user=$(whoami)
echo $timestamp && echo "Welcome $user"'!'
echo "You have logged in to a production instance. Note that all session activity is being logged."
```

------
#### [  Windows  ]

```
$timestamp = (Get-Date).ToString("yyyy-MM-ddTH:mm:ssZ")
$splitName = (whoami).Split("\")
$user = $splitName[1]
Write-Host $timestamp
Write-Host "Welcome $user!"
Write-Host "You have logged in to a production instance. Note that all session activity is being logged."
```

------

View dynamic system activity at the start of a session.

------
#### [ Linux & macOS ]

```
top
```

------
#### [  Windows  ]

```
while ($true) { Get-Process | Sort-Object -Descending CPU | Select-Object -First 30; `
Start-Sleep -Seconds 2; cls
Write-Host "Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName"; 
Write-Host "-------  ------    -----      ----- -----   ------     -- -----------"}
```

------

# Turn on Run As support for Linux and macOS managed nodes


By default, Session Manager authenticates connections using the credentials of the system-generated `ssm-user` account that is created on a managed node. (On Linux and macOS machines, this account is added to `/etc/sudoers`.) If you choose, you can instead authenticate sessions using the credentials of an operating system (OS) user account, or a domain user for instances joined to an Active Directory. In this case, Session Manager verifies that the OS account that you specified exists on the node, or in the domain, before starting the session. If you attempt to start a session using an OS account that doesn't exist on the node, or in the domain, the connection fails.

**Note**  
Session Manager does not support using an operating system's `root` user account to authenticate connections. For sessions that are authenticated using an OS user account, the node's OS-level and directory policies, like login restrictions or system resource usage restrictions, might not apply. 

**How it works**  
If you turn on Run As support for sessions, the system checks for access permissions as follows:

1. For the user who is starting the session, has their IAM entity (user or role) been tagged with `SSMSessionRunAs = os user account name`?

   If Yes, does the OS user name exist on the managed node? If it does, start the session. If it doesn't, don't allow a session to start.

   If the IAM entity has *not* been tagged with `SSMSessionRunAs = os user account name`, continue to step 2.

1. If the IAM entity hasn't been tagged with `SSMSessionRunAs = os user account name`, has an OS user name been specified in the AWS account's Session Manager preferences?

   If Yes, does the OS user name exist on the managed node? If it does, start the session. If it doesn't, don't allow a session to start. 

**Note**  
When you activate Run As support, it prevents Session Manager from starting sessions using the `ssm-user` account on a managed node. This means that if Session Manager fails to connect using the specified OS user account, it doesn't fall back to connecting using the default method.   
If you activate Run As without specifying an OS account or tagging an IAM entity, and you have not specified an OS account in Session Manager preferences, session connection attempts will fail.
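The two-step check above amounts to a lookup order: the `SSMSessionRunAs` tag on the IAM entity wins, then the account-level preference, and if the resolved user doesn't exist on the node the session is simply refused. A hypothetical sketch of that order (this helper is illustrative, not an AWS API):

```python
# Hypothetical sketch of the Run As lookup order described above; not an
# AWS API. Returns the OS user to start the session as, or None when the
# session is refused -- there is no fallback to ssm-user.
def resolve_run_as(iam_tags, preferences_default, node_os_users):
    candidate = iam_tags.get("SSMSessionRunAs") or preferences_default or None
    if candidate and candidate in node_os_users:
        return candidate
    return None

node_users = {"ec2-user", "appadmin"}
# The IAM tag wins over the account-level preference.
assert resolve_run_as({"SSMSessionRunAs": "appadmin"}, "ec2-user", node_users) == "appadmin"
# No tag: fall back to the preference.
assert resolve_run_as({}, "ec2-user", node_users) == "ec2-user"
# The tagged user doesn't exist on the node: session refused.
assert resolve_run_as({"SSMSessionRunAs": "ghost"}, "ec2-user", node_users) is None
```

The last case illustrates why activating Run As without a valid OS account anywhere in this chain causes all session connection attempts to fail.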

**To turn on Run As support for Linux and macOS managed nodes**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable Run As support for Linux instances**.

1. Do one of the following:
   + **Option 1**: In the **Operating system user name** field, enter the name of the OS user account that you want to use to start sessions. Using this option, all sessions are run by the same OS user for all users in your AWS account who connect using Session Manager.
   + **Option 2** (Recommended): Choose the **Open the IAM console** link. In the navigation pane, choose either **Users** or **Roles**. Choose the entity (user or role) to add tags to, and then choose the **Tags** tab. Enter `SSMSessionRunAs` for the key name. Enter the name of an OS user account for the key value. Choose **Save changes**.

     Using this option, you can specify unique OS users for different IAM entities if you choose. For more information about tagging IAM entities (users or roles), see [Tagging IAM resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) in the *IAM User Guide*.

     The following is an example.  
![\[Screenshot of specifying tags for Session Manager Run As permission.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/ssn-run-as-tags.png)

1. Choose **Save**.

# Turn on KMS key encryption of session data (console)


Use AWS Key Management Service (AWS KMS) to create and manage encryption keys. With AWS KMS, you can control the use of encryption across a wide range of AWS services and in your applications. You can specify that session data transmitted between your managed nodes and the local machines of users in your AWS account is encrypted using KMS key encryption. (This is in addition to the TLS 1.2/1.3 encryption that AWS already provides by default.) To encrypt Session Manager session data, create a *symmetric* KMS key using AWS KMS.

AWS KMS encryption is available for `Standard_Stream`, `InteractiveCommands`, and `NonInteractiveCommands` session types. To use the option to encrypt session data using a key created in AWS KMS, version 2.3.539.0 or later of AWS Systems Manager SSM Agent must be installed on the managed node. 

**Note**  
You must allow AWS KMS encryption in order to reset passwords on your managed nodes from the AWS Systems Manager console. For more information, see [Reset a password on a managed node](fleet-manager-reset-password.md#managed-instance-reset-a-password).

You can use a key that you created in your AWS account. You can also use a key that was created in a different AWS account. The creator of the key in a different AWS account must provide you with the permissions needed to use the key.

After you turn on KMS key encryption for your session data, both the users who start sessions and the managed nodes that they connect to must have permission to use the key. You provide permission to use the KMS key with Session Manager through AWS Identity and Access Management (IAM) policies. For information, see the following topics:
+ Add AWS KMS permissions for users in your account: [Sample IAM policies for Session Manager](getting-started-restrict-access-quickstart.md).
+ Add AWS KMS permissions for managed nodes in your account: [Step 2: Verify or add instance permissions for Session Manager](session-manager-getting-started-instance-profile.md).

For more information about creating and managing KMS keys, see the [*AWS Key Management Service Developer Guide*](https://docs.aws.amazon.com/kms/latest/developerguide/).

For information about using the AWS CLI to turn on KMS key encryption of session data in your account, see [Create a Session Manager preferences document (command line)](getting-started-create-preferences-cli.md) or [Update Session Manager preferences (command line)](getting-started-configure-preferences-cli.md).

**Note**  
There is a charge to use KMS keys. For information, see [AWS Key Management Service pricing](https://aws.amazon.com/kms/pricing/).

**To turn on KMS key encryption of session data (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable KMS encryption**.

1. Do one of the following:
   + Choose the button next to **Select a KMS key in my current account**, then select a key from the list.

     -or-

     Choose the button next to **Enter a KMS key alias or KMS key ARN**. Manually enter a KMS key alias for a key created in your current account, or enter the key Amazon Resource Name (ARN) for a key in another account. The following are examples:
     + Key alias: `alias/my-kms-key-alias`
     + Key ARN: `arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-12345EXAMPLE`

     -or-

     Choose **Create new key** to create a new KMS key in your account. After you create the new key, return to the **Preferences** tab and select the key for encrypting session data in your account.

   For more information about sharing keys, see [Allowing External AWS accounts to Access a key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html#key-policy-modifying-external-accounts) in the *AWS Key Management Service Developer Guide*.

1. Choose **Save**.

# Create a Session Manager preferences document (command line)


Use the following procedure to create SSM documents that define your preferences for AWS Systems Manager Session Manager sessions. You can use the document to configure session options including data encryption, session duration, and logging. For example, you can specify whether to store session log data in an Amazon Simple Storage Service (Amazon S3) bucket or Amazon CloudWatch Logs log group. You can create documents that define general preferences for all sessions for an AWS account and AWS Region, or that define preferences for individual sessions. 

**Note**  
You can also configure general session preferences by using the Session Manager console.

Documents used to set Session Manager preferences must have a `sessionType` of `Standard_Stream`. For more information about Session documents, see [Session document schema](session-manager-schema.md).

For information about using the command line to update existing Session Manager preferences, see [Update Session Manager preferences (command line)](getting-started-configure-preferences-cli.md).

For an example of how to create session preferences using CloudFormation, see [Create a Systems Manager document for Session Manager preferences](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-document.html#aws-resource-ssm-document--examples) in the *AWS CloudFormation User Guide*.

**Note**  
This procedure describes how to create documents for setting Session Manager preferences at the AWS account level. To create documents that will be used for setting session-level preferences, specify a document name other than `SSM-SessionManagerRunShell` in the related command inputs.   
To use your document to set preferences for sessions started from the AWS Command Line Interface (AWS CLI), provide the document name as the `--document-name` parameter value. To set preferences for sessions started from the Session Manager console, you can type or select the name of your document from a list.

**To create Session Manager preferences (command line)**

1. Create a JSON file on your local machine with a name such as `SessionManagerRunShell.json`, and then paste the following content into it.

   ```
   {
       "schemaVersion": "1.0",
       "description": "Document to hold regional settings for Session Manager",
       "sessionType": "Standard_Stream",
       "inputs": {
           "s3BucketName": "",
           "s3KeyPrefix": "",
           "s3EncryptionEnabled": true,
           "cloudWatchLogGroupName": "",
           "cloudWatchEncryptionEnabled": true,
           "cloudWatchStreamingEnabled": false,
           "kmsKeyId": "",
           "runAsEnabled": false,
           "runAsDefaultUser": "",
           "idleSessionTimeout": "",
           "maxSessionDuration": "",
           "shellProfile": {
               "windows": "date",
               "linux": "pwd;ls"
           }
       }
   }
   ```

   You can also pass values to your session preferences using parameters instead of hardcoding the values, as shown in the following example.

   ```
   {
      "schemaVersion":"1.0",
      "description":"Session Document Parameter Example JSON Template",
      "sessionType":"Standard_Stream",
      "parameters":{
         "s3BucketName":{
            "type":"String",
            "default":""
         },
         "s3KeyPrefix":{
            "type":"String",
            "default":""
         },
         "s3EncryptionEnabled":{
            "type":"Boolean",
            "default":"false"
         },
         "cloudWatchLogGroupName":{
            "type":"String",
            "default":""
         },
         "cloudWatchEncryptionEnabled":{
            "type":"Boolean",
            "default":"false"
         }
      },
      "inputs":{
         "s3BucketName":"{{s3BucketName}}",
         "s3KeyPrefix":"{{s3KeyPrefix}}",
         "s3EncryptionEnabled":"{{s3EncryptionEnabled}}",
         "cloudWatchLogGroupName":"{{cloudWatchLogGroupName}}",
         "cloudWatchEncryptionEnabled":"{{cloudWatchEncryptionEnabled}}",
         "kmsKeyId":""
      }
   }
   ```

1. Specify where you want to send session data. You can specify an S3 bucket name (with an optional prefix) or a CloudWatch Logs log group name. If you want to further encrypt data between the local client and managed nodes, provide the AWS KMS key to use for encryption. The following is an example.

   ```
   {
     "schemaVersion": "1.0",
     "description": "Document to hold regional settings for Session Manager",
     "sessionType": "Standard_Stream",
     "inputs": {
       "s3BucketName": "amzn-s3-demo-bucket",
       "s3KeyPrefix": "MyS3Prefix",
       "s3EncryptionEnabled": true,
       "cloudWatchLogGroupName": "MyLogGroupName",
       "cloudWatchEncryptionEnabled": true,
       "cloudWatchStreamingEnabled": false,
       "kmsKeyId": "MyKMSKeyID",
       "runAsEnabled": true,
       "runAsDefaultUser": "MyDefaultRunAsUser",
       "idleSessionTimeout": "20",
       "maxSessionDuration": "60",
       "shellProfile": {
           "windows": "MyCommands",
           "linux": "MyCommands"
       }
     }
   }
   ```
**Note**  
If you don't want to encrypt the session log data, change `true` to `false` for `s3EncryptionEnabled`.  
If you aren't sending logs to either an Amazon S3 bucket or a CloudWatch Logs log group, don't want to encrypt active session data, or don't want to turn on Run As support for the sessions in your account, you can delete the lines for those options. Make sure the last line in the `inputs` section doesn't end with a comma.  
If you add a KMS key ID to encrypt your session data, both the users who start sessions and the managed nodes that they connect to must have permission to use the key. You provide permission to use the KMS key with Session Manager through IAM policies. For information, see the following topics:  
Add AWS KMS permissions for users in your account: [Sample IAM policies for Session Manager](getting-started-restrict-access-quickstart.md)
Add AWS KMS permissions for managed nodes in your account: [Step 2: Verify or add instance permissions for Session Manager](session-manager-getting-started-instance-profile.md)

1. Save the file.

1. In the directory where you created the JSON file, run the following command.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-document \
       --name SSM-SessionManagerRunShell \
       --content "file://SessionManagerRunShell.json" \
       --document-type "Session" \
       --document-format JSON
   ```

------
#### [  Windows  ]

   ```
   aws ssm create-document ^
       --name SSM-SessionManagerRunShell ^
       --content "file://SessionManagerRunShell.json" ^
       --document-type "Session" ^
       --document-format JSON
   ```

------
#### [   PowerShell   ]

   ```
   New-SSMDocument `
       -Name "SSM-SessionManagerRunShell" `
       -Content (Get-Content -Raw SessionManagerRunShell.json) `
       -DocumentType "Session" `
       -DocumentFormat JSON
   ```

------

   If successful, the command returns output similar to the following.

   ```
   {
       "DocumentDescription": {
           "Status": "Creating",
           "Hash": "ce4fd0a2ab9b0fae759004ba603174c3ec2231f21a81db8690a33eb66EXAMPLE",
           "Name": "SSM-SessionManagerRunShell",
           "Tags": [],
           "DocumentType": "Session",
           "PlatformTypes": [
               "Windows",
               "Linux"
           ],
           "DocumentVersion": "1",
           "HashType": "Sha256",
           "CreatedDate": 1547750660.918,
           "Owner": "111122223333",
           "SchemaVersion": "1.0",
           "DefaultVersion": "1",
           "DocumentFormat": "JSON",
           "LatestVersion": "1"
       }
   }
   ```
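
   To confirm what was stored, you can retrieve the document content you just created. A minimal sketch, assuming the AWS CLI is configured for the same account and Region:

   ```
   # Print the stored content of the preferences document
   aws ssm get-document \
       --name "SSM-SessionManagerRunShell" \
       --document-format JSON \
       --query "Content" \
       --output text
   ```

   The output should match the JSON you saved in `SessionManagerRunShell.json`.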

# Update Session Manager preferences (command line)


The following procedure describes how to use your preferred command line tool to make changes to the AWS Systems Manager Session Manager preferences for your AWS account in the selected AWS Region. Use Session Manager preferences to specify options for logging session data in an Amazon Simple Storage Service (Amazon S3) bucket or Amazon CloudWatch Logs log group. You can also use Session Manager preferences to encrypt your session data.

**To update Session Manager preferences (command line)**

1. Create a JSON file on your local machine with a name such as `SessionManagerRunShell.json`, and then paste the following content into it.

   ```
   {
       "schemaVersion": "1.0",
       "description": "Document to hold regional settings for Session Manager",
       "sessionType": "Standard_Stream",
       "inputs": {
           "s3BucketName": "",
           "s3KeyPrefix": "",
           "s3EncryptionEnabled": true,
           "cloudWatchLogGroupName": "",
           "cloudWatchEncryptionEnabled": true,
           "cloudWatchStreamingEnabled": false,
           "kmsKeyId": "",
           "runAsEnabled": true,
           "runAsDefaultUser": "",
           "idleSessionTimeout": "",
           "maxSessionDuration": "",
           "shellProfile": {
               "windows": "date",
               "linux": "pwd;ls"
           }
       }
   }
   ```

1. Specify where you want to send session data. You can specify an S3 bucket name (with an optional prefix) or a CloudWatch Logs log group name. If you want to further encrypt data between the local client and managed nodes, provide the AWS KMS key to use for encryption. The following is an example.

   ```
   {
     "schemaVersion": "1.0",
     "description": "Document to hold regional settings for Session Manager",
     "sessionType": "Standard_Stream",
     "inputs": {
       "s3BucketName": "amzn-s3-demo-bucket",
       "s3KeyPrefix": "MyS3Prefix",
       "s3EncryptionEnabled": true,
       "cloudWatchLogGroupName": "MyLogGroupName",
       "cloudWatchEncryptionEnabled": true,
       "cloudWatchStreamingEnabled": false,
       "kmsKeyId": "MyKMSKeyID",
       "runAsEnabled": true,
       "runAsDefaultUser": "MyDefaultRunAsUser",
       "idleSessionTimeout": "20",
       "maxSessionDuration": "60",
       "shellProfile": {
           "windows": "MyCommands",
           "linux": "MyCommands"
       }
     }
   }
   ```
**Note**  
If you don't want to encrypt the session log data, change `true` to `false` for `s3EncryptionEnabled`.  
If you aren't sending logs to either an Amazon S3 bucket or a CloudWatch Logs log group, don't want to encrypt active session data, or don't want to turn on Run As support for the sessions in your account, you can delete the lines for those options. Make sure the last line in the `inputs` section doesn't end with a comma.  
If you add a KMS key ID to encrypt your session data, both the users who start sessions and the managed nodes that they connect to must have permission to use the key. You provide permission to use the KMS key with Session Manager through AWS Identity and Access Management (IAM) policies. For information, see the following topics:  
Add AWS KMS permissions for users in your account: [Sample IAM policies for Session Manager](getting-started-restrict-access-quickstart.md).
Add AWS KMS permissions for managed nodes in your account: [Step 2: Verify or add instance permissions for Session Manager](session-manager-getting-started-instance-profile.md).

1. Save the file.

1. In the directory where you created the JSON file, run the following command.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-document \
       --name "SSM-SessionManagerRunShell" \
       --content "file://SessionManagerRunShell.json" \
       --document-version "\$LATEST"
   ```

------
#### [  Windows  ]

   ```
   aws ssm update-document ^
       --name "SSM-SessionManagerRunShell" ^
       --content "file://SessionManagerRunShell.json" ^
       --document-version "$LATEST"
   ```

------
#### [   PowerShell   ]

   ```
   Update-SSMDocument `
       -Name "SSM-SessionManagerRunShell" `
       -Content (Get-Content -Raw SessionManagerRunShell.json) `
       -DocumentVersion '$LATEST'
   ```

------

   If successful, the command returns output similar to the following.

   ```
   {
       "DocumentDescription": {
           "Status": "Updating",
           "Hash": "ce4fd0a2ab9b0fae759004ba603174c3ec2231f21a81db8690a33eb66EXAMPLE",
           "Name": "SSM-SessionManagerRunShell",
           "Tags": [],
           "DocumentType": "Session",
           "PlatformTypes": [
               "Windows",
               "Linux"
           ],
           "DocumentVersion": "2",
           "HashType": "Sha256",
           "CreatedDate": 1537206341.565,
           "Owner": "111122223333",
           "SchemaVersion": "1.0",
           "DefaultVersion": "1",
           "DocumentFormat": "JSON",
           "LatestVersion": "2"
       }
   }
   ```
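
   In this example output, `DefaultVersion` is still `1` while `LatestVersion` is `2`. If you want the updated content to become the document's default version, you can promote it explicitly. A sketch, using the version number from the example output:

   ```
   # Promote the newly created document version to the default version
   aws ssm update-document-default-version \
       --name "SSM-SessionManagerRunShell" \
       --document-version "2"
   ```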

# Step 5: (Optional) Restrict access to commands in a session


You can restrict the commands that a user can run in an AWS Systems Manager Session Manager session by using a custom `Session` type AWS Systems Manager (SSM) document. In the document, you define the command that is run when the user starts a session and the parameters that the user can provide to the command. The `Session` document `schemaVersion` must be 1.0, and the `sessionType` of the document must be `InteractiveCommands`. You can then create AWS Identity and Access Management (IAM) policies that allow users to access only the `Session` documents that you define. For more information about using IAM policies to restrict access to commands in a session, see [IAM policy examples for interactive commands](#interactive-command-policy-examples).

Documents with the `sessionType` of `InteractiveCommands` are only supported for sessions started from the AWS Command Line Interface (AWS CLI). The user provides the custom document name as the `--document-name` parameter value and provides any command parameter values using the `--parameters` option. For more information about running interactive commands, see [Starting a session (interactive and noninteractive commands)](session-manager-working-with-sessions-start.md#sessions-start-interactive-commands).

Use the following procedure to create a custom `Session` type SSM document that defines the command a user is allowed to run.

## Restrict access to commands in a session (console)


**To restrict the commands a user can run in a Session Manager session (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Choose **Create command or session**.

1. For **Name**, enter a descriptive name for the document.

1. For **Document type**, choose **Session document**.

1. Using JSON or YAML, enter the document content that defines the command a user can run in a Session Manager session, as shown in the following example.

------
#### [ YAML ]

   ```
   ---
   schemaVersion: '1.0'
   description: Document to view a log file on a Linux instance
   sessionType: InteractiveCommands
   parameters:
     logpath:
       type: String
       description: The log file path to read.
       default: "/var/log/amazon/ssm/amazon-ssm-agent.log"
       allowedPattern: "^[a-zA-Z0-9-_/]+(.log)$"
   properties:
     linux:
       commands: "tail -f {{ logpath }}"
       runAsElevated: true
   ```

------
#### [ JSON ]

   ```
   {
       "schemaVersion": "1.0",
       "description": "Document to view a log file on a Linux instance",
       "sessionType": "InteractiveCommands",
       "parameters": {
           "logpath": {
               "type": "String",
               "description": "The log file path to read.",
               "default": "/var/log/amazon/ssm/amazon-ssm-agent.log",
               "allowedPattern": "^[a-zA-Z0-9-_/]+(.log)$"
           }
       },
       "properties": {
           "linux": {
               "commands": "tail -f {{ logpath }}",
               "runAsElevated": true
           }
       }
   }
   ```

------

1. Choose **Create document**.

## Restrict access to commands in a session (command line)


**Before you begin**  
If you haven't already, install and configure the AWS Command Line Interface (AWS CLI) or the AWS Tools for PowerShell. For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

**To restrict the commands a user can run in a Session Manager session (command line)**

1. Create a JSON or YAML file for your document content that defines the command a user can run in a Session Manager session, as shown in the following example.

------
#### [ YAML ]

   ```
   ---
   schemaVersion: '1.0'
   description: Document to view a log file on a Linux instance
   sessionType: InteractiveCommands
   parameters:
     logpath:
       type: String
       description: The log file path to read.
       default: "/var/log/amazon/ssm/amazon-ssm-agent.log"
       allowedPattern: "^[a-zA-Z0-9-_/]+(.log)$"
   properties:
     linux:
       commands: "tail -f {{ logpath }}"
       runAsElevated: true
   ```

------
#### [ JSON ]

   ```
   {
       "schemaVersion": "1.0",
       "description": "Document to view a log file on a Linux instance",
       "sessionType": "InteractiveCommands",
       "parameters": {
           "logpath": {
               "type": "String",
               "description": "The log file path to read.",
               "default": "/var/log/amazon/ssm/amazon-ssm-agent.log",
               "allowedPattern": "^[a-zA-Z0-9-_/]+(.log)$"
           }
       },
       "properties": {
           "linux": {
               "commands": "tail -f {{ logpath }}",
               "runAsElevated": true
           }
       }
   }
   ```

------

1. Run the following commands to create an SSM document using your content that defines the command a user can run in a Session Manager session.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-document \
       --content file://path/to/file/documentContent.json \
       --name "exampleAllowedSessionDocument" \
       --document-type "Session"
   ```

------
#### [  Windows  ]

   ```
   aws ssm create-document ^
       --content file://C:\path\to\file\documentContent.json ^
       --name "exampleAllowedSessionDocument" ^
       --document-type "Session"
   ```

------
#### [   PowerShell   ]

   ```
   $json = Get-Content -Path "C:\path\to\file\documentContent.json" | Out-String
   New-SSMDocument `
       -Content $json `
       -Name "exampleAllowedSessionDocument" `
       -DocumentType "Session"
   ```

------
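
After the document is created, a user with the appropriate permissions can start a session that runs the command it defines. A sketch, assuming a hypothetical instance ID; the `logpath` value must match the document's `allowedPattern`:

```
# Start an interactive-command session using the custom Session document
aws ssm start-session \
    --target i-02573cafcfEXAMPLE \
    --document-name exampleAllowedSessionDocument \
    --parameters logpath="/var/log/amazon/ssm/amazon-ssm-agent.log"
```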

## Interactive command parameters and the AWS CLI


There are a variety of ways you can provide interactive command parameters when using the AWS CLI. Depending on the operating system (OS) of the client machine you use to connect to managed nodes with the AWS CLI, the syntax for commands that contain special or escape characters might differ. The following examples show some of the ways you can provide command parameters when using the AWS CLI, and how to handle special or escape characters.

Parameters stored in Parameter Store can be referenced in the AWS CLI for your command parameters as shown in the following example.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name MyInteractiveCommandDocument \
    --parameters '{"command":["{{ssm:mycommand}}"]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name MyInteractiveCommandDocument ^
    --parameters '{"command":["{{ssm:mycommand}}"]}'
```

------

The following example shows how you can use a shorthand syntax with the AWS CLI to pass parameters.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name MyInteractiveCommandDocument \
    --parameters command="ifconfig"
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name MyInteractiveCommandDocument ^
    --parameters command="ipconfig"
```

------

You can also provide parameters in JSON as shown in the following example.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name MyInteractiveCommandDocument \
    --parameters '{"command":["ifconfig"]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name MyInteractiveCommandDocument ^
    --parameters '{"command":["ipconfig"]}'
```

------

Parameters can also be stored in a JSON file and provided to the AWS CLI as shown in the following example. For more information about using AWS CLI parameters from a file, see [Loading AWS CLI parameters from a file](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-file.html) in the *AWS Command Line Interface User Guide*.

```
{
    "command": [
        "my command"
    ]
}
```

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name MyInteractiveCommandDocument \
    --parameters file://complete/path/to/file/parameters.json
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name MyInteractiveCommandDocument ^
    --parameters file://complete/path/to/file/parameters.json
```

------

You can also generate an AWS CLI skeleton from a JSON input file as shown in the following example. For more information about generating AWS CLI skeletons from JSON input files, see [Generating AWS CLI skeleton and input parameters from a JSON or YAML input file](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-skeleton.html) in the *AWS Command Line Interface User Guide*.

```
{
    "Target": "instance-id",
    "DocumentName": "MyInteractiveCommandDocument",
    "Parameters": {
        "command": [
            "my command"
        ]
    }
}
```

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --cli-input-json file://complete/path/to/file/parameters.json
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --cli-input-json file://complete/path/to/file/parameters.json
```

------

To escape characters inside quotation marks, you must add additional backslashes to the escape characters as shown in the following example.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name MyInteractiveCommandDocument \
    --parameters '{"command":["printf \"abc\\\\tdef\""]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name MyInteractiveCommandDocument ^
    --parameters '{"command":["printf \"abc\\\\tdef\""]}'
```

------

For information about using quotation marks with command parameters in the AWS CLI, see [Using quotation marks with strings in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-quoting-strings.html) in the *AWS Command Line Interface User Guide*.

## IAM policy examples for interactive commands


You can create IAM policies that allow users to access only the `Session` documents you define. This restricts the commands a user can run in a Session Manager session to only the commands defined in your custom `Session` type SSM documents.

 **Allow a user to run an interactive command on a single managed node**     

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ssm:StartSession",
         "Resource":[
            "arn:aws:ec2:us-east-1:444455556666:instance/i-02573cafcfEXAMPLE",
            "arn:aws:ssm:us-east-1:444455556666:document/allowed-session-document"
         ]
      },
      {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
      }
   ]
}
```

 **Allow a user to run an interactive command on all managed nodes**     

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ssm:StartSession",
         "Resource":[
            "arn:aws:ec2:us-east-1:444455556666:instance/*",
            "arn:aws:ssm:us-east-1:444455556666:document/allowed-session-document"
         ]
      },
      {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
      }
   ]
}
```

 **Allow a user to run multiple interactive commands on all managed nodes**     

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ssm:StartSession",
         "Resource":[
            "arn:aws:ec2:us-east-1:444455556666:instance/*",
            "arn:aws:ssm:us-east-1:444455556666:document/allowed-session-document",
            "arn:aws:ssm:us-east-1:444455556666:document/allowed-session-document-2"
         ]
      },
      {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
      }
   ]
}
```
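
To attach one of these policies, you can save it to a file, check that it is valid JSON, and then create an inline policy from the file. A sketch, with hypothetical file, user, and policy names; the `aws iam put-user-policy` call requires IAM permissions and is shown commented out for reference:

```
# Write the example policy to a file and validate the JSON before
# attaching it (file and policy names are placeholders)
cat > allowed-session-policy.json <<'EOF'
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ssm:StartSession",
         "Resource":[
            "arn:aws:ec2:us-east-1:444455556666:instance/*",
            "arn:aws:ssm:us-east-1:444455556666:document/allowed-session-document"
         ]
      },
      {
         "Effect": "Allow",
         "Action": ["ssmmessages:OpenDataChannel"],
         "Resource": ["arn:aws:ssm:*:*:session/${aws:userid}-*"]
      }
   ]
}
EOF
python3 -m json.tool allowed-session-policy.json > /dev/null && echo "policy JSON is valid"

# Attach the inline policy to a user (requires IAM permissions):
# aws iam put-user-policy \
#     --user-name example-user \
#     --policy-name AllowedSessionDocumentOnly \
#     --policy-document file://allowed-session-policy.json
```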

# Step 6: (Optional) Use AWS PrivateLink to set up a VPC endpoint for Session Manager


You can further improve the security posture of your managed nodes by configuring AWS Systems Manager to use an interface virtual private cloud (VPC) endpoint. Interface endpoints are powered by AWS PrivateLink, a technology that allows you to privately access Amazon Elastic Compute Cloud (Amazon EC2) and Systems Manager APIs by using private IP addresses. 

AWS PrivateLink restricts all network traffic between your managed nodes, Systems Manager, and Amazon EC2 to the Amazon network. (Managed nodes don't have access to the internet.) Also, you don't need an internet gateway, a NAT device, or a virtual private gateway. 

For information about creating a VPC endpoint, see [Improve the security of EC2 instances by using VPC endpoints for Systems Manager](setup-create-vpc.md).

The alternative to using a VPC endpoint is to allow outbound internet access on your managed nodes. In this case, the managed nodes must also allow HTTPS (port 443) outbound traffic to the following endpoints:
+ `ec2messages.region.amazonaws.com`
+ `ssm.region.amazonaws.com`
+ `ssmmessages.region.amazonaws.com`

Systems Manager uses the last of these endpoints, `ssmmessages.region.amazonaws.com`, to make calls from SSM Agent to the Session Manager service in the cloud.

To use optional features like AWS Key Management Service (AWS KMS) encryption, streaming logs to Amazon CloudWatch Logs, and sending logs to Amazon Simple Storage Service (Amazon S3), you must allow HTTPS (port 443) outbound traffic to the following endpoints:
+ `kms.region.amazonaws.com`
+ `logs.region.amazonaws.com`
+ `s3.region.amazonaws.com`

For more information about required endpoints for Systems Manager, see [Reference: ec2messages, ssmmessages, and other API operations](systems-manager-setting-up-messageAPIs.md).
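
The interface endpoints themselves are created with the Amazon EC2 API. A minimal sketch for one of the required endpoints, assuming hypothetical VPC, subnet, and security group IDs in `us-east-1`:

```
# Create an interface VPC endpoint for the ssmmessages service
# (VPC, subnet, and security group IDs are placeholders)
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0e123456789EXAMPLE \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.ssmmessages \
    --subnet-ids subnet-0a123456789EXAMPLE \
    --security-group-ids sg-0b123456789EXAMPLE \
    --private-dns-enabled
```

Repeat for the other services your nodes need, such as `com.amazonaws.us-east-1.ssm` and `com.amazonaws.us-east-1.ec2messages`.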

# Step 7: (Optional) Turn on or turn off ssm-user account administrative permissions


Starting with version 2.3.50.0 of AWS Systems Manager SSM Agent, the agent creates a local user account called `ssm-user` and adds it to `/etc/sudoers` (Linux and macOS) or to the Administrators group (Windows). On agent versions earlier than 2.3.612.0, the account is created the first time SSM Agent starts or restarts after installation. On version 2.3.612.0 and later (released May 8, 2019), the `ssm-user` account is created the first time a session is started on a node. `ssm-user` is the default operating system (OS) user when an AWS Systems Manager Session Manager session is started.

If you want to prevent Session Manager users from running administrative commands on a node, you can update the `ssm-user` account permissions. You can also restore these permissions after they have been removed.

**Topics**
+ [Managing ssm-user sudo account permissions on Linux and macOS](#ssm-user-permissions-linux)
+ [Managing ssm-user Administrator account permissions on Windows Server](#ssm-user-permissions-windows)

## Managing ssm-user sudo account permissions on Linux and macOS


Use one of the following procedures to turn on or turn off the ssm-user account sudo permissions on Linux and macOS managed nodes.

**Use Run Command to modify ssm-user sudo permissions (console)**
+ Use the procedure in [Running commands from the console](running-commands-console.md) with the following values:
  + For **Command document**, choose `AWS-RunShellScript`.
  + To remove sudo access, in the **Command parameters** area, paste the following in the **Commands** box.

    ```
    cd /etc/sudoers.d
    echo "#User rules for ssm-user" > ssm-agent-users
    ```

    -or-

    To restore sudo access, in the **Command parameters** area, paste the following in the **Commands** box.

    ```
    cd /etc/sudoers.d 
    echo "ssm-user ALL=(ALL) NOPASSWD:ALL" > ssm-agent-users
    ```
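
The same Run Command operation can also be invoked from the AWS CLI instead of the console. A sketch, assuming a hypothetical instance ID:

```
# Remove sudo access for ssm-user on a target node using Run Command
# (instance ID is a placeholder)
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
    --parameters '{"commands":["cd /etc/sudoers.d","echo \"#User rules for ssm-user\" > ssm-agent-users"]}'
```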

**Use the command line to modify ssm-user sudo permissions (AWS CLI)**

1. Connect to the managed node and run the following command.

   ```
   sudo -s
   ```

1. Change the working directory using the following command.

   ```
   cd /etc/sudoers.d
   ```

1. Open the file named `ssm-agent-users` for editing.

1. To remove sudo access, delete the following line.

   ```
   ssm-user ALL=(ALL) NOPASSWD:ALL
   ```

   -or-

   To restore sudo access, add the following line.

   ```
   ssm-user ALL=(ALL) NOPASSWD:ALL
   ```

1. Save the file.

## Managing ssm-user Administrator account permissions on Windows Server


Use one of the following procedures to turn on or turn off the ssm-user account Administrator permissions on Windows Server managed nodes.

**Use Run Command to modify Administrator permissions (console)**
+ Use the procedure in [Running commands from the console](running-commands-console.md) with the following values:

  For **Command document**, choose `AWS-RunPowerShellScript`.

  To remove administrative access, in the **Command parameters** area, paste the following in the **Commands** box.

  ```
  net localgroup "Administrators" "ssm-user" /delete
  ```

  -or-

  To restore administrative access, in the **Command parameters** area, paste the following in the **Commands** box.

  ```
  net localgroup "Administrators" "ssm-user" /add
  ```

**Use the PowerShell or command prompt window to modify Administrator permissions**

1. Connect to the managed node and open the PowerShell or Command Prompt window.

1. To remove administrative access, run the following command.

   ```
   net localgroup "Administrators" "ssm-user" /delete
   ```

   -or-

   To restore administrative access, run the following command.

   ```
   net localgroup "Administrators" "ssm-user" /add
   ```

**Use the Windows console to modify Administrator permissions**

1. Connect to the managed node and open the PowerShell or Command Prompt window.

1. From the command line, run `lusrmgr.msc` to open the **Local Users and Groups** console.

1. Open the **Users** directory, and then open **ssm-user**.

1. On the **Member Of** tab, do one of the following:
   + To remove administrative access, select **Administrators**, and then choose **Remove**.

     -or-

     To restore administrative access, enter **Administrators** in the text box, and then choose **Add**.

1. Choose **OK**.

# Step 8: (Optional) Allow and control permissions for SSH connections through Session Manager


You can allow users in your AWS account to use the AWS Command Line Interface (AWS CLI) to establish Secure Shell (SSH) connections to managed nodes using AWS Systems Manager Session Manager. Users who connect using SSH can also copy files between their local machines and managed nodes using Secure Copy Protocol (SCP). You can use this functionality to connect to managed nodes without opening inbound ports or maintaining bastion hosts.

 When you establish SSH connections through Session Manager, the AWS CLI and SSM Agent create secure WebSocket connections over TLS to Session Manager endpoints. The SSH session runs within this encrypted tunnel, providing an additional layer of security without requiring inbound ports to be opened on your managed nodes.

After allowing SSH connections, you can use AWS Identity and Access Management (IAM) policies to explicitly allow or deny users, groups, or roles to make SSH connections using Session Manager.

**Note**  
Logging isn't available for Session Manager sessions that connect through port forwarding or SSH. This is because SSH encrypts all session data within the secure TLS connection established between the AWS CLI and Session Manager endpoints, and Session Manager only serves as a tunnel for SSH connections.

**Topics**
+ [Allowing SSH connections for Session Manager](#ssh-connections-enable)
+ [Controlling user permissions for SSH connections through Session Manager](#ssh-connections-permissions)

## Allowing SSH connections for Session Manager


Use the following steps to allow SSH connections through Session Manager on a managed node. 

**To allow SSH connections for Session Manager**

1. On the managed node to which you want to allow SSH connections, do the following:
   + Ensure that SSH is running on the managed node. (You can close inbound ports on the node.)
   + Ensure that SSM Agent version 2.3.672.0 or later is installed on the managed node.

     For information about installing or updating SSM Agent on a managed node, see the following topics:
     + [Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server](manually-install-ssm-agent-windows.md)
     + [Manually installing and uninstalling SSM Agent on EC2 instances for Linux](manually-install-ssm-agent-linux.md)
     + [Manually installing and uninstalling SSM Agent on EC2 instances for macOS](manually-install-ssm-agent-macos.md)
     + [How to install the SSM Agent on hybrid Windows nodes](hybrid-multicloud-ssm-agent-install-windows.md)
     + [How to install the SSM Agent on hybrid Linux nodes](hybrid-multicloud-ssm-agent-install-linux.md)
**Note**  
To use Session Manager with on-premises servers, edge devices, and virtual machines (VMs) that you activated as managed nodes, you must use the advanced-instances tier. For more information about advanced instances, see [Configuring instance tiers](fleet-manager-configure-instance-tiers.md).
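As a quick check from your local machine, you can query the agent version that a node reports to Systems Manager. This is a sketch only; the instance ID is a placeholder, and it assumes your credentials allow `ssm:DescribeInstanceInformation`.

```shell
# Query the SSM Agent version reported by a managed node (placeholder instance ID).
aws ssm describe-instance-information \
    --filters "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
    --query "InstanceInformationList[0].AgentVersion" \
    --output text
```

If the returned version is earlier than 2.3.672.0, update SSM Agent on the node before continuing.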

1. On the local machine from which you want to connect to a managed node using SSH, do the following:
   + Ensure that version 1.1.23.0 or later of the Session Manager plugin is installed.

     For information about installing the Session Manager plugin, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).
   + Update the SSH configuration file to allow running a proxy command that starts a Session Manager session and transfers all data through the connection.

      **Linux and macOS** 
**Tip**  
The SSH configuration file is typically located at `~/.ssh/config`.

     Add the following to the configuration file on the local machine.

     ```
     # SSH over Session Manager
     Host i-* mi-*
         ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
         User ec2-user
     ```

      **Windows** 
**Tip**  
The SSH configuration file is typically located at `C:\Users\<username>\.ssh\config`.

     Add the following to the configuration file on the local machine.

     ```
     # SSH over Session Manager
     Host i-* mi-*
         ProxyCommand C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters portNumber=%p"
     ```
   + Create or verify that you have a Privacy Enhanced Mail certificate (a PEM file), or at minimum a public key, to use when establishing connections to managed nodes. The key must already be associated with the managed node. Set the permissions of your private key file so that only you can read it, as in the following command.

     ```
     chmod 400 <my-key-pair>.pem
     ```

     For example, for an Amazon Elastic Compute Cloud (Amazon EC2) instance, this is the key pair file you created or selected when you created the instance. (You specify the path to the certificate or key as part of the command to start a session. For information about starting a session using SSH, see [Starting a session (SSH)](session-manager-working-with-sessions-start.md#sessions-start-ssh).)
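With the `Host i-* mi-*` entry in place, standard `ssh` and `scp` commands work against instance IDs. The key file name, user name, instance ID, and file paths below are placeholders.

```shell
# Start an SSH session through Session Manager (placeholder key and instance ID).
ssh -i my-key-pair.pem ec2-user@i-02573cafcfEXAMPLE

# Copy a local file to the managed node over the same tunnel using SCP.
scp -i my-key-pair.pem ./example.txt ec2-user@i-02573cafcfEXAMPLE:/home/ec2-user/
```

Because the connection is tunneled through Session Manager, no inbound port on the node needs to be open for either command.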

## Controlling user permissions for SSH connections through Session Manager


After you enable SSH connections through Session Manager on a managed node, you can use IAM policies to allow or deny users, groups, or roles the ability to make SSH connections through Session Manager. 

**To use an IAM policy to allow SSH connections through Session Manager**
+ Use one of the following options:
  + **Option 1**: Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). 

    In the navigation pane, choose **Policies**, and then update the permissions policy for the user or role you want to allow to start SSH connections through Session Manager. 

    For example, add the following element to the Quickstart policy you created in [Quickstart end user policies for Session Manager](getting-started-restrict-access-quickstart.md#restrict-access-quickstart-end-user). Replace each *example resource placeholder* with your own information. 

------
#### [ JSON ]


    ```
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "ssm:StartSession",
                "Resource": [
                    "arn:aws:ec2:us-east-1:111122223333:instance/instance-id",
                    "arn:aws:ssm:*:*:document/AWS-StartSSHSession"
                ]
            },
            {
                "Effect": "Allow",
                "Action": "ssmmessages:OpenDataChannel",
                "Resource": "arn:aws:ssm:*:*:session/${aws:userid}-*"
            }
        ]
    }
    ```

------
  + **Option 2**: Attach an inline policy to a user policy by using the AWS Management Console, the AWS CLI, or the AWS API.

    Using the method of your choice, attach the policy statement in **Option 1** to the policy for an AWS user, group, or role.

    For information, see [Adding and Removing IAM Identity Permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.
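As one way to apply Option 2 from the command line, the following sketch attaches the allow statement as an inline policy. The user name and policy file name are placeholder values; the policy document is the JSON shown in Option 1 saved to a local file.

```shell
# Attach the allow statement as an inline policy (placeholder user and file names).
aws iam put-user-policy \
    --user-name example-user \
    --policy-name AllowSSHOverSessionManager \
    --policy-document file://ssh-session-policy.json
```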

**To use an IAM policy to deny SSH connections through Session Manager**
+ Use one of the following options:
  + **Option 1**: Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/). In the navigation pane, choose **Policies**, and then update the permissions policy for the user or role to block from starting Session Manager sessions. 

    For example, add the following element to the Quickstart policy you created in [Quickstart end user policies for Session Manager](getting-started-restrict-access-quickstart.md#restrict-access-quickstart-end-user).

------
#### [ JSON ]


    ```
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "ssm:StartSession",
                "Resource": "arn:aws:ssm:*:*:document/AWS-StartSSHSession"
            },
            {
                "Effect": "Allow",
                "Action": "ssmmessages:OpenDataChannel",
                "Resource": "arn:aws:ssm:*:*:session/${aws:userid}-*"
            }
        ]
    }
    ```

------
  + **Option 2**: Attach an inline policy to a user policy by using the AWS Management Console, the AWS CLI, or the AWS API.

    Using the method of your choice, attach the policy statement in **Option 1** to the policy for an AWS user, group, or role.

    For information, see [Adding and Removing IAM Identity Permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

# Working with Session Manager


You can use the AWS Systems Manager console, the Amazon Elastic Compute Cloud (Amazon EC2) console, or the AWS Command Line Interface (AWS CLI) to start sessions that connect you to the managed nodes your system administrator has granted you access to using AWS Identity and Access Management (IAM) policies. Depending on your permissions, you can also view information about sessions, resume inactive sessions that haven't timed out, and end sessions. After a session is established, it is not affected by IAM role session duration. For information about limiting session duration with Session Manager, see [Specify an idle session timeout value](session-preferences-timeout.md) and [Specify maximum session duration](session-preferences-max-timeout.md).

For more information about sessions, see [What is a session?](session-manager.md#what-is-a-session)

**Topics**
+ [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md)
+ [Start a session](session-manager-working-with-sessions-start.md)
+ [End a session](session-manager-working-with-sessions-end.md)
+ [View session history](session-manager-working-with-view-history.md)

# Install the Session Manager plugin for the AWS CLI


To initiate Session Manager sessions with your managed nodes by using the AWS Command Line Interface (AWS CLI), you must install the *Session Manager plugin* on your local machine. You can install the plugin on supported versions of Microsoft Windows Server, macOS, Linux, and Ubuntu Server.

**Note**  
To use the Session Manager plugin, you must have AWS CLI version 1.16.12 or later installed on your local machine. For more information, see [Installing or updating the latest version of the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
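You can confirm that your AWS CLI meets this requirement before installing the plugin.

```shell
# Print the installed AWS CLI version; it must be 1.16.12 or later.
aws --version
```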

**Topics**
+ [Session Manager plugin latest version and release history](plugin-version-history.md)
+ [Install the Session Manager plugin on Windows](install-plugin-windows.md)
+ [Install the Session Manager plugin on macOS](install-plugin-macos-overview.md)
+ [Install the Session Manager plugin on Linux](install-plugin-linux-overview.md)
+ [Verify the Session Manager plugin installation](install-plugin-verify.md)
+ [Session Manager plugin on GitHub](plugin-github.md)
+ [(Optional) Turn on Session Manager plugin logging](install-plugin-configure-logs.md)

# Session Manager plugin latest version and release history

Your local machine must be running a supported version of the Session Manager plugin. The current minimum supported version is 1.1.17.0. If you're running an earlier version, your Session Manager operations might not succeed.

To see if you have the latest version, run the following command in the AWS CLI.

**Note**  
The command returns results only if the plugin is located in the default installation directory for your operating system type. You can also check the version in the contents of the `VERSION` file in the directory where you have installed the plugin.

```
session-manager-plugin --version
```

The following table lists all releases of the Session Manager plugin and the features and enhancements included with each version.

**Important**  
We recommend you always run the latest version. The latest version includes enhancements that improve the experience of using the plugin.


| Version | Release date | Details | 
| --- | --- | --- | 
| 1.2.804.0 |  April 2, 2026  | **Enhancement**: Bump to aws-sdk-go-v2 package. **Bug fix**: Update windows install script. **Bug fix**: Update default plugin version for local build. | 
| 1.2.792.0 |  March 17, 2026  | **Bug fix**: Add international keyboard support for Windows. | 
| 1.2.779.0 |  February 12, 2026  | **Enhancement**: Update Go version to 1.25 in Dockerfile. **Bug fix**: Add shebang lines to debian packaging scripts. | 
| 1.2.764.0 |  November 19, 2025  | **Enhancement**: Added support for signing OpenDataChannel request. **Bug fix**: Fix checkstyle issues to support newer Go version. | 
| 1.2.707.0 |  February 6, 2025  | **Enhancement**: Upgraded the Go version to 1.23 in the Dockerfile. Updated the version configuration step in the README. | 
| 1.2.694.0 |  November 20, 2024  | **Bug fix**: Rolled back change that added credentials to OpenDataChannel requests. | 
| 1.2.688.0 |  November 6, 2024  | **This version was deprecated on 11/20/2024.** **Enhancements**:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/plugin-version-history.html) | 
| 1.2.677.0 |  October 10, 2024  | **Enhancement**: Added support for passing the plugin version with OpenDataChannel requests. | 
| 1.2.650.0 |  July 02, 2024  | **Enhancement**: Upgraded aws-sdk-go to 1.54.10. **Bug fix**: Reformatted comments for gofmt check. | 
| 1.2.633.0 |  May 30, 2024  | **Enhancement**: Updated the Dockerfile to use an Amazon Elastic Container Registry (Amazon ECR) image. | 
| 1.2.553.0 |  January 10, 2024  | **Enhancement**: Upgraded aws-sdk-go and dependent Golang packages. | 
| 1.2.536.0 |  December 4, 2023  | **Enhancement**: Added support for passing a [StartSession](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_StartSession.html) API response as an environment variable to session-manager-plugin. | 
| 1.2.497.0 |  August 1, 2023  | **Enhancement**: Upgraded Go SDK to v1.44.302. | 
| 1.2.463.0 |  March 15, 2023  | **Enhancement**: Added support for Mac with Apple silicon (M1) in the macOS bundled installer and signed installer. | 
| 1.2.398.0 |  October 14, 2022  | **Enhancement**: Support golang version 1.17. Update default session-manager-plugin runner for macOS to use python3. Update import path from SSMCLI to session-manager-plugin. | 
| 1.2.339.0 |  June 16, 2022  | **Bug fix**: Fix idle session timeout for port sessions. | 
| 1.2.331.0 |  May 27, 2022  | **Bug fix**: Fix port sessions closing prematurely when the local server doesn't connect before timeout. | 
| 1.2.323.0 |  May 19, 2022  | **Bug fix**: Disable smux keep alive to use the idle session timeout feature. | 
| 1.2.312.0 |  March 31, 2022  | **Enhancement**: Supports more output message payload types. | 
| 1.2.295.0 |  January 12, 2022  | **Bug fix**: Fixed hung sessions caused by the client resending stream data when the agent becomes inactive, and incorrect logs for start\_publication and pause\_publication messages. | 
| 1.2.279.0 |  October 27, 2021  | **Enhancement**: Zip packaging for the Windows platform. | 
| 1.2.245.0 |  August 19, 2021  | **Enhancement**: Upgraded aws-sdk-go to the latest version (v1.40.17) to support AWS IAM Identity Center. | 
| 1.2.234.0 |  July 26, 2021  | **Bug fix**: Handle abrupt session termination in interactive session type. | 
| 1.2.205.0 |  June 10, 2021  | **Enhancement**: Added support for the signed macOS installer. | 
| 1.2.54.0 |  January 29, 2021  | **Enhancement**: Added support for running sessions in NonInteractiveCommands execution mode. | 
| 1.2.30.0 |  November 24, 2020  |  **Enhancement**: (Port forwarding sessions only) Improved overall performance.  | 
| 1.2.7.0 |  October 15, 2020  |  **Enhancement**: (Port forwarding sessions only) Reduced latency and improved overall performance.  | 
| 1.1.61.0 |  April 17, 2020  |  **Enhancement**: Added ARM support for Linux and Ubuntu Server.   | 
| 1.1.54.0 |  January 6, 2020  |  **Bug fix**: Handle race condition scenario of packets being dropped when the Session Manager plugin isn't ready.   | 
|  1.1.50.0  | November 19, 2019 |  **Enhancement**: Added support for forwarding a port to a local unix socket.  | 
|  1.1.35.0  | November 7, 2019 |  **Enhancement**: (Port forwarding sessions only) Send a TerminateSession command to SSM Agent when the local user presses `Ctrl+C`.  | 
| 1.1.33.0 | September 26, 2019 | Enhancement: (Port forwarding sessions only) Send a disconnect signal to the server when the client drops the TCP connection.  | 
| 1.1.31.0 | September 6, 2019 | Enhancement: Update to keep port forwarding session open until remote server closes the connection. | 
|  1.1.26.0  | July 30, 2019 |  **Enhancement**: Update to limit the rate of data transfer during a session.  | 
|  1.1.23.0  | July 9, 2019 |  **Enhancement**: Added support for running SSH sessions using Session Manager.  | 
| 1.1.17.0 | April 4, 2019 |  **Enhancement**: Added support for further encryption of session data using AWS Key Management Service (AWS KMS).  | 
| 1.0.37.0 | September 20, 2018 |  **Enhancement**: Bug fix for Windows version.  | 
| 1.0.0.0 | September 11, 2018 |  Initial release of the Session Manager plugin.  | 

# Install the Session Manager plugin on Windows

You can install the Session Manager plugin on Windows Vista or later using the standalone installer.

When updates are released, you must repeat the installation process to get the latest version of the Session Manager plugin.

**Note**  
Note the following information.  
The Session Manager plugin installer needs Administrator rights to install the plugin.
For best results, we recommend that you start sessions on Windows clients using Windows PowerShell, version 5 or later. Alternatively, you can use the Command shell in Windows 10. The Session Manager plugin only supports PowerShell and the Command shell. Third-party command line tools might not be compatible with the plugin.

**To install the Session Manager plugin using the EXE installer**

1. Download the installer using the following URL.

   ```
   https://s3.amazonaws.com/session-manager-downloads/plugin/latest/windows/SessionManagerPluginSetup.exe
   ```

   Alternatively, you can download a zipped version of the installer using the following URL.

   ```
   https://s3.amazonaws.com/session-manager-downloads/plugin/latest/windows/SessionManagerPlugin.zip
   ```

1. Run the downloaded installer, and follow the on-screen instructions. If you downloaded the zipped version of the installer, you must unzip the installer first.

   Leave the install location box blank to install the plugin to the default directory.
   +  `%PROGRAMFILES%\Amazon\SessionManagerPlugin\bin\` 

1. Verify that the installation was successful. For information, see [Verify the Session Manager plugin installation](install-plugin-verify.md).
**Note**  
If Windows is unable to find the executable, you might need to re-open the command prompt or add the installation directory to your `PATH` environment variable manually. For information, see the troubleshooting topic [Session Manager plugin not automatically added to command line path (Windows)](session-manager-troubleshooting.md#windows-plugin-env-var-not-set).

# Install the Session Manager plugin on macOS

Choose one of the following topics to install the Session Manager plugin on macOS. 

**Note**  
The signed installer is a signed `.pkg` file. The bundled installer uses a `.zip` file. After the file is unzipped, you can install the plugin using the binary.

## Install the Session Manager plugin on macOS with the signed installer


This section describes how to install the Session Manager plugin on macOS using the signed installer.

**To install the Session Manager plugin using the signed installer (macOS)**

1. Download the signed installer.

------
#### [ x86\_64 ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/mac/session-manager-plugin.pkg" -o "session-manager-plugin.pkg"
   ```

------
#### [ Mac with Apple silicon ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/mac_arm64/session-manager-plugin.pkg" -o "session-manager-plugin.pkg"
   ```

------

1. Run the install commands. If the command fails, verify that the `/usr/local/bin` folder exists. If it doesn't, create it and run the command again.

   ```
   sudo installer -pkg session-manager-plugin.pkg -target /
   sudo ln -s /usr/local/sessionmanagerplugin/bin/session-manager-plugin /usr/local/bin/session-manager-plugin
   ```

1. Verify that the installation was successful. For information, see [Verify the Session Manager plugin installation](install-plugin-verify.md).

## Install the Session Manager plugin on macOS


This section describes how to install the Session Manager plugin on macOS using the bundled installer.

**Important**  
Note the following important information.  
By default, the installer requires sudo access to run, because the script installs the plugin to the `/usr/local/sessionmanagerplugin` system directory. If you don't want to install the plugin using sudo, manually update the installer script to install the plugin to a directory that doesn't require sudo access.
The bundled installer doesn't support installing to paths that contain spaces.

**To install the Session Manager plugin using the bundled installer (macOS)**

1. Download the bundled installer.

------
#### [ x86\_64 ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/mac/sessionmanager-bundle.zip" -o "sessionmanager-bundle.zip"
   ```

------
#### [ Mac with Apple silicon ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/mac_arm64/sessionmanager-bundle.zip" -o "sessionmanager-bundle.zip"
   ```

------

1. Unzip the package.

   ```
   unzip sessionmanager-bundle.zip
   ```

1. Run the install command.

   ```
   sudo ./sessionmanager-bundle/install -i /usr/local/sessionmanagerplugin -b /usr/local/bin/session-manager-plugin
   ```
**Note**  
The plugin requires Python 3.10 or later. By default, the install script runs under the system default version of Python. If you installed an alternative version of Python and want to use it to install the Session Manager plugin, run the install script by specifying the absolute path to that Python executable. The following is an example.  

   ```
   sudo /usr/local/bin/python3.11 sessionmanager-bundle/install -i /usr/local/sessionmanagerplugin -b /usr/local/bin/session-manager-plugin
   ```

   The installer installs the Session Manager plugin at `/usr/local/sessionmanagerplugin` and creates the symlink `session-manager-plugin` in the `/usr/local/bin` directory. This eliminates the need to add the install directory to the user's `$PATH` variable.

   To see an explanation of the `-i` and `-b` options, use the `-h` option.

   ```
   ./sessionmanager-bundle/install -h
   ```

1. Verify that the installation was successful. For information, see [Verify the Session Manager plugin installation](install-plugin-verify.md).

**Note**  
To uninstall the plugin, run the following two commands in the order shown.  

```
sudo rm -rf /usr/local/sessionmanagerplugin
```

```
sudo rm /usr/local/bin/session-manager-plugin
```

# Install the Session Manager plugin on Linux


This section includes information about verifying the signature of the Session Manager plugin installer package and installing the plugin on the following Linux distributions:
+ Amazon Linux 2
+ AL2023
+ RHEL
+ Debian Server
+ Ubuntu Server

**Topics**
+ [Verify the signature of the Session Manager plugin](install-plugin-linux-verify-signature.md)
+ [Install the Session Manager plugin on Amazon Linux 2, Amazon Linux 2023, and Red Hat Enterprise Linux distributions](install-plugin-linux.md)
+ [Install the Session Manager plugin on Debian Server and Ubuntu Server](install-plugin-debian-and-ubuntu.md)

# Verify the signature of the Session Manager plugin


The Session Manager plugin RPM and Debian installer packages for Linux instances are cryptographically signed. You can use a public key to verify that the plugin binary and package are original and unmodified. If a file has been altered or damaged, the verification fails. You can verify the signature of the installer package using the GNU Privacy Guard (GPG) tool. The following information applies to Session Manager plugin versions 1.2.707.0 or later.

Complete the following steps to verify the signature of the Session Manager plugin installer package.

**Topics**
+ [Step 1: Download the Session Manager plugin installer package](#install-plugin-linux-verify-signature-installer-packages)
+ [Step 2: Download the associated signature file](#install-plugin-linux-verify-signature-packages)
+ [Step 3: Install the GPG tool](#install-plugin-linux-verify-signature-packages-gpg)
+ [Step 4: Verify the Session Manager plugin installer package on a Linux server](#install-plugin-linux-verify-signature-packages)

## Step 1: Download the Session Manager plugin installer package


Download the Session Manager plugin installer package you want to verify.

**Amazon Linux 2, AL2023, and RHEL RPM packages**

------
#### [ x86\_64 ]

```
curl -o "session-manager-plugin.rpm" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm"
```

------
#### [ ARM64 ]

```
curl -o "session-manager-plugin.rpm" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.rpm"
```

------

**Debian Server and Ubuntu Server Deb packages**

------
#### [ x86\_64 ]

```
curl -o "session-manager-plugin.deb" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb"
```

------
#### [ ARM64 ]

```
curl -o "session-manager-plugin.deb" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_arm64/session-manager-plugin.deb"
```

------

## Step 2: Download the associated signature file


After you download the installer package, download the associated signature file for package verification. To provide an extra layer of protection against unauthorized copying or use of the session-manager-plugin binary file inside the package, we also offer binary signatures, which you can use to validate individual binary files. You can choose to use these binary signatures based on your security needs.

**Amazon Linux 2, AL2023, and RHEL signature packages**

------
#### [ x86\_64 ]

Package:

```
curl -o "session-manager-plugin.rpm.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm.sig"
```

Binary:

```
curl -o "session-manager-plugin.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.sig"
```

------
#### [ ARM64 ]

Package:

```
curl -o "session-manager-plugin.rpm.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.rpm.sig"
```

Binary:

```
curl -o "session-manager-plugin.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.sig"
```

------

**Debian Server and Ubuntu Server Deb signature packages**

------
#### [ x86\_64 ]

Package:

```
curl -o "session-manager-plugin.deb.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb.sig"
```

Binary:

```
curl -o "session-manager-plugin.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.sig"
```

------
#### [ ARM64 ]

Package:

```
curl -o "session-manager-plugin.deb.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_arm64/session-manager-plugin.deb.sig"
```

Binary:

```
curl -o "session-manager-plugin.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_arm64/session-manager-plugin.sig"
```

------

## Step 3: Install the GPG tool


To verify the signature of the Session Manager plugin, you must have the GNU Privacy Guard (GPG) tool installed on your system. The verification process requires GPG version 2.1 or later. You can check your GPG version by running the following command:

```
gpg --version
```

If your GPG version is older than 2.1, update it before proceeding with the verification process. For most systems, you can update the GPG tool using your package manager. For example, on supported Amazon Linux and RHEL versions, you can use the following commands:

```
sudo yum update
sudo yum install gnupg2
```

On supported Ubuntu Server and Debian Server systems, you can use the following commands:

```
sudo apt-get update
sudo apt-get install gnupg2
```

Ensure you have the required GPG version before continuing with the verification process.

## Step 4: Verify the Session Manager plugin installer package on a Linux server


Use the following procedure to verify the Session Manager plugin installer package on a Linux server.

**Note**  
Amazon Linux 2 doesn't support the gpg tool version 2.1 or higher. If the following procedure doesn't work on your Amazon Linux 2 instances, verify the signature on a different platform before installing it on your Amazon Linux 2 instances.

1. Copy the following public key, and save it to a file named `session-manager-plugin.gpg`.

   ```
   -----BEGIN PGP PUBLIC KEY BLOCK-----
   
   mFIEZ5ERQxMIKoZIzj0DAQcCAwQjuZy+IjFoYg57sLTGhF3aZLBaGpzB+gY6j7Ix
   P7NqbpXyjVj8a+dy79gSd64OEaMxUb7vw/jug+CfRXwVGRMNtIBBV1MgU1NNIFNl
   c3Npb24gTWFuYWdlciA8c2Vzc2lvbi1tYW5hZ2VyLXBsdWdpbi1zaWduZXJAYW1h
   em9uLmNvbT4gKEFXUyBTeXN0ZW1zIE1hbmFnZXIgU2Vzc2lvbiBNYW5hZ2VyIFBs
   dWdpbiBMaW51eCBTaWduZXIgS2V5KYkBAAQQEwgAqAUCZ5ERQ4EcQVdTIFNTTSBT
   ZXNzaW9uIE1hbmFnZXIgPHNlc3Npb24tbWFuYWdlci1wbHVnaW4tc2lnbmVyQGFt
   YXpvbi5jb20+IChBV1MgU3lzdGVtcyBNYW5hZ2VyIFNlc3Npb24gTWFuYWdlciBQ
   bHVnaW4gTGludXggU2lnbmVyIEtleSkWIQR5WWNxJM4JOtUB1HosTUr/b2dX7gIe
   AwIbAwIVCAAKCRAsTUr/b2dX7rO1AQCa1kig3lQ78W/QHGU76uHx3XAyv0tfpE9U
   oQBCIwFLSgEA3PDHt3lZ+s6m9JLGJsy+Cp5ZFzpiF6RgluR/2gA861M=
   =2DQm
   -----END PGP PUBLIC KEY BLOCK-----
   ```

1. Import the public key into your keyring. The returned key value should be `2C4D4AFF6F6757EE`.

   ```
   $ gpg --import session-manager-plugin.gpg
   gpg: key 2C4D4AFF6F6757EE: public key "AWS SSM Session Manager <session-manager-plugin-signer@amazon.com> (AWS Systems Manager Session Manager Plugin Linux Signer Key)" imported
   gpg: Total number processed: 1
   gpg:               imported: 1
   ```

1. Run the following command to verify the fingerprint.

   ```
   gpg --fingerprint 2C4D4AFF6F6757EE
   ```

   The fingerprint in the command output should match the following.

   ```
   7959 6371 24CE 093A D501 D47A 2C4D 4AFF 6F67 57EE
   ```

   ```
   pub   nistp256 2025-01-22 [SC]
         7959 6371 24CE 093A D501  D47A 2C4D 4AFF 6F67 57EE
   uid           [ unknown] AWS SSM Session Manager <session-manager-plugin-signer@amazon.com> (AWS Systems Manager Session Manager Plugin Linux Signer Key)
   ```

   If the fingerprint doesn't match, don't install the plugin. Contact AWS Support.

1. Verify the installer package signature. Replace the *signature-filename* and *downloaded-plugin-filename* with the values you specified when downloading the signature file and session-manager-plugin, as listed in the table earlier in this topic.

   ```
   gpg --verify signature-filename downloaded-plugin-filename
   ```

   For example, for the x86\_64 architecture on Amazon Linux 2, the command is as follows:

   ```
   gpg --verify session-manager-plugin.rpm.sig session-manager-plugin.rpm
   ```

   This command returns output similar to the following.

   ```
   gpg: Signature made Mon Feb 3 20:08:32 2025 UTC
   gpg:                using ECDSA key 2C4D4AFF6F6757EE
   gpg: Good signature from "AWS Systems Manager Session Manager <session-manager-plugin-signer@amazon.com> (AWS Systems Manager Session Manager Plugin Linux Signer Key)" [unknown] 
   gpg: WARNING: This key is not certified with a trusted signature! 
   gpg: There is no indication that the signature belongs to the owner. 
   Primary key fingerprint: 7959 6371 24CE 093A D501 D47A 2C4D 4AFF 6F67 57EE
   ```

If the output includes the phrase `BAD signature`, check whether you performed the procedure correctly. If you continue to get this response, contact AWS Support and don't install the package. The warning message about trust doesn't mean that the signature isn't valid, only that you haven't verified the public key. A key is trusted only if you or someone you trust has signed it. If the output includes the phrase `Can't check signature: No public key`, verify that you downloaded Session Manager plugin version 1.2.707.0 or later.
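Putting these steps together, the following is a minimal end-to-end sketch for x86\_64 Amazon Linux 2, assuming you already imported the public key and verified its fingerprint as described above. The `gpg --verify` exit status gates the install, so the package is installed only when verification succeeds.

```shell
# Download the package and its signature, verify, then install only on success.
curl -o "session-manager-plugin.rpm" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm"
curl -o "session-manager-plugin.rpm.sig" "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm.sig"
gpg --verify session-manager-plugin.rpm.sig session-manager-plugin.rpm \
    && sudo yum install -y ./session-manager-plugin.rpm
```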

# Install the Session Manager plugin on Amazon Linux 2, Amazon Linux 2023, and Red Hat Enterprise Linux distributions

Use the following procedure to install the Session Manager plugin on Amazon Linux 2, Amazon Linux 2023 (AL2023), and RHEL distributions.

1. Download and install the Session Manager plugin RPM package.

------
#### [ x86\_64 ]

   On Amazon Linux 2 and RHEL 7, run the following command:

   ```
   sudo yum install -y https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm
   ```

   On AL2023 and RHEL 8 and 9, run the following command:

   ```
   sudo dnf install -y https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm
   ```

------
#### [ ARM64 ]

   On Amazon Linux 2 and RHEL 7, run the following command:

   ```
   sudo yum install -y https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.rpm
   ```

   On AL2023 and RHEL 8 and 9, run the following command:

   ```
   sudo dnf install -y https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_arm64/session-manager-plugin.rpm
   ```

------

1. Verify that the installation was successful. For information, see [Verify the Session Manager plugin installation](install-plugin-verify.md).

**Note**  
If you want to uninstall the plugin, run `sudo yum erase session-manager-plugin -y`.
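The split between `yum` and `dnf` in the steps above can be collapsed into one command by probing for `dnf` first. A minimal sketch, assuming an x86_64 host (the download URL is the same one used in the procedure):

```shell
# Prefer dnf when present (AL2023, RHEL 8 and 9); fall back to yum
# (Amazon Linux 2, RHEL 7).
if command -v dnf >/dev/null 2>&1; then
  PKG_MGR=dnf
else
  PKG_MGR=yum
fi
echo "using $PKG_MGR"
# sudo "$PKG_MGR" install -y https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm
```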

# Install the Session Manager plugin on Debian Server and Ubuntu Server

1. Download the Session Manager plugin deb package.

------
#### [ x86_64 ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb"
   ```

------
#### [ ARM64 ]

   ```
   curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_arm64/session-manager-plugin.deb" -o "session-manager-plugin.deb"
   ```

------

1. Run the install command.

   ```
   sudo dpkg -i session-manager-plugin.deb
   ```

1. Verify that the installation was successful. For information, see [Verify the Session Manager plugin installation](install-plugin-verify.md).

**Note**  
If you ever want to uninstall the plugin, run `sudo dpkg -r session-manager-plugin`.
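The two architecture tabs above differ only in the download directory, so a script can pick the URL from `uname -m`. A sketch, with the directory names taken from the commands above:

```shell
# Map the machine architecture to the matching plugin download directory.
case "$(uname -m)" in
  x86_64)        DIR=ubuntu_64bit ;;
  aarch64|arm64) DIR=ubuntu_arm64 ;;
  *) echo "unsupported architecture" >&2; exit 1 ;;
esac
URL="https://s3.amazonaws.com/session-manager-downloads/plugin/latest/${DIR}/session-manager-plugin.deb"
echo "$URL"
# curl "$URL" -o session-manager-plugin.deb && sudo dpkg -i session-manager-plugin.deb
```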

# Verify the Session Manager plugin installation


Run the following command to verify that the Session Manager plugin installed successfully.

```
session-manager-plugin
```

If the installation was successful, the following message is returned.

```
The Session Manager plugin is installed successfully. Use the AWS CLI to start a session.
```

You can also test the installation by running the [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) command in the [AWS Command Line Interface](https://aws.amazon.com/cli/) (AWS CLI). In the following command, replace *instance-id* with your own information.

```
aws ssm start-session --target instance-id
```

This command will work only if you have installed and configured the AWS CLI, and if your Session Manager administrator has granted you the necessary IAM permissions to access the target managed node using Session Manager.
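Both prerequisites in the preceding paragraph can be checked up front. A small preflight sketch (the output format here is our own):

```shell
# Confirm that the AWS CLI and the Session Manager plugin are both on PATH
# before attempting aws ssm start-session.
for tool in aws session-manager-plugin; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```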

# Session Manager plugin on GitHub


The source code for Session Manager plugin is available on [https://github.com/aws/session-manager-plugin](https://github.com/aws/session-manager-plugin) so that you can adapt the plugin to meet your needs. We encourage you to submit [pull requests](https://github.com/aws/session-manager-plugin/blob/mainline/CONTRIBUTING.md) for changes that you would like to have included. However, Amazon Web Services doesn't provide support for running modified copies of this software.

# (Optional) Turn on Session Manager plugin logging


The Session Manager plugin includes an option to allow logging for sessions that you run. By default, logging is turned off.

If you allow logging, the Session Manager plugin creates log files for both application activity (`session-manager-plugin.log`) and errors (`errors.log`) on your local machine.

**Topics**
+ [Turn on logging for the Session Manager plugin (Windows)](#configure-logs-windows)
+ [Enable logging for the Session Manager plugin (Linux and macOS)](#configure-logs-linux)

## Turn on logging for the Session Manager plugin (Windows)


1. Locate the `seelog.xml.template` file for the plugin. 

   The default location is `C:\Program Files\Amazon\SessionManagerPlugin\seelog.xml.template`.

1. Change the name of the file to `seelog.xml`.

1. Open the file and change `minlevel="off"` to `minlevel="info"` or `minlevel="debug"`.
**Note**  
By default, log entries about opening a data channel and reconnecting sessions are recorded at the **INFO** level. Data flow (packets and acknowledgement) entries are recorded at the **DEBUG** level.

1. Change other configuration options you want to modify. Options you can change include: 
   + **Debug level**: You can change the debug level from `formatid="fmtinfo"` to `formatid="fmtdebug"`.
   + **Log file options**: You can make changes to the log file options, including where the logs are stored, with the exception of the log file names. 
**Important**  
Don't change the file names or logging won't work correctly.

     ```
     <rollingfile type="size" filename="C:\Program Files\Amazon\SessionManagerPlugin\Logs\session-manager-plugin.log" maxsize="30000000" maxrolls="5"/>
     <filter levels="error,critical" formatid="fmterror">
     <rollingfile type="size" filename="C:\Program Files\Amazon\SessionManagerPlugin\Logs\errors.log" maxsize="10000000" maxrolls="5"/>
     ```

1. Save the file.

## Enable logging for the Session Manager plugin (Linux and macOS)


1. Locate the `seelog.xml.template` file for the plugin. 

   The default location is `/usr/local/sessionmanagerplugin/seelog.xml.template`.

1. Change the name of the file to `seelog.xml`.

1. Open the file and change `minlevel="off"` to `minlevel="info"` or `minlevel="debug"`.
**Note**  
By default, log entries about opening data channels and reconnecting sessions are recorded at the **INFO** level. Data flow (packets and acknowledgement) entries are recorded at the **DEBUG** level.

1. Change other configuration options you want to modify. Options you can change include: 
   + **Debug level**: You can change the debug level from `formatid="fmtinfo"` to `formatid="fmtdebug"`.
   + **Log file options**: You can make changes to the log file options, including where the logs are stored, with the exception of the log file names. 
**Important**  
Don't change the file names or logging won't work correctly.

     ```
     <rollingfile type="size" filename="/usr/local/sessionmanagerplugin/logs/session-manager-plugin.log" maxsize="30000000" maxrolls="5"/>
     <filter levels="error,critical" formatid="fmterror">
     <rollingfile type="size" filename="/usr/local/sessionmanagerplugin/logs/errors.log" maxsize="10000000" maxrolls="5"/>
     ```
**Important**  
If you use the specified default directory for storing logs, you must either run session commands using **sudo** or give the directory where the plugin is installed full read and write permissions. To bypass these restrictions, change the location where logs are stored.

1. Save the file.
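The rename-and-edit steps above are easy to script. The following sketch demonstrates the edit against a temporary scratch copy so it's safe to run anywhere; to apply it for real, point the commands (with `sudo`) at `/usr/local/sessionmanagerplugin` instead:

```shell
# Demonstrate renaming seelog.xml.template to seelog.xml and switching
# minlevel from "off" to "info", using a scratch directory and a stub file.
PLUGIN_DIR=$(mktemp -d)
printf '<seelog minlevel="off">\n</seelog>\n' > "$PLUGIN_DIR/seelog.xml.template"
cp "$PLUGIN_DIR/seelog.xml.template" "$PLUGIN_DIR/seelog.xml"
sed -i.bak 's/minlevel="off"/minlevel="info"/' "$PLUGIN_DIR/seelog.xml"
grep minlevel "$PLUGIN_DIR/seelog.xml"
# prints <seelog minlevel="info">
```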

# Start a session

You can use the AWS Systems Manager console, the Amazon Elastic Compute Cloud (Amazon EC2) console, the AWS Command Line Interface (AWS CLI), or SSH to start a session.

**Topics**
+ [Starting a session (Systems Manager console)](#start-sys-console)
+ [Starting a session (Amazon EC2 console)](#start-ec2-console)
+ [Starting a session (AWS CLI)](#sessions-start-cli)
+ [Starting a session (SSH)](#sessions-start-ssh)
+ [Starting a session (port forwarding)](#sessions-start-port-forwarding)
+ [Starting a session (port forwarding to remote host)](#sessions-remote-port-forwarding)
+ [Starting a session (interactive and noninteractive commands)](#sessions-start-interactive-commands)

## Starting a session (Systems Manager console)


You can use the AWS Systems Manager console to start a session with a managed node in your account.

**Note**  
Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).

**To start a session (Systems Manager console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose **Start session**.

1. (Optional) Enter a session description in the **Reason for session** field.

1. For **Target instances**, choose the option button to the left of the managed node that you want to connect to.

   If the node that you want isn't in the list, or if you select a node and receive a configuration error, see [Managed node not available or not configured for Session Manager](session-manager-troubleshooting.md#session-manager-troubleshooting-instances) for troubleshooting steps.

1. Choose **Start session** to launch the session immediately.

   -or-

   Choose **Next** for session options.

1. (Optional) For **Session document**, select the document that you want to run when the session starts. If your document supports runtime parameters, you can enter one or more comma-separated values in each parameter field.

1. Choose **Next**.

1. Choose **Start session**.

After the connection is made, you can run bash commands (Linux and macOS) or PowerShell commands (Windows) as you would through any other connection type.

**Important**  
If you want to allow users to specify a document when starting sessions in the Session Manager console, note the following:  
You must grant users the `ssm:GetDocument` and `ssm:ListDocuments` permissions in their IAM policy. For more information, see [Grant access to custom Session documents in the console](getting-started-restrict-access-examples.md#grant-access-documents-console-example).
The console only supports Session documents that have the `sessionType` defined as `Standard_Stream`. For more information, see [Session document schema](session-manager-schema.md).

## Starting a session (Amazon EC2 console)


You can use the Amazon Elastic Compute Cloud (Amazon EC2) console to start a session with an instance in your account.

**Note**  
If you receive an error that you aren't authorized to perform one or more Systems Manager actions (`ssm:command-name`), then you must contact your administrator for assistance. Your administrator is the person that provided you with your sign-in credentials. Ask that person to update your policies to allow you to start sessions from the Amazon EC2 console. If you're an administrator, see [Sample IAM policies for Session Manager](getting-started-restrict-access-quickstart.md) for more information.

**To start a session (Amazon EC2 console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation pane, choose **Instances**.

1. Select the instance and choose **Connect**.

1. For **Connection method**, choose **Session Manager**.

1. Choose **Connect**.

After the connection is made, you can run bash commands (Linux and macOS) or PowerShell commands (Windows) as you would through any other connection type.

## Starting a session (AWS CLI)


Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).

To use the AWS CLI to run session commands, the Session Manager plugin must also be installed on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

To start a session using the AWS CLI, run the following command replacing *instance-id* with your own information.

```
aws ssm start-session \
    --target instance-id
```

For information about other options you can use with the **start-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.

## Starting a session (SSH)


To start a Session Manager SSH session, version 2.3.672.0 or later of SSM Agent must be installed on the managed node.

**SSH connection requirements**  
Take note of the following requirements and limitations for session connections using SSH through Session Manager:
+ Your target managed node must be configured to support SSH connections. For more information, see [(Optional) Allow and control permissions for SSH connections through Session Manager](session-manager-getting-started-enable-ssh-connections.md).
+ You must connect using the managed node account associated with the Privacy Enhanced Mail (PEM) certificate, not the `ssm-user` account that is used for other types of session connections. For example, on EC2 instances for Linux and macOS, the default user is `ec2-user`. For information about identifying the default user for each instance type, see [Get Information About Your Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connection-prereqs.html#connection-prereqs-get-info-about-instance) in the *Amazon EC2 User Guide*.
+ Logging isn't available for Session Manager sessions that connect through port forwarding or SSH. This is because SSH encrypts all session data within the secure TLS connection established between the AWS CLI and Session Manager endpoints, and Session Manager only serves as a tunnel for SSH connections.

**Note**  
Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).

To start a session using SSH, run the following command. Replace each *example resource placeholder* with your own information.

```
ssh -i /path/my-key-pair.pem username@instance-id
```

**Tip**  
When you start a session using SSH, you can copy local files to the target managed node using the following command format.  

```
scp -i /path/my-key-pair.pem /path/ExampleFile.txt username@instance-id:~
```

For information about other options you can use with the **start-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.

## Starting a session (port forwarding)


To start a Session Manager port forwarding session, version 2.3.672.0 or later of SSM Agent must be installed on the managed node.

**Note**  
Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).  
To use the AWS CLI to run session commands, you must install the Session Manager plugin on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).  
Depending on your operating system and command line tool, the placement of quotation marks can differ and escape characters might be required.

To start a port forwarding session, run the following command from the CLI. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["80"], "localPortNumber":["56789"]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name AWS-StartPortForwardingSession ^
    --parameters portNumber="3389",localPortNumber="56789"
```

------

`portNumber` is the remote port on the managed node where you want the session traffic to be redirected. For example, you might specify port `3389` for connecting to a Windows node using the Remote Desktop Protocol (RDP). If you don't specify the `portNumber` parameter, Session Manager uses `80` as the default value. 

`localPortNumber` is the port on your local computer where traffic starts, such as `56789`. This value is what you enter when connecting to a managed node using a client. For example, **localhost:56789**.

For information about other options you can use with the **start-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.

For more information about port forwarding sessions, see [Port Forwarding Using AWS Systems Manager Session Manager](https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/) in the *AWS News Blog*.
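The `localPortNumber` you pass must not already be in use on your machine. When scripting, a hypothetical helper like the following can ask the operating system for a free local port to pass in (assumes `python3` is available on your local machine):

```shell
# Ask the OS for an unused local TCP port by binding to port 0.
free_port() {
  python3 -c 'import socket
s = socket.socket()
s.bind(("127.0.0.1", 0))
print(s.getsockname()[1])
s.close()'
}
PORT=$(free_port)
echo "local port: $PORT"
# aws ssm start-session --target instance-id \
#     --document-name AWS-StartPortForwardingSession \
#     --parameters "{\"portNumber\":[\"80\"],\"localPortNumber\":[\"$PORT\"]}"
```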

## Starting a session (port forwarding to remote host)


To start a Session Manager port forwarding session to a remote host, version 3.1.1374.0 or later of SSM Agent must be installed on the managed node. The remote host isn't required to be managed by Systems Manager.

**Note**  
Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).  
To use the AWS CLI to run session commands, you must install the Session Manager plugin on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).  
Depending on your operating system and command line tool, the placement of quotation marks can differ and escape characters might be required.

To start a port forwarding session, run the following command from the AWS CLI. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name AWS-StartPortForwardingSessionToRemoteHost \
    --parameters '{"host":["mydb.example.us-east-2.rds.amazonaws.com"],"portNumber":["3306"], "localPortNumber":["3306"]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name AWS-StartPortForwardingSessionToRemoteHost ^
    --parameters host="mydb.example.us-east-2.rds.amazonaws.com",portNumber="3306",localPortNumber="3306"
```

------

The `host` value represents the hostname or IP address of the remote host that you want to connect to. General connectivity and name resolution requirements between the managed node and the remote host still apply.

`portNumber` is the remote port on the managed node where you want the session traffic to be redirected. For example, you might specify port `3389` for connecting to a Windows node using the Remote Desktop Protocol (RDP). If you don't specify the `portNumber` parameter, Session Manager uses `80` as the default value. 

`localPortNumber` is the port on your local computer where traffic starts, such as `56789`. This value is what you enter when connecting to a managed node using a client. For example, **localhost:56789**.

For information about other options you can use with the **start-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.
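After the session starts, the forwarded local port can take a moment to begin accepting connections. A hedged sketch of a readiness check before pointing a client at the port, using bash's `/dev/tcp` redirection feature (the function name is our own):

```shell
# Poll until 127.0.0.1:<port> accepts a TCP connection, up to <tries> attempts,
# one second apart. Requires bash for the /dev/tcp special file.
wait_for_port() {
  local port=$1 tries=${2:-10} i=0
  while [ "$i" -lt "$tries" ]; do
    if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
      echo open; return 0
    fi
    i=$((i + 1)); sleep 1
  done
  echo closed; return 1
}
# Example: wait_for_port 3306 && mysql -h 127.0.0.1 -P 3306 ...
```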

### Starting a session with an Amazon ECS task


Session Manager supports starting a port forwarding session with a task inside an Amazon Elastic Container Service (Amazon ECS) cluster. To do so, enable ECS Exec. For more information, see [Monitor Amazon Elastic Container Service containers with ECS Exec](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html) in the *Amazon Elastic Container Service Developer Guide*.

You must also update the task role in IAM to include the following permissions:

------
#### [ JSON ]

****  

```
{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Effect": "Allow",
           "Action": [
               "ssmmessages:CreateControlChannel",
               "ssmmessages:CreateDataChannel",
               "ssmmessages:OpenControlChannel",
               "ssmmessages:OpenDataChannel"
           ],
           "Resource": "*"
       }
   ]
}
```

------

To start a port forwarding session with an Amazon ECS task, run the following command from the AWS CLI. Replace each *example resource placeholder* with your own information.

**Note**  
Remove the < and > symbols from the `target` parameter. These symbols are provided for reader clarification only.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target ecs:<ECS_cluster_name>_<ECS_container_ID>_<container_runtime_ID> \
    --document-name AWS-StartPortForwardingSessionToRemoteHost \
    --parameters '{"host":["URL"],"portNumber":["port_number"], "localPortNumber":["port_number"]}'
```

------
#### [  Windows  ]

```
aws ssm start-session ^
    --target ecs:<ECS_cluster_name>_<ECS_container_ID>_<container_runtime_ID> ^
    --document-name AWS-StartPortForwardingSessionToRemoteHost ^
    --parameters host="URL",portNumber="port_number",localPortNumber="port_number"
```

------
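The `target` value above concatenates three identifiers with underscores. A small hypothetical helper makes the format explicit (the argument values shown are illustrative, not real resource IDs):

```shell
# Build the Session Manager target string for an ECS task:
# ecs:<ECS_cluster_name>_<ECS_container_ID>_<container_runtime_ID>
ecs_target() {
  printf 'ecs:%s_%s_%s\n' "$1" "$2" "$3"
}
ecs_target example-cluster example-container-id example-runtime-id
# prints ecs:example-cluster_example-container-id_example-runtime-id
```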

## Starting a session (interactive and noninteractive commands)


Before you start a session, make sure that you have completed the setup steps for Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).

To use the AWS CLI to run session commands, the Session Manager plugin must also be installed on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

To start an interactive command session, run the following command. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm start-session \
    --target instance-id \
    --document-name CustomCommandSessionDocument \
    --parameters '{"logpath":["/var/log/amazon/ssm/amazon-ssm-agent.log"]}'
```

------
#### [ Windows ]

```
aws ssm start-session ^
    --target instance-id ^
    --document-name CustomCommandSessionDocument ^
    --parameters logpath="/var/log/amazon/ssm/amazon-ssm-agent.log"
```

------

For information about other options you can use with the **start-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/start-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.

 **More info**   
+  [Use port forwarding in AWS Systems Manager Session Manager to connect to remote hosts](https://aws.amazon.com/blogs/mt/use-port-forwarding-in-aws-systems-manager-session-manager-to-connect-to-remote-hosts/) 
+  [Amazon EC2 instance port forwarding with AWS Systems Manager](https://aws.amazon.com/blogs/mt/amazon-ec2-instance-port-forwarding-with-aws-systems-manager/) 
+  [Manage AWS Managed Microsoft AD resources with Session Manager port forwarding](https://aws.amazon.com/blogs/mt/manage-aws-managed-microsoft-ad-resources-with-session-manager-port-forwarding/) 
+ [Port Forwarding Using AWS Systems Manager Session Manager](https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/) in the *AWS News Blog*.

# End a session

You can end a session that you started in your account using the AWS Systems Manager console or the AWS Command Line Interface (AWS CLI). When you choose the **Terminate** button for a session in the console or call the [TerminateSession](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_TerminateSession.html) API action by using the AWS CLI, Session Manager permanently ends the session and closes the data connection between the Session Manager client and SSM Agent on the managed node. You can't resume a terminated session.

If there is no user activity in an open session for 20 minutes, the idle state triggers a timeout. Session Manager doesn't call `TerminateSession`, but it does close the underlying channel. You can't resume a session closed because of idle timeout.

We recommend always explicitly terminating a session by using the `terminate-session` command, when using the AWS CLI, or the **Terminate** button when using the console. (**Terminate** buttons are located on both the session window and main Session Manager console page.) If you only close a browser or command window, the session remains listed as **Active** in the console for 30 days. When you don't explicitly terminate a session, or when a session times out, any processes that were running on the managed node at the time will continue to run.

**Topics**
+ [Ending a session (console)](#stop-sys-console)
+ [Ending a session (AWS CLI)](#stop-cli)

## Ending a session (console)


You can use the AWS Systems Manager console to end a session in your account.

**To end a session (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. For **Sessions**, choose the option button to the left of the session you want to end.

1. Choose **Terminate**.

## Ending a session (AWS CLI)


To end a session using the AWS CLI, run the following command. Replace *session-id* with your own information.

```
aws ssm terminate-session \
    --session-id session-id
```

For more information about the **terminate-session** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/terminate-session.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/terminate-session.html) in the AWS Systems Manager section of the AWS CLI Command Reference.

# View session history


You can use the AWS Systems Manager console or the AWS Command Line Interface (AWS CLI) to view information about sessions in your account. In the console, you can view session details such as the following:
+ The ID of the session
+ Which user connected to a managed node through a session
+ The ID of the managed node
+ When the session began and ended
+ The status of the session
+ The location specified for storing session logs (if turned on)

Using the AWS CLI, you can view a list of sessions in your account, but not the additional details that are available in the console.

For information about logging session history information, see [Enabling and disabling session logging](session-manager-logging.md).

**Topics**
+ [Viewing session history (console)](#view-console)
+ [Viewing session history (AWS CLI)](#view-history-cli)

## Viewing session history (console)


You can use the AWS Systems Manager console to view details about the sessions in your account.

**To view session history (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Session history** tab.

   -or-

   If the Session Manager home page opens first, choose **Configure Preferences** and then choose the **Session history** tab.

## Viewing session history (AWS CLI)


To view a list of sessions in your account using the AWS CLI, run the following command.

```
aws ssm describe-sessions \
    --state History
```

**Note**  
This command returns only results for connections to targets initiated using Session Manager. It doesn't list connections made through other means, such as Remote Desktop Protocol (RDP) or the Secure Shell Protocol (SSH).

For information about other options you can use with the **describe-sessions** command, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-sessions.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-sessions.html) in the AWS Systems Manager section of the AWS CLI Command Reference.
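To pull specific fields out of the JSON that `describe-sessions` returns, you can pipe the response through a small parser. A sketch using `python3` against an inline sample payload — the field names `SessionId`, `Target`, and `Status` appear in the real response, but the helper itself is ours:

```shell
# Print one line per session: session ID, target node, and status.
summarize_sessions() {
  python3 -c 'import json, sys
for s in json.load(sys.stdin).get("Sessions", []):
    print(s["SessionId"], s.get("Target", ""), s.get("Status", ""))'
}

# Sample payload; in practice, pipe in:
#   aws ssm describe-sessions --state History | summarize_sessions
printf '%s' '{"Sessions":[{"SessionId":"user-0123456789abcdef0","Target":"i-0123456789abcdef0","Status":"Terminated"}]}' | summarize_sessions
# prints user-0123456789abcdef0 i-0123456789abcdef0 Terminated
```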

# Logging session activity


In addition to providing information about current and completed sessions in the Systems Manager console, Session Manager provides you with the ability to log session activity in your AWS account using AWS CloudTrail.

CloudTrail captures session API calls through the Systems Manager console, the AWS Command Line Interface (AWS CLI), and the Systems Manager SDK. You can view the information on the CloudTrail console or store it in a specified Amazon Simple Storage Service (Amazon S3) bucket. One Amazon S3 bucket is used for all CloudTrail logs for your account. For more information, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md).

**Note**  
For recurring or historical analysis of your log files, consider querying CloudTrail logs using [CloudTrail Lake](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html) or a table that you maintain. For more information, see [Querying AWS CloudTrail logs](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html) in the *AWS CloudTrail User Guide*.

## Monitoring session activity using Amazon EventBridge (console)


With EventBridge, you can set up rules to detect when changes happen to AWS resources. You can create a rule to detect when a user in your organization starts or ends a session, and then, for example, receive a notification through Amazon SNS about the event. 

EventBridge support for Session Manager relies on records of API operations that were recorded by CloudTrail. (You can use CloudTrail integration with EventBridge to respond to most AWS Systems Manager events.) Actions that take place within a session, such as an `exit` command, that don't make an API call aren't detected by EventBridge.

The following steps outline how to initiate notifications through Amazon Simple Notification Service (Amazon SNS) when a Session Manager API event occurs, such as **StartSession**.

**To monitor session activity using Amazon EventBridge (console)**

1. Create an Amazon SNS topic to use for sending notifications when the Session Manager event occurs that you want to track.

   For more information, see [Create a Topic](https://docs.aws.amazon.com/sns/latest/dg/CreateTopic.html) in the *Amazon Simple Notification Service Developer Guide*.

1. Create an EventBridge rule to invoke the Amazon SNS target for the type of Session Manager event you want to track.

   For information about how to create the rule, see [Creating Amazon EventBridge rules that react to events](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule.html) in the *Amazon EventBridge User Guide*.

   As you follow the steps to create the rule, make the following selections:
   + For **AWS service**, choose **Systems Manager**.
   + For **Event type**, choose **AWS API Call through CloudTrail**.
   + Choose **Specific operation(s)**, and then enter the Session Manager command or commands (one at a time) you want to receive notifications for. You can choose **StartSession**, **ResumeSession**, and **TerminateSession**. (EventBridge doesn't support `Get*`, `List*`, and `Describe*` commands.)
   + For **Select a target**, choose **SNS topic**. For **Topic**, choose the name of the Amazon SNS topic you created in Step 1.
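
The console builds the underlying event pattern from these selections. For reference, a rule matching the three session operations above uses a pattern like the following sketch; the `detail-type` value shown is the standard one EventBridge assigns to API calls recorded by CloudTrail.

```
{
    "source": ["aws.ssm"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ssm.amazonaws.com"],
        "eventName": ["StartSession", "ResumeSession", "TerminateSession"]
    }
}
```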

For more information, see the *[Amazon EventBridge User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/)* and the *[Amazon Simple Notification Service Getting Started Guide](https://docs.aws.amazon.com/sns/latest/gsg/)*.

# Enabling and disabling session logging


Session logging records information about current and completed sessions in the Systems Manager console. You can also log details about commands run during sessions in your AWS account. Session logging enables you to do the following:
+ Create and store session logs for archival purposes.
+ Generate a report showing details of every connection made to your managed nodes using Session Manager over the past 30 days.
+ Generate notifications for session logging in your AWS account, such as Amazon Simple Notification Service (Amazon SNS) notifications.
+ Automatically initiate another action on an AWS resource as the result of actions performed during a session, such as running an AWS Lambda function, starting an AWS CodePipeline pipeline, or running an AWS Systems Manager Run Command document.

**Important**  
Note the following requirements and limitations for Session Manager:  
Session Manager logs the commands you enter and their output during a session, depending on your session preferences. To prevent sensitive data, such as passwords, from being viewed in your session logs, we recommend using the following commands when entering sensitive data during a session.  

  On Linux and macOS:

  ```
  stty -echo; read passwd; stty echo;
  ```

  On Windows (PowerShell):

  ```
  $Passwd = Read-Host -AsSecureString
  ```
If you're using Windows Server 2012 or earlier, the data in your logs might not be formatted optimally. We recommend using Windows Server 2012 R2 and later for optimal log formats.
If you're using Linux or macOS managed nodes, ensure that the `screen` utility is installed. If it isn't, your log data might be truncated. On Amazon Linux 2, AL2023, and Ubuntu Server, the `screen` utility is installed by default. To install `screen` manually, depending on your version of Linux, run either `sudo yum install screen` or `sudo apt-get install screen`.
Logging isn't available for Session Manager sessions that connect through port forwarding or SSH. This is because SSH encrypts all session data within the secure TLS connection established between the AWS CLI and Session Manager endpoints, and Session Manager only serves as a tunnel for SSH connections.

For more information about the permissions required to use Amazon S3 or Amazon CloudWatch Logs for logging session data, see [Creating an IAM role with permissions for Session Manager and Amazon S3 and CloudWatch Logs (console)](getting-started-create-iam-instance-profile.md#create-iam-instance-profile-ssn-logging).

Refer to the following topics for more information about logging options for Session Manager.

**Topics**
+ [Streaming session data using Amazon CloudWatch Logs (console)](session-manager-logging-cwl-streaming.md)
+ [Logging session data using Amazon S3 (console)](session-manager-logging-s3.md)
+ [Logging session data using Amazon CloudWatch Logs (console)](session-manager-logging-cloudwatch-logs.md)
+ [Configuring session logging to disk](session-manager-logging-disk.md)
+ [Adjusting how long the Session Manager temporary log file is stored on disk](session-manager-logging-disk-retention.md)
+ [Disabling Session Manager logging in CloudWatch Logs and Amazon S3](session-manager-enable-and-disable-logging.md)

# Streaming session data using Amazon CloudWatch Logs (console)


You can send a continual stream of session data logs to Amazon CloudWatch Logs. Streamed session data includes essential details such as the commands a user ran in a session, the ID of the user who ran them, and timestamps for when the session data was streamed to CloudWatch Logs. Streamed logs are JSON-formatted to help you integrate them with your existing logging solutions. Streaming session data isn't supported for interactive commands.

**Note**  
To stream session data from Windows Server managed nodes, you must have PowerShell 5.1 or later installed. By default, Windows Server 2016 and later have the required PowerShell version installed. However, Windows Server 2012 and 2012 R2 don't have the required PowerShell version installed by default. If you haven't already updated PowerShell on your Windows Server 2012 or 2012 R2 managed nodes, you can do so using Run Command. For information about updating PowerShell using Run Command, see [Updating PowerShell using Run Command](run-command-tutorial-update-software.md#rc-console-pwshexample).

**Important**  
If you have the **PowerShell Transcription** policy setting configured on your Windows Server managed nodes, you won't be able to stream session data.

**To stream session data using Amazon CloudWatch Logs (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable** under **CloudWatch logging**.

1. Choose the **Stream session logs** option.

1. (Recommended) Select the check box next to **Allow only encrypted CloudWatch log groups**. With this option turned on, log data is encrypted using the server-side encryption key specified for the log group. If you don't want to encrypt the log data that is sent to CloudWatch Logs, clear the check box. You must also clear the check box if encryption isn't allowed on the log group.

1. For **CloudWatch logs**, to specify the existing CloudWatch Logs log group in your AWS account to upload session logs to, select one of the following:
   + **Enter a log group name in the text box**: Enter the name of a log group that has already been created in your account to store session log data.
   + **Browse log groups**: Select a log group that has already been created in your account to store session log data.

1. Choose **Save**.

# Logging session data using Amazon S3 (console)


You can choose to store session log data in a specified Amazon Simple Storage Service (Amazon S3) bucket for debugging and troubleshooting purposes. The default option is for logs to be sent to an encrypted Amazon S3 bucket. Encryption is performed using the key specified for the bucket, either an AWS KMS key or an Amazon S3 Server-Side Encryption (SSE) key (AES-256). 

**Important**  
When you use virtual hosted–style buckets with Secure Sockets Layer (SSL), the SSL wildcard certificate only matches buckets that don't contain periods. To work around this, use HTTP or write your own certificate verification logic. We recommend that you don't use periods (".") in bucket names when using virtual hosted–style buckets.

**Amazon S3 bucket encryption**  
In order to send logs to your Amazon S3 bucket with encryption, encryption must be allowed on the bucket. For more information about Amazon S3 bucket encryption, see [Amazon S3 Default Encryption for S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html).

**Customer managed key**  
If you're using a KMS key that you manage yourself to encrypt your bucket, then the IAM instance profile attached to your instances must have explicit permissions to read the key. If you use an AWS managed key, the instance doesn't require this explicit permission. For more information about providing the instance profile with access to use the key, see [Allows Key Users to Use the key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key-policy-default-allow-users) in the *AWS Key Management Service Developer Guide*.
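
As a sketch only, a policy attached to the instance profile granting use of a customer managed key might include a statement like the following. The key ARN is a placeholder, and the actions shown (`kms:Decrypt`, `kms:GenerateDataKey`) are the ones commonly needed for reading and writing objects encrypted with a KMS key; confirm the exact requirements for your configuration.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
        }
    ]
}
```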

Follow these steps to configure Session Manager to store session logs in an Amazon S3 bucket.

**Note**  
You can also use the AWS CLI to specify or change the Amazon S3 bucket that session data is sent to. For information, see [Update Session Manager preferences (command line)](getting-started-configure-preferences-cli.md).

**To log session data using Amazon S3 (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable** under **S3 logging**.

1. (Recommended) Select the check box next to **Allow only encrypted S3 buckets**. With this option turned on, log data is encrypted using the server-side encryption key specified for the bucket. If you don't want to encrypt the log data that is sent to Amazon S3, clear the check box. You must also clear the check box if encryption isn't allowed on the S3 bucket.

1. For **S3 bucket name**, select one of the following:
**Note**  
We recommend that you don't use periods (".") in bucket names when using virtual hosted–style buckets. For more information about Amazon S3 bucket-naming conventions, see [Bucket Restrictions and Limitations](https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules) in the *Amazon Simple Storage Service User Guide*.
   + **Choose a bucket name from the list**: Select an Amazon S3 bucket that has already been created in your account to store session log data.
   + **Enter a bucket name in the text box**: Enter the name of an Amazon S3 bucket that has already been created in your account to store session log data.

1. (Optional) For **S3 key prefix**, enter the name of an existing or new folder to store logs in the selected bucket.

1. Choose **Save**.

For more information about working with Amazon S3 and Amazon S3 buckets, see the *[Amazon Simple Storage Service User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/)*.

# Logging session data using Amazon CloudWatch Logs (console)


With Amazon CloudWatch Logs, you can monitor, store, and access log files from various AWS services. You can send session log data to a CloudWatch Logs log group for debugging and troubleshooting purposes. The default option is for log data to be sent with encryption using your KMS key, but you can send the data to your log group with or without encryption. 

Follow these steps to configure AWS Systems Manager Session Manager to send session log data to a CloudWatch Logs log group at the end of your sessions.

**Note**  
You can also use the AWS CLI to specify or change the CloudWatch Logs log group that session data is sent to. For information, see [Update Session Manager preferences (command line)](getting-started-configure-preferences-cli.md).

**To log session data using Amazon CloudWatch Logs (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable** under **CloudWatch logging**.

1. Choose the **Upload session logs** option.

1. (Recommended) Select the check box next to **Allow only encrypted CloudWatch log groups**. With this option turned on, log data is encrypted using the server-side encryption key specified for the log group. If you don't want to encrypt the log data that is sent to CloudWatch Logs, clear the check box. You must also clear the check box if encryption isn't allowed on the log group.

1. For **CloudWatch logs**, to specify the existing CloudWatch Logs log group in your AWS account to upload session logs to, select one of the following:
   + **Choose a log group from the list**: Select a log group that has already been created in your account to store session log data.
   + **Enter a log group name in the text box**: Enter the name of a log group that has already been created in your account to store session log data.

1. Choose **Save**.

For more information about working with CloudWatch Logs, see the *[Amazon CloudWatch Logs User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/)*.

# Configuring session logging to disk


After you enable Session Manager logging to CloudWatch or Amazon S3, all commands executed during a session (and the resulting output from those commands) are logged to a temporary file on the disk of the target instance. The temporary file is named `ipcTempFile.log`. 

Creation of the `ipcTempFile.log` file is controlled by the `SessionLogsDestination` parameter in the SSM Agent configuration file. This parameter accepts the following values:
+ **disk**: If you specify this value and session logging to CloudWatch or Amazon S3 is *enabled*, SSM Agent creates the `ipcTempFile.log` temporary log file and logs session commands and output to disk. Session Manager uploads this log to either CloudWatch or S3 during or after the session, depending on the logging configuration. The log is then deleted according to the duration specified for the SSM Agent `SessionLogsRetentionDurationHours` configuration parameter.

  If you specify this value and session logging to CloudWatch and Amazon S3 is *disabled*, SSM Agent still logs command history and output in the `ipcTempFile.log` file. The file is deleted according to the duration specified for the SSM Agent `SessionLogsRetentionDurationHours` configuration parameter.
+ **none**: If you specify this value and session logging to CloudWatch or Amazon S3 is *enabled*, logging to disk works exactly as if you'd specified `disk`. SSM Agent requires the temporary file when session logging to CloudWatch or Amazon S3 is enabled.

  If you specify this value and session logging to CloudWatch and Amazon S3 is *disabled*, SSM Agent doesn't create the `ipcTempFile.log` file.

Use the following procedure to enable or disable creating the `ipcTempFile.log` temporary log file on disk when a session is started.

**To enable or disable creating the Session Manager temporary log file to disk**

1. Either install SSM Agent on your instance or upgrade to version 3.2.2086 or higher. For information about how to check the agent version number, see [Checking the SSM Agent version number](ssm-agent-get-version.md). For information about how to manually install the agent, locate the procedure for your operating system in the following sections:
   + [Manually installing and uninstalling SSM Agent on EC2 instances for Linux](manually-install-ssm-agent-linux.md)
   + [Manually installing and uninstalling SSM Agent on EC2 instances for macOS](manually-install-ssm-agent-macos.md)
   + [Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server](manually-install-ssm-agent-windows.md)

1. Connect to your instance and locate the `amazon-ssm-agent.json` file in the following location.
   + **Linux**: /etc/amazon/ssm/
   + **macOS**: /opt/aws/ssm/
   + **Windows Server**: C:\\Program Files\\Amazon\\SSM

   If the file `amazon-ssm-agent.json` doesn't exist, copy the contents of the `amazon-ssm-agent.json.template` to a new file in the same directory. Name the new file `amazon-ssm-agent.json`. 

1. Specify either `none` or `disk` for the `SessionLogsDestination` parameter. Save your changes.

1. [Restart](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-status-and-restart.html) SSM Agent.
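
For reference, the relevant portion of `amazon-ssm-agent.json` might look like the following sketch after you set the value to `disk`. The surrounding structure is abbreviated, and the section that contains the parameter can vary by agent version, so follow the layout in your `amazon-ssm-agent.json.template`.

```
{
    "Mgs": {
        "SessionLogsDestination": "disk"
    }
}
```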

If you specified `disk` for the `SessionLogsDestination` parameter, you can verify that SSM Agent creates the temporary log file by starting a new session and then locating the `ipcTempFile.log` in the following location:
+ **Linux**: /var/lib/amazon/ssm/*target ID*/session/orchestration/*session ID*/Standard\_Stream/ipcTempFile.log
+ **macOS**: /opt/aws/ssm/data/*target ID*/session/orchestration/*session ID*/Standard\_Stream/ipcTempFile.log
+ **Windows Server**: C:\\ProgramData\\Amazon\\SSM\\InstanceData\\*target ID*\\session\\orchestration\\*session ID*\\Standard\_Stream\\ipcTempFile.log

**Note**  
By default, the temporary log file is saved on the instance for 14 days.

If you want to update the `SessionLogsDestination` parameter across multiple instances, we recommend you create an SSM Document that specifies the new configuration. You can then use Systems Manager Run Command to implement the change on your instances. For more information, see [Writing your own AWS Systems Manager documents (blog)](https://aws.amazon.com/blogs/mt/writing-your-own-aws-systems-manager-documents/) and [Running commands on managed nodes](running-commands.md).

# Adjusting how long the Session Manager temporary log file is stored on disk


After you enable Session Manager logging to CloudWatch or Amazon S3, all commands executed during a session (and the resulting output from those commands) are logged to a temporary file on the disk of the target instance. The temporary file is named `ipcTempFile.log`. During a session, or after it is completed, Session Manager uploads this temporary log to either CloudWatch or S3. The temporary log is then deleted according to the duration specified for the SSM Agent `SessionLogsRetentionDurationHours` configuration parameter. By default, the temporary log file is saved on the instance for 14 days in the following location:
+ **Linux**: /var/lib/amazon/ssm/*target ID*/session/orchestration/*session ID*/Standard\_Stream/ipcTempFile.log
+ **macOS**: /opt/aws/ssm/data/*target ID*/session/orchestration/*session ID*/Standard\_Stream/ipcTempFile.log
+ **Windows Server**: C:\\ProgramData\\Amazon\\SSM\\InstanceData\\*target ID*\\session\\orchestration\\*session ID*\\Standard\_Stream\\ipcTempFile.log

Use the following procedure to adjust how long the Session Manager temporary log file is stored on disk.

**To adjust how long the `ipcTempFile.log` file is stored on disk**

1. Connect to your instance and locate the `amazon-ssm-agent.json` file in the following location.
   + **Linux**: /etc/amazon/ssm/
   + **macOS**: /opt/aws/ssm/
   + **Windows Server**: C:\\Program Files\\Amazon\\SSM

   If the file `amazon-ssm-agent.json` doesn't exist, copy the contents of the `amazon-ssm-agent.json.template` to a new file in the same directory. Name the new file `amazon-ssm-agent.json`. 

1. Change the value of `SessionLogsRetentionDurationHours` to the desired number of hours. If you set `SessionLogsRetentionDurationHours` to 0, the temporary log file is created during the session and deleted when the session completes, so the log file doesn't persist after the session ends.

1. Save your changes.

1. [Restart](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-status-and-restart.html) SSM Agent.
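
For reference, the default retention of 14 days corresponds to a value of 336 hours. The following sketch shows the relevant portion of `amazon-ssm-agent.json` with a one-day retention; the surrounding structure is abbreviated, so follow the layout in your `amazon-ssm-agent.json.template`.

```
{
    "Mgs": {
        "SessionLogsRetentionDurationHours": 24
    }
}
```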

# Disabling Session Manager logging in CloudWatch Logs and Amazon S3


You can use the Systems Manager console or AWS CLI to disable session logging in your account.

**To disable session logging (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Session Manager**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. To disable CloudWatch logging, in the **CloudWatch logging** section, clear the **Enable** checkbox.

1. To disable S3 logging, in the **S3 logging** section, clear the **Enable** checkbox.

1. Choose **Save**.

**To disable session logging (AWS CLI)**  
To disable session logging using the AWS CLI, follow the instructions in [Update Session Manager preferences (command line)](getting-started-configure-preferences-cli.md).

 In your JSON file, ensure that the `s3BucketName` and `cloudWatchLogGroupName` inputs contain no values. For example: 

```
"inputs": {
        "s3BucketName": "",
        ...
        "cloudWatchLogGroupName": "",
        ...
    }
```

Alternatively, to disable logging, you can remove all `S3*` and `cloudWatch*` inputs from your JSON file.

**Note**  
Depending on your configuration, after you disable CloudWatch or S3, a temporary log file might still be generated to disk by SSM Agent. For information about how to disable logging to disk, see [Configuring session logging to disk](session-manager-logging-disk.md).

# Session document schema


The following information describes the schema elements of a Session document. AWS Systems Manager Session Manager uses Session documents to determine which type of session to start, such as a standard session, a port forwarding session, or a session to run an interactive command.

 [schemaVersion](#version)   
The schema version of the Session document. Session documents only support version 1.0.  
Type: String  
Required: Yes

 [description](#descript)   
A description you specify for the Session document. For example, "Document to start port forwarding session with Session Manager".  
Type: String  
Required: No

 [sessionType](#type)   
The type of session the Session document is used to establish.  
Type: String  
Required: Yes  
Valid values: `InteractiveCommands` \| `NonInteractiveCommands` \| `Port` \| `Standard_Stream`

 [inputs](#in)   
The session preferences to use for sessions established using this Session document. This element is required for Session documents that are used to create `Standard_Stream` sessions.  
Type: StringMap  
Required: No    
 [s3BucketName](#bucket)   
The Amazon Simple Storage Service (Amazon S3) bucket you want to send session logs to at the end of your sessions.  
Type: String  
Required: No  
 [s3KeyPrefix](#prefix)   
The prefix to use when sending logs to the Amazon S3 bucket you specified in the `s3BucketName` input. For more information about using a shared prefix with objects stored in Amazon S3, see [How do I use folders in an S3 bucket?](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html) in the *Amazon Simple Storage Service User Guide*.  
Type: String  
Required: No  
 [s3EncryptionEnabled](#s3Encrypt)   
If set to `true`, the Amazon S3 bucket you specified in the `s3BucketName` input must be encrypted.  
Type: Boolean  
Required: Yes  
 [cloudWatchLogGroupName](#logGroup)   
The name of the Amazon CloudWatch Logs (CloudWatch Logs) group you want to send session logs to at the end of your sessions.  
Type: String  
Required: No  
 [cloudWatchEncryptionEnabled](#cwEncrypt)   
If set to `true`, the log group you specified in the `cloudWatchLogGroupName` input must be encrypted.  
Type: Boolean  
Required: Yes  
 [cloudWatchStreamingEnabled](#cwStream)   
If set to `true`, a continual stream of session data logs is sent to the log group you specified in the `cloudWatchLogGroupName` input. If set to `false`, session logs are sent to the log group you specified in the `cloudWatchLogGroupName` input at the end of your sessions.  
Type: Boolean  
Required: Yes  
 [kmsKeyId](#kms)   
The ID of the AWS KMS key you want to use to further encrypt data between your local client machines and the Amazon Elastic Compute Cloud (Amazon EC2) managed nodes you connect to.  
Type: String  
Required: No  
 [runAsEnabled](#run)   
If set to `true`, you must specify a user account that exists on the managed nodes you will be connecting to in the `runAsDefaultUser` input. Otherwise, sessions will fail to start. By default, sessions are started using the `ssm-user` account created by the AWS Systems Manager SSM Agent. The Run As feature is only supported for connecting to Linux and macOS managed nodes.  
Type: Boolean  
Required: Yes  
 [runAsDefaultUser](#runUser)   
The name of the user account to start sessions with on Linux and macOS managed nodes when the `runAsEnabled` input is set to `true`. The user account you specify for this input must exist on the managed nodes you will be connecting to; otherwise, sessions will fail to start. When determining which OS user account to use, Session Manager checks in the following order: the `SSMSessionRunAs` tag on the IAM user's session tags, then the `SSMSessionRunAs` tag on the assumed IAM role, and finally this `runAsDefaultUser` value from session preferences. For more information, see [Turn on Run As support for Linux and macOS managed nodes](session-preferences-run-as.md).  
Type: String  
Required: No  
 [idleSessionTimeout](#timeout)   
The amount of time of inactivity you want to allow before a session ends. This input is measured in minutes.  
Type: String  
Valid values: 1-60  
Required: No  
 [maxSessionDuration](#maxDuration)   
The maximum amount of time you want to allow before a session ends. This input is measured in minutes.  
Type: String  
Valid values: 1-1440  
Required: No  
 [shellProfile](#shell)   
The preferences you specify per operating system to apply within sessions such as shell preferences, environment variables, working directories, and running multiple commands when a session is started.  
Type: StringMap  
Required: No    
 [windows](#win)   
The shell preferences, environment variables, working directories, and commands you specify for sessions on Windows Server managed nodes.  
Type: String  
Required: No  
 [linux](#lin)   
The shell preferences, environment variables, working directories, and commands you specify for sessions on Linux and macOS managed nodes.  
Type: String  
Required: No

 [parameters](#param)   
An object that defines the parameters the document accepts. For more information about defining document parameters, see **parameters** in the [Top-level data elements](documents-syntax-data-elements-parameters.md#top-level). For parameters that you reference often, we recommend that you store those parameters in Systems Manager Parameter Store and then reference them. You can reference `String` and `StringList` Parameter Store parameters in this section of a document. You can't reference `SecureString` Parameter Store parameters in this section of a document. You can reference a Parameter Store parameter using the following format.  

```
{{ssm:parameter-name}}
```
For more information about Parameter Store, see [AWS Systems Manager Parameter Store](systems-manager-parameter-store.md).  
Type: StringMap  
Required: No

 [properties](#props)   
An object whose values you specify that are used in the `StartSession` API operation.  
For Session documents that are used for `InteractiveCommands` sessions, the properties object includes the commands to run on the operating systems you specify. You can also determine whether commands are run as `root` using the `runAsElevated` boolean property. For more information, see [Restrict access to commands in a session](session-manager-restrict-command-access.md).  
For Session documents that are used for `Port` sessions, the properties object contains the port number where traffic should be redirected to. For an example, see the `Port` type Session document example later in this topic.  
Type: StringMap  
Required: No

`Standard_Stream` type Session document example

------
#### [ YAML ]

```
---
schemaVersion: '1.0'
description: Document to hold regional settings for Session Manager
sessionType: Standard_Stream
inputs:
  s3BucketName: ''
  s3KeyPrefix: ''
  s3EncryptionEnabled: true
  cloudWatchLogGroupName: ''
  cloudWatchEncryptionEnabled: true
  cloudWatchStreamingEnabled: true
  kmsKeyId: ''
  runAsEnabled: true
  runAsDefaultUser: ''
  idleSessionTimeout: '20'
  maxSessionDuration: '60'
  shellProfile:
    windows: ''
    linux: ''
```

------
#### [ JSON ]

```
{
    "schemaVersion": "1.0",
    "description": "Document to hold regional settings for Session Manager",
    "sessionType": "Standard_Stream",
    "inputs": {
        "s3BucketName": "",
        "s3KeyPrefix": "",
        "s3EncryptionEnabled": true,
        "cloudWatchLogGroupName": "",
        "cloudWatchEncryptionEnabled": true,
        "cloudWatchStreamingEnabled": true,
        "kmsKeyId": "",
        "runAsEnabled": true,
        "runAsDefaultUser": "",
        "idleSessionTimeout": "20",
        "maxSessionDuration": "60",
        "shellProfile": {
            "windows": "date",
            "linux": "pwd;ls"
        }
    }
}
```

------

`InteractiveCommands` type Session document example

------
#### [ YAML ]

```
---
schemaVersion: '1.0'
description: Document to view a log file on a Linux instance
sessionType: InteractiveCommands
parameters:
  logpath:
    type: String
    description: The log file path to read.
    default: "/var/log/amazon/ssm/amazon-ssm-agent.log"
    allowedPattern: "^[a-zA-Z0-9-_/]+(.log)$"
properties:
  linux:
    commands: "tail -f {{ logpath }}"
    runAsElevated: true
```

------
#### [ JSON ]

```
{
    "schemaVersion": "1.0",
    "description": "Document to view a log file on a Linux instance",
    "sessionType": "InteractiveCommands",
    "parameters": {
        "logpath": {
            "type": "String",
            "description": "The log file path to read.",
            "default": "/var/log/amazon/ssm/amazon-ssm-agent.log",
            "allowedPattern": "^[a-zA-Z0-9-_/]+(.log)$"
        }
    },
    "properties": {
        "linux": {
            "commands": "tail -f {{ logpath }}",
            "runAsElevated": true
        }
    }
}
```

------
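
An `InteractiveCommands` document like the one above is invoked through the `StartSession` API, with values for its `parameters` section supplied at session start. The following minimal sketch builds such a request; the document name `MyReadLogDocument` and the instance ID are placeholders, and the boto3 call itself is commented out because it requires AWS credentials.

```
import json

# Hypothetical StartSession request invoking an InteractiveCommands document
# like the log-viewing example above. Target and DocumentName are placeholders.
start_session_request = {
    "Target": "i-02573cafcfEXAMPLE",      # placeholder managed node ID
    "DocumentName": "MyReadLogDocument",  # placeholder name for your document
    "Parameters": {
        # Must satisfy the document's allowedPattern and end in ".log"
        "logpath": ["/var/log/amazon/ssm/amazon-ssm-agent.log"],
    },
}

# With boto3 and credentials configured, the session would be started with:
#   import boto3
#   boto3.client("ssm").start_session(**start_session_request)
print(json.dumps(start_session_request))
```

Note that `Parameters` in the `StartSession` request is a map of parameter names to lists of string values.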

`Port` type Session document example

------
#### [ YAML ]

```
---
schemaVersion: '1.0'
description: Document to open given port connection over Session Manager
sessionType: Port
parameters:
  paramExample:
    type: string
    description: document parameter
properties:
  portNumber: anyPortNumber
```

------
#### [ JSON ]

```
{
    "schemaVersion": "1.0",
    "description": "Document to open given port connection over Session Manager",
    "sessionType": "Port",
    "parameters": {
        "paramExample": {
            "type": "string",
            "description": "document parameter"
        }
    },
    "properties": {
        "portNumber": "anyPortNumber"
    }
}
```

------
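
A `Port` type document is also started through the `StartSession` API. The following sketch uses the AWS-provided `AWS-StartPortForwardingSession` document as a concrete example; the instance ID is a placeholder, and the boto3 call is commented out because it requires AWS credentials.

```
import json

# StartSession request shape for a Port type document. The target
# instance ID below is a placeholder.
start_session_request = {
    "Target": "i-02573cafcfEXAMPLE",
    "DocumentName": "AWS-StartPortForwardingSession",
    "Parameters": {
        "portNumber": ["80"],         # port on the managed node
        "localPortNumber": ["56789"]  # port on the local client machine
    },
}

# With boto3 and credentials configured, the session would be started with:
#   import boto3
#   boto3.client("ssm").start_session(**start_session_request)
print(json.dumps(start_session_request, indent=4))
```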

Session document example with special characters

------
#### [ YAML ]

```
---
schemaVersion: '1.0'
description: Example document with quotation marks
sessionType: InteractiveCommands
parameters:
  Test:
    type: String
    description: Test Input
    maxChars: 32
properties:
  windows:
    commands: |
        $Test = '{{ Test }}'
        $myVariable = \"Computer name is $env:COMPUTERNAME\"
        Write-Host "Test variable: $myVariable`.`nInput parameter: $Test"
    runAsElevated: false
```

------
#### [ JSON ]

```
{
   "schemaVersion":"1.0",
   "description":"Test document with quotation marks",
   "sessionType":"InteractiveCommands",
   "parameters":{
      "Test":{
         "type":"String",
         "description":"Test Input",
         "maxChars":32
      }
   },
   "properties":{
      "windows":{
         "commands":[
            "$Test = '{{ Test }}'",
            "$myVariable = \\\"Computer name is $env:COMPUTERNAME\\\"",
            "Write-Host \"Test variable: $myVariable`.`nInput parameter: $Test\""
         ],
         "runAsElevated":false
      }
   }
}
```

------

# Troubleshooting Session Manager


Use the following information to help you troubleshoot problems with AWS Systems Manager Session Manager.

**Topics**
+ [AccessDeniedException when calling the TerminateSession operation](#session-manager-troubleshooting-access-denied-exception)
+ [Document process failed unexpectedly: document worker timed out](#session-manager-troubleshooting-document-worker-timed-out)
+ [Session Manager can't connect from the Amazon EC2 console](#session-manager-troubleshooting-EC2-console)
+ [No permission to start a session](#session-manager-troubleshooting-start-permissions)
+ [SSM Agent not online](#session-manager-troubleshooting-agent-not-online)
+ [No permission to change session preferences](#session-manager-troubleshooting-preferences-permissions)
+ [Managed node not available or not configured for Session Manager](#session-manager-troubleshooting-instances)
+ [Session Manager plugin not found](#plugin-not-found)
+ [Session Manager plugin not automatically added to command line path (Windows)](#windows-plugin-env-var-not-set)
+ [Session Manager plugin becomes unresponsive](#plugin-unresponsive)
+ [TargetNotConnected](#ssh-target-not-connected)
+ [Blank screen displays after starting a session](#session-manager-troubleshooting-start-blank-screen)
+ [Managed node becomes unresponsive during long running sessions](#session-manager-troubleshooting-log-retention)
+ [An error occurred (InvalidDocument) when calling the StartSession operation](#session-manager-troubleshooting-invalid-document)

## AccessDeniedException when calling the TerminateSession operation


**Problem**: When attempting to terminate a session, Systems Manager returns the following error:

```
An error occurred (AccessDeniedException) when calling the TerminateSession operation: 
User: <user_arn> is not authorized to perform: ssm:TerminateSession on resource: 
<ssm_session_arn> because no identity-based policy allows the ssm:TerminateSession action.
```

**Solution A: Confirm that the [latest version of the Session Manager plugin](https://docs.aws.amazon.com/systems-manager/latest/userguide/plugin-version-history.html) is installed on your local machine**

Enter the following command in the terminal and press Enter.

```
session-manager-plugin --version
```

**Solution B: Install or reinstall the latest version of the plugin**

For more information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

**Solution C: Attempt to reestablish a connection to the node**

Verify that the node is responding to requests and try reestablishing the session. If necessary, open the Amazon EC2 console and verify that the instance is in the running state.

## Document process failed unexpectedly: document worker timed out


**Problem**: When starting a session to a Linux host, Systems Manager returns the following error:

```
document process failed unexpectedly: document worker timed out, 
check [ssm-document-worker]/[ssm-session-worker] log for crash reason
```

If you configured SSM Agent logging, as described in [Viewing SSM Agent logs](ssm-agent-logs.md), you can view more details in the debugging log. For this issue, Session Manager shows the following log entry:

```
failed to create channel: too many open files
```

This error typically indicates that there are too many Session Manager worker processes running and the underlying operating system reached a limit. You have two options for resolving this issue.

**Solution A: Increase the operating system file notification limit**

You can increase the limit by running the following command from a separate Linux host. This command uses Systems Manager Run Command. The specified value increases `max_user_instances` to 8192. This value is considerably higher than the default value of 128, but it won't strain host resources:

```
aws ssm send-command --document-name AWS-RunShellScript \
--instance-id i-02573cafcfEXAMPLE  --parameters \
"commands=sudo sysctl fs.inotify.max_user_instances=8192"
```

**Solution B: Decrease the file notifications used by Session Manager on the target host**

Run the following command from a separate Linux host to list sessions running on the target host:

```
aws ssm describe-sessions --state Active --filters key=Target,value=i-02573cafcfEXAMPLE
```

Review the command output to identify sessions that are no longer needed. You can terminate those sessions by running the following command from a separate Linux host:

```
aws ssm terminate-session --session-id session-id
```

Optionally, after there are no more sessions running on the remote host, you can free additional resources by running the following command from a separate Linux host. This command terminates all Session Manager processes running on the remote host, and consequently all sessions to the remote host. Before you run this command, verify that there are no ongoing sessions you want to keep:

```
aws ssm send-command --document-name AWS-RunShellScript \
            --instance-id i-02573cafcfEXAMPLE --parameters \
'{"commands":["sudo kill $(ps aux | grep ssm-session-worker | grep -v grep | awk '"'"'{print $2}'"'"')"]}'
```
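The quoting in that command is dense because of the nested single quotes. The embedded pipeline, `ps aux | grep ssm-session-worker | grep -v grep | awk '{print $2}'`, simply extracts the PIDs of the session worker processes. The following local sketch reproduces that filtering logic in Python against hypothetical `ps aux` output:

```
# Sketch of the PID-extraction pipeline from the kill command above:
# ps aux | grep ssm-session-worker | grep -v grep | awk '{print $2}'
# Sample lines in `ps aux` format (hypothetical processes).
ps_output = """\
root      1201  0.0  0.4 123456  7890 ?  Ssl  10:01  0:00 /usr/bin/amazon-ssm-agent
root      1345  0.0  0.3 112233  4455 ?  Sl   10:05  0:00 ssm-session-worker session-1 i-02573cafcfEXAMPLE
root      1402  0.0  0.3 112233  4460 ?  Sl   10:07  0:00 ssm-session-worker session-2 i-02573cafcfEXAMPLE
ec2-user  1500  0.0  0.1  55555  1111 pts/0 S+ 10:08  0:00 grep ssm-session-worker
"""

pids = [
    line.split()[1]                     # awk '{print $2}': take the PID column
    for line in ps_output.splitlines()
    if "ssm-session-worker" in line     # grep ssm-session-worker
    and "grep" not in line              # grep -v grep
]
print(pids)  # ['1345', '1402']
```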

## Session Manager can't connect from the Amazon EC2 console


**Problem**: After creating a new instance, choosing **Connect** and then the **Session Manager** tab in the Amazon Elastic Compute Cloud (Amazon EC2) console doesn't give you the option to connect.

**Solution A: Create an instance profile**: If you haven't already done so (as instructed by the information on the **Session Manager** tab in the EC2 console), create an AWS Identity and Access Management (IAM) instance profile by using Quick Setup. Quick Setup is a tool in AWS Systems Manager.

Session Manager requires an IAM instance profile to connect to your instance. You can create an instance profile and assign it to your instance by creating a [host management configuration](https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-host-management.html) with Quick Setup. A *host management configuration* creates an instance profile with the required permissions and assigns it to your instance. A host management configuration also enables other Systems Manager tools and creates IAM roles for running those tools. There is no charge to use Quick Setup or the tools enabled by the host management configuration. [Open Quick Setup and create a host management configuration](https://console.aws.amazon.com/systems-manager/quick-setup/create-configuration?configurationType=SSMHostMgmt).

**Important**  
After you create the host management configuration, Amazon EC2 can take several minutes to register the change and refresh the **Session Manager** tab. If the tab doesn't show you a **Connect** button after two minutes, reboot your instance. After it reboots, if you still don't see the option to connect, open [Quick Setup](https://console.aws.amazon.com/systems-manager/quick-setup/create-configuration?configurationType=SSMHostMgmt) and verify you have only one host management configuration. If there are two, delete the older configuration and wait a few minutes.

If you still can't connect after creating a host management configuration, or if you receive an error, including an error about SSM Agent, see one of the following solutions:
+  [Solution B: No error, but still can't connect](#session-manager-troubleshooting-EC2-console-no-error) 
+  [Solution C: Error about missing SSM Agent](#session-manager-troubleshooting-EC2-console-no-agent) 

### Solution B: No error, but still can't connect


If you created the host management configuration, waited several minutes before trying to connect, and still can't connect, then you might need to manually apply the host management configuration to your instance. Use the following procedure to update a Quick Setup host management configuration and apply changes to an instance.

**To update a host management configuration using Quick Setup**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. In the **Configurations** list, choose the **Host Management** configuration you created.

1. Choose **Actions**, and then choose **Edit configuration**.

1. Near the bottom of the **Targets** section, under **Choose how you want to target instances**, choose **Manual**.

1. In the **Instances** section, choose the instance you created.

1. Choose **Update**.

Wait a few minutes for EC2 to refresh the **Session Manager** tab. If you still can't connect or if you receive an error, review the remaining solutions for this issue.

### Solution C: Error about missing SSM Agent


If you weren't able to create a host management configuration by using Quick Setup, or if you received an error about SSM Agent not being installed, you may need to manually install SSM Agent on your instance. SSM Agent is Amazon software that enables Systems Manager to connect to your instance by using Session Manager. SSM Agent is installed by default on most Amazon Machine Images (AMIs). If your instance was created from a non-standard AMI or an older AMI, you might have to manually install the agent. For the procedure to install SSM Agent, see the following topic that corresponds to your instance operating system.
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-windows.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-windows.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-macos.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-macos.html) 
+  [AlmaLinux](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-alma.html) 
+  [Amazon Linux 2 and AL2023](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-al2.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-deb.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-deb.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-oracle.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-oracle.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-rhel.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-rhel.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-rocky.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-rocky.html) 
+  [https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-ubuntu.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/agent-install-ubuntu.html) 

For issues with SSM Agent, see [Troubleshooting SSM Agent](troubleshooting-ssm-agent.md).

## No permission to start a session


**Problem**: You try to start a session, but the system tells you that you don't have the necessary permissions.
+ **Solution**: A system administrator hasn't granted you AWS Identity and Access Management (IAM) policy permissions for starting Session Manager sessions. For information, see [Control user session access to instances](session-manager-getting-started-restrict-access.md).

## SSM Agent not online


**Problem**: You see a message on the Amazon EC2 instance **Session Manager** tab that states: "SSM Agent is not online. The SSM Agent was unable to connect to a Systems Manager endpoint to register itself with the service."

**Solution**: SSM Agent is Amazon software that runs on Amazon EC2 instances so that Session Manager can connect to them. If you see this error, SSM Agent is unable to establish a connection with the Systems Manager endpoint. Possible sources of the problem could be firewall restrictions, routing problems, or lack of internet connectivity. To resolve this issue, investigate network connectivity problems. For more information, see [Troubleshooting SSM Agent](troubleshooting-ssm-agent.md) and [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md). For information about Systems Manager endpoints, see [AWS Systems Manager endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html) in the AWS General Reference.

## No permission to change session preferences


**Problem**: You try to update global session preferences for your organization, but the system tells you that you don't have the necessary permissions.
+ **Solution**: A system administrator hasn't granted you IAM policy permissions for setting Session Manager preferences. For information, see [Grant or deny a user permissions to update Session Manager preferences](preference-setting-permissions.md).

## Managed node not available or not configured for Session Manager


**Problem 1**: You want to start a session on the **Start a session** console page, but a managed node isn't in the list.
+ **Solution A**: The managed node you want to connect to might not have been configured for AWS Systems Manager. For more information, see [Setting up Systems Manager unified console for an organization](systems-manager-setting-up-organizations.md). 
**Note**  
If AWS Systems Manager SSM Agent is already running on a managed node when you attach the IAM instance profile, you might need to restart the agent before the instance is listed on the **Start a session** console page.
+ **Solution B**: The proxy configuration you applied to the SSM Agent on your managed node might be incorrect. If the proxy configuration is incorrect, the managed node won't be able to reach the needed service endpoints, or the node might report as a different operating system to Systems Manager. For more information, see [Configuring SSM Agent to use a proxy on Linux nodes](configure-proxy-ssm-agent.md) and [Configure SSM Agent to use a proxy for Windows Server instances](configure-proxy-ssm-agent-windows.md).

**Problem 2**: A managed node you want to connect is in the list on the **Start a session** console page, but the page reports that "The instance you selected isn't configured to use Session Manager." 
+ **Solution A**: The managed node has been configured for use with the Systems Manager service, but the IAM instance profile attached to the node might not include permissions for the Session Manager tool. For information, see [Verify or Create an IAM Instance Profile with Session Manager Permissions](session-manager-getting-started-instance-profile.md).
+ **Solution B**: The managed node isn't running a version of SSM Agent that supports Session Manager. Update SSM Agent on the node to version 2.3.68.0 or later. 

  Update SSM Agent manually on a managed node by following the steps in [Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server](manually-install-ssm-agent-windows.md), [Manually installing and uninstalling SSM Agent on EC2 instances for Linux](manually-install-ssm-agent-linux.md), or [Manually installing and uninstalling SSM Agent on EC2 instances for macOS](manually-install-ssm-agent-macos.md), depending on the operating system. 

  Alternatively, use the Run Command document `AWS-UpdateSSMAgent` to update the agent version on one or more managed nodes at a time. For information, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample).
**Tip**  
To always keep your agent up to date, we recommend updating SSM Agent to the latest version on an automated schedule that you define using either of the following methods:  
Run `AWS-UpdateSSMAgent` as part of a State Manager association. For information, see [Walkthrough: Automatically update SSM Agent with the AWS CLI](state-manager-update-ssm-agent-cli.md).
Run `AWS-UpdateSSMAgent` as part of a maintenance window. For information about working with maintenance windows, see [Create and manage maintenance windows using the console](sysman-maintenance-working.md) and [Tutorial: Create and configure a maintenance window using the AWS CLI](maintenance-windows-cli-tutorials-create.md).
+ **Solution C**: The managed node can't reach the requisite service endpoints. You can improve the security posture of your managed nodes by using interface endpoints powered by AWS PrivateLink to connect to Systems Manager endpoints. The alternative to using interface endpoints is to allow outbound internet access on your managed nodes. For more information, see [Use PrivateLink to set up a VPC endpoint for Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-privatelink.html).
+ **Solution D**: The managed node has limited available CPU or memory resources. Although your managed node might otherwise be functional, if the node doesn't have enough available resources, you can't establish a session. For more information, see [Troubleshooting an Unreachable Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-console.html).
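For Solution C above, Session Manager traffic typically relies on a small set of Systems Manager interface endpoints in the VPC; they follow this naming format (the Region is shown for illustration only):

```
com.amazonaws.us-east-1.ssm
com.amazonaws.us-east-1.ssmmessages
com.amazonaws.us-east-1.ec2messages
```

The `ssmmessages` endpoint is the one Session Manager uses for the session data channel; see the linked PrivateLink topic for the authoritative list.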

## Session Manager plugin not found


To use the AWS CLI to run session commands, the Session Manager plugin must also be installed on your local machine. For information, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

## Session Manager plugin not automatically added to command line path (Windows)


When you install the Session Manager plugin on Windows, the `session-manager-plugin` executable should be added automatically to your operating system's `PATH` environment variable. If the command you ran to check whether the Session Manager plugin installed correctly (`aws ssm start-session --target instance-id`) failed, you might need to update the `PATH` variable manually using the following procedure.

**To modify your PATH variable (Windows)**

1. Press the Windows key and enter **environment variables**.

1. Choose **Edit environment variables for your account**.

1. Choose **PATH** and then choose **Edit**.

1. Add paths to the **Variable value** field, separated by semicolons, as shown in this example: *`C:\existing\path`*;*`C:\new\path`*

   *`C:\existing\path`* represents the value already in the field. *`C:\new\path`* represents the path you want to add, as shown in the following example.
   + **64-bit machines**: `C:\Program Files\Amazon\SessionManagerPlugin\bin\`

1. Choose **OK** twice to apply the new settings.

1. Close any running command prompts and re-open.

## Session Manager plugin becomes unresponsive


During a port forwarding session, traffic might stop forwarding if you have antivirus software installed on your local machine. In some cases, antivirus software interferes with the Session Manager plugin causing process deadlocks. To resolve this issue, allow or exclude the Session Manager plugin from the antivirus software. For information about the default installation path for the Session Manager plugin, see [Install the Session Manager plugin for the AWS CLI](session-manager-working-with-install-plugin.md).

## TargetNotConnected


**Problem**: You try to start a session, but the system returns the error message, "An error occurred (TargetNotConnected) when calling the StartSession operation: *InstanceID* isn't connected."
+ **Solution A**: This error is returned when the specified target managed node for the session isn't fully configured for use with Session Manager. For information, see [Setting up Session Manager](session-manager-getting-started.md).
+ **Solution B**: This error is also returned if you attempt to start a session on a managed node that is located in a different AWS account or AWS Region.

## Blank screen displays after starting a session


**Problem**: You start a session and Session Manager displays a blank screen.
+ **Solution A**: This issue can occur when the root volume on the managed node is full. Due to lack of disk space, SSM Agent on the node stops working. To resolve this issue, use Amazon CloudWatch to collect metrics and logs from the operating systems. For information, see [Collect metrics, logs, and traces with the CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html) in the *Amazon CloudWatch User Guide*.
+ **Solution B**: A blank screen might display if you accessed the console using a link that includes a mismatched endpoint and Region pair. For example, in the following console URL, `us-west-2` is the specified endpoint, but `us-west-1` is the specified AWS Region.

  ```
  https://us-west-2.console.aws.amazon.com/systems-manager/session-manager/sessions?region=us-west-1
  ```
+ **Solution C**: The managed node connects to Systems Manager using VPC endpoints, and your Session Manager preferences write session output to an Amazon S3 bucket or Amazon CloudWatch Logs log group, but the corresponding `s3` gateway endpoint or `logs` interface endpoint doesn't exist in the VPC. If session output is written to an Amazon S3 bucket, an `s3` endpoint in the format **`com.amazonaws.region.s3`** is required. If session output is written to a CloudWatch Logs log group, a `logs` endpoint in the format **`com.amazonaws.region.logs`** is required. For more information, see [Creating VPC endpoints for Systems Manager](setup-create-vpc.md#create-vpc-endpoints).
+ **Solution D**: The log group or Amazon S3 bucket you specified in your session preferences has been deleted. To resolve this issue, update your session preferences with a valid log group or S3 bucket.
+ **Solution E**: The log group or Amazon S3 bucket you specified in your session preferences isn't encrypted, but you have set the `cloudWatchEncryptionEnabled` or `s3EncryptionEnabled` input to `true`. To resolve this issue, update your session preferences with a log group or Amazon S3 bucket that is encrypted, or set the `cloudWatchEncryptionEnabled` or `s3EncryptionEnabled` input to `false`. This scenario is only applicable to customers who create session preferences using command line tools.

## Managed node becomes unresponsive during long running sessions


**Problem**: Your managed node becomes unresponsive or crashes during a long running session.

**Solution**: Decrease the SSM Agent log retention duration for Session Manager.

**To decrease the SSM Agent log retention duration for sessions**

1. Locate the `amazon-ssm-agent.json.template` in the `/etc/amazon/ssm/` directory for Linux, or `C:\Program Files\Amazon\SSM` for Windows.

1. Copy the contents of the `amazon-ssm-agent.json.template` to a new file in the same directory named `amazon-ssm-agent.json`.

1. Decrease the default value of the `SessionLogsRetentionDurationHours` value in the `SSM` property, and save the file.

1. Restart the SSM Agent.
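As an illustration of step 3, the retention setting lives in a fragment of the configuration file along the following lines (assuming the `Ssm` section name used by the agent's template; surrounding keys are omitted, and the value shown is illustrative, not the template default):

```
{
    "Ssm": {
        "SessionLogsRetentionDurationHours": 4
    }
}
```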

## An error occurred (InvalidDocument) when calling the StartSession operation


**Problem**: You receive the following error when starting a session by using the AWS CLI.

```
An error occurred (InvalidDocument) when calling the StartSession operation: Document type: 'Command' is not supported. Only type: 'Session' is supported for Session Manager.
```

**Solution**: The SSM document you specified for the `--document-name` parameter isn't a *Session* document. Use the following procedure to view a list of Session documents in the AWS Management Console.

**To view a list of Session documents**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. In the **Categories** list, choose **Session documents**.

# AWS Systems Manager State Manager

State Manager, a tool in AWS Systems Manager, is a secure and scalable configuration management service that automates the process of keeping your managed nodes and other AWS resources in a state that you define. To get started with State Manager, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/state-manager). In the navigation pane, choose **State Manager**.

**Note**  
State Manager and Maintenance Windows can perform some similar types of updates on your managed nodes. Which one you choose depends on whether you need to automate system compliance or perform high-priority, time-sensitive tasks during periods you specify.  
For more information, see [Choosing between State Manager and Maintenance Windows](state-manager-vs-maintenance-windows.md).

## How can State Manager benefit my organization?


By using pre-configured Systems Manager documents (SSM documents), State Manager offers the following benefits for managing your nodes:
+ Bootstrap nodes with specific software at start-up.
+ Download and update agents, including SSM Agent, on a defined schedule.
+ Configure network settings.
+ Join nodes to a Microsoft Active Directory domain.
+ Run scripts on Linux, macOS, and Windows Server managed nodes throughout their lifecycle.

To manage configuration drift across other AWS resources, you can use Automation, a tool in Systems Manager, with State Manager to perform the following types of tasks:
+ Attach a Systems Manager role to Amazon Elastic Compute Cloud (Amazon EC2) instances to make them *managed nodes*.
+ Enforce desired ingress and egress rules for a security group.
+ Create or delete Amazon DynamoDB backups.
+ Create or delete Amazon Elastic Block Store (Amazon EBS) snapshots.
+ Turn off read and write permissions on Amazon Simple Storage Service (Amazon S3) buckets.
+ Start, restart, or stop managed nodes and Amazon Relational Database Service (Amazon RDS) instances.
+ Apply patches to Linux, macOS, and Windows AMIs.

For information about using State Manager with Automation runbooks, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md).

## Who should use State Manager?


State Manager is appropriate for any AWS customer that wants to improve the management and governance of their AWS resources and reduce configuration drift.

## What are the features of State Manager?


Key features of State Manager include the following:
+ 

**State Manager associations**  
A State Manager *association* is a configuration that you assign to your AWS resources. The configuration defines the state that you want to maintain on your resources. For example, an association can specify that antivirus software must be installed and running on a managed node, or that certain ports must be closed.

  An association specifies a schedule for when to apply the configuration and the targets for the association. For example, an association for antivirus software might run once a day on all managed nodes in an AWS account. If the software isn't installed on a node, then the association could instruct State Manager to install it. If the software is installed, but the service isn't running, then the association could instruct State Manager to start the service.
+ 

**Flexible scheduling options**  
State Manager offers the following options for scheduling when an association runs:
  + **Immediate or delayed processing**

    When you create an association, by default, the system immediately runs it on the specified resources. After the initial run, the association runs at intervals according to the schedule that you defined. 

    You can instruct State Manager not to run an association immediately by using the **Apply association only at the next specified Cron interval** option in the console or the `ApplyOnlyAtCronInterval` parameter from the command line.
  + **Cron and rate expressions**

    When you create an association, you specify a schedule for when State Manager applies the configuration. State Manager supports most standard cron and rate expressions for scheduling when an association runs. State Manager also supports cron expressions that include a day of the week and the number sign (#) to designate the *n*th day of a month to run an association, and the (L) sign to indicate the last *X* day of the month.
**Note**  
State Manager doesn't currently support specifying months in cron expressions for associations.

    To further control when an association runs, for example to run an association two days after Patch Tuesday, you can specify an offset. An *offset* defines how many days to wait after the scheduled day before running an association.

    For information about building cron and rate expressions, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).
+ 

**Multiple targeting options**  
An association also specifies the targets for the association. State Manager supports targeting AWS resources by using tags, AWS Resource Groups, individual node IDs, or all managed nodes in the current AWS Region and AWS account.
+ 

**Amazon S3 support**  
Store the command output from association runs in an Amazon S3 bucket of your choice. For more information, see [Working with associations in Systems Manager](state-manager-associations.md).
+ 

**EventBridge support**  
This Systems Manager tool is supported as both an *event* type and a *target* type in Amazon EventBridge rules. For information, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md).
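As a concrete illustration of the offset behavior described under **Flexible scheduling options** above, the following sketch computes the effective run date for an association scheduled on the second Thursday of the month (for example, `cron(0 0 ? * THU#2 *)`) with an offset of 2 days. The dates are illustrative only:

```
import calendar
from datetime import date, timedelta

def nth_weekday(year: int, month: int, weekday: int, n: int) -> date:
    """Return the n-th given weekday of a month (weekday: Mon=0 ... Sun=6)."""
    days = [
        date(year, month, d)
        for d in range(1, calendar.monthrange(year, month)[1] + 1)
        if date(year, month, d).weekday() == weekday
    ]
    return days[n - 1]

# Association scheduled for the second Thursday of the month...
scheduled = nth_weekday(2024, 1, calendar.THURSDAY, 2)
# ...with an offset of 2 days: State Manager waits 2 days after the
# scheduled day before running the association.
effective = scheduled + timedelta(days=2)
print(scheduled.isoformat(), effective.isoformat())  # 2024-01-11 2024-01-13
```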

## Is there a charge to use State Manager?


State Manager is available at no additional charge.

**Topics**
+ [How can State Manager benefit my organization?](#state-manager-benefits)
+ [Who should use State Manager?](#state-manager-who)
+ [What are the features of State Manager?](#state-manager-features)
+ [Is there a charge to use State Manager?](#state-manager-cost)
+ [Understanding how State Manager works](state-manager-about.md)
+ [Working with associations in Systems Manager](state-manager-associations.md)
+ [Creating associations that run MOF files](systems-manager-state-manager-using-mof-file.md)
+ [Creating associations that run Ansible playbooks](systems-manager-state-manager-ansible.md)
+ [Creating associations that run Chef recipes](systems-manager-state-manager-chef.md)
+ [Walkthrough: Automatically update SSM Agent with the AWS CLI](state-manager-update-ssm-agent-cli.md)
+ [Walkthrough: Automatically update PV drivers on EC2 instances for Windows Server](state-manager-update-pv-drivers.md)

**More info**  
+ [Combating Configuration Drift Using Amazon EC2 Systems Manager and Windows PowerShell DSC](https://aws.amazon.com/blogs/mt/combating-configuration-drift-using-amazon-ec2-systems-manager-and-windows-powershell-dsc/)
+ [Configure Amazon EC2 Instances in an Auto Scaling Group Using State Manager](https://aws.amazon.com/blogs/mt/configure-amazon-ec2-instances-in-an-auto-scaling-group-using-state-manager/)

# Understanding how State Manager works


State Manager, a tool in AWS Systems Manager, is a secure and scalable service that automates the process of keeping managed nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) infrastructure in a state that you define.

Here's how State Manager works:

**1. Determine the state you want to apply to your AWS resources.**  
Do you want to guarantee that your managed nodes are configured with specific applications, such as antivirus or malware applications? Do you want to automate the process of updating the SSM Agent or other AWS packages such as `AWSPVDriver`? Do you need to guarantee that specific ports are closed or open? To get started with State Manager, determine the state that you want to apply to your AWS resources. The state that you want to apply determines which SSM document you use to create a State Manager association.  
A State Manager *association* is a configuration that you assign to your AWS resources. The configuration defines the state that you want to maintain on your resources. For example, an association can specify that antivirus software must be installed and running on a managed node, or that certain ports must be closed.  
An association specifies a schedule for when to apply the configuration and the targets for the association. For example, an association for antivirus software might run once a day on all managed nodes in an AWS account. If the software isn't installed on a node, then the association could instruct State Manager to install it. If the software is installed, but the service isn't running, then the association could instruct State Manager to start the service.

**2. Determine if a preconfigured SSM document can help you create the desired state on your AWS resources.**  
Systems Manager includes dozens of preconfigured SSM documents that you can use to create an association. Preconfigured documents are ready to perform common tasks like installing applications, configuring Amazon CloudWatch, running AWS Systems Manager automations, running PowerShell and Shell scripts, and joining managed nodes to a directory service domain for Active Directory.  
You can view all SSM documents in the [Systems Manager console](https://console.aws.amazon.com/systems-manager/documents). Choose the name of a document to learn more about each one. Here are two examples: [https://console.aws.amazon.com/systems-manager/documents/AWS-ConfigureAWSPackage/description](https://console.aws.amazon.com/systems-manager/documents/AWS-ConfigureAWSPackage/description) and [https://console.aws.amazon.com/systems-manager/documents/AWS-InstallApplication/description](https://console.aws.amazon.com/systems-manager/documents/AWS-InstallApplication/description).

**3. Create an association.**  
You can create an association by using the Systems Manager console, the AWS Command Line Interface (AWS CLI), AWS Tools for Windows PowerShell (Tools for Windows PowerShell), or the Systems Manager API. When you create an association, you specify the following information:  
+ A name for the association.
+ The parameters for the SSM document (for example, the path to the application to install or the script to run on the nodes).
+ Targets for the association. You can target managed nodes by specifying tags, by choosing individual node IDs, or by choosing a group in AWS Resource Groups. You can also target *all* managed nodes in the current AWS Region and AWS account. If your targets include more than 1,000 nodes, the system uses an hourly throttling mechanism. This means you might see inaccuracies in your status aggregation count since the aggregation process runs hourly and only when the execution status for a node changes.
+ A role used by the association to take actions on your behalf. State Manager assumes this role and calls the required APIs when dispatching configurations to nodes. For information about setting up a custom role, see [Setup roles for `AssociationDispatchAssumeRole`](#setup-assume-role). If no role is provided, the [service-linked role for Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/using-service-linked-roles.html) is used. 
**Note**  
We recommend that you define a custom IAM role so that you have full control of the permissions that State Manager has when taking actions on your behalf.  
Service-linked role support in State Manager is being phased out. Associations that rely on the service-linked role might require updates in the future to continue functioning properly.  
For information about managing the usage of a custom role, see [Manage usage of AssociationDispatchAssumeRole with `ssm:AssociationDispatchAssumeRole`](#context-key-assume-role).
+ A schedule for when or how often to apply the state. You can specify a cron or rate expression. For more information about creating schedules by using cron and rate expressions, see [Cron and rate expressions for associations](reference-cron-and-rate-expressions.md#reference-cron-and-rate-expressions-association).
**Note**  
State Manager doesn't currently support specifying months in cron expressions for associations.
When you run the command to create the association, Systems Manager binds the information you specified (schedule, targets, SSM document, and parameters) to the targeted resources. The status of the association initially shows "Pending" as the system attempts to reach all targets and *immediately* apply the state specified in the association.   
If you create a new association that is scheduled to run while an earlier association is still running, the earlier association times out and the new association runs.
Systems Manager reports the status of the request to create associations on the resources. You can view status details in the console or (for managed nodes) by using the [DescribeInstanceAssociationsStatus](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeInstanceAssociationsStatus.html) API operation. If you choose to write the output of the command to Amazon Simple Storage Service (Amazon S3) when you create an association, you can also view the output in the Amazon S3 bucket you specified.  
For more information, see [Working with associations in Systems Manager](state-manager-associations.md).   
API operations that are initiated by the SSM document during an association run are not logged in AWS CloudTrail.
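These inputs map to parameters of the `CreateAssociation` API action. The following sketch assembles them as a plain request; the document name, tag, bucket, and schedule values are hypothetical, and the boto3 call is shown only in a comment:

```python
# Hypothetical CreateAssociation inputs; all values are illustrative only
association_request = {
    "AssociationName": "auto-update-ssm-agent",   # a name for the association
    "Name": "AWS-UpdateSSMAgent",                 # the SSM document that defines the state
    "Targets": [                                  # target nodes by tag
        {"Key": "tag:Environment", "Values": ["Production"]}
    ],
    "ScheduleExpression": "rate(14 days)",        # cron or rate expression
    "OutputLocation": {                           # optional: write command output to Amazon S3
        "S3Location": {"OutputS3BucketName": "amzn-s3-demo-bucket"}
    },
}

# With boto3 (not run here):
#   boto3.client("ssm").create_association(**association_request)
print(sorted(association_request))
```

The same field names appear as options of the `create-association` AWS CLI command, with the usual hyphenated casing.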

**4. Monitor and update.**  
After you create the association, State Manager reapplies the configuration according to the schedule that you defined in the association. You can view the status of your associations on the [State Manager page](https://console.aws.amazon.com/systems-manager/state-manager) in the console or by directly calling the association ID generated by Systems Manager when you created the association. For more information, see [Viewing association histories](state-manager-associations-history.md). You can update your association documents and reapply them as necessary. You can also create multiple versions of an association. For more information, see [Editing and creating a new version of an association](state-manager-associations-edit.md).

## Understanding when associations are applied to resources


When you create an association, you specify an SSM document that defines the configuration, a list of target resources, and a schedule for applying the configuration. By default, State Manager runs the association when you create it and then according to your schedule. State Manager also attempts to run the association in the following situations: 
+ **Association edit** – State Manager runs the association after a user edits and saves their changes to any of the following association fields: `DOCUMENT_VERSION`, `PARAMETERS`, `SCHEDULE_EXPRESSION`, `OUTPUT_S3_LOCATION`.
+ **Document edit** – State Manager runs the association after a user edits and saves changes to the SSM document that defines the association's configuration state. Specifically, the association runs after the following edits to the document:
  + A user specifies a new `$DEFAULT` document version and the association was created using the `$DEFAULT` version. 
  + A user updates a document and the association was created using the `$LATEST` version.
  + A user deletes the document that was specified when the association was created.
+ **Manual start** – State Manager runs the association when a user starts it manually, either from the Systems Manager console or programmatically.
+ **Target changes** – State Manager runs the association after any of the following activity occurs on a target node:
  + A managed node comes online for the first time.
  + A managed node comes online after missing a scheduled association run.
  + A managed node comes online after being stopped for more than 30 days.

     
**Note**  
State Manager doesn't monitor documents or packages used in associations across AWS accounts. If you update a document or package in one account, the update won't cause the association to run in the second account. You must manually run the association in the second account.

**Preventing associations from running when a target changes**  
In some cases, you might not want an association to run when a target that consists of managed nodes changes, but only according to its specified schedule.
**Note**  
Running an Automation runbook incurs a cost. If an association with an Automation runbook targets all instances in your account and you regularly launch a large number of instances, the runbook is run on each of the instances when it launches. This can lead to elevated automation charges.

  To prevent an association from running when the targets for that association change, select the **Apply association only at the next specified cron interval** check box. This check box is located in the **Specify schedule** area of the **Create association** and **Edit association** pages.

  This option applies to associations that incorporate either an Automation runbook or an SSM document.

## About target updates with Automation runbooks


In order for associations that are created with Automation runbooks to be applied when new target nodes are detected, the following conditions must be true:
+ The association must have been created by a [Quick Setup](systems-manager-quick-setup.md) configuration. Quick Setup is a tool in AWS Systems Manager. Associations created by other processes are not currently supported.
+ The Automation runbook must explicitly target the resource type `AWS::EC2::Instance` or `AWS::SSM::ManagedInstance`. 
+ The association must specify both parameters and targets.

  In the console, the **Parameter** and **Targets** fields are displayed when you choose a rate control execution.  
![\[Parameter and target options are presented in the console for rate control executions\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/sm_Rate_control_execution_options.png)

  When you use the [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateAssociation.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateAssociation.html), [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateAssociationBatch.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateAssociationBatch.html), or [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateAssociation.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateAssociation.html) API actions, you can specify these values using the `AutomationTargetParameterName` and `Targets` inputs. In each of these API actions, you can also prevent the association from running each time a target changes by setting the `ApplyOnlyAtCronInterval` parameter to `true`. 
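As a sketch with hypothetical values, an automation association created through these API actions might supply the inputs like this:

```python
# Hypothetical CreateAssociation inputs for an Automation runbook; values are illustrative
automation_association = {
    "Name": "AWS-RestartEC2Instance",               # Automation runbook to run
    "AutomationTargetParameterName": "InstanceId",  # runbook parameter that receives each target
    "Targets": [{"Key": "tag:Team", "Values": ["DevOps"]}],
    "ScheduleExpression": "cron(0 0 ? * SUN *)",    # midnight UTC every Sunday
    "ApplyOnlyAtCronInterval": True,                # don't run when a target changes
}
print(automation_association["ApplyOnlyAtCronInterval"])
```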

  For information about using the console to control when associations run, including details for avoiding unexpectedly high costs for Automation executions, see [Understanding when associations are applied to resources](#state-manager-about-scheduling). 

## Setup roles for `AssociationDispatchAssumeRole`


To set up a custom dispatch assume role that State Manager assumes to perform actions on your behalf, the role must trust `ssm.amazonaws.com` and have the permission required to call `ssm:SendCommand` or `ssm:StartAutomationExecution`, depending on the association's use case. 

Sample trust policy: 

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "ssm.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```
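In addition to the trust policy, the role needs an identity-based permissions policy that grants the API calls your associations use. The following is a minimal sketch for command-based associations; in production, scope the `Resource` element down instead of using `*`:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:SendCommand",
            "Resource": "*"
        }
    ]
}
```

For associations that run Automation runbooks, grant `ssm:StartAutomationExecution` instead.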

## Manage usage of AssociationDispatchAssumeRole with `ssm:AssociationDispatchAssumeRole`


To manage the usage of custom dispatch assume roles that State Manager assumes to perform actions on your behalf, use the `ssm:AssociationDispatchAssumeRole` condition key. This condition controls whether associations can be created or updated without specifying a custom dispatch assume role. 

In the following sample policy, the `Allow` statement grants permission to use the association create and update API actions only when the `AssociationDispatchAssumeRole` parameter is specified. Without this parameter in the API request, the policy doesn't grant permission to create or update associations: 

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:CreateAssociation",
                "ssm:UpdateAssociation",
                "ssm:CreateAssociationBatch"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "ssm:AssociationDispatchAssumeRole": "*"
                }
            }
        }
    ]
}
```

# Working with associations in Systems Manager

This section describes how to create and manage State Manager associations by using the AWS Systems Manager console, the AWS Command Line Interface (AWS CLI), and AWS Tools for PowerShell. 

**Topics**
+ [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md)
+ [Creating associations](state-manager-associations-creating.md)
+ [Editing and creating a new version of an association](state-manager-associations-edit.md)
+ [Deleting associations](systems-manager-state-manager-delete-association.md)
+ [Running Auto Scaling groups with associations](systems-manager-state-manager-asg.md)
+ [Viewing association histories](state-manager-associations-history.md)
+ [Working with associations using IAM](systems-manager-state-manager-iam.md)

# Understanding targets and rate controls in State Manager associations

This topic describes State Manager features that help you deploy an association to dozens or hundreds of nodes while controlling the number of nodes that run the association at the scheduled time. State Manager is a tool in AWS Systems Manager.

## Using targets


When you create a State Manager association, you choose which nodes to configure with the association in the **Targets** section of the Systems Manager console, as shown here.

![\[Different options for targeting nodes when creating a State Manager association\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-targets.png)


If you create an association by using a command line tool such as the AWS Command Line Interface (AWS CLI), then you specify the `targets` parameter. Targeting nodes allows you to configure tens, hundreds, or thousands of nodes with an association without having to specify or choose individual node IDs. 

Each managed node can be targeted by a maximum of 20 associations.

State Manager includes the following target options when creating an association.

**Specify tags**  
Use this option to specify a tag key and (optionally) a tag value assigned to your nodes. When you run the request, the system locates and attempts to create the association on all nodes that match the specified tag key and value. If you specified multiple tag values, the association targets any node with at least one of those tag values. When the system initially creates the association, it runs the association. After this initial run, the system runs the association according to the schedule you specified.

If you create new nodes and assign the specified tag key and value to those nodes, the system automatically applies the association, runs it immediately, and then runs it according to the schedule. This applies when the association uses a Command or Policy document and doesn't apply if the association uses an Automation runbook. If you delete the specified tags from a node, the system no longer runs the association on those nodes.

**Note**  
If you use Automation runbooks with State Manager and the tagging limitation prevents you from achieving a specific goal, consider using Automation runbooks with Amazon EventBridge. For more information, see [Run automations based on EventBridge events](running-automations-event-bridge.md). For more information about using runbooks with State Manager, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md). 

As a best practice, we recommend using tags when creating associations that use a Command or Policy document. We also recommend using tags when creating associations to run Auto Scaling groups. For more information, see [Running Auto Scaling groups with associations](systems-manager-state-manager-asg.md).

**Note**  
Note the following information.  
When creating an association in the AWS Management Console that targets nodes by using tags, you can specify only one tag key for an automation association and five tag keys for a command association. *All* tag keys specified in the association must be currently assigned to the node. If they aren't, State Manager fails to target the node for an association.
If you want to use the console *and* you want to target your nodes by using more than one tag key for an automation association and five tag keys for a command association, assign the tag keys to an AWS Resource Groups group and add the nodes to it. You can then choose the **Resource Group** option in the **Targets** list when you create the State Manager association.
You can specify a maximum of five tag keys by using the AWS CLI. If you use the AWS CLI, *all* tag keys specified in the `create-association` command must be currently assigned to the node. If they aren't, State Manager fails to target the node for an association.

**Choose nodes manually**  
Use this option to manually select the nodes where you want to create the association. The **Instances** pane displays all Systems Manager managed nodes in the current AWS account and AWS Region. You can manually select as many nodes as you want. When the system initially creates the association, it runs the association. After this initial run, the system runs the association according to the schedule you specified.

**Note**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

**Choose a resource group**  
Use this option to create an association on all nodes returned by an AWS Resource Groups tag-based or AWS CloudFormation stack-based query. 

The following details apply when you target resource groups for an association.
+ If you add new nodes to a group, the system automatically maps the nodes to the association that targets the resource group. The system applies the association to the nodes when it discovers the change. After this initial run, the system runs the association according to the schedule you specified.
+ If you create an association that targets a resource group and the `AWS::SSM::ManagedInstance` resource type was specified for that group, then by design, the association runs on both Amazon Elastic Compute Cloud (Amazon EC2) instances and non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment.

  The converse is also true. If you create an association that targets a resource group and the `AWS::EC2::Instance` resource type was specified for that group, then by design, the association runs on both non-EC2 nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment and Amazon EC2 instances.
+ If you create an association that targets a resource group, the resource group must not have more than five tag keys assigned to it or more than five values specified for any one tag key. If either of these conditions applies to the tags and keys assigned to your resource group, the association fails to run and returns an `InvalidTarget` error. 
+ If you create an association that targets a resource group using tags, you can't choose the **(empty value)** option for the tag value.
+ If you delete a resource group, all instances in that group no longer run the association. As a best practice, delete associations targeting the group.
+ You can target only a single resource group per association. Multiple or nested groups aren't supported.
+ After you create an association, State Manager periodically updates the association with information about resources in the Resource Group. If you add new resources to a Resource Group, the schedule for when the system applies the association to the new resources depends on several factors. You can determine the status of the association in the State Manager page of the Systems Manager console.

**Warning**  
An AWS Identity and Access Management (IAM) user, group, or role with permission to create an association that targets a resource group of Amazon EC2 instances automatically has root-level control of all instances in the group. Only trusted administrators should be permitted to create associations. 

For more information about Resource Groups, see [What Is AWS Resource Groups?](https://docs.aws.amazon.com/ARG/latest/userguide/) in the *AWS Resource Groups User Guide*.

**Choose all nodes**  
Use this option to target all nodes in the current AWS account and AWS Region. When you run the request, the system locates and attempts to create the association on all nodes in the current AWS account and AWS Region. When the system initially creates the association, it runs the association. After this initial run, the system runs the association according to the schedule you specified. If you create new nodes, the system automatically applies the association, runs it immediately, and then runs it according to the schedule.

## Using rate controls


You can control the execution of an association on your nodes by specifying a concurrency value and an error threshold. The concurrency value specifies how many nodes can run the association simultaneously. An error threshold specifies how many association executions can fail before Systems Manager sends a command to each node configured with that association to stop running the association. The command stops the association from running until the next scheduled execution. The concurrency and error threshold features are collectively called *rate controls*. 

![\[Different rate control options when creating a State Manager association\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-rate-controls.png)


**Concurrency**  
Concurrency helps to limit the impact on your nodes by allowing you to specify that only a certain number of nodes can process an association at one time. You can specify either an absolute number of nodes, for example 20, or a percentage of the target set of nodes, for example 10%.

State Manager concurrency has the following restrictions and limitations:
+ If you choose to create an association by using targets, but you don't specify a concurrency value, then State Manager automatically enforces a maximum concurrency of 50 nodes.
+ If new nodes that match the target criteria come online while an association that uses concurrency is running, then the new nodes run the association if the concurrency value isn't exceeded. If the concurrency value is exceeded, then the nodes are ignored during the current association execution interval. The nodes run the association during the next scheduled interval while conforming to the concurrency requirements.
+ If you update an association that uses concurrency, and one or more nodes are processing that association when it's updated, then any node that is running the association is allowed to complete. Those associations that haven't started are stopped. After running associations complete, all target nodes immediately run the association again because it was updated. When the association runs again, the concurrency value is enforced. 
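A concurrency value can be expressed either way; the following sketch shows one plausible way to resolve it to a node count. The percentage rounding (floor, with a minimum of one node) is an assumption for illustration, not the documented service behavior:

```python
import math

def concurrency_limit(value, target_count):
    """Resolve a concurrency setting to a maximum node count.

    value: an absolute int (for example, 20) or a percentage string
    (for example, "10%") of the target set. Rounding is an assumption.
    """
    if isinstance(value, str) and value.endswith("%"):
        return max(1, math.floor(int(value[:-1]) / 100 * target_count))
    return value

print(concurrency_limit(20, 500))     # absolute cap of 20 nodes
print(concurrency_limit("10%", 500))  # 10% of 500 nodes -> 50
```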

**Error thresholds**  
An error threshold specifies how many association executions are allowed to fail before Systems Manager sends a command to each node configured with that association. The command stops the association from running until the next scheduled execution. You can specify either an absolute number of errors, for example 10, or a percentage of the target set, for example 10%.

If you specify an absolute number of three errors, for example, State Manager sends the stop command when the fourth error is returned. If you specify 0, then State Manager sends the stop command after the first error result is returned.

If you specify an error threshold of 10% for a target set of 50 nodes, then State Manager sends the stop command when the sixth error is returned. Associations that are already running when an error threshold is reached are allowed to complete, but some of these associations might fail. To ensure that there aren't more errors than the number specified for the error threshold, set the **Concurrency** value to 1 so that associations proceed one at a time. 
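The examples above amount to simple arithmetic, sketched below. The percentage rounding is an assumption for illustration:

```python
import math

def stop_on_error_number(threshold, target_count=None):
    """Which error number triggers the stop command.

    threshold: an absolute int (0 means stop on the first error) or a
    percentage string such as "10%" of the target set.
    """
    if isinstance(threshold, str) and threshold.endswith("%"):
        allowed = math.floor(int(threshold[:-1]) / 100 * target_count)
    else:
        allowed = threshold
    return allowed + 1

print(stop_on_error_number(3))          # stop command sent on the fourth error
print(stop_on_error_number(0))          # stop command sent on the first error
print(stop_on_error_number("10%", 50))  # 10% of 50 allows 5 errors; stop on the sixth
```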

State Manager error thresholds have the following restrictions and limitations:
+ Error thresholds are enforced for the current interval.
+ Information about each error, including step-level details, is recorded in the association history.
+ If you choose to create an association by using targets, but you don't specify an error threshold, then State Manager automatically enforces a threshold of 100% failures.

# Creating associations


State Manager, a tool in AWS Systems Manager, helps you keep your AWS resources in a state that you define and reduce configuration drift. To do this, State Manager uses associations. An *association* is a configuration that you assign to your AWS resources. The configuration defines the state that you want to maintain on your resources. For example, an association can specify that antivirus software must be installed and running on a managed node, or that certain ports must be closed.

An association specifies a schedule for when to apply the configuration and the targets for the association. For example, an association for antivirus software might run once a day on all managed nodes in an AWS account. If the software isn't installed on a node, then the association could instruct State Manager to install it. If the software is installed, but the service isn't running, then the association could instruct State Manager to start the service.

**Warning**  
When you create an association, you can choose an AWS resource group of managed nodes as the target for the association. If an AWS Identity and Access Management (IAM) user, group, or role has permission to create an association that targets a resource group of managed nodes, then that user, group, or role automatically has root-level control of all nodes in the group. Permit only trusted administrators to create associations. 

**Association targets and rate controls**  
An association specifies which managed nodes, or targets, should receive the association. State Manager includes several features to help you target your managed nodes and control how the association is deployed to those targets. For more information about targets and rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

**Tagging associations**  
You can assign tags to an association when you create it by using a command line tool such as the AWS CLI or AWS Tools for PowerShell. Adding tags to an association by using the Systems Manager console isn't supported. 

**Running associations**  
By default, State Manager runs an association immediately after you create it, and then according to the schedule that you've defined. 

The system also runs associations according to the following rules:
+ State Manager attempts to run the association on all specified or targeted nodes during an interval.
+ If an association doesn't run during an interval (because, for example, a concurrency value limited the number of nodes that could process the association at one time), then State Manager attempts to run the association during the next interval.
+ State Manager runs the association after changes to the association's configuration, target nodes, documents, or parameters. For more information, see [Understanding when associations are applied to resources](state-manager-about.md#state-manager-about-scheduling)
+ State Manager records history for all skipped intervals. You can view the history on the **Execution History** tab.

## Scheduling associations


You can schedule associations to run at basic intervals such as *every 10 hours*, or you can create more advanced schedules using custom cron and rate expressions. You can also prevent associations from running when you first create them. 

**Using cron and rate expressions to schedule association runs**  
In addition to standard cron and rate expressions, State Manager also supports cron expressions that include a day of the week and the number sign (#) to designate the *n*th day of a month on which to run an association. Here is an example that runs a cron schedule on the third Tuesday of every month at 23:30 UTC:

`cron(30 23 ? * TUE#3 *)`

Here is an example that runs on the second Thursday of every month at midnight UTC:

`cron(0 0 ? * THU#2 *)`

State Manager also supports the (L) sign to indicate the last occurrence of a given day of the week in the month. Here is an example that runs a cron schedule on the last Tuesday of every month at midnight UTC:

`cron(0 0 ? * 3L *)`

To further control when an association runs, for example if you want to run an association two days after patch Tuesday, you can specify an offset. An *offset* defines how many days to wait after the scheduled day to run an association. For example, if you specified a cron schedule of `cron(0 0 ? * THU#2 *)`, you could specify the number 3 in the **Schedule offset** field to run the association each Sunday after the second Thursday of the month.

**Note**  
To use offsets, you must either select **Apply association only at the next specified Cron interval** in the console or specify the `ApplyOnlyAtCronInterval` parameter from the command line. When either of these options is activated, State Manager doesn't run the association immediately after you create it.

For more information about cron and rate expressions, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).
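The `#` and offset rules above can be checked with ordinary date arithmetic. This sketch finds the day that `cron(0 0 ? * THU#2 *)` with a schedule offset of 3 resolves to in a given month:

```python
from datetime import date, timedelta

def nth_weekday(year, month, weekday, n):
    """Date of the nth occurrence of a weekday (0=Mon .. 6=Sun) in a month."""
    first = date(year, month, 1)
    days_until = (weekday - first.weekday()) % 7
    return first + timedelta(days=days_until + 7 * (n - 1))

# cron(0 0 ? * THU#2 *): second Thursday of the month (weekday 3 = Thursday)
second_thursday = nth_weekday(2025, 6, 3, 2)
# A schedule offset of 3 shifts the run to the following Sunday
print(second_thursday, second_thursday + timedelta(days=3))
```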

## Create an association (console)


The following procedure describes how to use the Systems Manager console to create a State Manager association.

**Note**  
Note the following information.  
This procedure describes how to create an association that uses either a `Command` or a `Policy` document to target managed nodes. For information about creating an association that uses an Automation runbook to target nodes or other types of AWS resources, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md).
When creating an association, you can specify a maximum of five tag keys by using the AWS Management Console. *All* tag keys specified for the association must be currently assigned to the node. If they aren't, State Manager fails to target the node for the association.

**To create a State Manager association**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose **Create association**.

1. In the **Name** field, specify a name.

1. In the **Document** list, choose the option next to a document name. Note the document type. This procedure applies to `Command` and `Policy` documents. For information about creating an association that uses an Automation runbook, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md).
**Important**  
State Manager doesn't support running associations that use a new version of a document if that document is shared from another account. State Manager always runs the `default` version of a document if shared from another account, even though the Systems Manager console shows that a new version was processed. If you want to run an association using a new version of a document shared from another account, you must set the document version to `default`.

1. For **Parameters**, specify the required input parameters.

1. (Optional) For **Association Dispatch Assume Role**, select a role from the drop-down. State Manager takes actions using this role on your behalf. For information about setting up the custom-provided role, see [Setup roles for `AssociationDispatchAssumeRole`](state-manager-about.md#setup-assume-role).
**Note**  
It is recommended that you define a custom IAM role so that you have full control of the permissions that State Manager has when taking actions on your behalf.  
Service-linked role support in State Manager is being phased out. Associations that rely on a service-linked role may require updates in the future to continue functioning properly.  
For information about managing the usage of a custom-provided role, see [Manage usage of AssociationDispatchAssumeRole with `ssm:AssociationDispatchAssumeRole`](state-manager-about.md#context-key-assume-role).

1. (Optional) Choose a CloudWatch alarm to apply to your association for monitoring. 
**Note**  
Note the following information about this step.  
The alarms list displays a maximum of 100 alarms. If you don't see your alarm in the list, use the AWS Command Line Interface to create the association. For more information, see [Create an association (command line)](#create-state-manager-association-commandline).
To attach a CloudWatch alarm to your association, the IAM principal that creates the association must have permission for the `iam:CreateServiceLinkedRole` action. For more information about CloudWatch alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html).
If your alarm activates, any pending command invocations or automations do not run.

1. For **Targets**, choose an option. For information about using targets, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).
**Note**  
In order for associations that are created with Automation runbooks to be applied when new target nodes are detected, certain conditions must be met. For information, see [About target updates with Automation runbooks](state-manager-about.md#runbook-target-updates).

1. In the **Specify schedule** section, choose either **On Schedule** or **No schedule**. If you choose **On Schedule**, use the buttons provided to create a cron or rate schedule for the association. 

   If you don't want the association to run immediately after you create it, choose **Apply association only at the next specified Cron interval**.

1. (Optional) In the **Schedule offset** field, specify a number between 1 and 6. 

1. In the **Advanced options** section, use **Compliance severity** to choose a severity level for the association and use **Change Calendars** to choose a change calendar for the association.

   Compliance reporting indicates whether the association state is compliant or noncompliant, along with the severity level you indicate here. For more information, see [About State Manager association compliance](compliance-about.md#compliance-about-association).

   The change calendar determines when the association runs. If the calendar is closed, the association isn't applied. If the calendar is open, the association runs accordingly. For more information, see [AWS Systems Manager Change Calendar](systems-manager-change-calendar.md).

1. In the **Rate control** section, choose options to control how the association runs on multiple nodes. For more information about using rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

   In the **Concurrency** section, choose an option: 
   + Choose **targets** to enter an absolute number of targets that can run the association simultaneously.
   + Choose **percentage** to enter a percentage of the target set that can run the association simultaneously.

   In the **Error threshold** section, choose an option:
   + Choose **errors** to enter an absolute number of errors that are allowed before State Manager stops running associations on additional targets.
   + Choose **percentage** to enter a percentage of errors that are allowed before State Manager stops running associations on additional targets.

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

   Following are the minimal permissions required to turn on Amazon S3 output for an association. You can further restrict access by attaching IAM policies to users or roles within an account. At minimum, an Amazon EC2 instance profile should have an IAM role with the `AmazonSSMManagedInstanceCore` managed policy and the following inline policy. 

------
#### [ JSON ]

****  

   ```
   {
    "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject",
                   "s3:GetObject",
                   "s3:PutObjectAcl"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
           }
       ]
   }
   ```

------

   For minimal permissions, the Amazon S3 bucket you export to must have the default settings defined by the Amazon S3 console. For more information about creating Amazon S3 buckets, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon S3 User Guide*. 
**Note**  
API operations that are initiated by the SSM document during an association run are not logged in AWS CloudTrail.

1. Choose **Create Association**.

**Note**  
If you delete the association you created, the association no longer runs on any targets of that association.
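If you need to remove an association from the command line instead of the console, a delete call takes the following form. The association ID shown is a placeholder; you can find the actual ID on the association's detail page or in the output of `aws ssm list-associations`.

```
aws ssm delete-association \
    --association-id "12345678-abcd-1234-abcd-123456789012"
```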

## Create an association (command line)


The following procedure describes how to use the AWS CLI (on Linux or Windows Server) or Tools for PowerShell to create a State Manager association. This section includes several examples that show how to use targets and rate controls. Targets and rate controls allow you to assign an association to dozens or hundreds of nodes while controlling the execution of those associations. For more information about targets and rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

**Important**  
This procedure describes how to create an association that uses either a `Command` or a `Policy` document to target managed nodes. For information about creating an association that uses an Automation runbook to target nodes or other types of AWS resources, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md).

**Before you begin**  
The `targets` parameter is an array of search criteria that targets nodes using a `Key`,`Value` combination that you specify. If you plan to create an association on dozens or hundreds of nodes by using the `targets` parameter, review the following targeting options before you begin the procedure.

Target specific nodes by specifying IDs

```
--targets Key=InstanceIds,Values=instance-id-1,instance-id-2,instance-id-3
```

```
--targets Key=InstanceIds,Values=i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE,i-07782c72faEXAMPLE
```

Target instances by using tags

```
--targets Key=tag:tag-key,Values=tag-value-1,tag-value-2,tag-value-3
```

```
--targets Key=tag:Environment,Values=Development,Test,Pre-production
```

Target nodes by using AWS Resource Groups

```
--targets Key=resource-groups:Name,Values=resource-group-name
```

```
--targets Key=resource-groups:Name,Values=WindowsInstancesGroup
```

Target all instances in the current AWS account and AWS Region

```
--targets Key=InstanceIds,Values=*
```

**Note**  
Note the following information.  
State Manager doesn't support running associations that use a new version of a document if that document is shared from another account. State Manager always runs the `default` version of a document if shared from another account, even though the Systems Manager console shows that a new version was processed. If you want to run an association using a new version of a document shared from another account, you must set the document version to `default`.
State Manager doesn't support the `IncludeChildOrganizationUnits`, `ExcludeAccounts`, `TargetsMaxErrors`, `TargetsMaxConcurrency`, `Targets`, and `TargetLocationAlarmConfiguration` parameters for [TargetLocation](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_TargetLocation.html).
You can specify a maximum of five tag keys by using the AWS CLI. If you use the AWS CLI, *all* tag keys specified in the `create-association` command must be currently assigned to the node. If they aren't, State Manager fails to target the node for an association.
When you create an association, you specify when the schedule runs. Specify the schedule by using a cron or rate expression. For more information about cron and rate expressions, see [Cron and rate expressions for associations](reference-cron-and-rate-expressions.md#reference-cron-and-rate-expressions-association).
In order for associations that are created with Automation runbooks to be applied when new target nodes are detected, certain conditions must be met. For information, see [About target updates with Automation runbooks](state-manager-about.md#runbook-target-updates).

**To create an association**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Use the following format to create a command that creates a State Manager association. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
       --name document_name \
       --document-version version_of_document_applied \
       --instance-id instances_to_apply_association_on \
       --parameters (if any) \
       --targets target_options \
       --association-dispatch-assume-role arn_of_role_to_be_used_when_dispatching_configurations \
       --schedule-expression "cron_or_rate_expression" \
       --apply-only-at-cron-interval required_parameter_for_schedule_offsets \
       --schedule-offset number_between_1_and_6 \
       --output-location s3_bucket_to_store_output_details \
       --association-name association_name \
       --max-errors a_number_of_errors_or_a_percentage_of_target_set \
       --max-concurrency a_number_of_instances_or_a_percentage_of_target_set \
       --compliance-severity severity_level \
       --calendar-names change_calendar_names \
       --target-locations aws_region_or_account \
       --tags "Key=tag_key,Value=tag_value"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
       --name document_name ^
       --document-version version_of_document_applied ^
       --instance-id instances_to_apply_association_on ^
       --parameters (if any) ^
       --targets target_options ^
       --association-dispatch-assume-role arn_of_role_to_be_used_when_dispatching_configurations ^
       --schedule-expression "cron_or_rate_expression" ^
       --apply-only-at-cron-interval required_parameter_for_schedule_offsets ^
       --schedule-offset number_between_1_and_6 ^
       --output-location s3_bucket_to_store_output_details ^
       --association-name association_name ^
       --max-errors a_number_of_errors_or_a_percentage_of_target_set ^
       --max-concurrency a_number_of_instances_or_a_percentage_of_target_set ^
       --compliance-severity severity_level ^
       --calendar-names change_calendar_names ^
       --target-locations aws_region_or_account ^
       --tags "Key=tag_key,Value=tag_value"
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
       -Name document_name `
       -DocumentVersion version_of_document_applied `
       -InstanceId instances_to_apply_association_on `
       -Parameters (if any) `
       -Target target_options `
       -AssociationDispatchAssumeRole arn_of_role_to_be_used_when_dispatching_configurations `
       -ScheduleExpression "cron_or_rate_expression" `
       -ApplyOnlyAtCronInterval required_parameter_for_schedule_offsets `
       -ScheduleOffSet number_between_1_and_6 `
       -OutputLocation s3_bucket_to_store_output_details `
       -AssociationName association_name `
       -MaxError a_number_of_errors_or_a_percentage_of_target_set `
       -MaxConcurrency a_number_of_instances_or_a_percentage_of_target_set `
       -ComplianceSeverity severity_level `
       -CalendarNames change_calendar_names `
       -TargetLocations aws_region_or_account `
       -Tags "Key=tag_key,Value=tag_value"
   ```

------

   The following example creates an association on nodes tagged with `"Environment,Linux"`. The association uses the `AWS-UpdateSSMAgent` document to update the SSM Agent on the targeted nodes at 2:00 UTC every Sunday morning. This association runs simultaneously on 10 nodes maximum at any given time. Also, this association stops running on more nodes for a particular execution interval if the error count exceeds 5. For compliance reporting, this association is assigned a severity level of Medium.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
     --association-name Update_SSM_Agent_Linux \
     --targets Key=tag:Environment,Values=Linux \
     --name AWS-UpdateSSMAgent  \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --compliance-severity "MEDIUM" \
     --schedule-expression "cron(0 2 ? * SUN *)" \
     --max-errors "5" \
     --max-concurrency "10"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
     --association-name Update_SSM_Agent_Linux ^
     --targets Key=tag:Environment,Values=Linux ^
     --name AWS-UpdateSSMAgent  ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --compliance-severity "MEDIUM" ^
     --schedule-expression "cron(0 2 ? * SUN *)" ^
     --max-errors "5" ^
     --max-concurrency "10"
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
     -AssociationName Update_SSM_Agent_Linux `
     -Name AWS-UpdateSSMAgent `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -Target @{
         "Key"="tag:Environment"
         "Values"="Linux"
       } `
     -ComplianceSeverity MEDIUM `
     -ScheduleExpression "cron(0 2 ? * SUN *)" `
     -MaxConcurrency 10 `
     -MaxError 5
   ```

------

   The following example targets node IDs by specifying a wildcard value (`*`). This allows Systems Manager to create an association on *all* nodes in the current AWS account and AWS Region. This association runs simultaneously on 10 nodes maximum at any given time. Also, this association stops running on more nodes for a particular execution interval if the error count exceeds 5. For compliance reporting, this association is assigned a severity level of Medium. This association uses a schedule offset, which means it runs two days after the specified cron schedule. It also includes the `ApplyOnlyAtCronInterval` parameter, which is required to use the schedule offset and which means the association won't run immediately after it is created.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
     --association-name Update_SSM_Agent_Linux \
     --name "AWS-UpdateSSMAgent" \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --targets "Key=instanceids,Values=*" \
     --compliance-severity "MEDIUM" \
     --schedule-expression "cron(0 2 ? * SUN#2 *)" \
     --apply-only-at-cron-interval \
     --schedule-offset 2 \
     --max-errors "5" \
     --max-concurrency "10"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
     --association-name Update_SSM_Agent_Linux ^
     --name "AWS-UpdateSSMAgent" ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --targets "Key=instanceids,Values=*" ^
     --compliance-severity "MEDIUM" ^
     --schedule-expression "cron(0 2 ? * SUN#2 *)" ^
     --apply-only-at-cron-interval ^
     --schedule-offset 2 ^
     --max-errors "5" ^
     --max-concurrency "10"
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
     -AssociationName Update_SSM_Agent_All `
     -Name AWS-UpdateSSMAgent `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -Target @{
         "Key"="InstanceIds"
         "Values"="*"
       } `
     -ScheduleExpression "cron(0 2 ? * SUN#2 *)" `
     -ApplyOnlyAtCronInterval `
     -ScheduleOffset 2 `
     -MaxConcurrency 10 `
     -MaxError 5 `
     -ComplianceSeverity MEDIUM
   ```

------

   The following example creates an association on the nodes in an AWS resource group named `HR-Department`. The association uses the `AWS-UpdateSSMAgent` document to update SSM Agent on the targeted nodes at 2:00 UTC every Sunday morning. This association runs simultaneously on 10 nodes maximum at any given time. Also, this association stops running on more nodes for a particular execution interval if the error count exceeds 5. For compliance reporting, this association is assigned a severity level of Medium. This association runs at the specified cron schedule. It doesn't run immediately after the association is created.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
     --association-name Update_SSM_Agent_Linux \
     --targets Key=resource-groups:Name,Values=HR-Department \
     --name AWS-UpdateSSMAgent  \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --compliance-severity "MEDIUM" \
     --schedule-expression "cron(0 2 ? * SUN *)" \
     --max-errors "5" \
     --max-concurrency "10" \
     --apply-only-at-cron-interval
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
     --association-name Update_SSM_Agent_Linux ^
     --targets Key=resource-groups:Name,Values=HR-Department ^
     --name AWS-UpdateSSMAgent  ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --compliance-severity "MEDIUM" ^
     --schedule-expression "cron(0 2 ? * SUN *)" ^
     --max-errors "5" ^
     --max-concurrency "10" ^
     --apply-only-at-cron-interval
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
     -AssociationName Update_SSM_Agent_Linux `
     -Name AWS-UpdateSSMAgent `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -Target @{
         "Key"="resource-groups:Name"
         "Values"="HR-Department"
       } `
     -ScheduleExpression "cron(0 2 ? * SUN *)" `
     -MaxConcurrency 10 `
     -MaxError 5 `
     -ComplianceSeverity MEDIUM `
     -ApplyOnlyAtCronInterval
   ```

------

   The following example creates an association that targets a specific node ID. The association uses the `AWS-UpdateSSMAgent` document to update SSM Agent on the targeted node when the change calendar is open. The association checks the calendar state when it runs. If the calendar is closed when a scheduled run occurs, that run is skipped and doesn't run again after the calendar opens, because its run window has passed. If the calendar is open, the association runs accordingly.
**Note**  
If you add new nodes to the tags or resource groups that an association acts on when the change calendar is closed, the association is applied to those nodes once the change calendar opens.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
     --association-name CalendarAssociation \
     --targets "Key=instanceids,Values=i-0cb2b964d3e14fd9f" \
     --name AWS-UpdateSSMAgent  \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" \
     --schedule-expression "rate(1 day)"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
     --association-name CalendarAssociation ^
     --targets "Key=instanceids,Values=i-0cb2b964d3e14fd9f" ^
     --name AWS-UpdateSSMAgent  ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" ^
     --schedule-expression "rate(1 day)"
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
     -AssociationName CalendarAssociation `
     -Target @{
          "Key"="InstanceIds"
         "Values"="i-0cb2b964d3e14fd9f"
       } `
     -Name AWS-UpdateSSMAgent `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -CalendarNames "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" `
     -ScheduleExpression "rate(1 day)"
   ```

------

   The following example creates an association that targets a specific node ID. The association uses the `AWS-UpdateSSMAgent` document to update SSM Agent on the targeted node at 2:00 AM every Sunday. This association runs only at the specified cron schedule when the change calendar is open. When the association is created, it checks the calendar state. If the calendar is closed, the association isn't applied. When the interval to apply the association starts at 2:00 AM on Sunday, the association checks whether the calendar is open. If it is, the association runs accordingly.
**Note**  
If you add new nodes to the tags or resource groups that an association acts on when the change calendar is closed, the association is applied to those nodes once the change calendar opens.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
     --association-name MultiCalendarAssociation \
     --targets "Key=instanceids,Values=i-0cb2b964d3e14fd9f" \
     --name AWS-UpdateSSMAgent  \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" "arn:aws:ssm:us-east-2:123456789012:document/testCalendar2" \
     --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
     --association-name MultiCalendarAssociation ^
     --targets "Key=instanceids,Values=i-0cb2b964d3e14fd9f" ^
     --name AWS-UpdateSSMAgent  ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" "arn:aws:ssm:us-east-2:123456789012:document/testCalendar2" ^
     --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------
#### [ PowerShell ]

   ```
   New-SSMAssociation `
     -AssociationName MultiCalendarAssociation `
     -Name AWS-UpdateSSMAgent `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -Target @{
          "Key"="InstanceIds"
         "Values"="i-0cb2b964d3e14fd9f"
       } `
     -CalendarNames "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" "arn:aws:ssm:us-east-2:123456789012:document/testCalendar2" `
     -ScheduleExpression "cron(0 2 ? * SUN *)"
   ```

------

**Note**  
If you delete the association you created, the association no longer runs on any targets of that association. Also, if you specified the `apply-only-at-cron-interval` parameter, you can reset this option. To do so, specify the `no-apply-only-at-cron-interval` parameter when you update the association from the command line. This parameter forces the association to run immediately after updating the association and according to the interval specified.
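The reset described in this note can be sketched as follows. The association ID and schedule are placeholder values, and because `update-association` overwrites unspecified optional parameters with null values, re-specify any other settings you want to keep in the same call.

```
aws ssm update-association \
    --association-id "12345678-abcd-1234-abcd-123456789012" \
    --schedule-expression "cron(0 2 ? * SUN *)" \
    --no-apply-only-at-cron-interval
```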

# Editing and creating a new version of an association

You can edit a State Manager association to specify a new name, schedule, severity level, targets, or other values. For associations based on SSM Command-type documents, you can also choose to write the output of the command to an Amazon Simple Storage Service (Amazon S3) bucket. After you edit an association, State Manager creates a new version. You can view different versions after editing, as described in the following procedures. 

**Note**  
In order for associations that are created with Automation runbooks to be applied when new target nodes are detected, certain conditions must be met. For information, see [About target updates with Automation runbooks](state-manager-about.md#runbook-target-updates).

The following procedures describe how to edit and create a new version of an association using the Systems Manager console, AWS Command Line Interface (AWS CLI), and AWS Tools for PowerShell (Tools for PowerShell). 

**Important**  
State Manager doesn't support running associations that use a new version of a document if that document is shared from another account. State Manager always runs the `default` version of a document if shared from another account, even though the Systems Manager console shows that a new version was processed. If you want to run an association using a new version of a document shared from another account, you must set the document version to `default`.
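As a sketch, the document version for an existing association can be pinned back to the default version with the `$DEFAULT` keyword in an `update-association` call. The association ID here is a placeholder, and any other optional parameters you want to keep must be re-specified in the same call; the quoting prevents the shell from expanding `$DEFAULT` as a variable.

```
aws ssm update-association \
    --association-id "12345678-abcd-1234-abcd-123456789012" \
    --document-version '$DEFAULT'
```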

## Edit an association (console)


The following procedure describes how to use the Systems Manager console to edit and create a new version of an association.

**Note**  
For associations that use SSM Command documents, not Automation runbooks, this procedure requires that you have write access to an existing Amazon S3 bucket. If you haven't used Amazon S3 before, be aware that you will incur charges for using Amazon S3. For information about how to create a bucket, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html).

**To edit a State Manager association**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose an existing association, and then choose **Edit**.

1. Reconfigure the association to meet your current requirements. 

   For information about association options with `Command` and `Policy` documents, see [Creating associations](state-manager-associations-creating.md). For information about association options with Automation runbooks, see [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md).

1. Choose **Save Changes**. 

1. (Optional) To view association information, in the **Associations** page, choose the name of the association you edited, and then choose the **Versions** tab. The system lists each version of the association you created and edited.

1. (Optional) To view output for associations based on SSM `Command` documents, do the following:

   1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

   1. Choose the name of the Amazon S3 bucket you specified for storing command output, and then choose the folder named with the ID of the node that ran the association. (If you chose to store output in a folder in the bucket, open it first.)

   1. Drill down several levels, through the `awsrunPowerShell` folder, to the `stdout` file.

   1. Choose **Open** or **Download** to view the command output.

## Edit an association (command line)


The following procedure describes how to use the AWS CLI (on Linux or Windows Server) or AWS Tools for PowerShell to edit and create a new version of an association.

**To edit a State Manager association**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Use the following format to create a command to edit and create a new version of an existing State Manager association. Replace each *example resource placeholder* with your own information.
**Important**  
When you call [update-association](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-association.html), the system drops all optional parameters from the request and overwrites the association with null values for those parameters. This is by design. You must specify all optional parameters in the call, even if you aren't changing them. This includes the `--name` parameter. Before calling this action, we recommend that you call the [describe-association](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-association.html) operation and make a note of all optional parameters required for your `update-association` call.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-association \
       --name document_name \
       --document-version version_of_document_applied \
       --instance-id instances_to_apply_association_on \
       --parameters (if any) \
       --targets target_options \
       --association-dispatch-assume-role arn_of_role_to_be_used_when_dispatching_configurations \
       --schedule-expression "cron_or_rate_expression" \
       --schedule-offset "number_between_1_and_6" \
       --output-location s3_bucket_to_store_output_details \
       --association-name association_name \
       --max-errors a_number_of_errors_or_a_percentage_of_target_set \
       --max-concurrency a_number_of_instances_or_a_percentage_of_target_set \
       --compliance-severity severity_level \
       --calendar-names change_calendar_names \
       --target-locations aws_region_or_account
   ```

------
#### [ Windows ]

   ```
   aws ssm update-association ^
       --name document_name ^
       --document-version version_of_document_applied ^
       --instance-id instances_to_apply_association_on ^
       --parameters (if any) ^
       --targets target_options ^
       --association-dispatch-assume-role arn_of_role_to_be_used_when_dispatching_configurations ^
       --schedule-expression "cron_or_rate_expression" ^
       --schedule-offset "number_between_1_and_6" ^
       --output-location s3_bucket_to_store_output_details ^
       --association-name association_name ^
       --max-errors a_number_of_errors_or_a_percentage_of_target_set ^
       --max-concurrency a_number_of_instances_or_a_percentage_of_target_set ^
       --compliance-severity severity_level ^
       --calendar-names change_calendar_names ^
       --target-locations aws_region_or_account
   ```

------
#### [ PowerShell ]

   ```
   Update-SSMAssociation `
       -Name document_name `
       -DocumentVersion version_of_document_applied `
       -InstanceId instances_to_apply_association_on `
       -Parameters (if any) `
       -Target target_options `
       -AssociationDispatchAssumeRole arn_of_role_to_be_used_when_dispatching_configurations `
       -ScheduleExpression "cron_or_rate_expression" `
       -ScheduleOffset "number_between_1_and_6" `
       -OutputLocation s3_bucket_to_store_output_details `
       -AssociationName association_name `
       -MaxError a_number_of_errors_or_a_percentage_of_target_set `
       -MaxConcurrency a_number_of_instances_or_a_percentage_of_target_set `
       -ComplianceSeverity severity_level `
       -CalendarNames change_calendar_names `
       -TargetLocations aws_region_or_account
   ```

------
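
Because `update-association` overwrites any optional parameter you omit, a common pattern is to read the current association first and carry its values forward. The following sketch shows that describe-then-merge pattern for use with boto3; the helper name and the exact set of carried keys are illustrative, not part of the API.

```python
# Sketch of the describe-then-merge-then-update pattern for update-association.
# update-association nulls out optional parameters you omit, so carry forward
# every optional value you aren't changing. The helper name is illustrative.

def merge_update_params(current, overrides):
    """Combine the current AssociationDescription with the fields to change."""
    optional_keys = (
        "Name", "DocumentVersion", "Parameters", "Targets",
        "ScheduleExpression", "OutputLocation", "AssociationName",
        "MaxErrors", "MaxConcurrency", "ComplianceSeverity",
    )
    merged = {key: current[key] for key in optional_keys if key in current}
    merged["AssociationId"] = current["AssociationId"]
    merged.update(overrides)
    return merged

# Usage with boto3 (not run here; requires configured AWS credentials):
# ssm = boto3.client("ssm")
# desc = ssm.describe_association(AssociationId=assoc_id)["AssociationDescription"]
# ssm.update_association(**merge_update_params(desc, {"AssociationName": "TestHostnameAssociation2"}))
```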

   The following example updates an existing association to change the name to `TestHostnameAssociation2`. The new association version runs every hour and writes the output of commands to the specified Amazon S3 bucket.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-association \
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE \
     --association-name TestHostnameAssociation2 \
     --parameters commands="echo Association" \
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole \
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' \
     --schedule-expression "cron(0 */1 * * ? *)"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-association ^
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE ^
     --association-name TestHostnameAssociation2 ^
     --parameters commands="echo Association" ^
     --association-dispatch-assume-role arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole ^
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' ^
     --schedule-expression "cron(0 */1 * * ? *)"
   ```

------
#### [ PowerShell ]

   ```
   Update-SSMAssociation `
     -AssociationId b85ccafe-9f02-4812-9b81-01234EXAMPLE `
     -AssociationName TestHostnameAssociation2 `
     -Parameter @{"commands"="echo Association"} `
     -AssociationDispatchAssumeRole "arn:aws:iam::123456789012:role/myAssociationDispatchAssumeRole" `
     -S3Location_OutputS3BucketName amzn-s3-demo-bucket `
     -S3Location_OutputS3KeyPrefix logs `
     -S3Location_OutputS3Region us-east-1 `
     -ScheduleExpression "cron(0 */1 * * ? *)"
   ```

------

   The following example updates an existing association to change the name to `CalendarAssociation`. The new association runs when the calendar is open and writes command output to the specified Amazon S3 bucket. 

------
#### [ Linux & macOS ]

   ```
   aws ssm update-association \
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE \
     --association-name CalendarAssociation \
     --parameters commands="echo Association" \
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' \
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar2"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-association ^
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE ^
     --association-name CalendarAssociation ^
     --parameters commands="echo Association" ^
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' ^
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar2"
   ```

------
#### [ PowerShell ]

   ```
   Update-SSMAssociation `
     -AssociationId b85ccafe-9f02-4812-9b81-01234EXAMPLE `
     -AssociationName CalendarAssociation `
     -Parameter @{"commands"="echo Association"} `
     -S3Location_OutputS3BucketName amzn-s3-demo-bucket `
     -CalendarNames "arn:aws:ssm:us-east-1:123456789012:document/testCalendar2"
   ```

------

   The following example updates an existing association to change the name to `MultiCalendarAssociation`. The new association runs when the calendars are open and writes command output to the specified Amazon S3 bucket. 

------
#### [ Linux & macOS ]

   ```
   aws ssm update-association \
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE \
     --association-name MultiCalendarAssociation \
     --parameters commands="echo Association" \
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' \
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" "arn:aws:ssm:us-east-2:123456789012:document/testCalendar2"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-association ^
     --association-id 8dfe3659-4309-493a-8755-01234EXAMPLE ^
     --association-name MultiCalendarAssociation ^
     --parameters commands="echo Association" ^
     --output-location S3Location='{OutputS3Region=us-east-1,OutputS3BucketName=amzn-s3-demo-bucket,OutputS3KeyPrefix=logs}' ^
     --calendar-names "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" "arn:aws:ssm:us-east-2:123456789012:document/testCalendar2"
   ```

------
#### [ PowerShell ]

   ```
   Update-SSMAssociation `
     -AssociationId b85ccafe-9f02-4812-9b81-01234EXAMPLE `
     -AssociationName MultiCalendarAssociation `
     -Parameter @{"commands"="echo Association"} `
     -S3Location_OutputS3BucketName amzn-s3-demo-bucket `
     -CalendarNames "arn:aws:ssm:us-east-1:123456789012:document/testCalendar1" "arn:aws:ssm:us-east-2:123456789012:document/testCalendar2"
   ```

------

1. To view the new version of the association, run the following command.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association \
     --association-id b85ccafe-9f02-4812-9b81-01234EXAMPLE
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association ^
     --association-id b85ccafe-9f02-4812-9b81-01234EXAMPLE
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociation `
     -AssociationId b85ccafe-9f02-4812-9b81-01234EXAMPLE | Select-Object *
   ```

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
       "AssociationDescription": {
           "ScheduleExpression": "cron(0 */1 * * ? *)",
           "OutputLocation": {
               "S3Location": {
                   "OutputS3KeyPrefix": "logs",
                   "OutputS3BucketName": "amzn-s3-demo-bucket",
                   "OutputS3Region": "us-east-1"
               }
           },
           "Name": "AWS-RunPowerShellScript",
           "Parameters": {
               "commands": [
                   "echo Association"
               ]
           },
           "LastExecutionDate": 1559316400.338,
           "Overview": {
               "Status": "Success",
               "DetailedStatus": "Success",
               "AssociationStatusAggregatedCount": {}
           },
           "AssociationId": "b85ccafe-9f02-4812-9b81-01234EXAMPLE",
           "DocumentVersion": "$DEFAULT",
           "LastSuccessfulExecutionDate": 1559316400.338,
           "LastUpdateAssociationDate": 1559316389.753,
           "Date": 1559314038.532,
           "AssociationVersion": "2",
           "AssociationName": "TestHostnameAssociation2",
           "Targets": [
               {
                   "Values": [
                       "Windows"
                   ],
                   "Key": "tag:Environment"
               }
           ]
       }
   }
   ```

------
#### [ Windows ]

   ```
   {
       "AssociationDescription": {
           "ScheduleExpression": "cron(0 */1 * * ? *)",
           "OutputLocation": {
               "S3Location": {
                   "OutputS3KeyPrefix": "logs",
                   "OutputS3BucketName": "amzn-s3-demo-bucket",
                   "OutputS3Region": "us-east-1"
               }
           },
           "Name": "AWS-RunPowerShellScript",
           "Parameters": {
               "commands": [
                   "echo Association"
               ]
           },
           "LastExecutionDate": 1559316400.338,
           "Overview": {
               "Status": "Success",
               "DetailedStatus": "Success",
               "AssociationStatusAggregatedCount": {}
           },
           "AssociationId": "b85ccafe-9f02-4812-9b81-01234EXAMPLE",
           "DocumentVersion": "$DEFAULT",
           "LastSuccessfulExecutionDate": 1559316400.338,
           "LastUpdateAssociationDate": 1559316389.753,
           "Date": 1559314038.532,
           "AssociationVersion": "2",
           "AssociationName": "TestHostnameAssociation2",
           "Targets": [
               {
                   "Values": [
                       "Windows"
                   ],
                   "Key": "tag:Environment"
               }
           ]
       }
   }
   ```

------
#### [ PowerShell ]

   ```
   AssociationId                 : b85ccafe-9f02-4812-9b81-01234EXAMPLE
   AssociationName               : TestHostnameAssociation2
   AssociationVersion            : 2
   AutomationTargetParameterName : 
   ComplianceSeverity            : 
   Date                          : 5/31/2019 2:47:18 PM
   DocumentVersion               : $DEFAULT
   InstanceId                    : 
   LastExecutionDate             : 5/31/2019 3:26:40 PM
   LastSuccessfulExecutionDate   : 5/31/2019 3:26:40 PM
   LastUpdateAssociationDate     : 5/31/2019 3:26:29 PM
   MaxConcurrency                : 
   MaxErrors                     : 
   Name                          : AWS-RunPowerShellScript
   OutputLocation                : Amazon.SimpleSystemsManagement.Model.InstanceAssociationOutputLocation
   Overview                      : Amazon.SimpleSystemsManagement.Model.AssociationOverview
   Parameters                    : {[commands, Amazon.Runtime.Internal.Util.AlwaysSendList`1[System.String]]}
   ScheduleExpression            : cron(0 */1 * * ? *)
   Status                        : 
   Targets                       : {tag:Environment}
   ```

------
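
If you script this check, you can confirm that the update took effect by inspecting the JSON response. A minimal sketch, using sample values from the response above:

```python
import json

# Parse a describe-association response (sample values from this page)
# and confirm the update produced association version 2.
response = json.loads("""
{
    "AssociationDescription": {
        "AssociationId": "b85ccafe-9f02-4812-9b81-01234EXAMPLE",
        "AssociationVersion": "2",
        "AssociationName": "TestHostnameAssociation2",
        "ScheduleExpression": "cron(0 */1 * * ? *)"
    }
}
""")

desc = response["AssociationDescription"]
assert desc["AssociationVersion"] == "2"
print(f'{desc["AssociationName"]} is at version {desc["AssociationVersion"]}')
```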

# Deleting associations


Use the following procedure to delete an association by using the AWS Systems Manager console.

**To delete an association**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Select an association and then choose **Delete**.

You can delete multiple associations in a single operation by running an automation from the AWS Systems Manager console. When you select multiple associations for deletion, State Manager launches the automation runbook start page with the association IDs entered as input parameter values. 

**To delete multiple associations in a single operation**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Select each association that you want to delete and then choose **Delete**.

1. (Optional) In the **Additional input parameters** area, select the Amazon Resource Name (ARN) for the *assume role* that you want the automation to use while running. To create a new assume role, choose **Create**.

1. Choose **Submit**.

# Running Auto Scaling groups with associations


The best practice when using associations to run Auto Scaling groups is to use tag targets. Not using tags might cause you to reach the association limit. 

If all nodes are tagged with the same key and value, you only need one association to run your Auto Scaling group. The following procedure describes how to create such an association.

**To create an association that runs Auto Scaling groups**

1. Ensure all nodes in the Auto Scaling group are tagged with the same key and value. For more information about tagging, see [Tagging Auto Scaling groups and instances](https://docs.aws.amazon.com//autoscaling/ec2/userguide/autoscaling-tagging.html) in the *Amazon EC2 Auto Scaling User Guide*. 

1. Create an association by using the procedure in [Working with associations in Systems Manager](state-manager-associations.md). 

   If you're working in the console, choose **Specify instance tags** in the **Targets** field. For **Instance tags**, enter the **Tag** key and value for your Auto Scaling group.

   If you're using the AWS Command Line Interface (AWS CLI), specify `--targets Key=tag:tag-key,Values=tag-value` where the key and value match what you tagged your nodes with. 
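
In a script, the same tag-based targeting looks like the following boto3 sketch; the helper function and the tag values are illustrative.

```python
# Build the Targets parameter for a tag-based association, so one association
# covers every node in an Auto Scaling group that shares the tag.

def tag_target(tag_key, tag_value):
    """Return the Targets structure create_association expects for one tag."""
    return [{"Key": f"tag:{tag_key}", "Values": [tag_value]}]

# Usage with boto3 (not run here; requires configured AWS credentials):
# ssm = boto3.client("ssm")
# ssm.create_association(
#     Name="AWS-UpdateSSMAgent",                  # illustrative document choice
#     Targets=tag_target("Environment", "Production"),
#     ScheduleExpression="rate(30 days)",
# )
```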

# Viewing association histories


You can view all executions for a specific association ID by using the [DescribeAssociationExecutions](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeAssociationExecutions.html) API operation. Use this operation to see the status, detailed status, results, last execution time, and more information for a State Manager association. State Manager is a tool in AWS Systems Manager. This API operation also includes filters to help you locate associations according to the criteria you specify. For example, you can specify an exact date and time, and use a `GREATER_THAN` filter to view only those executions that were processed after the specified date and time.

If, for example, an association execution failed, you can drill down into the details of a specific execution by using the [DescribeAssociationExecutionTargets](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeAssociationExecutionTargets.html) API operation. This operation shows you the resources, such as node IDs, where the association ran and the various association statuses. You can then see which resource or node failed to run an association. With the resource ID you can then view the command execution details to see which step in a command failed.

The examples in this section also include information about how to use the [StartAssociationsOnce](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_StartAssociationsOnce.html) API operation to run an association immediately and only one time. You can use this API operation when you investigate failed association executions. If you see that an association failed, you can make a change on the resource, and then immediately rerun the association to see whether the change allows it to run successfully.

**Note**  
API operations that are initiated by the SSM document during an association run are not logged in AWS CloudTrail.

## Viewing association histories (console)


Use the following procedure to view the execution history for a specific association ID and then view execution details for one or more resources. 

**To view execution history for a specific association ID**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. Choose **State Manager**.

1. In the **Association id** field, choose an association for which you want to view the history.

1. Choose the **View details** button.

1. Choose the **Execution history** tab.

1. Choose an association for which you want to view resource-level execution details. For example, choose an association that shows a status of **Failed**. You can then view the execution details for the nodes that failed to run the association.

   Use the search box filters to locate the execution for which you want to view details.  
![\[Filtering the list of State Manager association executions.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/sysman-state-executions-filter.png)

1. Choose an execution ID. The **Association execution targets** page opens. This page shows all the resources that ran the association.

1. Choose a resource ID to view specific information about that resource.

   Use the search box filters to locate the resource for which you want to view details.  
![\[Filtering the list of State Manager association executions targets.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/sysman-state-executions-targets-filter.png)

1. If you're investigating an association that failed to run, you can use the **Apply association now** button to run the association immediately and only one time. After you make changes on the resource where the association failed to run, choose the **Association ID** link in the navigation breadcrumb.

1. Choose the **Apply association now** button. After the execution is complete, verify that the association execution succeeded.

## Viewing association histories (command line)


The following procedure describes how to use the AWS Command Line Interface (AWS CLI) (on Linux or Windows Server) or AWS Tools for PowerShell to view the execution history for a specific association ID. Following this, the procedure describes how to view execution details for one or more resources.

**To view execution history for a specific association ID**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Run the following command to view a list of executions for a specific association ID.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-executions \
     --association-id ID \
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN
   ```

**Note**  
This command includes a filter to limit the results to only those executions that occurred after a specific date and time. If you want to view all executions for a specific association ID, remove the `--filters` parameter and the `Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN` value.

------
#### [ Windows ]

   ```
   aws ssm describe-association-executions ^
     --association-id ID ^
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN
   ```

**Note**  
This command includes a filter to limit the results to only those executions that occurred after a specific date and time. If you want to view all executions for a specific association ID, remove the `--filters` parameter and the `Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN` value.

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecution `
     -AssociationId ID `
     -Filter @{"Key"="CreatedTime";"Value"="2019-06-01T19:15:38.372Z";"Type"="GREATER_THAN"}
   ```

**Note**  
This command includes a filter to limit the results to only those executions that occurred after a specific date and time. If you want to view all executions for a specific association ID, remove the `-Filter` parameter and the `@{"Key"="CreatedTime";"Value"="2019-06-01T19:15:38.372Z";"Type"="GREATER_THAN"}` value.

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
      "AssociationExecutions":[
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"76a5a04f-caf6-490c-b448-92c02EXAMPLE",
            "CreatedTime":1523986028.219,
            "AssociationVersion":"1"
         },
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"791b72e0-f0da-4021-8b35-f95dfEXAMPLE",
            "CreatedTime":1523984226.074,
            "AssociationVersion":"1"
         },
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"ecec60fa-6bb0-4d26-98c7-140308EXAMPLE",
            "CreatedTime":1523982404.013,
            "AssociationVersion":"1"
         }
      ]
   }
   ```

------
#### [ Windows ]

   ```
   {
      "AssociationExecutions":[
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"76a5a04f-caf6-490c-b448-92c02EXAMPLE",
            "CreatedTime":1523986028.219,
            "AssociationVersion":"1"
         },
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"791b72e0-f0da-4021-8b35-f95dfEXAMPLE",
            "CreatedTime":1523984226.074,
            "AssociationVersion":"1"
         },
         {
            "Status":"Success",
            "DetailedStatus":"Success",
            "AssociationId":"c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
            "ExecutionId":"ecec60fa-6bb0-4d26-98c7-140308EXAMPLE",
            "CreatedTime":1523982404.013,
            "AssociationVersion":"1"
         }
      ]
   }
   ```

------
#### [ PowerShell ]

   ```
   AssociationId         : c336d2ab-09de-44ba-8f6a-6136cEXAMPLE
   AssociationVersion    : 1
   CreatedTime           : 8/18/2019 2:00:50 AM
   DetailedStatus        : Success
   ExecutionId           : 76a5a04f-caf6-490c-b448-92c02EXAMPLE
   LastExecutionDate     : 1/1/0001 12:00:00 AM
   ResourceCountByStatus : {Success=1}
   Status                : Success
   
   AssociationId         : c336d2ab-09de-44ba-8f6a-6136cEXAMPLE
   AssociationVersion    : 1
   CreatedTime           : 8/11/2019 2:00:54 AM
   DetailedStatus        : Success
   ExecutionId           : 791b72e0-f0da-4021-8b35-f95dfEXAMPLE
   LastExecutionDate     : 1/1/0001 12:00:00 AM
   ResourceCountByStatus : {Success=1}
   Status                : Success
   
   AssociationId         : c336d2ab-09de-44ba-8f6a-6136cEXAMPLE
   AssociationVersion    : 1
   CreatedTime           : 8/4/2019 2:01:00 AM
   DetailedStatus        : Success
   ExecutionId           : ecec60fa-6bb0-4d26-98c7-140308EXAMPLE
   LastExecutionDate     : 1/1/0001 12:00:00 AM
   ResourceCountByStatus : {Success=1}
   Status                : Success
   ```

------
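
You can also inspect a response like the one above client-side. For example, the following sketch picks out the most recent execution by `CreatedTime`; the sample values come from the response shown.

```python
# Find the most recent execution in a describe-association-executions response.
executions = [
    {"Status": "Success", "ExecutionId": "76a5a04f-caf6-490c-b448-92c02EXAMPLE", "CreatedTime": 1523986028.219},
    {"Status": "Success", "ExecutionId": "791b72e0-f0da-4021-8b35-f95dfEXAMPLE", "CreatedTime": 1523984226.074},
    {"Status": "Success", "ExecutionId": "ecec60fa-6bb0-4d26-98c7-140308EXAMPLE", "CreatedTime": 1523982404.013},
]

def latest_execution(execs):
    """Return the execution with the most recent CreatedTime."""
    return max(execs, key=lambda e: e["CreatedTime"])

print(latest_execution(executions)["ExecutionId"])
```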

   You can limit the results by using one or more filters. The following example returns all executions that ran before a specific date and time. 

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-executions \
     --association-id ID \
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=LESS_THAN
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association-executions ^
     --association-id ID ^
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=LESS_THAN
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecution `
     -AssociationId 14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE `
     -Filter @{"Key"="CreatedTime";"Value"="2019-06-01T19:15:38.372Z";"Type"="LESS_THAN"}
   ```

------

   The following returns all executions that ran *successfully* after a specific date and time.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-executions \
     --association-id ID \
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN Key=Status,Value=Success,Type=EQUAL
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association-executions ^
     --association-id ID ^
     --filters Key=CreatedTime,Value="2018-04-10T19:15:38.372Z",Type=GREATER_THAN Key=Status,Value=Success,Type=EQUAL
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecution `
     -AssociationId 14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE `
     -Filter @{
         "Key"="CreatedTime";
         "Value"="2019-06-01T19:15:38.372Z";
         "Type"="GREATER_THAN"
       },
       @{
         "Key"="Status";
         "Value"="Success";
         "Type"="EQUAL"
       }
   ```

------

1. Run the following command to view all targets where the specific execution ran.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-execution-targets \
     --association-id ID \
     --execution-id ID
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association-execution-targets ^
     --association-id ID ^
     --execution-id ID
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecutionTarget `
     -AssociationId 14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE `
     -ExecutionId 76a5a04f-caf6-490c-b448-92c02EXAMPLE
   ```

------

   You can limit the results by using one or more filters. The following example returns information about all targets where the specific association failed to run.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-execution-targets \
     --association-id ID \
     --execution-id ID \
     --filters Key=Status,Value="Failed"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association-execution-targets ^
     --association-id ID ^
     --execution-id ID ^
     --filters Key=Status,Value="Failed"
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecutionTarget `
     -AssociationId 14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE `
     -ExecutionId 76a5a04f-caf6-490c-b448-92c02EXAMPLE `
     -Filter @{
         "Key"="Status";
         "Value"="Failed"
       }
   ```

------

   The following example returns information about a specific managed node where an association failed to run.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-association-execution-targets \
     --association-id ID \
     --execution-id ID \
     --filters Key=Status,Value=Failed Key=ResourceId,Value="i-02573cafcfEXAMPLE" Key=ResourceType,Value=ManagedInstance
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-association-execution-targets ^
     --association-id ID ^
     --execution-id ID ^
     --filters Key=Status,Value=Failed Key=ResourceId,Value="i-02573cafcfEXAMPLE" Key=ResourceType,Value=ManagedInstance
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAssociationExecutionTarget `
     -AssociationId 14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE `
     -ExecutionId 76a5a04f-caf6-490c-b448-92c02EXAMPLE `
     -Filter @{
         "Key"="Status";
         "Value"="Success"
       },
       @{
         "Key"="ResourceId";
         "Value"="i-02573cafcfEXAMPLE"
       },
       @{
         "Key"="ResourceType";
         "Value"="ManagedInstance"
       }
   ```

------
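
When you script this investigation, you can pull the failed resource IDs straight out of the response. A minimal sketch follows; the second target entry is a hypothetical example, not taken from this page.

```python
# Collect ResourceId values for failed targets from a
# describe-association-execution-targets response.

def failed_resource_ids(targets):
    """Return the ResourceId of every target whose Status is Failed."""
    return [t["ResourceId"] for t in targets if t.get("Status") == "Failed"]

targets = [
    {"ResourceId": "i-02573cafcfEXAMPLE", "ResourceType": "ManagedInstance", "Status": "Failed"},
    {"ResourceId": "i-07782c72faEXAMPLE", "ResourceType": "ManagedInstance", "Status": "Success"},  # hypothetical
]
print(failed_resource_ids(targets))
```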

1. If you're investigating an association that failed to run, you can use the [StartAssociationsOnce](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_StartAssociationsOnce.html) API operation to run the association immediately and only one time. After you make changes on the resource where the association failed to run, use the following command to run the association again.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-associations-once \
     --association-id ID
   ```

------
#### [ Windows ]

   ```
   aws ssm start-associations-once ^
     --association-id ID
   ```

------
#### [ PowerShell ]

   ```
   Start-SSMAssociationsOnce `
     -AssociationId ID
   ```

------

# Working with associations using IAM


State Manager, a tool in AWS Systems Manager, uses [targets](systems-manager-state-manager-targets-and-rate-controls.md#systems-manager-state-manager-targets-and-rate-controls-about-targets) to choose which instances you configure your associations with. Originally, associations were created by specifying a document name (`Name`) and instance ID (`InstanceId`). This created an association between a document and an instance or managed node. Associations used to be identified by these parameters. These parameters are now deprecated, but they're still supported. The resources `instance` and `managed-instance` were added as resources to actions with `Name` and `InstanceId`.

AWS Identity and Access Management (IAM) policy enforcement behavior depends on the type of resource specified. Resources for State Manager operations are only enforced based on the passed-in request. State Manager doesn't perform a deep check for the properties of resources in your account. A request is only validated against policy resources if the request parameter contains the specified policy resources. For example, if you specify an instance in the resource block, the policy is enforced if the request uses the `InstanceId` parameter. The `Targets` parameter for each resource in the account isn't checked for that `InstanceId`. 

Following are some cases with confusing behavior:
+  [DescribeAssociation](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeAssociation.html), [DeleteAssociation](https://docs.aws.amazon.com//systems-manager/latest/APIReference/API_DeleteAssociation.html), and [UpdateAssociation](https://docs.aws.amazon.com//systems-manager/latest/APIReference/API_UpdateAssociation.html) use `instance`, `managed-instance`, and `document` resources to specify the deprecated way of referring to associations. This includes all associations created with the deprecated `InstanceId` parameter.
+ [CreateAssociation](https://docs.aws.amazon.com//systems-manager/latest/APIReference/API_CreateAssociation.html), [CreateAssociationBatch](https://docs.aws.amazon.com//systems-manager/latest/APIReference/API_CreateAssociationBatch.html), and [UpdateAssociation](https://docs.aws.amazon.com//systems-manager/latest/APIReference/API_UpdateAssociation.html) use `instance` and `managed-instance` resources to specify the deprecated way of referring to associations. This includes all associations created with the deprecated `InstanceId` parameter. The `document` resource type is part of the deprecated way of referring to associations and is an actual property of an association. This means you can construct IAM policies with `Allow` or `Deny` permissions for both `Create` and `Update` actions based on document name.
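
As a sketch of the last point, a hypothetical identity-based policy that allows creating and updating associations only for a specific document name might look like the following; the Region, account ID, and document name are placeholders.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:CreateAssociation",
                "ssm:UpdateAssociation"
            ],
            "Resource": "arn:aws:ssm:us-east-1:123456789012:document/AWS-RunPowerShellScript"
        }
    ]
}
```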

For more information about using IAM policies with Systems Manager, see [Identity and access management for AWS Systems Manager](security-iam.md) or [Actions, resources, and condition keys for AWS Systems Manager](https://docs.aws.amazon.com/service-authorization/latest/reference/list_awssystemsmanager.html) in the *Service Authorization Reference*.

# Creating associations that run MOF files


You can run Managed Object Format (MOF) files to enforce a target state on Windows Server managed nodes with State Manager, a tool in AWS Systems Manager, by using the `AWS-ApplyDSCMofs` SSM document. The `AWS-ApplyDSCMofs` document has two execution modes. In the first mode, the association scans the managed nodes and reports whether they are in the desired state defined in the specified MOF files. In the second mode, the association runs the MOF files and changes the configuration of your nodes based on the resources and values defined in those files. The `AWS-ApplyDSCMofs` document can download and run MOF configuration files from Amazon Simple Storage Service (Amazon S3), a local share, or a secure website with an HTTPS domain.

State Manager logs and reports the status of each MOF file execution during each association run. State Manager also reports the output of each MOF file execution as a compliance event which you can view on the [AWS Systems Manager Compliance](https://console.aws.amazon.com/systems-manager/compliance) page.

MOF file execution is built on Windows PowerShell Desired State Configuration (PowerShell DSC). PowerShell DSC is a declarative platform used for configuration, deployment, and management of Windows systems. PowerShell DSC allows administrators to describe, in simple text documents called DSC configurations, how they want a server to be configured. A PowerShell DSC configuration is a specialized PowerShell script that states what to do, but not how to do it. Running the configuration produces a MOF file. The MOF file can be applied to one or more servers to achieve the desired configuration for those servers. PowerShell DSC resources do the actual work of enforcing configuration. For more information, see [Windows PowerShell Desired State Configuration Overview](https://download.microsoft.com/download/4/3/1/43113F44-548B-4DEA-B471-0C2C8578FBF8/Quick_Reference_DSC_WS12R2.pdf).

**Topics**
+ [Using Amazon S3 to store artifacts](#systems-manager-state-manager-using-mof-file-S3-storage)
+ [Resolving credentials in MOF files](#systems-manager-state-manager-using-mof-file-credentials)
+ [Using tokens in MOF files](#systems-manager-state-manager-using-mof-file-tokens)
+ [Prerequisites for creating associations that run MOF files](#systems-manager-state-manager-using-mof-file-prereqs)
+ [Creating an association that runs MOF files](#systems-manager-state-manager-using-mof-file-creating)
+ [Troubleshooting issues when creating associations that run MOF files](#systems-manager-state-manager-using-mof-file-troubleshooting)
+ [Viewing DSC resource compliance details](#systems-manager-state-manager-viewing-mof-file-compliance)

## Using Amazon S3 to store artifacts


If you're using Amazon S3 to store PowerShell modules, MOF files, compliance reports, or status reports, then the AWS Identity and Access Management (IAM) role used by AWS Systems Manager SSM Agent must have `GetObject` and `ListBucket` permissions on the bucket. If you don't provide these permissions, the system returns an *Access Denied* error. The following is important information about storing artifacts in Amazon S3.
+ If the bucket is in a different AWS account, create a bucket resource policy that grants the account (or the IAM role) `GetObject` and `ListBucket` permissions.
+ If you want to use custom DSC resources, you can download these resources from an Amazon S3 bucket. You can also install them automatically from the PowerShell gallery. 
+ If you're using Amazon S3 as a module source, upload the module as a Zip file in the following case-sensitive format: *ModuleName*\_*ModuleVersion*.zip. For example: MyModule\_1.0.0.zip.
+ All files must be in the bucket root. Folder structures aren't supported.
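As a quick check of that naming rule, the following sketch builds and validates the case-sensitive file name. This assumes the literal separator between name and version is an underscore; the helper functions are hypothetical, not part of any AWS tooling.

```python
import re

# Expected case-sensitive module Zip name: ModuleName_ModuleVersion.zip
MODULE_ZIP_PATTERN = re.compile(r"^(?P<name>[A-Za-z][\w.]*)_(?P<version>\d+(\.\d+)*)\.zip$")

def module_zip_name(name: str, version: str) -> str:
    """Compose the Zip file name expected in the module source bucket root."""
    return f"{name}_{version}.zip"

def parse_module_zip(filename: str):
    """Return (name, version) if the file name matches the expected format, else None."""
    m = MODULE_ZIP_PATTERN.match(filename)
    return (m.group("name"), m.group("version")) if m else None
```

A pre-upload check like `parse_module_zip` can catch a misnamed archive before the association fails at run time.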

## Resolving credentials in MOF files


Credentials are resolved by using [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/) or [AWS Systems Manager Parameter Store](systems-manager-parameter-store.md). This allows you to set up automatic credential rotation. This also allows DSC to automatically propagate credentials to your servers without redeploying MOFs.

To use an AWS Secrets Manager secret in a configuration, create a PSCredential object whose username is the `SecretId` or secret ARN of the secret that contains the credential. You can specify any value for the password; the value is ignored. Following is an example.

```
Configuration MyConfig
{
   $ss = ConvertTo-SecureString -String 'a_string' -AsPlaintext -Force
   $credential = New-Object PSCredential('a_secret_or_ARN', $ss)

    Node localhost
    {
       File file_name
       {
           DestinationPath = 'C:\MyFile.txt'
           SourcePath = '\\FileServer\Share\MyFile.txt'
           Credential = $credential
       }
    }
}
```

Compile your MOF using the `PSDscAllowPlainTextPassword` setting in configuration data. This is acceptable because the credential contains only a label. 

In Secrets Manager, ensure that the node has `secretsmanager:GetSecretValue` permission in an IAM policy and, if a secret resource policy exists, in that policy as well. To work with DSC, the secret must be in the following format.

```
{ "Username": "a_name", "Password": "a_password" }
```

The secret can have other properties (for example, properties used for rotation), but it must at least have the username and password properties.
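As a sketch of that requirement, the following helpers build and shape-check a secret string. The helper names are hypothetical; only the `Username`/`Password` key check reflects the rule above.

```python
import json

REQUIRED_KEYS = {"Username", "Password"}

def create_dsc_secret_string(username: str, password: str, **extra) -> str:
    """Build a SecretString in the shape DSC expects; extra keys (e.g. rotation metadata) are allowed."""
    return json.dumps({"Username": username, "Password": password, **extra})

def is_dsc_compatible(secret_string: str) -> bool:
    """True if the secret JSON contains at least the Username and Password properties."""
    try:
        data = json.loads(secret_string)
    except ValueError:
        return False
    return isinstance(data, dict) and REQUIRED_KEYS <= data.keys()
```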

We recommend that you use a multi-user rotation method, where you have two different usernames and passwords and the rotation AWS Lambda function alternates between them. This method allows you to have multiple active accounts while eliminating the risk of locking out a user during rotation.

## Using tokens in MOF files


Tokens give you the ability to modify resource property values *after* the MOF has been compiled. This allows you to reuse common MOF files on multiple servers that require similar configurations.

Token substitution only works for Resource Properties of type `String`. However, if your resource has a nested CIM node property, it also resolves tokens from `String` properties in that CIM node. You can't use token substitution for numerals or arrays.

For example, consider a scenario where you're using the xComputerManagement resource and you want to rename the computer using DSC. Normally you would need a dedicated MOF file for that machine. However, with token support, you can create a single MOF file and apply it to all your nodes. In the `ComputerName` property, instead of hardcoding the computer name into the MOF, you can use a tag-type token whose value is resolved during MOF parsing. See the following example.

```
Configuration MyConfig
{
    xComputer Computer
    {
        ComputerName = '{tag:ComputerName}'
    }
}
```

You then set a tag on the managed node in the Systems Manager console, or an Amazon Elastic Compute Cloud (Amazon EC2) tag in the Amazon EC2 console. When you run the document, the script replaces the `{tag:ComputerName}` token with the value of the instance tag.

You can also combine multiple tags into a single property, as shown in the following example.

```
Configuration MyConfig
{
    File MyFile
    {
        DestinationPath = '{env:TMP}\{tag:ComputerName}'
        Type = 'Directory'
    }
}
```

There are five different types of tokens you can use:
+ **tag**: Amazon EC2 or managed node tags.
+ **tagb64**: The same as **tag**, but the system uses Base64 to decode the value. This allows you to use special characters in tag values.
+ **env**: Resolves environment variables.
+ **ssm**: Parameter Store values. Only `String` and `SecureString` types are supported.
+ **tagssm**: The same as **tag**, but if the tag isn't set on the node, the system tries to resolve the value from a Systems Manager parameter with the same name. This is useful when you want a default global value but need to be able to override it on a single node (for example, for one-box deployments).
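To make these token types concrete, the following sketch shows how such substitution could behave. This isn't the `AWS-ApplyDSCMofs` implementation; the tag and parameter lookups are plain dictionaries supplied by the caller.

```python
import base64
import os
import re

# Matches tokens of the form {type:name} in String resource properties
TOKEN_RE = re.compile(r"\{(tag|tagb64|env|ssm|tagssm):([^}]+)\}")

def resolve_tokens(value: str, tags: dict, ssm_params: dict) -> str:
    """Substitute {type:name} tokens, mimicking the five token types described above."""
    def repl(match):
        kind, name = match.group(1), match.group(2)
        if kind == "tag":
            return tags[name]
        if kind == "tagb64":
            return base64.b64decode(tags[name]).decode("utf-8")
        if kind == "env":
            return os.environ[name]
        if kind == "ssm":
            return ssm_params[name]
        # tagssm: tag first, then fall back to a parameter of the same name
        return tags.get(name, ssm_params.get(name, ""))
    return TOKEN_RE.sub(repl, value)
```

For example, `resolve_tokens("{env:TMP}/{tag:ComputerName}", ...)` mirrors how the two tokens in the `DestinationPath` example combine into one value.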

Here is a Parameter Store example that uses the `ssm` token type. 

```
File MyFile
{
    DestinationPath = "C:\ProgramData\ConnectionData.txt"
    Content = "{ssm:%servicePath%/ConnectionData}"
}
```

Tokens play an important role in reducing redundant code by making MOF files generic and reusable. If you can avoid server-specific MOF files, then there’s no need for a MOF-building service. A MOF-building service increases costs, slows provisioning time, and increases the risk of configuration drift between grouped nodes, because different module versions might be installed on the build server when their MOFs are compiled.

## Prerequisites for creating associations that run MOF files


Before you create an association that runs MOF files, verify that your managed nodes have the following prerequisites installed:
+ Windows PowerShell version 5.0 or later. For more information, see [Windows PowerShell System Requirements](https://docs.microsoft.com/en-us/powershell/scripting/install/windows-powershell-system-requirements?view=powershell-6) on Microsoft.com.
+ [AWS Tools for Windows PowerShell](https://aws.amazon.com/powershell/) version 3.3.261.0 or later.
+ SSM Agent version 2.2 or later.

## Creating an association that runs MOF files


**To create an association that runs MOF files**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose **State Manager**, and then choose **Create association**.

1. In the **Name** field, specify a name. This is optional but recommended. A name helps you understand the purpose of the association when you create it. Spaces aren't allowed in the name.

1. In the **Document** list, choose **`AWS-ApplyDSCMofs`**.

1. In the **Parameters** section, specify your choices for the required and optional input parameters.

   1. **Mofs To Apply**: Specify one or more MOF files to run when this association runs. Separate multiple MOF files with commas. Systems Manager iterates through the list and runs the MOF files in the order specified.
      + An Amazon S3 bucket name. Bucket names must use lowercase letters. Specify this information by using the following format.

        ```
        s3:amzn-s3-demo-bucket:MOF_file_name.mof
        ```

        If you want to specify an AWS Region, then use the following format.

        ```
        s3:bucket_Region:amzn-s3-demo-bucket:MOF_file_name.mof
        ```
      + A secure website. Specify this information by using the following format.

        ```
        https://domain_name/MOF_file_name.mof
        ```

        Here is an example.

        ```
        https://www.example.com/TestMOF.mof
        ```
      + A file system on a local share. Specify this information by using the following format.

        ```
        \\server_name\shared_folder_name\MOF_file_name.mof
        ```

        Here is an example.

        ```
        \\StateManagerAssociationsBox\MOFs_folder\MyMof.mof
        ```

   1. **Service Path**: (Optional) A service path is either an Amazon S3 bucket prefix where you want to write reports and status information, or a path for Parameter Store parameter-based tags. When resolving parameter-based tags, the system uses `{ssm:%servicePath%/parameter_name}` to inject the service path value into the parameter name. For example, if your service path is "WebServers/Production", then the system resolves the parameter as WebServers/Production/*parameter_name*. This is useful when you're running multiple environments in the same account.

   1. **Report Bucket Name**: (Optional) Enter the name of an Amazon S3 bucket where you want to write compliance data. Reports are saved in this bucket in JSON format.
**Note**  
You can prefix the bucket name with the Region where the bucket is located. Here's an example: `us-west-2:amzn-s3-demo-bucket`. If you're using a proxy for Amazon S3 endpoints in a specific Region that doesn't include us-east-1, prefix the bucket name with a Region. If the bucket name isn't prefixed, the system automatically discovers the bucket Region by using the us-east-1 endpoint.

   1. **Mof Operation Mode**: Choose State Manager behavior when running the **`AWS-ApplyDSCMofs`** association:
      + **Apply**: Correct node configurations that aren't compliant. 
      + **ReportOnly**: Don't correct node configurations, but instead log all compliance data and report nodes that aren't compliant.

   1. **Status Bucket Name**: (Optional) Enter the name of an Amazon S3 bucket where you want to write MOF execution status information. These status reports are singleton summaries of the most recent compliance run of a node. This means that the report is overwritten the next time the association runs MOF files.
**Note**  
You can prefix the bucket name with the Region where the bucket is located. Here's an example: `us-west-2:amzn-s3-demo-bucket`. If you're using a proxy for Amazon S3 endpoints in a specific Region that doesn't include us-east-1, prefix the bucket name with a Region. If the bucket name isn't prefixed, the system automatically discovers the bucket Region by using the us-east-1 endpoint.

   1. **Module Source Bucket Name**: (Optional) Enter the name of an Amazon S3 bucket that contains PowerShell module files. If you specify **None**, choose **True** for the next option, **Allow PS Gallery Module Source**.
**Note**  
You can prefix the bucket name with the Region where the bucket is located. Here's an example: `us-west-2:amzn-s3-demo-bucket`. If you're using a proxy for Amazon S3 endpoints in a specific Region that doesn't include us-east-1, prefix the bucket name with a Region. If the bucket name isn't prefixed, the system automatically discovers the bucket Region by using the us-east-1 endpoint.

   1. **Allow PS Gallery Module Source**: (Optional) Choose **True** to download PowerShell modules from [https://www.powershellgallery.com/](https://www.powershellgallery.com/). If you choose **False**, specify a source for the previous option, **Module Source Bucket Name**.

   1. **Proxy Uri**: (Optional) Use this option to download MOF files from a proxy server.

   1. **Reboot Behavior**: (Optional) Specify one of the following reboot behaviors if your MOF file execution requires rebooting:
      + **AfterMof**: Reboots the node after all MOF executions are complete. Even if multiple MOF executions request reboots, the system waits until all MOF executions are complete to reboot.
      + **Immediately**: Reboots the node whenever a MOF execution requests it. If running multiple MOF files that request reboots, then the nodes are rebooted multiple times.
      + **Never**: Nodes aren't rebooted, even if the MOF execution explicitly requests a reboot.

   1. **Use Computer Name For Reporting**: (Optional) Turn on this option to use the name of the computer when reporting compliance information. The default value is **false**, which means that the system uses the node ID when reporting compliance information.

   1. **Turn on Verbose Logging**: (Optional) We recommend that you turn on verbose logging when deploying MOF files for the first time.
**Important**  
When turned on, verbose logging writes more data to your Amazon S3 bucket than standard association execution logging. This might result in slower performance and higher storage charges for Amazon S3. To mitigate storage size issues, we recommend that you turn on lifecycle policies on your Amazon S3 bucket. For more information, see [How Do I Create a Lifecycle Policy for an S3 Bucket?](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-lifecycle.html) in the *Amazon Simple Storage Service User Guide*.

   1. **Turn on Debug Logging**: (Optional) We recommend that you turn on debug logging to troubleshoot MOF failures. We also recommend that you deactivate this option for normal use.
**Important**  
When turned on, debug logging writes more data to your Amazon S3 bucket than standard association execution logging. This might result in slower performance and higher storage charges for Amazon S3. To mitigate storage size issues, we recommend that you turn on lifecycle policies on your Amazon S3 bucket. For more information, see [How Do I Create a Lifecycle Policy for an S3 Bucket?](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-lifecycle.html) in the *Amazon Simple Storage Service User Guide*.

   1. **Compliance Type**: (Optional) Specify the compliance type to use when reporting compliance information. The default compliance type is **Custom:DSC**. If you create multiple associations that run MOF files, then be sure to specify a different compliance type for each association. If you don't, each additional association that uses **Custom:DSC** overwrites the existing compliance data.

   1. **Pre Reboot Script**: (Optional) Specify a script to run if the configuration has indicated that a reboot is necessary. The script runs before the reboot. The script must be a single line. Separate additional lines by using semicolons.

1. In the **Targets** section, choose either **Specifying tags** or **Manually selecting instances**. If you choose to target resources by using tags, then enter a tag key and a tag value in the fields provided. For more information about using targets, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

1. In the **Specify schedule** section, choose either **On Schedule** or **No schedule**. If you choose **On Schedule**, then use the buttons provided to create a cron or rate schedule for the association. 

1. In the **Advanced options** section:
   + In **Compliance severity**, choose a severity level for the association. Compliance reporting indicates whether the association state is compliant or noncompliant, along with the severity level you indicate here. For more information, see [About State Manager association compliance](compliance-about.md#compliance-about-association).

1. In the **Rate control** section, configure options for running State Manager associations across a fleet of managed nodes. For more information about these options, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

   In the **Concurrency** section, choose an option: 
   + Choose **targets** to enter an absolute number of targets that can run the association simultaneously.
   + Choose **percentage** to enter a percentage of the target set that can run the association simultaneously.

   In the **Error threshold** section, choose an option:
   + Choose **errors** to enter an absolute number of errors allowed before State Manager stops running associations on additional targets.
   + Choose **percentage** to enter a percentage of errors allowed before State Manager stops running associations on additional targets.

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. Choose **Create Association**. 

State Manager creates and immediately runs the association on the specified nodes or targets. After the initial execution, the association runs in intervals according to the schedule that you defined and according to the following rules:
+ State Manager runs associations on nodes that are online when the interval starts and skips offline nodes.
+ State Manager attempts to run the association on all configured nodes during an interval.
+ If an association isn't run during an interval (because, for example, a concurrency value limited the number of nodes that could process the association at one time), then State Manager attempts to run the association during the next interval.
+ State Manager records history for all skipped intervals. You can view the history on the **Execution History** tab.

**Note**  
The `AWS-ApplyDSCMofs` is a Systems Manager Command document. This means that you can also run this document by using Run Command, a tool in AWS Systems Manager. For more information, see [AWS Systems Manager Run Command](run-command.md).

## Troubleshooting issues when creating associations that run MOF files


This section includes information to help you troubleshoot issues creating associations that run MOF files.

**Turn on enhanced logging**  
As a first step to troubleshooting, turn on enhanced logging. More specifically, do the following:

1. Verify that the association is configured to write command output to either Amazon S3 or Amazon CloudWatch Logs (CloudWatch).

1. Set the **Enable Verbose Logging** parameter to True.

1. Set the **Enable Debug Logging** parameter to True.

With verbose and debug logging turned on, the **Stdout** output file includes details about the script execution. This output file can help you identify where the script failed. The **Stderr** output file contains errors that occurred during the script execution. 

**Common problems when creating associations that run MOF files**  
This section includes information about common problems that can occur when creating associations that run MOF files and steps to troubleshoot these issues.

**My MOF wasn't applied**  
If State Manager failed to apply the association to your nodes, then start by reviewing the **Stderr** output file. This file can help you understand the root cause of the issue. Also, verify the following:
+ The node has the required access permissions to all MOF-related Amazon S3 buckets. Specifically:
  + **s3:GetObject permissions**: This is required for MOF files in private Amazon S3 buckets and custom modules in Amazon S3 buckets.
  + **s3:PutObject permission**: This is required to write compliance reports and compliance status to Amazon S3 buckets.
+ If you're using tags, then ensure that the node has the required IAM policy. Using tags requires the instance IAM role to have a policy allowing the `ec2:DescribeInstances` and `ssm:ListTagsForResource` actions.
+ Ensure that the node has the expected tags or SSM parameters assigned.
+ Ensure that the tags or SSM parameters aren't misspelled.
+ Try applying the MOF locally on the node to make sure there isn't an issue with the MOF file itself.

**My MOF seemed to fail, but the Systems Manager execution was successful**  
If the `AWS-ApplyDSCMofs` document successfully ran, then the Systems Manager execution status shows **Success**. This status doesn't reflect the compliance status of your node against the configuration requirements in the MOF file. To view the compliance status of your nodes, view the compliance reports. You can view a JSON report in the Amazon S3 Report Bucket. This applies to Run Command and State Manager executions. Also, for State Manager, you can view compliance details on the Systems Manager Compliance page.

**Stderr states: Name resolution failure attempting to reach service**  
This error indicates that the script can't reach a remote service. Most likely, the script can't reach Amazon S3. This issue most often occurs when the script attempts to write compliance reports or compliance status to the Amazon S3 bucket supplied in the document parameters. Typically, this error occurs when a computing environment uses a firewall or transparent proxy that includes an allow list. To resolve this issue:
+ Use Region-specific bucket syntax for all Amazon S3 bucket parameters. For example, the **Mofs to Apply** parameter should be formatted as follows:

  s3:*bucket-region*:*amzn-s3-demo-bucket*:*mof-file-name*.mof

  Here is an example: `s3:us-west-2:amzn-s3-demo-bucket:my-mof.mof`

  The Report, Status, and Module Source bucket names should be formatted as follows:

  *bucket-region*:*amzn-s3-demo-bucket*

  Here is an example: `us-west-1:amzn-s3-demo-bucket`
+ If Region-specific syntax doesn't fix the problem, then make sure that the targeted nodes can access Amazon S3 in the desired Region. To verify this:

  1. Find the endpoint name for Amazon S3 in the appropriate Amazon S3 Region. For information, see [Amazon S3 Service Endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_region) in the *Amazon Web Services General Reference*.

  1. Log on to the target node and run the following ping command.

     ```
     ping s3.s3-region.amazonaws.com
     ```

     If the ping fails, then either Amazon S3 is unavailable, a firewall or transparent proxy is blocking access to the Amazon S3 Region, or the node can't access the internet.
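The Region-prefixed forms used throughout these parameters follow a simple colon-delimited layout. As an illustration only (these helpers are hypothetical, not part of SSM Agent), they can be split like this:

```python
def parse_mof_path(path: str):
    """Split an s3:[bucket-region:]bucket:key.mof specifier into (region, bucket, key)."""
    parts = path.split(":")
    if parts[0] != "s3" or len(parts) not in (3, 4):
        raise ValueError(f"not an S3 MOF path: {path}")
    if len(parts) == 4:                    # s3:bucket-region:bucket:key
        return parts[1], parts[2], parts[3]
    return None, parts[1], parts[2]        # no prefix: Region discovered via us-east-1

def parse_bucket_name(name: str):
    """Split an optional Region prefix off a Report, Status, or Module Source bucket name."""
    prefix, sep, rest = name.partition(":")
    return (prefix, rest) if sep else (None, prefix)
```

A `None` Region in these results corresponds to the fallback behavior described in the notes above, where the bucket Region is discovered through the us-east-1 endpoint.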

## Viewing DSC resource compliance details


Systems Manager captures compliance information about DSC resource failures in the Amazon S3 **Status Bucket** you specified when you ran the `AWS-ApplyDSCMofs` document. Searching for information about DSC resource failures in an Amazon S3 bucket can be time consuming. Instead, you can view this information in the Systems Manager **Compliance** page. 

The **Compliance resources summary** section displays a count of resources that failed. In the following example, the **ComplianceType** is **Custom:DSC** and one resource is noncompliant.

**Note**  
Custom:DSC is the default **ComplianceType** value in the `AWS-ApplyDSCMofs` document. This value is customizable.

![\[Viewing counts in the Compliance resources summary section of the Compliance page.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-mof-detailed-status-3.png)


The **Details overview for resources** section displays information about the AWS resource with the noncompliant DSC resource. This section also includes the MOF name, script execution steps, and (when applicable) a **View output** link to view detailed status information. 

![\[Viewing compliance details for a MOF execution resource failure\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-mof-detailed-status-1.png)


The **View output** link displays the last 4,000 characters of the detailed status. Systems Manager starts with the exception as the first element, and then scans back through the verbose messages and prepends as many as it can until it reaches the 4,000 character quota. This process displays the log messages that were output before the exception was thrown, which are the most relevant messages for troubleshooting.

![\[Viewing detailed output for MOF resource compliance issue\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-mof-detailed-status-2.png)
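The trimming behavior described above can be sketched as follows. This is an illustration of the stated algorithm, not the service's actual code; here the 4,000-character quota is applied to newline-joined messages.

```python
QUOTA = 4000

def build_view_output(verbose_messages, exception_message, quota=QUOTA):
    """Start from the exception, then prepend as many trailing verbose messages as fit."""
    lines = [exception_message]
    used = len(exception_message)
    for msg in reversed(verbose_messages):  # scan back from the most recent message
        cost = len(msg) + 1                 # +1 for the joining newline
        if used + cost > quota:
            break
        lines.insert(0, msg)
        used += cost
    return "\n".join(lines)
```

Because the scan moves backward from the exception, the output always keeps the log messages written immediately before the failure, which are the most relevant for troubleshooting.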


For information about how to view compliance information, see [AWS Systems Manager Compliance](systems-manager-compliance.md).

**Situations that affect compliance reporting**  
If the State Manager association fails, then no compliance data is reported. More specifically, if a MOF fails to process, then Systems Manager doesn’t report any compliance items because the association fails. For example, if Systems Manager attempts to download a MOF from an Amazon S3 bucket that the node doesn't have permission to access, then the association fails and no compliance data is reported.

If a resource in a MOF file fails, then Systems Manager *does* report compliance data. For example, if a MOF tries to create a file on a drive that doesn’t exist, then Systems Manager reports compliance data because the `AWS-ApplyDSCMofs` document is able to process completely, which means the association runs successfully. 

# Creating associations that run Ansible playbooks

You can create State Manager associations that run Ansible playbooks by using the `AWS-ApplyAnsiblePlaybooks` SSM document. State Manager is a tool in AWS Systems Manager. This document offers the following benefits for running playbooks:
+ Support for running complex playbooks
+ Support for downloading playbooks from GitHub and Amazon Simple Storage Service (Amazon S3)
+ Support for compressed playbook structure
+ Enhanced logging
+ Ability to specify which playbook to run when playbooks are bundled

**Note**  
Systems Manager includes two SSM documents that allow you to create State Manager associations that run Ansible playbooks: `AWS-RunAnsiblePlaybook` and `AWS-ApplyAnsiblePlaybooks`. The `AWS-RunAnsiblePlaybook` document is deprecated. It remains available in Systems Manager for legacy purposes. We recommend that you use the `AWS-ApplyAnsiblePlaybooks` document because of the enhancements described here.  
Associations that run Ansible playbooks aren't supported on macOS.

**Support for running complex playbooks**

The `AWS-ApplyAnsiblePlaybooks` document supports bundled, complex playbooks because it copies the entire file structure to a local directory before executing the specified main playbook. You can provide source playbooks in Zip files or in a directory structure. The Zip file or directory can be stored in GitHub or Amazon S3. 

**Support for downloading playbooks from GitHub**

The `AWS-ApplyAnsiblePlaybooks` document uses the `aws:downloadContent` plugin to download playbook files. Files can be stored in GitHub in a single file or as a combined set of playbook files. To download content from GitHub, specify information about your GitHub repository in JSON format. Here is an example.

```
{
   "owner":"TestUser",
   "repository":"GitHubTest",
   "path":"scripts/python/test-script",
   "getOptions":"branch:master",
   "tokenInfo":"{{ssm-secure:secure-string-token}}"
}
```

**Support for downloading playbooks from Amazon S3**

You can also store and download Ansible playbooks in Amazon S3 as either a single .zip file or a directory structure. To download content from Amazon S3, specify the path to the file. Here are two examples.

**Example 1: Download a specific playbook file**

```
{
   "path":"https://s3.amazonaws.com/amzn-s3-demo-bucket/playbook.yml"
}
```

**Example 2: Download the contents of a directory**

```
{
   "path":"https://s3.amazonaws.com/amzn-s3-demo-bucket/ansible/webservers/"
}
```

**Important**  
If you specify Amazon S3, then the AWS Identity and Access Management (IAM) instance profile on your managed nodes must include permissions for the S3 bucket. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). 
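For illustration, source information for either location could be assembled with a couple of helpers. These helpers (`github_source_info`, `s3_source_info`) are hypothetical, not AWS APIs; the key names mirror the JSON examples above.

```python
import json

def github_source_info(owner: str, repository: str, path: str,
                       branch: str = "master", token_param: str = "") -> str:
    """Build source-information JSON for a GitHub-hosted playbook bundle."""
    info = {
        "owner": owner,
        "repository": repository,
        "path": path,
        "getOptions": f"branch:{branch}",
    }
    if token_param:
        # Reference a SecureString parameter that holds the access token
        info["tokenInfo"] = f"{{{{ssm-secure:{token_param}}}}}"
    return json.dumps(info)

def s3_source_info(url: str) -> str:
    """Build source-information JSON for a playbook file or directory in Amazon S3."""
    return json.dumps({"path": url})
```

Generating this JSON programmatically avoids quoting mistakes when the same repository details are reused across several associations.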

**Support for compressed playbook structure**

The `AWS-ApplyAnsiblePlaybooks` document allows you to run compressed .zip files in the downloaded bundle. The document checks whether the downloaded files contain a compressed file in .zip format. If one is found, the document automatically decompresses the file and then runs the specified Ansible automation.

**Enhanced logging**

The `AWS-ApplyAnsiblePlaybooks` document includes an optional parameter for specifying different levels of logging. Specify `-v` for low verbosity, `-vv` or `-vvv` for medium verbosity, and `-vvvv` for debug level logging. These options map directly to Ansible verbosity options.
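Because the flags pass through to `ansible-playbook`, you can reason about the resulting command line directly. The following Python sketch assembles an `ansible-playbook` invocation from Verbose-style values; the helper and its defaults are assumptions for this example, not the document's implementation.

```python
import shlex

def build_ansible_command(playbook: str, verbose: str = "-v", check: bool = False) -> str:
    """Illustrative sketch of the ansible-playbook invocation that the
    Verbose and Check parameters translate into."""
    allowed = {"-v", "-vv", "-vvv", "-vvvv"}
    if verbose not in allowed:
        raise ValueError(f"Verbose must be one of {sorted(allowed)}")
    parts = ["ansible-playbook", verbose]
    if check:
        parts.append("--check")  # dry run; report changes without applying them
    parts.append(playbook)
    return " ".join(shlex.quote(p) for p in parts)

print(build_ansible_command("hello-world-playbook.yml", "-vvvv"))
```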

**Ability to specify which playbook to run when playbooks are bundled**

The `AWS-ApplyAnsiblePlaybooks` document includes a required parameter for specifying which playbook to run when multiple playbooks are bundled. This option provides flexibility for running playbooks to support different use cases.

## Understanding installed dependencies


If you specify **True** for the **InstallDependencies** parameter, then Systems Manager verifies that your nodes have the following dependencies installed:
+ **Ubuntu Server/Debian Server**: Apt-get (Package Management), Python 3, Ansible, Unzip
+ **Amazon Linux** supported versions: Ansible
+ **RHEL**: Python 3, Ansible, Unzip

If one or more of these dependencies aren't found, then Systems Manager automatically installs them.

## Create an association that runs Ansible playbooks (console)


The following procedure describes how to use the Systems Manager console to create a State Manager association that runs Ansible playbooks by using the `AWS-ApplyAnsiblePlaybooks` document.

**To create an association that runs Ansible playbooks (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose **State Manager**, and then choose **Create association**.

1. For **Name**, specify a name that helps you remember the purpose of the association.

1. In the **Document** list, choose **`AWS-ApplyAnsiblePlaybooks`**.

1. In the **Parameters** section, for **Source Type**, choose either **GitHub** or **S3**.

   **GitHub**

   If you choose **GitHub**, enter repository information in the following format.

   ```
   {
      "owner":"user_name",
      "repository":"name",
      "path":"path_to_directory_or_playbook_to_download",
      "getOptions":"branch:branch_name",
      "tokenInfo":"{{(Optional)_token_information}}"
   }
   ```

   **S3**

   If you choose **S3**, enter path information in the following format.

   ```
   {
      "path":"https://s3.amazonaws.com/path_to_directory_or_playbook_to_download"
   }
   ```

1. For **Install Dependencies**, choose an option.

1. (Optional) For **Playbook File**, enter a file name. If a Zip file contains the playbook, specify a relative path to the Zip file.

1. (Optional) For **Extra Variables**, enter variables that you want State Manager to send to Ansible at runtime.

1. (Optional) For **Check**, choose an option.

1. (Optional) For **Verbose**, choose an option.

1. For **Targets**, choose an option. For information about using targets, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

1. In the **Specify schedule** section, choose either **On schedule** or **No schedule**. If you choose **On schedule**, then use the buttons provided to create a cron or rate schedule for the association. 

1. In the **Advanced options** section, for **Compliance severity**, choose a severity level for the association. Compliance reporting indicates whether the association state is compliant or noncompliant, along with the severity level you indicate here. For more information, see [About State Manager association compliance](compliance-about.md#compliance-about-association).

1. In the **Rate control** section, configure options to run State Manager associations across a fleet of managed nodes. For information about using rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

   In the **Concurrency** section, choose an option: 
   + Choose **targets** to enter an absolute number of targets that can run the association simultaneously.
   + Choose **percentage** to enter a percentage of the target set that can run the association simultaneously.

   In the **Error threshold** section, choose an option:
   + Choose **errors** to enter an absolute number of errors that are allowed before State Manager stops running associations on additional targets.
   + Choose **percentage** to enter a percentage of errors that are allowed before State Manager stops running associations on additional targets.

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. Choose **Create Association**.

**Note**  
If you use tags to create an association on one or more target nodes, and then you remove the tags from a node, that node no longer runs the association. The node is disassociated from the State Manager document. 

## Create an association that runs Ansible playbooks (CLI)


The following procedure describes how to use the AWS Command Line Interface (AWS CLI) to create a State Manager association that runs Ansible playbooks by using the `AWS-ApplyAnsiblePlaybooks` document. 

**To create an association that runs Ansible playbooks (CLI)**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run one of the following commands to create an association that runs Ansible playbooks by targeting nodes using tags. Replace each *example resource placeholder* with your own information. Command (A) specifies GitHub as the source type. Command (B) specifies Amazon S3 as the source type.

   **(A) GitHub source**

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
       --targets Key=tag:TagKey,Values=TagValue \
       --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"owner_name\", \"repository\": \"name\", \"getOptions\": \"branch:master\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"],"TimeoutSeconds":["3600"]}' \
       --association-name "name" \
       --schedule-expression "cron_or_rate_expression"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" ^
       --targets Key=tag:TagKey,Values=TagValue ^
       --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"owner_name\", \"repository\": \"name\", \"getOptions\": \"branch:master\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"], "TimeoutSeconds":["3600"]}' ^
       --association-name "name" ^
       --schedule-expression "cron_or_rate_expression"
   ```

------

   Here is an example.

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
       --targets "Key=tag:OS,Values=Linux" \
       --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"ansibleDocumentTest\", \"repository\": \"Ansible\", \"getOptions\": \"branch:master\"}"],"InstallDependencies":["True"],"PlaybookFile":["hello-world-playbook.yml"],"ExtraVariables":["SSM=True"],"Check":["False"],"Verbose":["-v"]}' \
       --association-name "AnsibleAssociation" \
       --schedule-expression "cron(0 2 ? * SUN *)"
   ```
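The nested `SourceInfo` value is a JSON string embedded inside the `--parameters` JSON, so escaping the inner quotes by hand is error prone. The following Python sketch serializes the inner document with `json.dumps` so the escaping is handled for you; the parameter names come from the example above, and the assembly itself is illustrative.

```python
import json

source_info = {
    "owner": "ansibleDocumentTest",
    "repository": "Ansible",
    "getOptions": "branch:master",
}

parameters = {
    "SourceType": ["GitHub"],
    # SourceInfo is itself a JSON string, so serialize it separately
    # before embedding it in the outer parameters document.
    "SourceInfo": [json.dumps(source_info)],
    "InstallDependencies": ["True"],
    "PlaybookFile": ["hello-world-playbook.yml"],
    "ExtraVariables": ["SSM=True"],
    "Check": ["False"],
    "Verbose": ["-v"],
}

# The value to pass to `aws ssm create-association --parameters`.
print(json.dumps(parameters))
```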

   **(B) S3 source**

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
       --targets Key=tag:TagKey,Values=TagValue \
       --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_playbook_to_download\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"]}' \
       --association-name "name" \
       --schedule-expression "cron_or_rate_expression"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" ^
       --targets Key=tag:TagKey,Values=TagValue ^
       --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_playbook_to_download\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"]}' ^
       --association-name "name" ^
       --schedule-expression "cron_or_rate_expression"
   ```

------

   Here is an example.

   ```
   aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
       --targets "Key=tag:OS,Values=Linux" \
       --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/amzn-s3-demo-bucket/playbook.yml\"}"],"InstallDependencies":["True"],"PlaybookFile":["playbook.yml"],"ExtraVariables":["SSM=True"],"Check":["False"],"Verbose":["-v"]}' \
       --association-name "AnsibleAssociation" \
       --schedule-expression "cron(0 2 ? * SUN *)"
   ```
**Note**  
State Manager associations don't support all cron and rate expressions. For more information about creating cron and rate expressions for associations, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

   The system attempts to create the association on the nodes and immediately apply the state. 

1. Run the following command to view an updated status of the association you just created. 

   ```
   aws ssm describe-association --association-id "ID"
   ```
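If you capture the command's JSON output, you can check the association status programmatically. The following Python sketch parses an abridged, illustrative response; see the `DescribeAssociation` API reference for the full response structure.

```python
import json

# Abridged example response (illustrative; field values are placeholders).
sample_response = """
{
  "AssociationDescription": {
    "Name": "AWS-ApplyAnsiblePlaybooks",
    "AssociationName": "AnsibleAssociation",
    "Overview": {"Status": "Success"}
  }
}
"""

description = json.loads(sample_response)["AssociationDescription"]
print(description["Overview"]["Status"])
```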

# Creating associations that run Chef recipes

You can create State Manager associations that run Chef recipes by using the `AWS-ApplyChefRecipes` SSM document. State Manager is a tool in AWS Systems Manager. You can target Linux-based Systems Manager managed nodes with the `AWS-ApplyChefRecipes` SSM document. This document offers the following benefits for running Chef recipes:
+ Supports multiple releases of Chef (Chef 11 through Chef 18).
+ Automatically installs the Chef client software on target nodes.
+ Optionally runs [Systems Manager compliance checks](systems-manager-compliance.md) on target nodes, and stores the results of compliance checks in an Amazon Simple Storage Service (Amazon S3) bucket.
+ Runs multiple cookbooks and recipes in a single run of the document.
+ Optionally runs recipes in `why-run` mode, to show which recipes change on target nodes without making changes.
+ Optionally applies custom JSON attributes to `chef-client` runs.
+ Optionally applies custom JSON attributes from a source file that is stored at a location that you specify.

You can use [Git](#state-manager-chef-git), [GitHub](#state-manager-chef-github), [HTTP](#state-manager-chef-http), or [Amazon S3](#state-manager-chef-s3) buckets as download sources for Chef cookbooks and recipes that you specify in an `AWS-ApplyChefRecipes` document.

**Note**  
Associations that run Chef recipes aren't supported on macOS.

## Getting started


Before you create an `AWS-ApplyChefRecipes` document, prepare your Chef cookbooks and cookbook repository. If you don't already have a Chef cookbook that you want to use, you can get started by using a test `HelloWorld` cookbook that AWS has prepared for you. The `AWS-ApplyChefRecipes` document already points to this cookbook by default. Your cookbooks should be set up similarly to the following directory structure. In the following example, `jenkins` and `nginx` are examples of Chef cookbooks that are available on [Chef Supermarket](https://supermarket.chef.io/).

Though AWS can't officially support cookbooks from [Chef Supermarket](https://supermarket.chef.io/), many of them work with the `AWS-ApplyChefRecipes` document. The following are examples of criteria to consider when you're testing a community cookbook:
+ The cookbook should support the Linux-based operating systems of the Systems Manager managed nodes that you're targeting.
+ The cookbook should be valid for the Chef client version (Chef 11 through Chef 18) that you use.
+ The cookbook should be compatible with Chef Infra Client and shouldn't require a Chef server.

Verify that you can reach the `Chef.io` website, so that any cookbooks you specify in your run list can be installed when the Systems Manager document (SSM document) runs. Using a nested `cookbooks` folder is supported, but not required; you can store cookbooks directly under the root level.

```
<Top-level directory, or the top level of the archive file (ZIP or tgz or tar.gz)>
    └── cookbooks (optional level)
        ├── jenkins
        │   ├── metadata.rb
        │   └── recipes
        └── nginx
            ├── metadata.rb
            └── recipes
```
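You can sanity-check a repository against this layout before uploading it. The following Python sketch builds the example tree in a temporary directory and locates cookbooks by their `metadata.rb` files; the helper is illustrative, not a tool provided by Systems Manager.

```python
import os
import tempfile

def find_cookbooks(root: str) -> list[str]:
    """Locate cookbook directories (those containing metadata.rb), whether
    or not they sit under the optional nested cookbooks/ folder."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if "metadata.rb" in filenames:
            found.append(os.path.basename(dirpath))
    return sorted(found)

# Build the example layout from above in a throwaway directory.
root = tempfile.mkdtemp()
for name in ("jenkins", "nginx"):
    cookbook = os.path.join(root, "cookbooks", name)
    os.makedirs(os.path.join(cookbook, "recipes"))
    open(os.path.join(cookbook, "metadata.rb"), "w").close()

print(find_cookbooks(root))
```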

**Important**  
Before you create a State Manager association that runs Chef recipes, be aware that the document run installs the Chef client software on your Systems Manager managed nodes, unless you set the value of **Chef client version** to `None`. This operation uses an installation script from Chef to install Chef components on your behalf. Before you run an `AWS-ApplyChefRecipes` document, be sure your enterprise can comply with any applicable legal requirements, including license terms applicable to the use of Chef software. For more information, see the [Chef website](https://www.chef.io/).

Systems Manager can deliver compliance reports to an S3 bucket or the Systems Manager console, or make compliance results available in response to Systems Manager API commands. To run Systems Manager compliance reports, the instance profile attached to your Systems Manager managed nodes must have permissions to write to the S3 bucket. The instance profile must also have permissions to use the Systems Manager `PutComplianceItem` API. For more information about Systems Manager compliance, see [AWS Systems Manager Compliance](systems-manager-compliance.md).

### Logging the document run


When you run a Systems Manager document (SSM document) by using a State Manager association, you can configure the association to send the output of the document run to Amazon S3 or Amazon CloudWatch Logs (CloudWatch Logs). To make troubleshooting easier after an association has finished running, verify that the association is configured to write command output to either an Amazon S3 bucket or CloudWatch Logs. For more information, see [Working with associations in Systems Manager](state-manager-associations.md).

## Applying JSON attributes to targets when running a recipe


You can specify JSON attributes for your Chef client to apply to target nodes during an association run. When setting up the association, you can provide raw JSON or provide the path to a JSON file stored in Amazon S3.

Use JSON attributes when you want to customize how the recipe is run without having to modify the recipe itself, for example:
+ **Overriding a small number of attributes**

  Use custom JSON to avoid having to maintain multiple versions of a recipe to accommodate minor differences.
+ **Providing variable values**

  Use custom JSON to specify values that may change from run to run. For example, if your Chef cookbooks configure a third-party application that accepts payments, you might use custom JSON to specify the payment endpoint URL.

**Specifying attributes in raw JSON**

The following is an example of the format you can use to specify custom JSON attributes for your Chef recipe.

```
{"filepath":"/tmp/example.txt", "content":"Hello, World!"}
```

**Specifying a path to a JSON file**  
The following is an example of the format you can use to specify the path to custom JSON attributes for your Chef recipe.

```
{"sourceType":"s3", "sourceInfo":"someS3URL1"}, {"sourceType":"s3", "sourceInfo":"someS3URL2"}
```
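Both inputs are JSON, but in slightly different shapes: the attributes content is a single object, while the sources value is a comma-separated series of objects. The following Python sketch parses the two examples above; wrapping the sources value in brackets to read it as a JSON array is an illustrative parsing convenience, not a documented format.

```python
import json

# The raw attributes must be a single JSON object.
content = '{"filepath":"/tmp/example.txt", "content":"Hello, World!"}'
attrs = json.loads(content)
assert isinstance(attrs, dict)

# The sources value is a comma-separated series of objects, so wrap it in
# brackets to parse it as a JSON array.
sources_raw = ('{"sourceType":"s3", "sourceInfo":"someS3URL1"}, '
               '{"sourceType":"s3", "sourceInfo":"someS3URL2"}')
sources = json.loads(f"[{sources_raw}]")
print([s["sourceType"] for s in sources])
```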

## Use Git as a cookbook source


The `AWS-ApplyChefRecipes` document uses the [aws:downloadContent](documents-command-ssm-plugin-reference.md#aws-downloadContent) plugin to download Chef cookbooks. To download content from Git, specify information about your Git repository in JSON format as in the following example. Replace each *example-resource-placeholder* with your own information.

```
{
   "repository":"GitCookbookRepository",
   "privateSSHKey":"{{ssm-secure:ssh-key-secure-string-parameter}}",
   "skipHostKeyChecking":"false",
   "getOptions":"branch:refs/head/main",
   "username":"{{ssm-secure:username-secure-string-parameter}}",
   "password":"{{ssm-secure:password-secure-string-parameter}}"
}
```

## Use GitHub as a cookbook source


The `AWS-ApplyChefRecipes` document uses the [aws:downloadContent](documents-command-ssm-plugin-reference.md#aws-downloadContent) plugin to download cookbooks. To download content from GitHub, specify information about your GitHub repository in JSON format as in the following example. Replace each *example-resource-placeholder* with your own information.

```
{
   "owner":"TestUser",
   "repository":"GitHubCookbookRepository",
   "path":"cookbooks/HelloWorld",
   "getOptions":"branch:refs/head/main",
   "tokenInfo":"{{ssm-secure:token-secure-string-parameter}}"
}
```

## Use HTTP as a cookbook source


You can store Chef cookbooks at a custom HTTP location as either a single `.zip` or `tar.gz` file, or a directory structure. To download content from HTTP, specify the path to the file or directory in JSON format as in the following example. Replace each *example-resource-placeholder* with your own information.

```
{
   "url":"https://my.website.com/chef-cookbooks/HelloWorld.zip",
   "allowInsecureDownload":"false",
   "authMethod":"Basic",
   "username":"{{ssm-secure:username-secure-string-parameter}}",
   "password":"{{ssm-secure:password-secure-string-parameter}}"
}
```

## Use Amazon S3 as a cookbook source


You can also store and download Chef cookbooks in Amazon S3 as either a single `.zip` or `tar.gz` file, or a directory structure. To download content from Amazon S3, specify the path to the file in JSON format as in the following examples. Replace each *example-resource-placeholder* with your own information.

**Example 1: Download a specific cookbook**

```
{
   "path":"https://s3.amazonaws.com/chef-cookbooks/HelloWorld.zip"
}
```

**Example 2: Download the contents of a directory**

```
{
   "path":"https://s3.amazonaws.com/chef-cookbooks-test/HelloWorld"
}
```

**Important**  
If you specify Amazon S3, the AWS Identity and Access Management (IAM) instance profile on your managed nodes must be configured with the `AmazonS3ReadOnlyAccess` policy. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).

## Create an association that runs Chef recipes (console)


The following procedure describes how to use the Systems Manager console to create a State Manager association that runs Chef cookbooks by using the `AWS-ApplyChefRecipes` document.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose **State Manager**, and then choose **Create association**.

1. For **Name**, enter a name that helps you remember the purpose of the association.

1. In the **Document** list, choose **`AWS-ApplyChefRecipes`**.

1. In **Parameters**, for **Source Type**, select either **Git**, **GitHub**, **HTTP**, or **S3**.

1. For **Source info**, enter cookbook source information using the appropriate format for the **Source Type** that you selected in step 6. For more information, see the following topics:
   + [Use Git as a cookbook source](#state-manager-chef-git)
   + [Use GitHub as a cookbook source](#state-manager-chef-github)
   + [Use HTTP as a cookbook source](#state-manager-chef-http)
   + [Use Amazon S3 as a cookbook source](#state-manager-chef-s3)

1. In **Run list**, list the recipes that you want to run in the following format, separating each recipe with a comma as shown. Don't include a space after the comma. Replace each *example-resource-placeholder* with your own information.

   ```
   recipe[cookbook-name1::recipe-name],recipe[cookbook-name2::recipe-name]
   ```

1. (Optional) Specify custom JSON attributes that you want the Chef client to pass to your target nodes.

   1. In **JSON attributes content**, add any attributes that you want the Chef client to pass to your target nodes.

   1. In **JSON attributes sources**, add the paths to any attributes that you want the Chef client to pass to your target nodes.

   For more information, see [Applying JSON attributes to targets when running a recipe](#apply-custom-json-attributes).

1. For **Chef client version**, specify a Chef version. Valid values are `11` through `18`, or `None`. If you specify a number between `11` and `18` (inclusive), Systems Manager installs the correct Chef client version on your target nodes. If you specify `None`, Systems Manager doesn't install the Chef client on target nodes before running the document's recipes.

1. (Optional) For **Chef client arguments**, specify additional arguments that are supported for the version of Chef you're using. To learn more about supported arguments, run `chef-client -h` on a node that is running the Chef client.

1. (Optional) Turn on **Why-run** to show changes made to target nodes if the recipes are run, without actually changing target nodes.

1. For **Compliance severity**, choose the severity of Systems Manager Compliance results that you want reported. Compliance reporting indicates whether the association state is compliant or noncompliant, along with the severity level you specify. Compliance reports are stored in an S3 bucket that you specify as the value of the **Compliance report bucket** parameter (step 14). For more information about Compliance, see [Learn details about Compliance](compliance-about.md) in this guide.

   Compliance scans measure drift between configuration that is specified in your Chef recipes and node resources. Valid values are `Critical`, `High`, `Medium`, `Low`, `Informational`, `Unspecified`, or `None`. To skip compliance reporting, choose `None`.

1. For **Compliance type**, specify the compliance type for which you want results reported. Valid values are `Association` for State Manager associations, or `Custom:`*custom-type*. The default value is `Custom:Chef`.

1. For **Compliance report bucket**, enter the name of an S3 bucket in which to store information about every Chef run performed by this document, including resource configuration and Compliance results.

1. In **Rate control**, configure options to run State Manager associations across a fleet of managed nodes. For information about using rate controls, see [Understanding targets and rate controls in State Manager associations](systems-manager-state-manager-targets-and-rate-controls.md).

   In **Concurrency**, choose an option:
   + Choose **targets** to enter an absolute number of targets that can run the association simultaneously.
   + Choose **percentage** to enter a percentage of the target set that can run the association simultaneously.

   In **Error threshold**, choose an option:
   + Choose **errors** to enter an absolute number of errors that are allowed before State Manager stops running associations on additional targets.
   + Choose **percentage** to enter a percentage of errors that are allowed before State Manager stops running associations on additional targets.

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. Choose **Create Association**.
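Before you create the association, you can sanity-check the run-list string from step 8. The following Python sketch is a minimal format check; the regex and helper are illustrative, not a documented validation rule.

```python
import re

# Hypothetical format check: comma-separated recipe[cookbook::recipe]
# entries with no space after the comma, as described in step 8.
RUN_LIST_ITEM = re.compile(r"recipe\[[\w-]+::[\w-]+\]$")

def parse_run_list(run_list: str) -> list[str]:
    items = run_list.split(",")
    for item in items:
        if not RUN_LIST_ITEM.match(item):
            raise ValueError(f"malformed run-list entry: {item!r}")
    return items

print(parse_run_list("recipe[HelloWorld::HelloWorldRecipe],recipe[HelloWorld::InstallApp]"))
```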

## Create an association that runs Chef recipes (CLI)


The following procedure describes how to use the AWS Command Line Interface (AWS CLI) to create a State Manager association that runs Chef cookbooks by using the `AWS-ApplyChefRecipes` document.

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run one of the following commands to create an association that runs Chef cookbooks on target nodes that have the specified tags. Use the command that is appropriate for your cookbook source type and operating system. Replace each *example-resource-placeholder* with your own information.

   1. **Git source**

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets Key=tag:TagKey,Values=TagValue \
          --parameters '{"SourceType":["Git"],"SourceInfo":["{\"repository\":\"repository-name\", \"getOptions\": \"branch:branch-name\", \"username\": \"{{ ssm-secure:username-secure-string-parameter }}\", \"password\": \"{{ ssm-secure:password-secure-string-parameter }}\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json-content}"], "JsonAttributesSources": "{\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-1\"}, {\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-2\"}", "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' \
          --association-name "name" \
          --schedule-expression "cron-or-rate-expression"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets Key=tag:TagKey,Values=TagValue ^
          --parameters '{"SourceType":["Git"],"SourceInfo":["{\"repository\":\"repository-name\", \"getOptions\": \"branch:branch-name\", \"username\": \"{{ ssm-secure:username-secure-string-parameter }}\", \"password\": \"{{ ssm-secure:password-secure-string-parameter }}\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json}"], "JsonAttributesSources": "{\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-1\"}, {\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-2\"}", "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' ^
          --association-name "name" ^
          --schedule-expression "cron-or-rate-expression"
      ```

------

   1. **GitHub source**

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets Key=tag:TagKey,Values=TagValue \
          --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"owner-name\", \"repository\": \"name\", \"path\": \"path-to-directory-or-cookbook-to-download\", \"getOptions\": \"branch:branch-name\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json}"], "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' \
          --association-name "name" \
          --schedule-expression "cron-or-rate-expression"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets Key=tag:TagKey,Values=TagValue ^
          --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"owner-name\", \"repository\": \"name\", \"path\": \"path-to-directory-or-cookbook-to-download\", \"getOptions\": \"branch:branch-name\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json}"], "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' ^
          --association-name "name" ^
          --schedule-expression "cron-or-rate-expression"
      ```

------

      Here is an example.

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets Key=tag:OS,Values=Linux \
          --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"ChefRecipeTest\", \"repository\": \"ChefCookbooks\", \"path\": \"cookbooks/HelloWorld\", \"getOptions\": \"branch:master\"}"], "RunList":["{\"recipe[HelloWorld::HelloWorldRecipe]\", \"recipe[HelloWorld::InstallApp]\"}"], "JsonAttributesContent": ["{\"state\": \"visible\",\"colors\": {\"foreground\": \"light-blue\",\"background\": \"dark-gray\"}}"], "ChefClientVersion": ["14"], "ChefClientArguments":["{--fips}"], "WhyRun": false, "ComplianceSeverity": ["Medium"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["ChefComplianceResultsBucket"]}' \
          --association-name "MyChefAssociation" \
          --schedule-expression "cron(0 2 ? * SUN *)"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets Key=tag:OS,Values=Linux ^
          --parameters '{"SourceType":["GitHub"],"SourceInfo":["{\"owner\":\"ChefRecipeTest\", \"repository\": \"ChefCookbooks\", \"path\": \"cookbooks/HelloWorld\", \"getOptions\": \"branch:master\"}"], "RunList":["{\"recipe[HelloWorld::HelloWorldRecipe]\", \"recipe[HelloWorld::InstallApp]\"}"], "JsonAttributesContent": ["{\"state\": \"visible\",\"colors\": {\"foreground\": \"light-blue\",\"background\": \"dark-gray\"}}"], "ChefClientVersion": ["14"], "ChefClientArguments":["{--fips}"], "WhyRun": false, "ComplianceSeverity": ["Medium"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["ChefComplianceResultsBucket"]}' ^
          --association-name "MyChefAssociation" ^
          --schedule-expression "cron(0 2 ? * SUN *)"
      ```

------
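The doubly escaped `SourceInfo` string in the preceding commands is easy to get wrong by hand. As an optional sketch (placeholder values only, and with a simplified `RunList` format), you can build the `--parameters` value programmatically and let `json.dumps` handle the nested escaping:

```python
import json

# Placeholder values -- replace with your own repository details.
source_info = {
    "owner": "owner-name",
    "repository": "name",
    "path": "path-to-directory-or-cookbook-to-download",
    "getOptions": "branch:branch-name",
}

# Each value in --parameters is a list of strings; nested JSON such as
# SourceInfo must itself be serialized into a string.
parameters = {
    "SourceType": ["GitHub"],
    "SourceInfo": [json.dumps(source_info)],
    "RunList": ["recipe[cookbook-name-1::recipe-name]"],
    "ChefClientVersion": ["14"],
    "ComplianceType": ["Custom:Chef"],
}

# On Linux and macOS, wrap the printed string in single quotes and pass
# it to --parameters.
print(json.dumps(parameters))
```

This sketch omits several optional parameters shown in the full commands above, so check the `AWS-ApplyChefRecipes` document's parameter documentation before relying on it.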

   1. **HTTP source**

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets Key=tag:TagKey,Values=TagValue \
          --parameters '{"SourceType":["HTTP"],"SourceInfo":["{\"url\":\"url-to-zip-file|directory|cookbook\", \"authMethod\": \"auth-method\", \"username\": \"{{ ssm-secure:username-secure-string-parameter }}\", \"password\": \"{{ ssm-secure:password-secure-string-parameter }}\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json-content}"], "JsonAttributesSources": "{\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-1\"}, {\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-2\"}", "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' \
          --association-name "name" \
          --schedule-expression "cron-or-rate-expression"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets Key=tag:TagKey,Values=TagValue ^
          --parameters '{"SourceType":["HTTP"],"SourceInfo":["{\"url\":\"url-to-zip-file|directory|cookbook\", \"authMethod\": \"auth-method\", \"username\": \"{{ ssm-secure:username-secure-string-parameter }}\", \"password\": \"{{ ssm-secure:password-secure-string-parameter }}\"}"], "RunList":["{\"recipe[cookbook-name-1::recipe-name]\", \"recipe[cookbook-name-2::recipe-name]\"}"], "JsonAttributesContent": ["{custom-json-content}"], "JsonAttributesSources": "{\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-1\"}, {\"sourceType\":\"s3\", \"sourceInfo\":\"s3-bucket-endpoint-2\"}", "ChefClientVersion": ["version-number"], "ChefClientArguments":["{chef-client-arguments}"], "WhyRun": boolean, "ComplianceSeverity": ["severity-value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["s3-bucket-name"]}' ^
          --association-name "name" ^
          --schedule-expression "cron-or-rate-expression"
      ```

------

   1. **Amazon S3 source**

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets Key=tag:TagKey,Values=TagValue \
          --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_cookbook_to_download\"}"], "RunList":["{\"recipe[cookbook_name1::recipe_name]\", \"recipe[cookbook_name2::recipe_name]\"}"], "JsonAttributesContent": ["{Custom_JSON}"], "ChefClientVersion": ["version_number"], "ChefClientArguments":["{chef_client_arguments}"], "WhyRun": true_or_false, "ComplianceSeverity": ["severity_value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["amzn-s3-demo-bucket"]}' \
          --association-name "name" \
          --schedule-expression "cron_or_rate_expression"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets Key=tag:TagKey,Values=TagValue ^
          --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_cookbook_to_download\"}"], "RunList":["{\"recipe[cookbook_name1::recipe_name]\", \"recipe[cookbook_name2::recipe_name]\"}"], "JsonAttributesContent": ["{Custom_JSON}"], "ChefClientVersion": ["version_number"], "ChefClientArguments":["{chef_client_arguments}"], "WhyRun": true_or_false, "ComplianceSeverity": ["severity_value"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["amzn-s3-demo-bucket"]}' ^
          --association-name "name" ^
          --schedule-expression "cron_or_rate_expression"
      ```

------

      Here is an example.

------
#### [ Linux & macOS ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" \
          --targets "Key=tag:OS,Values=Linux" \
          --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/amzn-s3-demo-bucket/HelloWorld\"}"], "RunList":["{\"recipe[HelloWorld::HelloWorldRecipe]\", \"recipe[HelloWorld::InstallApp]\"}"], "JsonAttributesContent": ["{\"state\": \"visible\",\"colors\": {\"foreground\": \"light-blue\",\"background\": \"dark-gray\"}}"], "ChefClientVersion": ["14"], "ChefClientArguments":["{--fips}"], "WhyRun": false, "ComplianceSeverity": ["Medium"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["ChefComplianceResultsBucket"]}' \
          --association-name "name" \
          --schedule-expression "cron(0 2 ? * SUN *)"
      ```

------
#### [ Windows ]

      ```
      aws ssm create-association --name "AWS-ApplyChefRecipes" ^
          --targets "Key=tag:OS,Values=Linux" ^
          --parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/amzn-s3-demo-bucket/HelloWorld\"}"], "RunList":["{\"recipe[HelloWorld::HelloWorldRecipe]\", \"recipe[HelloWorld::InstallApp]\"}"], "JsonAttributesContent": ["{\"state\": \"visible\",\"colors\": {\"foreground\": \"light-blue\",\"background\": \"dark-gray\"}}"], "ChefClientVersion": ["14"], "ChefClientArguments":["{--fips}"], "WhyRun": false, "ComplianceSeverity": ["Medium"], "ComplianceType": ["Custom:Chef"], "ComplianceReportBucket": ["ChefComplianceResultsBucket"]}' ^
          --association-name "name" ^
          --schedule-expression "cron(0 2 ? * SUN *)"
      ```

------

      The system creates the association, and unless your specified cron or rate expression prevents it, the system runs the association on the target nodes.
**Note**  
State Manager associations don't support all cron and rate expressions. For more information about creating cron and rate expressions for associations, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

1. Run the following command to view the status of the association you just created. 

   ```
   aws ssm describe-association --association-id "ID"
   ```

## Viewing Chef resource compliance details


Systems Manager captures compliance information about Chef-managed resources in the Amazon S3 bucket that you specified for the **Compliance report bucket** value when you ran the `AWS-ApplyChefRecipes` document. Searching for information about Chef resource failures in an S3 bucket can be time consuming. Instead, you can view this information on the Systems Manager **Compliance** page.

A Systems Manager Compliance scan collects information about resources on your managed nodes that were created or checked in the most recent Chef run. The resources can include files, directories, `systemd` services, `yum` packages, templated files, `gem` packages, and dependent cookbooks, among others.

The **Compliance resources summary** section displays a count of resources that failed. In the following example, the **ComplianceType** is **Custom:Chef** and one resource is noncompliant.

**Note**  
`Custom:Chef` is the default **ComplianceType** value in the `AWS-ApplyChefRecipes` document. This value is customizable.

![\[Viewing counts in the Compliance resources summary section of the Compliance page.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-chef-compliance-summary.png)


The **Details overview for resources** section shows information about the AWS resource that isn't in compliance. This section also includes the Chef resource type against which compliance was run, severity of issue, compliance status, and links to more information when applicable.

![\[Viewing compliance details for a Chef managed resource failure\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/state-manager-chef-compliance-details.png)


**View output** shows the last 4,000 characters of the detailed status. Systems Manager starts with the exception as the first element, finds verbose messages, and shows them until it reaches the 4,000 character quota. This process displays the log messages that were output before the exception was thrown, which are the most relevant messages for troubleshooting.
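The truncation described above can be pictured with a rough sketch. This is an approximation of the displayed behavior, not the service's actual implementation:

```python
def tail_for_display(detailed_status: str, limit: int = 4000) -> str:
    """Keep the final `limit` characters of a detailed status so the
    exception and the log lines leading up to it survive truncation."""
    if len(detailed_status) <= limit:
        return detailed_status
    return detailed_status[-limit:]
```

In other words, when a detailed status exceeds the quota, the oldest log lines are dropped first and the end of the log, where the exception appears, is preserved.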

For information about how to view compliance information, see [AWS Systems Manager Compliance](systems-manager-compliance.md).

**Important**  
If the State Manager association fails, no compliance data is reported. For example, if Systems Manager attempts to download a Chef cookbook from an S3 bucket that the node doesn't have permission to access, the association fails, and Systems Manager reports no compliance data.

# Walkthrough: Automatically update SSM Agent with the AWS CLI


The following procedure walks you through the process of creating a State Manager association using the AWS Command Line Interface. The association automatically updates the SSM Agent according to a schedule that you specify. For more information about SSM Agent, see [Working with SSM Agent](ssm-agent.md). To customize the update schedule for SSM Agent using the console, see [Automatically updating SSM Agent](ssm-agent-automatic-updates.md#ssm-agent-automatic-updates-console).

To be notified about SSM Agent updates, subscribe to the [SSM Agent Release Notes](https://github.com/aws/amazon-ssm-agent/blob/master/RELEASENOTES.md) page on GitHub.

**Before you begin**  
Before you complete the following procedure, verify that you have at least one running Amazon Elastic Compute Cloud (Amazon EC2) instance for Linux, macOS, or Windows Server that is configured for Systems Manager. For more information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md). 

If you create an association by using either the AWS CLI or AWS Tools for Windows PowerShell, use the `--Targets` parameter to target instances, as shown in the following example. Don't use the `--InstanceID` parameter. The `--InstanceID` parameter is a legacy parameter.

**To create an association for automatically updating SSM Agent**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to create an association by targeting instances using Amazon Elastic Compute Cloud (Amazon EC2) tags. Replace each *example resource placeholder* with your own information. The `Schedule` parameter sets a schedule to run the association every Sunday morning at 2:00 a.m. (UTC).

   State Manager associations don't support all cron and rate expressions. For more information about creating cron and rate expressions for associations, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --targets Key=tag:tag_key,Values=tag_value \
   --name AWS-UpdateSSMAgent \
   --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --targets Key=tag:tag_key,Values=tag_value ^
   --name AWS-UpdateSSMAgent ^
   --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------

   You can target multiple instances by specifying instance IDs in a comma-separated list.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --targets Key=instanceids,Values=instance_ID,instance_ID,instance_ID \
   --name AWS-UpdateSSMAgent \
   --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --targets Key=instanceids,Values=instance_ID,instance_ID,instance_ID ^
   --name AWS-UpdateSSMAgent ^
   --schedule-expression "cron(0 2 ? * SUN *)"
   ```

------

   You can specify the version of the SSM Agent you want to update to.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --targets Key=instanceids,Values=instance_ID,instance_ID,instance_ID \
   --name AWS-UpdateSSMAgent \
   --schedule-expression "cron(0 2 ? * SUN *)" \
   --parameters version=ssm_agent_version_number
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --targets Key=instanceids,Values=instance_ID,instance_ID,instance_ID ^
   --name AWS-UpdateSSMAgent ^
   --schedule-expression "cron(0 2 ? * SUN *)" ^
   --parameters version=ssm_agent_version_number
   ```

------

   The system returns information like the following.

   ```
   {
       "AssociationDescription": {
           "ScheduleExpression": "cron(0 2 ? * SUN *)",
           "Name": "AWS-UpdateSSMAgent",
           "Overview": {
               "Status": "Pending",
               "DetailedStatus": "Creating"
           },
           "AssociationId": "123..............",
           "DocumentVersion": "$DEFAULT",
           "LastUpdateAssociationDate": 1504034257.98,
           "Date": 1504034257.98,
           "AssociationVersion": "1",
           "Targets": [
               {
                   "Values": [
                       "TagValue"
                   ],
                   "Key": "tag:TagKey"
               }
           ]
       }
   }
   ```

   The system attempts to create the association on the instance(s) and applies the state following creation. The association status shows `Pending`.
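   When scripting this step, you can capture the response and extract the association ID and status. The following Python sketch parses the sample response shown above:

```python
import json

# Sample response shape returned by `aws ssm create-association`.
response_text = """
{
    "AssociationDescription": {
        "ScheduleExpression": "cron(0 2 ? * SUN *)",
        "Name": "AWS-UpdateSSMAgent",
        "Overview": {"Status": "Pending", "DetailedStatus": "Creating"},
        "AssociationId": "123..............",
        "AssociationVersion": "1"
    }
}
"""

desc = json.loads(response_text)["AssociationDescription"]
association_id = desc["AssociationId"]   # use with describe-association
status = desc["Overview"]["Status"]      # "Pending" immediately after creation
print(association_id, status)
```

   The extracted association ID is what you pass to `describe-association` in the next step.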

1. Run the following command to view an updated status of the association you created. 

   ```
   aws ssm list-associations
   ```

   If your instances *aren't* running the most recent version of the SSM Agent, the status shows `Failed`. When a new version of SSM Agent is published, the association automatically installs the new agent, and the status shows `Success`.

# Walkthrough: Automatically update PV drivers on EC2 instances for Windows Server


Amazon Machine Images (AMIs) for Windows Server provided by Amazon contain a set of drivers that permit access to virtualized hardware. Amazon Elastic Compute Cloud (Amazon EC2) uses these drivers to map instance store and Amazon Elastic Block Store (Amazon EBS) volumes to their devices. We recommend that you install the latest drivers to improve the stability and performance of your EC2 instances for Windows Server. For more information about PV drivers, see [AWS PV Drivers](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/xen-drivers-overview.html#xen-driver-awspv).

The following walkthrough shows you how to configure a State Manager association to automatically download and install new AWS PV drivers when the drivers become available. State Manager is a tool in AWS Systems Manager.

**Before you begin**  
Before you complete the following procedure, verify that you have at least one Amazon EC2 instance for Windows Server running that is configured for Systems Manager. For more information, see [Setting up managed nodes for AWS Systems Manager](systems-manager-setting-up-nodes.md). 

**To create a State Manager association that automatically updates PV drivers**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**.

1. Choose **Create association**.

1. In the **Name** field, enter a descriptive name for the association.

1. In the **Document** list, choose `AWS-ConfigureAWSPackage`.

1. In the **Parameters** area, do the following:
   + For **Action**, choose **Install**.
   + For **Installation type**, choose **Uninstall and reinstall**.
**Note**  
In-place upgrades are not supported for this package. It must be uninstalled and reinstalled.
   + For **Name**, enter **AWSPVDriver**.

     You don't need to enter anything for **Version** and **Additional Arguments**.

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or edge devices manually, or specifying a resource group.
**Tip**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.
**Note**  
If you choose to target instances by using tags, and you specify tags that map to Linux instances, the association succeeds on the Windows Server instances but fails on the Linux instances. The overall status of the association shows **Failed**.

1. In the **Specify schedule** area, choose whether to run the association on a schedule that you configure, or just once. Updated PV drivers are released several times a year, so you can schedule the association to run once a month if you want.

1. In the **Advanced options** area, for **Compliance severity**, choose a severity level for the association. Compliance reporting indicates whether the association state is compliant or noncompliant, along with the severity level you indicate here. For more information, see [About State Manager association compliance](compliance-about.md#compliance-about-association).

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. (Optional) In the **CloudWatch alarm** section, for **Alarm name**, choose a CloudWatch alarm to apply to your association for monitoring. 
**Note**  
Note the following information about this step.  
The alarms list displays a maximum of 100 alarms. If you don't see your alarm in the list, use the AWS Command Line Interface to create the association. For more information, see [Create an association (command line)](state-manager-associations-creating.md#create-state-manager-association-commandline).
To attach a CloudWatch alarm to your command, the IAM principal that creates the association must have permission for the `iam:createServiceLinkedRole` action. For more information about CloudWatch alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html).
If your alarm activates, any pending command invocations or automations do not run.

1. Choose **Create association**, and then choose **Close**. The system attempts to create the association on the instances and immediately apply the state. 

   If you created the association on one or more Amazon EC2 instances for Windows Server, the status changes to **Success**. If your instances aren't configured for Systems Manager, or if you inadvertently targeted Linux instances, the status shows **Failed**.

   If the status is **Failed**, choose the association ID, choose the **Resources** tab, and then verify that the association was successfully created on your EC2 instances for Windows Server. If EC2 instances for Windows Server show a status of **Failed**, verify that the SSM Agent is running on the instance, and verify that the instance is configured with an AWS Identity and Access Management (IAM) role for Systems Manager. For more information, see [Setting up Systems Manager unified console for an organization](systems-manager-setting-up-organizations.md).

# AWS Systems Manager change management tools

AWS Systems Manager provides the following tools for making changes to your AWS resources.

**Topics**
+ [AWS Systems Manager Automation](systems-manager-automation.md)
+ [AWS Systems Manager Change Calendar](systems-manager-change-calendar.md)
+ [AWS Systems Manager Change Manager](change-manager.md)
+ [AWS Systems Manager Documents](documents.md)
+ [AWS Systems Manager Maintenance Windows](maintenance-windows.md)
+ [AWS Systems Manager Quick Setup](systems-manager-quick-setup.md)

# AWS Systems Manager Automation

Automation, a tool in AWS Systems Manager, simplifies common maintenance, deployment, and remediation tasks for AWS services like Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), Amazon Redshift, Amazon Simple Storage Service (Amazon S3), and many more. To get started with Automation, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/automation). In the navigation pane, choose **Automation**. 

Automation helps you to build automated solutions to deploy, configure, and manage AWS resources at scale. With Automation, you have granular control over the concurrency of your automations. This means you can specify how many resources to target concurrently, and how many errors can occur before an automation is stopped. 

To help you get started with Automation, AWS develops and maintains several pre-defined runbooks. Depending on your use case, you can use these pre-defined runbooks that perform a variety of tasks, or create your own custom runbooks that might better suit your needs. To monitor the progress and status of your automations, you can use the Systems Manager Automation console, or your preferred command line tool. Automation also integrates with Amazon EventBridge to help you build event-driven architecture at scale.

**Note**  
For customers who are new to Systems Manager Automation as of August 14, 2025, the Automation free tier is not available. For customers already using Automation, the free tier of service ends on December 31, 2025. For information about current service costs, see [AWS Systems Manager pricing](https://aws.amazon.com/systems-manager/pricing/).

## How can Automation benefit my organization?


Automation offers these benefits:
+ **Scripting support in runbook content**

  Using the `aws:executeScript` action, you can run custom Python and PowerShell functions directly from your runbooks. This provides you greater flexibility in creating your custom runbooks because you can complete various tasks that other Automation actions don't support. You also have greater control over the logic of the runbook. For an example of how this action can be used and how it can help to improve an existing automated solution, see [Authoring Automation runbooks](automation-authoring-runbooks.md).
+  **Run automations across multiple AWS accounts and AWS Regions from a centralized location** 

  Administrators can run automations on resources across multiple accounts and Regions from the Systems Manager console.
+  **Enhanced operations security** 

  Administrators have a centralized place to grant and revoke access to runbooks. Using only AWS Identity and Access Management (IAM) policies, you can control which individual users or groups in your organization can use Automation and which runbooks they can access.
+  **Automate common IT tasks** 

  Automating common tasks can help improve operational efficiency, enforce organizational standards, and reduce operator errors. For example, you can use the `AWS-UpdateCloudFormationStackWithApproval` runbook to update resources that were deployed by using an AWS CloudFormation template. The update applies a new template. You can configure the Automation to request approval by one or more users before the update begins.
+  **Safely perform disruptive tasks in bulk** 

  Automation includes features, like rate controls, that allow you to control the deployment of an automation across your fleet by specifying a concurrency value and an error threshold. For more information about working with rate controls, see [Run automated operations at scale](running-automations-scale.md).
+ **Streamline complex tasks**

  Automation provides pre-defined runbooks that streamline complex and time-consuming tasks such as creating golden Amazon Machine Images (AMIs). For example, you can use the `AWS-UpdateLinuxAmi` and `AWS-UpdateWindowsAmi` runbooks to create golden AMIs from a source AMI. Using these runbooks, you can run custom scripts before and after updates are applied. You can also include or exclude specific software packages from being installed. For examples of how to use these runbooks, see [Tutorials](automation-tutorials.md).
+ **Define constraints for inputs**

  You can define constraints in custom runbooks to limit the values that Automation will accept for a particular input parameter. For example, `allowedPattern` will only accept values for an input parameter that match the regular expression you define. If you specify `allowedValues` for an input parameter, only the values you've specified in the runbook are accepted.
+  **Log automation action output to Amazon CloudWatch Logs** 

  To meet operational or security requirements in your organization, you might need to provide a record of the scripts run during a runbook. With CloudWatch Logs, you can monitor, store, and access log files from various AWS services. You can send output from the `aws:executeScript` action to a CloudWatch Logs log group for debugging and troubleshooting purposes. Log data can be sent to your log group with or without AWS KMS encryption using your KMS key. For more information, see [Logging Automation action output with CloudWatch Logs](automation-action-logging.md).
+  **Amazon EventBridge integration** 

  Automation is supported as a *target* type in Amazon EventBridge rules. This means you can trigger runbooks by using events. For more information, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md).
+ **Share organizational best practices**

  You can define best practices for resource management, operations tasks, and more in runbooks that you share across accounts and Regions.
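To make the scripting and input-constraint benefits above concrete, the following YAML is an illustrative sketch of a schema 0.3 runbook, not a production example. The parameter names, the regular expression, and the `python3.11` runtime are assumptions; verify them against the Systems Manager Automation actions reference.

```yaml
schemaVersion: '0.3'
description: Illustrative runbook combining input constraints and a script step.
parameters:
  InstanceId:
    type: String
    description: (Required) The instance to act on.
    allowedPattern: '^i-[a-f0-9]{8,17}$'  # rejects values that aren't instance IDs
  DesiredState:
    type: String
    allowedValues:                        # only these two values are accepted
      - Start
      - Stop
mainSteps:
  - name: reportRequest
    action: aws:executeScript
    inputs:
      Runtime: python3.11
      Handler: handler
      InputPayload:
        instanceId: '{{ InstanceId }}'
        desiredState: '{{ DesiredState }}'
      Script: |
        def handler(events, context):
            # events carries InputPayload; the return value becomes step output
            return {"message": events["desiredState"] + " requested for " + events["instanceId"]}
    outputs:
      - Name: message
        Selector: $.Payload.message
        Type: String
```

If a caller supplies an input that fails `allowedPattern` or isn't in `allowedValues`, Automation rejects the request before any step runs.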

## Who should use Automation?

+ Any AWS customer who wants to improve their operational efficiency at scale, reduce errors associated with manual intervention, and reduce time to resolution of common issues.
+ Infrastructure experts who want to automate deployment and configuration tasks.
+ Administrators who want to reliably resolve common issues, improve troubleshooting efficiency, and reduce repetitive operations.
+ Users who want to automate a task they normally perform manually.

## What is an automation?


An *automation* consists of all of the tasks defined in a runbook and performed by the Automation service. Automation uses the following components to run automations.



| Concept | Details | 
| --- | --- | 
|  Automation runbook  |  A Systems Manager Automation runbook defines the automation (the actions that Systems Manager performs on your managed nodes and AWS resources). Automation includes several pre-defined runbooks that you can use to perform common tasks like restarting one or more Amazon EC2 instances or creating an Amazon Machine Image (AMI). You can create your own runbooks as well. Runbooks use YAML or JSON, and they include steps and parameters that you specify. Steps run in sequential order. For more information, see [Creating your own runbooks](automation-documents.md). Runbooks are Systems Manager documents of type `Automation`, as opposed to `Command`, `Policy`, or `Session` documents. Runbooks support schema version 0.3. Command documents use schema version 1.2, 2.0, or 2.2. Policy documents use schema version 2.0 or later.  | 
|  Automation action  |  The automation defined in a runbook includes one or more steps. Each step is associated with a particular action. The action determines the inputs, behavior, and outputs of the step. Steps are defined in the `mainSteps` section of your runbook. Automation supports 20 distinct action types. For more information, see the [Systems Manager Automation actions reference](automation-actions.md).  | 
|  Automation quota  |  Each AWS account can run 100 automations simultaneously. This includes child automations (automations that are started by another automation) and rate control automations. If you attempt to run more automations than this, Systems Manager adds the additional automations to a queue and displays a status of Pending. This quota can be adjusted using adaptive concurrency. For more information, see [Allowing Automation to adapt to your concurrency needs](adaptive-concurrency.md). For more information about running automations, see [Run an automated operation powered by Systems Manager Automation](running-simple-automations.md).  | 
|  Automation queue quota  |  If you attempt to run more automations than the concurrent automation limit, subsequent automations are added to a queue. Each AWS account can queue 5,000 automations. When an automation is complete (or reaches a terminal state), the first automation in the queue is started.  | 
|  Rate control automation quota  |  Each AWS account can run 25 rate control automations simultaneously. If you attempt to run more rate control automations than the concurrent rate control automation limit, Systems Manager adds the subsequent rate control automations to a queue and displays a status of Pending. For more information about running rate control automations, see [Run automated operations at scale](running-automations-scale.md).  | 
|  Rate control automation queue quota  |  If you attempt to run more automations than the concurrent rate control automation limit, subsequent automations are added to a queue. Each AWS account can queue 1,000 rate control automations. When an automation is complete (or reaches a terminal state), the first automation in the queue is started.  | 

**Topics**
+ [How can Automation benefit my organization?](#automation-benefits)
+ [Who should use Automation?](#automation-who)
+ [What is an automation?](#what-is-an-automation)
+ [Setting up Automation](automation-setup.md)
+ [Run an automated operation powered by Systems Manager Automation](running-simple-automations.md)
+ [Rerunning automation executions](automation-rerun-executions.md)
+ [Run an automation that requires approvals](running-automations-require-approvals.md)
+ [Run automated operations at scale](running-automations-scale.md)
+ [Running automations in multiple AWS Regions and accounts](running-automations-multiple-accounts-regions.md)
+ [Run automations based on EventBridge events](running-automations-event-bridge.md)
+ [Run an automation step by step](automation-working-executing-manually.md)
+ [Scheduling automations with State Manager associations](scheduling-automations-state-manager-associations.md)
+ [Schedule automations with maintenance windows](scheduling-automations-maintenance-windows.md)
+ [Systems Manager Automation actions reference](automation-actions.md)
+ [Creating your own runbooks](automation-documents.md)
+ [Systems Manager Automation Runbook Reference](automation-documents-reference.md)
+ [Tutorials](automation-tutorials.md)
+ [Learn about statuses returned by Systems Manager Automation](automation-statuses.md)
+ [Troubleshooting Systems Manager Automation](automation-troubleshooting.md)

# Setting up Automation


To set up Automation, a tool in AWS Systems Manager, you must verify user access to the Automation service and, depending on your use case, configure roles so that the service can perform actions on your resources. We also recommend that you opt in to the adaptive concurrency mode in your Automation preferences. Adaptive concurrency automatically scales your automation quota to meet your needs. For more information, see [Allowing Automation to adapt to your concurrency needs](adaptive-concurrency.md).

To ensure proper access to AWS Systems Manager Automation, review the following user and service role requirements.

## Verifying user access for runbooks


Verify that you have permission to use runbooks. If your user, group, or role is assigned administrator permissions, then you have access to Systems Manager Automation. If you don't have administrator permissions, then an administrator must give you permission by assigning the `AmazonSSMFullAccess` managed policy, or a policy that provides comparable permissions, to your user, group, or role.

**Important**  
The IAM policy `AmazonSSMFullAccess` grants permissions to Systems Manager actions. However, some runbooks require permissions to other services, such as the runbook `AWS-ReleaseElasticIP`, which requires IAM permissions for `ec2:ReleaseAddress`. Therefore, you must review the actions taken in a runbook to ensure your user, group, or role is assigned the necessary permissions to perform the actions included in the runbook.
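
For example, a minimal identity-based policy that grants the additional `ec2:ReleaseAddress` permission mentioned above might look like the following sketch. This is an illustration only; where your use case allows, scope the `Resource` element to specific resources instead of `*`.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:ReleaseAddress",
            "Resource": "*"
        }
    ]
}
```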

## Configuring a service role (assume role) access for automations


Automations can be initiated under the context of a service role (or *assume role*). This allows the service to perform actions on your behalf. If you don't specify an assume role, Automation uses the context of the user who invoked the automation.

However, the following situations require that you specify a service role for Automation:
+ When you want to restrict a user's permissions on a resource, but you want the user to run an automation that requires elevated permissions. In this scenario, you can create a service role with elevated permissions and allow the user to run the automation.
+ When you create a Systems Manager State Manager association that runs a runbook.
+ When you have operations that you expect to run longer than 12 hours.
+ When you're running a runbook not owned by Amazon that uses the `aws:executeScript` action to call an AWS API operation or to act on an AWS resource. For information, see [Permissions for using runbooks](automation-document-script-considerations.md#script-permissions).

If you need to create a service role for Automation, you can use one of the following methods.

**Topics**
+ [Verifying user access for runbooks](#automation-setup-user-access)
+ [Configuring a service role (assume role) access for automations](#automation-setup-configure-role)
+ [Create service roles for Automation by using CloudFormation](automation-setup-cloudformation.md)
+ [Create the service roles for Automation using the console](automation-setup-iam.md)
+ [Setting up identity based policies examples](automation-setup-identity-based-policies.md)
+ [Allowing Automation to adapt to your concurrency needs](adaptive-concurrency.md)
+ [Configuring automatic retry for throttled operations](automation-throttling-retry.md)
+ [Implement change controls for Automation](automation-change-calendar-integration.md)

# Create service roles for Automation by using CloudFormation


You can create a service role for Automation, a tool in AWS Systems Manager, from an AWS CloudFormation template. After you create the service role, you can specify the service role in runbooks using the parameter `AutomationAssumeRole`.
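
In a custom runbook, this pattern typically looks like the following sketch of a runbook's parameter and `assumeRole` fields; `AutomationAssumeRole` is the parameter name conventionally used by AWS runbooks.

```
"parameters": {
    "AutomationAssumeRole": {
        "type": "String",
        "description": "(Optional) The ARN of the role that allows Automation to perform actions on your behalf."
    }
},
"assumeRole": "{{ AutomationAssumeRole }}"
```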

## Create the service role using CloudFormation


Use the following procedure to create the required AWS Identity and Access Management (IAM) role for Systems Manager Automation by using CloudFormation.

**To create the required IAM role**

1. Download and unzip the [https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWS-SystemsManager-AutomationServiceRole.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWS-SystemsManager-AutomationServiceRole.zip) file. This file includes the `AWS-SystemsManager-AutomationServiceRole.yaml` CloudFormation template file.

1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. Choose **Create Stack**.

1. In the **Specify template** section, choose **Upload a template file**.

1. Choose **Browse**, and then choose the `AWS-SystemsManager-AutomationServiceRole.yaml` CloudFormation template file.

1. Choose **Next**.

1. On the **Specify stack details** page, in the **Stack name** field, enter a name. 

1. On the **Configure stack options** page, you don’t need to make any selections. Choose **Next**.

1. On the **Review** page, scroll down and choose the **I acknowledge that CloudFormation might create IAM resources** option.

1. Choose **Create**.

CloudFormation shows the **CREATE\_IN\_PROGRESS** status for approximately three minutes. The status changes to **CREATE\_COMPLETE** after the stack is created and your roles are ready to use.

**Important**  
If you run an automation workflow that invokes other services by using an AWS Identity and Access Management (IAM) service role, be aware that the service role must be configured with permission to invoke those services. This requirement applies to all AWS Automation runbooks (`AWS-*` runbooks) such as the `AWS-ConfigureS3BucketLogging`, `AWS-CreateDynamoDBBackup`, and `AWS-RestartEC2Instance` runbooks, to name a few. This requirement also applies to any custom Automation runbooks you create that invoke other AWS services by using actions that call other services. For example, if you use the `aws:executeAwsApi`, `aws:createStack`, or `aws:copyImage` actions, configure the service role with permission to invoke those services. You can give permissions to other AWS services by adding an IAM inline policy to the role. For more information, see [(Optional) Add an Automation inline policy or customer managed policy to invoke other AWS services](automation-setup-iam.md#add-inline-policy).

## Copy role information for Automation


Use the following procedure to copy information about the Automation service role from the CloudFormation console. You must specify these roles when you use a runbook.

**Note**  
You don't need to copy role information using this procedure if you run the `AWS-UpdateLinuxAmi` or `AWS-UpdateWindowsAmi` runbooks. These runbooks already have the required roles specified as default values. The roles specified in these runbooks use IAM managed policies. 

**To copy the role names**

1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. Select the Automation **Stack name** you created in the previous procedure.

1. Choose the **Resources** tab.

1. Choose the **Physical ID** link for **AutomationServiceRole**. The IAM console opens to a summary of the Automation service role.

1. Copy the Amazon Resource Name (ARN) next to **Role ARN**. The ARN is similar to the following: `arn:aws:iam::123456789012:role/AutomationServiceRole`

1. Paste the ARN into a text file to use later.

You have finished configuring the service role for Automation. You can now use the Automation service role ARN in your runbooks.
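
For example, you might pass the role ARN at runtime with the AWS CLI; the runbook name, instance ID, and role ARN shown here are placeholders.

```
aws ssm start-automation-execution \
    --document-name "AWS-RestartEC2Instance" \
    --parameters "AutomationAssumeRole=arn:aws:iam::123456789012:role/AutomationServiceRole,InstanceId=i-02573cafcfEXAMPLE"
```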

# Create the service roles for Automation using the console


If you need to create a service role for Automation, a tool in AWS Systems Manager, complete the following tasks. For more information about when a service role is required for Automation, see [Setting up Automation](automation-setup.md).

**Topics**
+ [Task 1: Create a service role for Automation](#create-service-role)
+ [Task 2: Attach the iam:PassRole policy to your Automation role](#attach-passrole-policy)

## Task 1: Create a service role for Automation


Use the following procedure to create a service role (or *assume role*) for Systems Manager Automation.

**Note**  
You can also use this role in runbooks, such as the `AWS-CreateManagedLinuxInstance` runbook. Using this role, or the Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role, in runbooks allows Automation to perform actions in your environment, such as launch new instances and perform actions on your behalf.

**To create an IAM role and allow Automation to assume it**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**, and then choose **Create role**.

1. Under **Select type of trusted entity**, choose **AWS service**.

1. In the **Choose a use case** section, choose **Systems Manager**, and then choose **Next: Permissions**.

1. On the **Attached permissions policy** page, search for the **AmazonSSMAutomationRole** policy, choose it, and then choose **Next: Review**. 

1. On the **Review** page, enter a name in the **Role name** box, and then enter a description.

1. Choose **Create role**. The system returns you to the **Roles** page.

1. On the **Roles** page, choose the role you just created to open the **Summary** page. Note the **Role Name** and **Role ARN**. You will specify the role ARN when you attach the **iam:PassRole** policy to your IAM account in the next procedure. You can also specify the role name and the ARN in runbooks.

**Note**  
The `AmazonSSMAutomationRole` policy assigns the Automation role permission to a subset of AWS Lambda functions within your account. These functions begin with "Automation". If you plan to use Automation with Lambda functions, the Lambda ARN must use the following format:  
`"arn:aws:lambda:*:*:function:Automation*"`  
If you have existing Lambda functions whose ARNs don't use this format, then you must also attach an additional Lambda policy to your automation role, such as the **AWSLambdaRole** policy. The additional policy or role must provide broader access to Lambda functions within the AWS account.

After creating your service role, we recommend editing the trust policy to help prevent the cross-service confused deputy problem. The *confused deputy problem* is a security issue where an entity that doesn't have permission to perform an action can coerce a more-privileged entity to perform the action. In AWS, cross-service impersonation can result in the confused deputy problem. Cross-service impersonation can occur when one service (the *calling service*) calls another service (the *called service*). The calling service can be manipulated to use its permissions to act on another customer's resources in a way it should not otherwise have permission to access. To prevent this, AWS provides tools that help you protect your data for all services with service principals that have been given access to resources in your account. 

We recommend using the [https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) and [https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount) global condition context keys in resource policies to limit the permissions that Automation gives another service to the resource. If the `aws:SourceArn` value doesn't contain the account ID, such as an Amazon S3 bucket ARN, you must use both global condition context keys to limit permissions. If you use both global condition context keys and the `aws:SourceArn` value contains the account ID, the `aws:SourceAccount` value and the account in the `aws:SourceArn` value must use the same account ID when used in the same policy statement. Use `aws:SourceArn` if you want only one resource to be associated with the cross-service access. Use `aws:SourceAccount` if you want to allow any resource in that account to be associated with the cross-service use. The value of `aws:SourceArn` must be the ARN for automation executions. If you don't know the full ARN of the resource or if you're specifying multiple resources, use the `aws:SourceArn` global context condition key with wildcards (`*`) for the unknown portions of the ARN. For example, `arn:aws:ssm:*:123456789012:automation-execution/*`. 

The following example shows how you can use the `aws:SourceArn` and `aws:SourceAccount` global condition context keys for Automation to prevent the confused deputy problem.

------
#### [ JSON ]

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ssm.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        },
        "ArnLike": {
          "aws:SourceArn": "arn:aws:ssm:*:123456789012:automation-execution/*"
        }
      }
    }
  ]
}
```

------

**To modify the role's trust policy**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. In the list of roles in your account, choose the name of your Automation service role.

1. Choose the **Trust relationships** tab, and then choose **Edit trust relationship**.

1. Edit the trust policy using the `aws:SourceArn` and `aws:SourceAccount` global condition context keys for Automation to prevent the confused deputy problem.

1. Choose **Update Trust Policy** to save your changes.

### (Optional) Add an Automation inline policy or customer managed policy to invoke other AWS services


If you run an automation that invokes other AWS services by using an IAM service role, the service role must be configured with permission to invoke those services. This requirement applies to all AWS Automation runbooks (`AWS-*` runbooks) such as the `AWS-ConfigureS3BucketLogging`, `AWS-CreateDynamoDBBackup`, and `AWS-RestartEC2Instance` runbooks, to name a few. This requirement also applies to any custom runbooks you create that invoke other AWS services by using actions that call other services. For example, if you use the `aws:executeAwsApi`, `aws:createStack`, or `aws:copyImage` actions, then you must configure the service role with permission to invoke those services. You can give permissions to other AWS services by adding an IAM inline policy or customer managed policy to the role. 

**To embed an inline policy for a service role (IAM console)**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. In the list, choose the name of the role that you want to edit.

1. Choose the **Permissions** tab.

1. In the **Add permissions** dropdown, choose **Attach policies** or **Create inline policy**.

1. If you choose **Attach policies**, select the check box next to the policy you want to add and choose **Add permissions**.

1. If you choose **Create inline policy**, choose the **JSON** tab.

1. Enter a JSON Policy document for the AWS services you want to invoke. Here are two example JSON Policy documents.

   **Amazon S3 PutObject and GetObject Example**

------
#### [ JSON ]

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject",
                   "s3:GetObject"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
           }
       ]
   }
   ```

------

   **Amazon EC2 CreateSnapshot and DescribeSnapShots Example**

------
#### [ JSON ]

   ```
   {
      "Version": "2012-10-17",
      "Statement":[
         {
            "Effect":"Allow",
            "Action":"ec2:CreateSnapshot",
            "Resource":"*"
         },
         {
            "Effect":"Allow",
            "Action":"ec2:DescribeSnapshots",
            "Resource":"*"
         }
      ]
   }
   ```

------

   For details about the IAM policy language, see [IAM JSON Policy Reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies.html) in the *IAM User Guide*.

1. When you're finished, choose **Review policy**. The [Policy Validator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_policy-validator.html) reports any syntax errors.

1. On the **Review policy** page, enter a **Name** for the policy that you're creating. Review the policy **Summary** to see the permissions that are granted by your policy. Then choose **Create policy** to save your work.

1. After you create an inline policy, it's automatically embedded in your role.

## Task 2: Attach the iam:PassRole policy to your Automation role


Use the following procedure to attach the `iam:PassRole` policy to your Automation service role. This allows the Automation service to pass the role to other services or Systems Manager tools when running automations.
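
The visual editor steps that follow produce an inline policy similar to the following sketch; the role ARN shown is a placeholder for the Automation role ARN you copied in Task 1.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/AutomationServiceRole"
        }
    ]
}
```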

**To attach the iam:PassRole policy to your Automation role**

1. In the **Summary** page for the role you just created, choose the **Permissions** tab.

1. Choose **Add inline policy**.

1. On the **Create policy** page, choose the **Visual editor** tab.

1. Choose **Service**, and then choose **IAM**.

1. Choose **Select actions**.

1. In the **Filter actions** text box, type **PassRole**, and then choose the **PassRole** option.

1. Choose **Resources**. Verify that **Specific** is selected, and then choose **Add ARN**.

1. In the **Specify ARN for role** field, paste the Automation role ARN that you copied at the end of Task 1. The system populates the **Account** and **Role name with path** fields.
**Note**  
If you want the Automation service role to attach an IAM instance profile role to an EC2 instance, then you must add the ARN of the IAM instance profile role. This allows the Automation service role to pass the IAM instance profile role to the target EC2 instance.

1. Choose **Add**.

1. Choose **Review policy**.

1. On the **Review Policy** page, enter a name and then choose **Create Policy**.

# Setting up identity based policies examples


The following sections provide example IAM identity-based policies for AWS Systems Manager Automation service. For more information about how to create an IAM identity-based policy using these example JSON Policy documents, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html#access_policies_create-json-editor) in the *IAM User Guide*.

**Note**  
All examples contain fictitious account IDs. The account ID shouldn't be specified in the Amazon Resource Name (ARN) for AWS owned public documents.

**Examples**
+ [Example 1: Allow a user to run an automation document and view the automation execution](#automation-setup-identity-based-policies-example-1)
+ [Example 2: Allow a user to run a specific version of an automation document](#automation-setup-identity-based-policies-example-2)
+ [Example 3: Allow a user to execute automation documents with a specific tag](#automation-setup-identity-based-policies-example-3)
+ [Example 4: Allow a user to run an automation document when a specific tag parameter is provided for the automation execution](#automation-setup-identity-based-policies-example-4)

## Example 1: Allow a user to run an automation document and view the automation execution


The following example IAM policy allows a user to do the following:
+ Run the automation document specified in the policy. The name of the document is determined by the following entry.

  ```
  arn:aws:ssm:*:111122223333:document/{{DocumentName}}
  ```
+ Stop and send signals to an automation execution.
+ View details about the automation execution after it has been started.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "ssm:StartAutomationExecution",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:ssm:*:111122223333:document/{{DocumentName}}",
                "arn:aws:ssm:*:111122223333:automation-execution/*"
            ]
        },
        {
            "Action": [
                "ssm:StopAutomationExecution",
                "ssm:GetAutomationExecution",
                "ssm:DescribeAutomationExecutions",
                "ssm:DescribeAutomationStepExecutions",
                "ssm:SendAutomationSignal"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:automation-execution/*"
            ],
            "Effect": "Allow"
        }
    ]
}
```

------

## Example 2: Allow a user to run a specific version of an automation document


The following example IAM policy allows a user to run a specific version of an automation document:
+ The name of the automation document is determined by the following entry.

  ```
  arn:aws:ssm:*:111122223333:document/{{DocumentName}}
  ```
+ The version of the automation document is determined by the following entry.

  ```
  "ssm:DocumentVersion": "5"
  ```

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "ssm:StartAutomationExecution",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:ssm:*:111122223333:document/{{DocumentName}}"
            ],
            "Condition": {
                "ForAnyValue:StringEquals": {
                   "ssm:DocumentVersion": ["5"]
                }
            }
        },
        {
            "Action": [
                "ssm:StartAutomationExecution"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:automation-execution/*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "ssm:StopAutomationExecution",
                "ssm:GetAutomationExecution",
                "ssm:DescribeAutomationExecutions",
                "ssm:DescribeAutomationStepExecutions",
                "ssm:SendAutomationSignal"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:automation-execution/*"
            ],
            "Effect": "Allow"
        }
    ]
}
```

------

## Example 3: Allow a user to execute automation documents with a specific tag


The following example IAM policy allows a user to run any automation document that has a specific tag:
+ The documents the user can run are determined by the following entry.

  ```
  arn:aws:ssm:*:111122223333:document/*
  ```
+ The required tag on the automation document is determined by the following entry.

  ```
  "aws:ResourceTag/stage": "production"
  ```

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "ssm:StartAutomationExecution",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:ssm:*:111122223333:document/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/stage": "production"
                }
            }
        },
        {
            "Action": [
                "ssm:StartAutomationExecution"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:automation-execution/*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "ssm:StopAutomationExecution",
                "ssm:GetAutomationExecution",
                "ssm:DescribeAutomationExecutions",
                "ssm:DescribeAutomationStepExecutions",
                "ssm:SendAutomationSignal"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:automation-execution/*"
            ],
            "Effect": "Allow"
        }
    ]
}
```

------

## Example 4: Allow a user to run an automation document when a specific tag parameter is provided for the automation execution


The following example IAM policy grants permissions to a user to run automation documents when a specific tag parameter is provided for the automation execution:
+ Run the automation document specified in the policy. The name of the document is determined by the following entry.

  ```
  arn:aws:ssm:*:111122223333:document/{{DocumentName}}
  ```
+ Must provide a specific tag for the automation execution. The tag required on the automation execution resource is determined by the following entry.

  ```
  "aws:ResourceTag/environment": "beta"
  ```
+ Stop and send signals to automation executions that have the specified tag.
+ View details about the automation executions that have the specified tag.
+ Add the specified tag to SSM resources.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "ssm:StartAutomationExecution",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:ssm:*:111122223333:document/{{DocumentName}}"
            ]
        },
        {
            "Action": [
                "ssm:StartAutomationExecution",
                "ssm:StopAutomationExecution",
                "ssm:GetAutomationExecution",
                "ssm:DescribeAutomationExecutions",
                "ssm:DescribeAutomationStepExecutions",
                "ssm:SendAutomationSignal"
            ],
            "Resource": [
                "arn:aws:ssm:*:111122223333:automation-execution/*"
            ],
            "Effect": "Allow",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/environment": "beta"
                }
            }
        },
        {
            "Action": "ssm:AddTagsToResource",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:ssm:*:111122223333:automation-execution/*"
            ]
        }
    ]
}
```

------

# Allowing Automation to adapt to your concurrency needs


By default, Automation allows you to run up to 100 concurrent automations at a time. Automation also provides an optional setting that you can use to adjust your concurrency automation quota automatically. With this setting, your concurrency automation quota can accommodate up to 500 concurrent automations, depending on available resources. 

**Note**  
If your automation calls API operations, adaptively scaling to your targets can result in throttling exceptions. If recurring throttling exceptions occur when running automations with adaptive concurrency turned on, you might have to request a quota increase for the API operation, if one is available.

**To turn on adaptive concurrency using the AWS Management Console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Enable adaptive concurrency**.

1. Choose **Save**.

**To turn on adaptive concurrency using the command line**
+ Open the AWS CLI or Tools for Windows PowerShell and run the following command to turn on adaptive concurrency for your account in the requesting Region.

------
#### [ Linux & macOS ]

  ```
  aws ssm update-service-setting \
      --setting-id /ssm/automation/enable-adaptive-concurrency \
      --setting-value True
  ```

------
#### [ Windows ]

  ```
  aws ssm update-service-setting ^
      --setting-id /ssm/automation/enable-adaptive-concurrency ^
      --setting-value True
  ```

------
#### [ PowerShell ]

  ```
  Update-SSMServiceSetting `
      -SettingId "/ssm/automation/enable-adaptive-concurrency" `
      -SettingValue "True"
  ```

------
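
To verify the current value of the setting, you can call the `GetServiceSetting` API operation with the same setting ID; the Linux & macOS syntax is shown here.

```
aws ssm get-service-setting \
    --setting-id /ssm/automation/enable-adaptive-concurrency
```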

# Configuring automatic retry for throttled operations


There is a limit on the number of concurrent automation executions that can run in each account. Attempting to run several automations concurrently in an account can lead to throttling issues. You can use the automatic throttling retry capability to configure retry behavior for throttled automation steps.

Automatic throttling retry for automation actions provides a more resilient execution environment for high-scale operations. The throttling retry capability supports all [automation actions](automation-actions.md) except for `aws:executeScript`.

The throttling retry setting works in addition to the existing `maxAttempts` step property. When both are configured, the system first attempts throttling retries within the specified time limit, then applies the `maxAttempts` setting if the step continues to fail.
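
For example, a step in a custom runbook might declare `maxAttempts` as in the following sketch (the step name and instance ID are illustrative). With a throttling retry time limit configured, throttling retries run first within that limit, and the `maxAttempts` value applies if the step still fails.

```
{
    "name": "restartInstance",
    "action": "aws:executeAwsApi",
    "maxAttempts": 3,
    "inputs": {
        "Service": "ec2",
        "Api": "RebootInstances",
        "InstanceIds": ["i-02573cafcfEXAMPLE"]
    }
}
```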

**To configure throttling retry using the AWS Management Console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. In the **Throttling retry time limit** field, enter a value between 0 and 3600 seconds. This specifies the maximum time that the system retries a step that is throttled.

1. Choose **Save**.

**To configure throttling retry using the command line**
+ Open the AWS CLI or Tools for Windows PowerShell and run the following command to configure throttling retry for your account in the requesting Region.

------
#### [ Linux & macOS ]

  ```
  aws ssm update-service-setting \
      --setting-id /ssm/automation/throttled-retry-time-limit \
      --setting-value 3600
  ```

------
#### [ Windows ]

  ```
  aws ssm update-service-setting ^
      --setting-id /ssm/automation/throttled-retry-time-limit ^
      --setting-value 3600
  ```

------
#### [ PowerShell ]

  ```
  Update-SSMServiceSetting `
      -SettingId "/ssm/automation/throttled-retry-time-limit" `
      -SettingValue "3600"
  ```

------

# Implement change controls for Automation


By default, Automation allows you to use runbooks without date and time constraints. By integrating Automation with Change Calendar, you can implement change controls to all automations in your AWS account. With this setting, AWS Identity and Access Management (IAM) principals in your account can only run automations during the time periods allowed by your change calendar. To learn more about working with Change Calendar, see [Working with Change Calendar](systems-manager-change-calendar-working.md).

**To turn on change controls (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**.

1. Choose the **Preferences** tab, and then choose **Edit**.

1. Select the check box next to **Turn on Change Calendar integration**.

1. In the **Choose a change calendar** dropdown list, choose the change calendar that you want Automation to follow.

1. Choose **Save**.

# Run an automated operation powered by Systems Manager Automation


When you run an automation, by default, the automation runs in the context of the user who initiated it. This means, for example, if your user has administrator permissions, then the automation runs with administrator permissions and full access to the resources being configured by the automation. As a security best practice, we recommend that you run automations by using an IAM service role, known in this case as an *assume role*, that is configured with the `AmazonSSMAutomationRole` managed policy. You might need to add additional IAM policies to your assume role to use various runbooks. Using an IAM service role to run automation is called *delegated administration*.

When you use a service role, the automation is allowed to run against the AWS resources, but the user who ran the automation has restricted access (or no access) to those resources. For example, you can configure a service role and use it with Automation to restart one or more Amazon Elastic Compute Cloud (Amazon EC2) instances. Automation is a tool in AWS Systems Manager. The automation restarts the instances, but the service role doesn't give the user permission to access those instances.

You can specify a service role at runtime when you run an automation, or you can create custom runbooks and specify the service role directly in the runbook. If you specify a service role, either at runtime or in a runbook, the automation runs in the context of that service role. If you don't specify a service role, the system creates a temporary session in the context of the user and runs the automation.

**Note**  
You must specify a service role for automation that you expect to run longer than 12 hours. If you start a long-running automation in the context of a user, the user's temporary session expires after 12 hours.

Delegated administration improves the security and control of your AWS resources. It also improves auditing, because actions are performed on your resources by a central service role instead of by multiple IAM principals.

**Before you begin**  
Before you complete the following procedures, you must create the IAM service role and configure a trust relationship for Automation, a tool in AWS Systems Manager. For more information, see [Task 1: Create a service role for Automation](automation-setup-iam.md#create-service-role).
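For reference, the trust relationship for an Automation assume role allows the Systems Manager service principal to assume the role. A minimal policy sketch follows; confirm the exact policy against the setup topic linked above.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ssm.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```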

The following procedures describe how to use the Systems Manager console or your preferred command line tool to run a simple automation.

## Running a simple automation (console)


The following procedure describes how to use the Systems Manager console to run a simple automation.

**To run a simple automation**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**, and then choose **Execute automation**.

1. In the **Automation document** list, choose a runbook. Choose one or more options in the **Document categories** pane to filter SSM documents according to their purpose. To view a runbook that you own, choose the **Owned by me** tab. To view a runbook that is shared with your account, choose the **Shared with me** tab. To view all runbooks, choose the **All documents** tab.
**Note**  
You can view information about a runbook by choosing the runbook name.

1. In the **Document details** section, verify that **Document version** is set to the version that you want to run. The system includes the following version options: 
   + **Default version at runtime** – Choose this option if the Automation runbook is updated periodically and a new default version is assigned.
   + **Latest version at runtime** – Choose this option if the Automation runbook is updated periodically, and you want to run the version that was most recently updated.
   + **1 (Default)** – Choose this option to run the first version of the document, which is the default.

1. Choose **Next**.

1. In the **Execution Mode** section, choose **Simple execution**.

1. In the **Input parameters** section, specify the required inputs. Optionally, you can choose an IAM service role from the **AutomationAssumeRole** list.

1. (Optional) Choose a CloudWatch alarm to apply to your automation for monitoring. To attach a CloudWatch alarm to your automation, the IAM principal that starts the automation must have permission for the `iam:createServiceLinkedRole` action. For more information about CloudWatch alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html). Note that if your alarm activates, the automation is stopped. If you use AWS CloudTrail, you will see the API call in your trail. 

1. Choose **Execute**. 

The console displays the status of the automation. If the automation fails to run, see [Troubleshooting Systems Manager Automation](automation-troubleshooting.md).

After an automation execution completes, you can rerun the execution with the same or modified parameters. For more information, see [Rerunning automation executions](automation-rerun-executions.md).

## Running a simple automation (command line)


The following procedure describes how to use the AWS CLI (on Linux or Windows) or AWS Tools for PowerShell to run a simple automation.

**To run a simple automation**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Run the following command to start a simple automation. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
       --document-name runbook name \
       --parameters runbook parameters
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
       --document-name runbook name ^
       --parameters runbook parameters
   ```

------
#### [ PowerShell ]

   ```
   Start-SSMAutomationExecution `
     -DocumentName runbook name `
     -Parameter runbook parameters
   ```

------

   Here is an example using the runbook `AWS-RestartEC2Instance` to restart the specified EC2 instance.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
       --document-name "AWS-RestartEC2Instance" \
       --parameters "InstanceId=i-02573cafcfEXAMPLE"
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
       --document-name "AWS-RestartEC2Instance" ^
       --parameters "InstanceId=i-02573cafcfEXAMPLE"
   ```

------
#### [ PowerShell ]

   ```
   Start-SSMAutomationExecution `
     -DocumentName AWS-RestartEC2Instance `
     -Parameter @{"InstanceId"="i-02573cafcfEXAMPLE"}
   ```

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
       "AutomationExecutionId": "4105a4fc-f944-11e6-9d32-0123456789ab"
   }
   ```

------
#### [ Windows ]

   ```
   {
       "AutomationExecutionId": "4105a4fc-f944-11e6-9d32-0123456789ab"
   }
   ```

------
#### [ PowerShell ]

   ```
   4105a4fc-f944-11e6-9d32-0123456789ab
   ```

------

1. Run the following command to retrieve the status of the automation.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-automation-executions \
       --filter "Key=ExecutionId,Values=4105a4fc-f944-11e6-9d32-0123456789ab"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-automation-executions ^
       --filter "Key=ExecutionId,Values=4105a4fc-f944-11e6-9d32-0123456789ab"
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAutomationExecutionList | `
     Where {$_.AutomationExecutionId -eq "4105a4fc-f944-11e6-9d32-0123456789ab"}
   ```

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
       "AutomationExecutionMetadataList": [
           {
               "AutomationExecutionStatus": "InProgress",
               "CurrentStepName": "stopInstances",
               "Outputs": {},
               "DocumentName": "AWS-RestartEC2Instance",
               "AutomationExecutionId": "4105a4fc-f944-11e6-9d32-0123456789ab",
               "DocumentVersion": "1",
               "ResolvedTargets": {
                   "ParameterValues": [],
                   "Truncated": false
               },
               "AutomationType": "Local",
               "Mode": "Auto",
               "ExecutionStartTime": 1564600648.159,
               "CurrentAction": "aws:changeInstanceState",
               "ExecutedBy": "arn:aws:sts::123456789012:assumed-role/Administrator/Admin",
               "LogFile": "",
               "Targets": []
           }
       ]
   }
   ```

------
#### [ Windows ]

   ```
   {
       "AutomationExecutionMetadataList": [
           {
               "AutomationExecutionStatus": "InProgress",
               "CurrentStepName": "stopInstances",
               "Outputs": {},
               "DocumentName": "AWS-RestartEC2Instance",
               "AutomationExecutionId": "4105a4fc-f944-11e6-9d32-0123456789ab",
               "DocumentVersion": "1",
               "ResolvedTargets": {
                   "ParameterValues": [],
                   "Truncated": false
               },
               "AutomationType": "Local",
               "Mode": "Auto",
               "ExecutionStartTime": 1564600648.159,
               "CurrentAction": "aws:changeInstanceState",
               "ExecutedBy": "arn:aws:sts::123456789012:assumed-role/Administrator/Admin",
               "LogFile": "",
               "Targets": []
           }
       ]
   }
   ```

------
#### [ PowerShell ]

   ```
   AutomationExecutionId       : 4105a4fc-f944-11e6-9d32-0123456789ab
   AutomationExecutionStatus   : InProgress
   AutomationType              : Local
   CurrentAction               : aws:changeInstanceState
   CurrentStepName             : startInstances
   DocumentName                : AWS-RestartEC2Instance
   DocumentVersion             : 1
   ExecutedBy                  : arn:aws:sts::123456789012:assumed-role/Administrator/Admin
   ExecutionEndTime            : 1/1/0001 12:00:00 AM
   ExecutionStartTime          : 7/31/2019 7:17:28 PM
   FailureMessage              : 
   LogFile                     : 
   MaxConcurrency              : 
   MaxErrors                   : 
   Mode                        : Auto
   Outputs                     : {}
   ParentAutomationExecutionId : 
   ResolvedTargets             : Amazon.SimpleSystemsManagement.Model.ResolvedTargets
   Target                      : 
   TargetMaps                  : {}
   TargetParameterName         : 
   Targets                     : {}
   ```

------
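When scripting around these responses, you typically read the execution status from `AutomationExecutionMetadataList`. The following minimal Python sketch is illustrative only; it assumes the JSON has already been parsed into a dict, for example from the AWS CLI output above.

```python
def execution_status(response):
    """Return the status of the first matching execution, or None if
    the filter matched nothing."""
    executions = response.get("AutomationExecutionMetadataList", [])
    if not executions:
        return None
    return executions[0]["AutomationExecutionStatus"]

# Trimmed version of the sample response shown above.
sample = {
    "AutomationExecutionMetadataList": [
        {
            "AutomationExecutionStatus": "InProgress",
            "DocumentName": "AWS-RestartEC2Instance",
            "AutomationExecutionId": "4105a4fc-f944-11e6-9d32-0123456789ab",
        }
    ]
}
print(execution_status(sample))  # InProgress
```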

# Rerunning automation executions


You can rerun AWS Systems Manager automation executions to repeat tasks with either identical or modified parameters. The rerun capability allows you to efficiently replicate automation executions without manually recreating automation configurations, reducing operational overhead and potential configuration errors.

When you rerun an automation execution, Systems Manager preserves the original runbook parameters, Amazon CloudWatch alarms, and tags from the previous execution. The system creates a new execution with a new execution ID and updated timestamps. You can rerun any type of automation execution, including simple executions, rate control executions, cross-account and cross-Region executions, and manual executions.

## Rerun an automation execution (console)


The following procedures describe how to use the Systems Manager console to rerun an automation execution.

**To rerun an automation execution from the Automation home page**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**.

1. In the executions list, select the execution that you want to rerun.

1. Choose **Rerun execution**.

1. On the **Execute automation document** page, review the pre-populated parameters, execution mode, and target configuration from the original execution.

1. (Optional) Modify any parameters, targets, or other settings as needed for your rerun.

1. Choose **Execute** to start the rerun with a new execution ID.

**To rerun an automation execution from the execution details page**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**.

1. Choose the execution ID of the automation that you want to rerun.

1. On the execution details page, choose **Rerun execution**.

1. On the **Execute automation document** page, review the pre-populated parameters, execution mode, and target configuration from the original execution.

1. (Optional) Modify any parameters, targets, or other settings as needed for your rerun.

1. Choose **Execute** to start the rerun with a new execution ID.

**To copy an automation execution to a new execution**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**.

1. Choose the execution ID of the automation that you want to copy.

1. On the execution details page, choose **Actions**, and then choose **Copy to new**.

1. On the **Execute automation document** page, review the pre-populated parameters, execution mode, and target configuration from the original execution.

1. (Optional) Modify any parameters, targets, or other settings as needed for your new execution.

1. Choose **Execute** to start the new execution.

# Run an automation that requires approvals


The following procedures describe how to use the AWS Systems Manager console and AWS Command Line Interface (AWS CLI) to run an automation with approvals using simple execution. The automation uses the automation action `aws:approve`, which temporarily pauses the automation until the designated principals either approve or deny the action. The automation runs in the context of the current user. This means that you don't need to configure additional IAM permissions as long as you have permission to use the runbook and any actions called by the runbook. If you have administrator permissions in IAM, then you already have permission to use this runbook.

**Before you begin**  
In addition to the standard inputs required by the runbook, the `aws:approve` action requires the following two parameters: 
+ A list of approvers. The list of approvers must contain at least one approver in the form of a user name or a user ARN. If multiple approvers are provided, a corresponding minimum approval count must also be specified within the runbook. 
+ An Amazon Simple Notification Service (Amazon SNS) topic ARN. The Amazon SNS topic name must start with `Automation`.

This procedure assumes that you have already created an Amazon SNS topic, which is required to deliver the approval request. For information, see [Create a Topic](https://docs.aws.amazon.com/sns/latest/dg/sns-getting-started.html#CreateTopic) in the *Amazon Simple Notification Service Developer Guide*.
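The constraints above can be checked before you start the automation. The following small Python check is illustrative only; the function itself is hypothetical and not part of any AWS SDK, and it encodes just the two rules stated in this section.

```python
def validate_approval_inputs(approvers, sns_topic_arn):
    """Check the aws:approve input rules described in this section."""
    if not approvers:
        raise ValueError(
            "At least one approver (user name or user ARN) is required.")
    # The SNS topic *name* is the final segment of the topic ARN and
    # must start with "Automation".
    topic_name = sns_topic_arn.split(":")[-1]
    if not topic_name.startswith("Automation"):
        raise ValueError(
            "The Amazon SNS topic name must start with 'Automation'.")
    return True
```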

## Running an automation with approvers (console)


**To run an automation with approvers**

The following procedure describes how to use the Systems Manager console to run an automation with approvers.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**, and then choose **Execute automation**.

1. In the **Automation document** list, choose a runbook. Choose one or more options in the **Document categories** pane to filter SSM documents according to their purpose. To view a runbook that you own, choose the **Owned by me** tab. To view a runbook that is shared with your account, choose the **Shared with me** tab. To view all runbooks, choose the **All documents** tab.
**Note**  
You can view information about a runbook by choosing the runbook name.

1. In the **Document details** section, verify that **Document version** is set to the version that you want to run. The system includes the following version options: 
   + **Default version at runtime** – Choose this option if the Automation runbook is updated periodically and a new default version is assigned.
   + **Latest version at runtime** – Choose this option if the Automation runbook is updated periodically, and you want to run the version that was most recently updated.
   + **1 (Default)** – Choose this option to run the first version of the document, which is the default.

1. Choose **Next**.

1. On the **Execute automation document** page, choose **Simple execution**.

1. In the **Input parameters** section, specify the required input parameters.

   For example, if you chose the `AWS-StartEC2InstanceWithApproval` runbook, then you must specify or choose instance IDs for the **InstanceId** parameter. 

1. In the **Approvers** section, specify the user names or user ARNs of approvers for the automation action.

1. In the **SNSTopicARN** section, specify the SNS topic ARN to use for sending approval notification. The SNS topic name must start with **Automation**.

1. Optionally, you can choose an IAM service role from the **AutomationAssumeRole** list. If you're targeting more than 100 accounts and Regions, you must specify the `AWS-SystemsManager-AutomationAdministrationRole`.

1. Choose **Execute automation**. 

The specified approver receives an Amazon SNS notification with details to approve or reject the automation. The approval request is valid for 7 days from the date of issue, and approvers can respond using the Systems Manager console or the AWS Command Line Interface (AWS CLI).

If the automation is approved, it continues to run the steps included in the specified runbook. The console displays the status of the automation. If the automation fails to run, see [Troubleshooting Systems Manager Automation](automation-troubleshooting.md).

**To approve or deny an automation**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**, and then select the automation that was run in the previous procedure.

1. Choose **Actions** and then choose **Approve/Deny**.

1. Choose to **Approve** or **Deny** and optionally provide a comment.

1. Choose **Submit**.

## Running an automation with approvers (command line)


The following procedure describes how to use the AWS CLI (on Linux or Windows) or AWS Tools for PowerShell to run an automation with approvers.

**To run an automation with approvers**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Run the following command to run an automation with approvers. Replace each *example resource placeholder* with your own information. For the document name, specify a runbook that includes the automation action `aws:approve`.

   For `Approvers`, specify the user names or user ARNs of approvers for the action. For `SNSTopic`, specify the SNS topic ARN to use to send approval notification. The Amazon SNS topic name must start with `Automation`.
**Note**  
The specific names of the parameter values for approvers and the SNS topic depend on the values specified within the runbook you choose. 

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
       --document-name "AWS-StartEC2InstanceWithApproval" \
       --parameters "InstanceId=i-02573cafcfEXAMPLE,Approvers=arn:aws:iam::123456789012:role/Administrator,SNSTopicArn=arn:aws:sns:region:123456789012:AutomationApproval"
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
       --document-name "AWS-StartEC2InstanceWithApproval" ^
       --parameters "InstanceId=i-02573cafcfEXAMPLE,Approvers=arn:aws:iam::123456789012:role/Administrator,SNSTopicArn=arn:aws:sns:region:123456789012:AutomationApproval"
   ```

------
#### [ PowerShell ]

   ```
   Start-SSMAutomationExecution `
       -DocumentName AWS-StartEC2InstanceWithApproval `
       -Parameters @{
           "InstanceId"="i-02573cafcfEXAMPLE"
           "Approvers"="arn:aws:iam::123456789012:role/Administrator"
           "SNSTopicArn"="arn:aws:sns:region:123456789012:AutomationApproval"
       }
   ```

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
       "AutomationExecutionId": "df325c6d-b1b1-4aa0-8003-6cb7338213c6"
   }
   ```

------
#### [ Windows ]

   ```
   {
       "AutomationExecutionId": "df325c6d-b1b1-4aa0-8003-6cb7338213c6"
   }
   ```

------
#### [ PowerShell ]

   ```
   df325c6d-b1b1-4aa0-8003-6cb7338213c6
   ```

------

**To approve an automation**
+ Run the following command to approve an automation. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

  ```
  aws ssm send-automation-signal \
      --automation-execution-id "df325c6d-b1b1-4aa0-8003-6cb7338213c6" \
      --signal-type "Approve" \
      --payload "Comment=your comments"
  ```

------
#### [ Windows ]

  ```
  aws ssm send-automation-signal ^
      --automation-execution-id "df325c6d-b1b1-4aa0-8003-6cb7338213c6" ^
      --signal-type "Approve" ^
      --payload "Comment=your comments"
  ```

------
#### [ PowerShell ]

  ```
  Send-SSMAutomationSignal `
      -AutomationExecutionId df325c6d-b1b1-4aa0-8003-6cb7338213c6 `
      -SignalType Approve `
      -Payload @{"Comment"="your comments"}
  ```

------

  There is no output if the command succeeds.

**To deny an automation**
+ Run the following command to deny an automation. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

  ```
  aws ssm send-automation-signal \
      --automation-execution-id "df325c6d-b1b1-4aa0-8003-6cb7338213c6" \
      --signal-type "Deny" \
      --payload "Comment=your comments"
  ```

------
#### [ Windows ]

  ```
  aws ssm send-automation-signal ^
      --automation-execution-id "df325c6d-b1b1-4aa0-8003-6cb7338213c6" ^
      --signal-type "Deny" ^
      --payload "Comment=your comments"
  ```

------
#### [ PowerShell ]

  ```
  Send-SSMAutomationSignal `
      -AutomationExecutionId df325c6d-b1b1-4aa0-8003-6cb7338213c6 `
      -SignalType Deny `
      -Payload @{"Comment"="your comments"}
  ```

------

  There is no output if the command succeeds.

# Run automated operations at scale


With AWS Systems Manager Automation, you can run automations on a fleet of AWS resources by using *targets*. Additionally, you can control the deployment of the automation across your fleet by specifying a concurrency value and an error threshold. The concurrency and error threshold features are collectively called *rate controls*. The concurrency value determines how many resources are allowed to run the automation simultaneously. Automation also provides an adaptive concurrency mode you can opt in to. Adaptive concurrency automatically scales your automation quota from 100 concurrently running automations up to 500. An error threshold determines how many automations are allowed to fail before Systems Manager stops sending the automation to other resources.
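Concurrency and error-threshold values can each be an absolute count (for example `10`) or a percentage of the target set (for example `25%`). The following sketch shows one way a percentage might resolve to a count; the floor-with-minimum-of-one rounding here is an assumption for illustration, not the service's documented rounding behavior.

```python
import math

def resolve_rate_value(value, target_count):
    """Resolve a rate-control value like '10' or '25%' to an absolute count."""
    if value.endswith("%"):
        pct = float(value[:-1]) / 100.0
        # Assumed rounding: floor, but never below one target.
        return max(1, math.floor(pct * target_count))
    return int(value)
```

For example, with 50 targets, a `MaxErrors` of `25%` would resolve to 12 under this model, so Automation would stop sending the automation to new resources after the twelfth error.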

For more information about concurrency and error thresholds, see [Control automations at scale](running-automations-scale-controls.md). For more information about targets, see [Mapping targets for an automation](running-automations-map-targets.md).

The following procedures show you how to turn on adaptive concurrency, and how to run an automation with targets and rate controls by using the Systems Manager console and AWS Command Line Interface (AWS CLI).

## Running an automation with targets and rate controls (console)


The following procedure describes how to use the Systems Manager console to run an automation with targets and rate controls.

**To run an automation with targets and rate controls**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**, and then choose **Execute automation**.

1. In the **Automation document** list, choose a runbook. Choose one or more options in the **Document categories** pane to filter SSM documents according to their purpose. To view a runbook that you own, choose the **Owned by me** tab. To view a runbook that is shared with your account, choose the **Shared with me** tab. To view all runbooks, choose the **All documents** tab.
**Note**  
You can view information about a runbook by choosing the runbook name.

1. In the **Document details** section, verify that **Document version** is set to the version that you want to run. The system includes the following version options: 
   + **Default version at runtime** – Choose this option if the Automation runbook is updated periodically and a new default version is assigned.
   + **Latest version at runtime** – Choose this option if the Automation runbook is updated periodically, and you want to run the version that was most recently updated.
   + **1 (Default)** – Choose this option to run the first version of the document, which is the default.

1. Choose **Next**.

1. In the **Execution Mode** section, choose **Rate Control**. You must use this mode or **Multi-account and Region** if you want to use targets and rate controls.

1. In the **Targets** section, choose how you want to target the AWS resources where you want to run the Automation. These options are required.

   1. Use the **Parameter** list to choose a parameter. The items in the **Parameter** list are determined by the parameters in the Automation runbook that you selected at the start of this procedure. By choosing a parameter, you define the type of resource on which the automation runs. 

   1. Use the **Targets** list to choose how you want to target resources.

      1. If you chose to target resources by using parameter values, then enter the parameter value for the parameter you chose in the **Input parameters** section.

      1. If you chose to target resources by using AWS Resource Groups, then choose the name of the group from the **Resource Group** list.

      1. If you chose to target resources by using tags, then enter the tag key and (optionally) the tag value in the fields provided. Choose **Add**.

      1. If you want to run an Automation runbook on all instances in the current AWS account and AWS Region, then choose **All instances**.

1. In the **Input parameters** section, specify the required inputs. Optionally, you can choose an IAM service role from the **AutomationAssumeRole** list.
**Note**  
You might not need to choose some of the options in the **Input parameters** section. This is because you targeted resources by using tags or a resource group. For example, if you chose the `AWS-RestartEC2Instance` runbook, then you don't need to specify or choose instance IDs in the **Input parameters** section. The Automation execution locates the instances to restart by using the tags or resource group you specified.

1. Use the options in the **Rate control** section to restrict the number of AWS resources that can run the Automation within each account-Region pair. 

   In the **Concurrency** section, choose an option: 
   + Choose **targets** to enter an absolute number of targets that can run the Automation workflow simultaneously.
   + Choose **percentage** to enter a percentage of the target set that can run the Automation workflow simultaneously.

1. In the **Error threshold** section, choose an option:
   + Choose **errors** to enter an absolute number of errors allowed before Automation stops sending the workflow to other resources.
   + Choose **percentage** to enter a percentage of errors allowed before Automation stops sending the workflow to other resources.

1. (Optional) Choose a CloudWatch alarm to apply to your automation for monitoring. To attach a CloudWatch alarm to your automation, the IAM principal that starts the automation must have permission for the `iam:createServiceLinkedRole` action. For more information about CloudWatch alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html). Note that if your alarm activates, the automation is stopped. If you use AWS CloudTrail, you will see the API call in your trail.

1. Choose **Execute**. 

To view automations started by your rate control automation, in the navigation pane, choose **Automation**, and then select **Show child automations**.

After an automation execution completes, you can rerun the execution with the same or modified parameters. For more information, see [Rerunning automation executions](automation-rerun-executions.md).

## Running an automation with targets and rate controls (command line)


The following procedure describes how to use the AWS CLI (on Linux or Windows) or AWS Tools for PowerShell to run an automation with targets and rate controls.

**To run an automation with targets and rate controls**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Run the following command to view a list of documents.

------
#### [ Linux & macOS ]

   ```
   aws ssm list-documents
   ```

------
#### [ Windows ]

   ```
   aws ssm list-documents
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMDocumentList
   ```

------

   Note the name of the runbook that you want to use.

1. Run the following command to view details about the runbook. Replace the *runbook name* with the name of the runbook whose details you want to view. Also, note a parameter name (for example, `InstanceId`) that you want to use for the `--target-parameter-name` option. This parameter determines the type of resource on which the automation runs.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-document \
       --name runbook name
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-document ^
       --name runbook name
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMDocumentDescription `
       -Name runbook name
   ```

------

1. Create a command that uses the targets and rate control options you want to run. Replace each *example resource placeholder* with your own information.

   *Targeting using tags*

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
       --document-name runbook name \
       --targets Key=tag:key name,Values=value \
       --target-parameter-name parameter name \
       --parameters "input parameter name=input parameter value,input parameter 2 name=input parameter 2 value" \
       --max-concurrency 10 \
       --max-errors 25%
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
       --document-name runbook name ^
       --targets Key=tag:key name,Values=value ^
       --target-parameter-name parameter name ^
       --parameters "input parameter name=input parameter value,input parameter 2 name=input parameter 2 value" ^
       --max-concurrency 10 ^
       --max-errors 25%
   ```

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
   $Targets.Key = "tag:key name"
   $Targets.Values = "value"
   
   Start-SSMAutomationExecution `
       -DocumentName "runbook name" `
       -Targets $Targets `
       -TargetParameterName "parameter name" `
       -Parameter @{"input parameter name"="input parameter value";"input parameter 2 name"="input parameter 2 value"} `
       -MaxConcurrency "10" `
       -MaxError "25%"
   ```

------

   *Targeting using parameter values*

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
       --document-name runbook name \
       --targets Key=ParameterValues,Values=value,value 2,value 3 \
       --target-parameter-name parameter name \
       --parameters "input parameter name=input parameter value" \
       --max-concurrency 10 \
       --max-errors 25%
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
       --document-name runbook name ^
       --targets Key=ParameterValues,Values=value,value 2,value 3 ^
       --target-parameter-name parameter name ^
       --parameters "input parameter name=input parameter value" ^
       --max-concurrency 10 ^
       --max-errors 25%
   ```

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
   $Targets.Key = "ParameterValues"
   $Targets.Values = "value","value 2","value 3"
   
   Start-SSMAutomationExecution `
       -DocumentName "runbook name" `
       -Targets $Targets `
       -TargetParameterName "parameter name" `
       -Parameter @{"input parameter name"="input parameter value"} `
       -MaxConcurrency "10" `
       -MaxError "25%"
   ```

------

   *Targeting using AWS Resource Groups*

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
       --document-name runbook name \
       --targets Key=ResourceGroup,Values=Resource group name \
       --target-parameter-name parameter name \
       --parameters "input parameter name=input parameter value" \
       --max-concurrency 10 \
       --max-errors 25%
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
       --document-name runbook name ^
       --targets Key=ResourceGroup,Values=Resource group name ^
       --target-parameter-name parameter name ^
       --parameters "input parameter name=input parameter value" ^
       --max-concurrency 10 ^
       --max-errors 25%
   ```

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
   $Targets.Key = "ResourceGroup"
   $Targets.Values = "Resource group name"
   
   Start-SSMAutomationExecution `
       -DocumentName "runbook name" `
       -Targets $Targets `
       -TargetParameterName "parameter name" `
       -Parameter @{"input parameter name"="input parameter value"} `
       -MaxConcurrency "10" `
       -MaxError "25%"
   ```

------

   *Targeting all Amazon EC2 instances in the current AWS account and AWS Region*

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
       --document-name runbook name \
       --targets "Key=AWS::EC2::Instance,Values=*"  \
       --target-parameter-name instanceId \
       --parameters "input parameter name=input parameter value" \
       --max-concurrency 10 \
       --max-errors 25%
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
       --document-name runbook name ^
       --targets Key=AWS::EC2::Instance,Values=* ^
       --target-parameter-name instanceId ^
       --parameters "input parameter name=input parameter value" ^
       --max-concurrency 10 ^
       --max-errors 25%
   ```

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
   $Targets.Key = "AWS::EC2::Instance"
   $Targets.Values = "*"
   
   Start-SSMAutomationExecution `
       -DocumentName "runbook name" `
       -Targets $Targets `
       -TargetParameterName "instanceId" `
       -Parameter @{"input parameter name"="input parameter value"} `
       -MaxConcurrency "10" `
       -MaxError "25%"
   ```

------

   The command returns an execution ID. Copy this ID to the clipboard. You can use this ID to view the status of the automation.

------
#### [ Linux & macOS ]

   ```
   {
       "AutomationExecutionId": "a4a3c0e9-7efd-462a-8594-01234EXAMPLE"
   }
   ```

------
#### [ Windows ]

   ```
   {
       "AutomationExecutionId": "a4a3c0e9-7efd-462a-8594-01234EXAMPLE"
   }
   ```

------
#### [ PowerShell ]

   ```
   a4a3c0e9-7efd-462a-8594-01234EXAMPLE
   ```

------

1. Run the following command to view the automation. Replace each *automation execution ID* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-automation-executions \
       --filter Key=ExecutionId,Values=automation execution ID
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-automation-executions ^
       --filter Key=ExecutionId,Values=automation execution ID
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAutomationExecutionList | `
       Where {$_.AutomationExecutionId -eq "automation execution ID"}
   ```

------

1. To view details about the automation progress, run the following command. Replace each *automation execution ID* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-automation-execution \
       --automation-execution-id automation execution ID
   ```

------
#### [ Windows ]

   ```
   aws ssm get-automation-execution ^
       --automation-execution-id automation execution ID
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAutomationExecution `
       -AutomationExecutionId automation execution ID
   ```

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
       "AutomationExecution": {
           "StepExecutionsTruncated": false,
           "AutomationExecutionStatus": "Success",
           "MaxConcurrency": "1",
           "Parameters": {},
           "MaxErrors": "1",
           "Outputs": {},
           "DocumentName": "AWS-StopEC2Instance",
           "AutomationExecutionId": "a4a3c0e9-7efd-462a-8594-01234EXAMPLE",
           "ResolvedTargets": {
               "ParameterValues": [
                   "i-02573cafcfEXAMPLE"
               ],
               "Truncated": false
           },
           "ExecutionEndTime": 1564681619.915,
           "Targets": [
               {
                   "Values": [
                       "DEV"
                   ],
                   "Key": "tag:ENV"
               }
           ],
           "DocumentVersion": "1",
           "ExecutionStartTime": 1564681576.09,
           "ExecutedBy": "arn:aws:sts::123456789012:assumed-role/Administrator/Admin",
           "StepExecutions": [
               {
                   "Inputs": {
                       "InstanceId": "i-02573cafcfEXAMPLE"
                   },
                   "Outputs": {},
                   "StepName": "i-02573cafcfEXAMPLE",
                   "ExecutionEndTime": 1564681619.093,
                   "StepExecutionId": "86c7b811-3896-4b78-b897-01234EXAMPLE",
                   "ExecutionStartTime": 1564681576.836,
                   "Action": "aws:executeAutomation",
                   "StepStatus": "Success"
               }
           ],
           "TargetParameterName": "InstanceId",
           "Mode": "Auto"
       }
   }
   ```

------
#### [ Windows ]

   ```
   {
       "AutomationExecution": {
           "StepExecutionsTruncated": false,
           "AutomationExecutionStatus": "Success",
           "MaxConcurrency": "1",
           "Parameters": {},
           "MaxErrors": "1",
           "Outputs": {},
           "DocumentName": "AWS-StopEC2Instance",
           "AutomationExecutionId": "a4a3c0e9-7efd-462a-8594-01234EXAMPLE",
           "ResolvedTargets": {
               "ParameterValues": [
                   "i-02573cafcfEXAMPLE"
               ],
               "Truncated": false
           },
           "ExecutionEndTime": 1564681619.915,
           "Targets": [
               {
                   "Values": [
                       "DEV"
                   ],
                   "Key": "tag:ENV"
               }
           ],
           "DocumentVersion": "1",
           "ExecutionStartTime": 1564681576.09,
           "ExecutedBy": "arn:aws:sts::123456789012:assumed-role/Administrator/Admin",
           "StepExecutions": [
               {
                   "Inputs": {
                       "InstanceId": "i-02573cafcfEXAMPLE"
                   },
                   "Outputs": {},
                   "StepName": "i-02573cafcfEXAMPLE",
                   "ExecutionEndTime": 1564681619.093,
                   "StepExecutionId": "86c7b811-3896-4b78-b897-01234EXAMPLE",
                   "ExecutionStartTime": 1564681576.836,
                   "Action": "aws:executeAutomation",
                   "StepStatus": "Success"
               }
           ],
           "TargetParameterName": "InstanceId",
           "Mode": "Auto"
       }
   }
   ```

------
#### [ PowerShell ]

   ```
   AutomationExecutionId       : a4a3c0e9-7efd-462a-8594-01234EXAMPLE
   AutomationExecutionStatus   : Success
   CurrentAction               : 
   CurrentStepName             : 
   DocumentName                : AWS-StopEC2Instance
   DocumentVersion             : 1
   ExecutedBy                  : arn:aws:sts::123456789012:assumed-role/Administrator/Admin
   ExecutionEndTime            : 8/1/2019 5:46:59 PM
   ExecutionStartTime          : 8/1/2019 5:46:16 PM
   FailureMessage              : 
   MaxConcurrency              : 1
   MaxErrors                   : 1
   Mode                        : Auto
   Outputs                     : {}
   Parameters                  : {}
   ParentAutomationExecutionId : 
   ProgressCounters            : 
   ResolvedTargets             : Amazon.SimpleSystemsManagement.Model.ResolvedTargets
   StepExecutions              : {i-02573cafcfEXAMPLE}
   StepExecutionsTruncated     : False
   Target                      : 
   TargetLocations             : {}
   TargetMaps                  : {}
   TargetParameterName         : InstanceId
   Targets                     : {tag:Name}
   ```

------
**Note**  
You can also monitor the status of the automation in the console. In the **Automation executions** list, choose the automation you just ran and then choose the **Execution steps** tab. This tab shows the status of the automation actions.

# Mapping targets for an automation


Use the `Targets` parameter to quickly define which resources are targeted by an automation. For example, if you want to run an automation that restarts your managed instances, then instead of manually selecting dozens of instance IDs in the console or typing them in a command, you can target instances by specifying Amazon Elastic Compute Cloud (Amazon EC2) tags with the `Targets` parameter.

When you run an automation that uses a target, AWS Systems Manager creates a child automation for each target. For example, if you target Amazon Elastic Block Store (Amazon EBS) volumes by specifying tags, and those tags resolve to 100 Amazon EBS volumes, then Systems Manager creates 100 child automations. The parent automation is complete when all child automations reach a final state.

**Note**  
Any `input parameters` that you specify at runtime (either in the **Input parameters** section of the console or by using the `parameters` option from the command line) are automatically processed by all child automations.

You can target resources for an automation by using tags, Resource Groups, and parameter values. Additionally, you can use the `TargetMaps` option to target multiple parameter values from the command line or a file. The following sections describe each of these targeting options in more detail.

## Targeting a tag


You can specify a single tag as the target of an automation. Many AWS resources support tags, including Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) instances, Amazon Elastic Block Store (Amazon EBS) volumes and snapshots, Resource Groups, and Amazon Simple Storage Service (Amazon S3) buckets, to name a few. You can quickly run an automation on your AWS resources by targeting a tag. A tag is a key-value pair, such as `Operating_System:Linux` or `Department:Finance`. If you assign a specific name to a resource, then you can also use the word "Name" as a key and the name of the resource as the value.

When you specify a tag as the target for an automation, you also specify a target parameter by using the `TargetParameterName` option. The target parameter defines the type of resource on which the automation runs, and it must be a valid parameter defined in the runbook. For example, if you want to target dozens of EC2 instances by using tags, choose the `InstanceId` target parameter. By choosing this parameter, you define *instances* as the resource type for the automation. When creating a custom runbook, you must specify the **Target type** as `/AWS::EC2::Instance` to ensure that only instances are used. Otherwise, all resources with the same tag are targeted. When targeting instances with a tag, terminated instances might be included.

The following screenshot uses the `AWS-DetachEBSVolume` runbook. The logical target parameter is `VolumeId`.

![\[Using a tag as a target for a Systems Manager Automation\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/automation-rate-control-tags-1-new.png)


The `AWS-DetachEBSVolume` runbook also includes a special property called **Target type**, which is set to `/AWS::EC2::Volume`. This means that if the key-value pair `Finance:TestEnv` returns different types of resources (for example, EC2 instances, Amazon EBS volumes, and Amazon EBS snapshots), then only the Amazon EBS volumes are used.
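
This narrowing behavior can be pictured as a simple filter: the tag resolves to a mixed set of resources, and the **Target type** keeps only one resource type. The following is an illustrative sketch with hypothetical resource IDs, not a call to any AWS API:

```python
# Hypothetical resources returned by the tag key-value pair Finance:TestEnv.
resources = [
    {"id": "i-02573cafcfEXAMPLE",   "type": "AWS::EC2::Instance"},
    {"id": "vol-0a1b2c3d4eEXAMPLE", "type": "AWS::EC2::Volume"},
    {"id": "snap-0f9e8d7c6EXAMPLE", "type": "AWS::EC2::Snapshot"},
]

# With Target type /AWS::EC2::Volume, only the volume is targeted.
targeted = [r["id"] for r in resources if r["type"] == "AWS::EC2::Volume"]
print(targeted)  # ['vol-0a1b2c3d4eEXAMPLE']
```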

**Important**  
Target parameter names are case sensitive. If you run automations by using either the AWS Command Line Interface (AWS CLI) or AWS Tools for Windows PowerShell, then you must enter the target parameter name exactly as it's defined in the runbook. If you don't, the system returns an `InvalidAutomationExecutionParametersException` error. You can use the [DescribeDocument](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeDocument.html) API operation to see information about the available target parameters in a specific runbook. Following is an example AWS CLI command that provides information about the `AWS-DeleteSnapshot` document.  

```
aws ssm describe-document \
    --name AWS-DeleteSnapshot
```

Here are some example AWS CLI commands that target resources by using a tag.

**Example 1: Targeting a tag using a key-value pair to restart Amazon EC2 instances**

This example restarts all Amazon EC2 instances that are tagged with a key of *Department* and a value of *HumanResources*. The target parameter uses the *InstanceId* parameter from the runbook. The example uses an additional parameter to run the automation by using an Automation service role (also called an *assume role*).

```
aws ssm start-automation-execution \
    --document-name AWS-RestartEC2Instance \
    --targets Key=tag:Department,Values=HumanResources \
    --target-parameter-name InstanceId \
    --parameters "AutomationAssumeRole=arn:aws:iam::111122223333:role/AutomationServiceRole"
```

**Example 2: Targeting a tag using a key-value pair to delete Amazon EBS snapshots**

The following example uses the `AWS-DeleteSnapshot` runbook to delete all snapshots with a key of *Name* and a value of *January2018Backups*. The target parameter uses the *VolumeId* parameter.

```
aws ssm start-automation-execution \
    --document-name AWS-DeleteSnapshot \
    --targets Key=tag:Name,Values=January2018Backups \
    --target-parameter-name VolumeId
```

## Targeting AWS Resource Groups


You can specify a single AWS resource group as the target of an automation. Systems Manager creates a child automation for every object in the target Resource Group.

For example, say that one of your Resource Groups is named PatchedAMIs. This Resource Group includes a list of 25 Windows Amazon Machine Images (AMIs) that are routinely patched. If you run an automation that uses the `AWS-CreateManagedWindowsInstance` runbook and target this Resource Group, then Systems Manager creates a child automation for each of the 25 AMIs. This means that, by targeting the PatchedAMIs Resource Group, the automation creates 25 instances from a list of patched AMIs. The parent automation is complete when all child automations complete processing or reach a final state.

The following AWS CLI command applies to the PatchedAMIs Resource Group example. The command takes the *AmiId* parameter for the `--target-parameter-name` option. The command doesn't include an additional parameter defining which type of instance to create from each AMI. The `AWS-CreateManagedWindowsInstance` runbook defaults to the t2.medium instance type, so this command would create 25 t2.medium Amazon EC2 instances for Windows Server.

```
aws ssm start-automation-execution \
    --document-name AWS-CreateManagedWindowsInstance \
    --targets Key=ResourceGroup,Values=PatchedAMIs  \
    --target-parameter-name AmiId
```

The following console example uses a Resource Group called t2-micro-instances.

![\[Targeting an AWS resource group with a Systems Manager automation\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/automation-rate-control-resource-groups-new.png)


## Targeting parameter values


You can also target a parameter value. You enter `ParameterValues` as the key and then enter the specific resource value where you want the automation to run. If you specify multiple values, Systems Manager runs a child automation on each value specified.

For example, say that your runbook includes an **InstanceID** parameter. If you target the values of the **InstanceID** parameter when you run the automation, then Systems Manager runs a child automation for each instance ID value specified. The parent automation is complete when the automation finishes running on each specified instance, or if the automation fails. You can target a maximum of 50 parameter values.

The following example uses the `AWS-CreateImage` runbook. The target parameter name specified is *InstanceId*. The key uses *ParameterValues*. The values are two Amazon EC2 instance IDs. This command creates an automation for each instance, which produces an AMI from each instance. 

```
aws ssm start-automation-execution \
    --document-name AWS-CreateImage \
    --target-parameter-name InstanceId \
    --targets Key=ParameterValues,Values=i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE
```

**Note**  
`AutomationAssumeRole` isn't a valid parameter. Don't choose this item when running automations that target a parameter value.

### Targeting parameter value maps


The `TargetMaps` option expands your ability to target `ParameterValues`. You can enter an array of parameter values by using `TargetMaps` at the command line. You can specify a maximum of 50 parameter values at the command line. If you want to run commands that specify more than 50 parameter values, then you can enter the values in a JSON file. You can then call the file from the command line.

**Note**  
The `TargetMaps` option isn't supported in the console.

Use the following format to specify multiple parameter values by using the `TargetMaps` option in a command. Replace each *example resource placeholder* with your own information.

```
aws ssm start-automation-execution \
    --document-name runbook name \
    --target-maps "parameter=value, parameter 2=value, parameter 3=value" "parameter 4=value, parameter 5=value, parameter 6=value"
```

If you want to enter more than 50 parameter values for the `TargetMaps` option, then specify the values in a file by using the following JSON format. Using a JSON file also improves readability when providing multiple parameter values.

```
[
    {"parameter": "value", "parameter 2": "value", "parameter 3": "value"},
    {"parameter 4": "value", "parameter 5": "value", "parameter 6": "value"}
]
```
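
If you're producing the file programmatically, a short script avoids quoting mistakes. The following is a sketch; the file name, the `InstanceId` parameter name, and the instance IDs are all hypothetical, and each entry's values are lists, matching the `TargetMaps` file format:

```python
import json

# Hypothetical helper: one target map entry per instance ID, so each
# entry becomes a separate child automation.
instance_ids = [f"i-example{n:03d}" for n in range(60)]
target_maps = [{"InstanceId": [instance_id]} for instance_id in instance_ids]

with open("target_maps.json", "w") as f:
    json.dump(target_maps, f, indent=2)

print(len(target_maps))  # 60 entries, above the 50-value command-line limit
```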

Save the file with a .json file extension. You can call the file by using the following command. Replace each *example resource placeholder* with your own information.

```
aws ssm start-automation-execution \
    --document-name runbook name \
    --parameters input parameters \
    --target-maps path to file/file name.json
```

You can also download the file from an Amazon Simple Storage Service (Amazon S3) bucket, as long as you have permission to read data from the bucket. Use the following command format. Replace each *example resource placeholder* with your own information.

```
aws ssm start-automation-execution \
    --document-name runbook name \
    --target-maps http://amzn-s3-demo-bucket.s3.amazonaws.com/file_name.json
```

Here is an example scenario to help you understand the `TargetMaps` option. In this scenario, a user wants to create Amazon EC2 instances of different types from different AMIs. To perform this task, the user creates a runbook named `AMI_Testing`. This runbook defines two input parameters: `instanceType` and `imageId`.

```
{
  "description": "AMI Testing",
  "schemaVersion": "0.3",
  "assumeRole": "{{assumeRole}}",
  "parameters": {
    "assumeRole": {
      "type": "String",
      "description": "Role under which to run the automation",
      "default": ""
    },
    "instanceType": {
      "type": "String",
      "description": "Type of EC2 Instance to launch for this test"
    },
    "imageId": {
      "type": "String",
      "description": "Source AMI id from which to run instance"
    }
  },
  "mainSteps": [
    {
      "name": "runInstances",
      "action": "aws:runInstances",
      "maxAttempts": 1,
      "onFailure": "Abort",
      "inputs": {
        "ImageId": "{{imageId}}",
        "InstanceType": "{{instanceType}}",
        "MinInstanceCount": 1,
        "MaxInstanceCount": 1
      }
    }
  ],
  "outputs": [
    "runInstances.InstanceIds"
  ]
}
```

The user then specifies the following target parameter values in a file named `AMI_instance_types.json`.

```
[
  {
    "instanceType" : ["t2.micro"],     
    "imageId" : ["ami-b70554c8"]     
  },
  {
    "instanceType" : ["t2.small"],     
    "imageId" : ["ami-b70554c8"]     
  },
  {
    "instanceType" : ["t2.medium"],     
    "imageId" : ["ami-cfe4b2b0"]     
  },
  {
    "instanceType" : ["t2.medium"],     
    "imageId" : ["ami-cfe4b2b0"]     
  },
  {
    "instanceType" : ["t2.medium"],     
    "imageId" : ["ami-cfe4b2b0"]     
  }
]
```

The user can run the automation and create the five EC2 instances defined in `AMI_instance_types.json` by running the following command.

```
aws ssm start-automation-execution \
    --document-name AMI_Testing \
    --target-parameter-name imageId \
    --target-maps file:///home/TestUser/workspace/runinstances/AMI_instance_types.json
```
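
Because Systems Manager creates one child automation per target map entry, the file above produces five child automations. As a quick sanity check before starting the automation, you can count the entries; the sketch below inlines the contents of `AMI_instance_types.json` shown earlier:

```python
import json

# The contents of AMI_instance_types.json from the example above.
AMI_INSTANCE_TYPES = """
[
  {"instanceType": ["t2.micro"],  "imageId": ["ami-b70554c8"]},
  {"instanceType": ["t2.small"],  "imageId": ["ami-b70554c8"]},
  {"instanceType": ["t2.medium"], "imageId": ["ami-cfe4b2b0"]},
  {"instanceType": ["t2.medium"], "imageId": ["ami-cfe4b2b0"]},
  {"instanceType": ["t2.medium"], "imageId": ["ami-cfe4b2b0"]}
]
"""

entries = json.loads(AMI_INSTANCE_TYPES)
# One child automation is created per target map entry.
print(f"{len(entries)} child automations")  # 5 child automations
```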

## Targeting all Amazon EC2 instances


You can run an automation on all Amazon EC2 instances in the current AWS account and AWS Region by choosing **All instances** in the **Targets** list. For example, if you want to restart all Amazon EC2 instances in your AWS account and the current AWS Region, you can choose the `AWS-RestartEC2Instance` runbook and then choose **All instances** from the **Targets** list.

![\[Targeting all Amazon EC2 instances for a runbook\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/automation-rate-control-target-all-instances.png)


After you choose **All instances**, Systems Manager populates the **Instance** field with an asterisk (`*`) and makes the field unavailable for changes (the field is grayed out). Systems Manager also makes the **InstanceId** field in the **Input parameters** section unavailable for changes. Making these fields unavailable for changes is expected behavior when you choose to target all instances.

# Control automations at scale


You can control the deployment of an automation across a fleet of AWS resources by specifying a concurrency value and an error threshold. Concurrency and error threshold are collectively called *rate controls*.

**Concurrency**  
Use Concurrency to specify how many resources are allowed to run an automation simultaneously. Concurrency helps to limit the impact or downtime on your resources when processing an automation. You can specify either an absolute number of resources, for example 20, or a percentage of the target set, for example 10%.

The queueing system delivers the automation to a single resource and waits until the initial invocation is complete before sending the automation to two more resources. The system continues to increase the number of targeted resources exponentially until the concurrency value is met.
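
This ramp-up can be pictured as wave sizes that start at 1 and double after each wave, capped at the concurrency value. The following is an illustrative sketch of the pattern only, not the service's actual scheduler:

```python
def ramp_up_waves(total_targets: int, max_concurrency: int) -> list[int]:
    """Illustrative only: wave sizes start at 1 and double until they
    reach max_concurrency, then stay flat until all targets are sent."""
    waves = []
    in_flight = 1
    remaining = total_targets
    while remaining > 0:
        wave = min(in_flight, remaining)
        waves.append(wave)
        remaining -= wave
        in_flight = min(in_flight * 2, max_concurrency)
    return waves

# 20 targets with a concurrency value of 8.
print(ramp_up_waves(20, 8))  # [1, 2, 4, 8, 5]
```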

**Error thresholds**  
Use an error threshold to specify how many automations are allowed to fail before AWS Systems Manager stops sending the automation to other resources. You can specify either an absolute number of errors, for example 10, or a percentage of the target set, for example 10%.

If you specify an absolute number of 3 errors, for example, the system stops running the automation when the fourth error is received. If you specify 0, then the system stops running the automation on additional targets after the first error result is returned.

If you send an automation to, for example, 50 instances and set the error threshold to 10%, then the system stops sending the command to additional instances when the fifth error is received. Invocations that are already running an automation when the error threshold is reached are allowed to complete, but some of these automations might also fail. If you need to ensure that there won't be more errors than the number specified for the error threshold, set the **Concurrency** value to 1 so that automations proceed one at a time.
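
The threshold arithmetic in these examples can be written down directly. The helper below is an illustration, not a Systems Manager API; rounding down for percentages is an assumption:

```python
import math

def resolve_max_errors(max_errors: str, target_count: int) -> int:
    """Resolve an absolute ("3") or percentage ("10%") error threshold
    to the number of errors tolerated across target_count resources."""
    if max_errors.endswith("%"):
        return math.floor(target_count * float(max_errors[:-1]) / 100)
    return int(max_errors)

# 10% of 50 instances: the system stops sending after the 5th error.
print(resolve_max_errors("10%", 50))  # 5
# An absolute value of 3 tolerates 3 errors before stopping.
print(resolve_max_errors("3", 50))    # 3
```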

# Running automations in multiple AWS Regions and accounts


You can run AWS Systems Manager automations across multiple AWS Regions and AWS accounts or AWS Organizations organizational units (OUs) from a central account. Automation is a tool in AWS Systems Manager. Running automations in multiple Regions and accounts or OUs reduces the time required to administer your AWS resources while enhancing the security of your computing environment.

For example, you can do the following by using automation runbooks:
+ Implement patching and security updates centrally.
+ Remediate compliance drift on VPC configurations or Amazon S3 bucket policies.
+ Manage resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, at scale.

The following diagram shows an example of a user who is running the `AWS-RestartEC2Instances` runbook in multiple Regions and accounts from a central account. The automation locates the instances by using the specified tags in the targeted Regions and accounts.

![\[Illustration showing Systems Manager Automation running in multiple Regions and multiple accounts.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/automation-multi-region-and-multi-account.png)


**Choose a central account for Automation**  
If you want to run automations across OUs, the central account must have permissions to list all of the accounts in the OUs. This is only possible from a delegated administrator account, or the management account of the organization. We recommend that you follow AWS Organizations best practices and use a delegated administrator account. For more information about AWS Organizations best practices, see [Best practices for the management account](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html) in the *AWS Organizations User Guide*. To create a delegated administrator account for Systems Manager, you can use the `register-delegated-administrator` command with the AWS CLI as shown in the following example.

```
aws organizations register-delegated-administrator \
    --account-id delegated admin account ID \
    --service-principal ssm.amazonaws.com
```

If you want to run automations across multiple accounts that aren't managed by AWS Organizations, we recommend creating a dedicated account for automation management. Running all cross-account automations from a dedicated account simplifies IAM permissions management and troubleshooting, and creates a layer of separation between operations and administration. This approach is also recommended if you use AWS Organizations but only want to target individual accounts and not OUs.

**How running automations works**  
Running automations across multiple Regions and accounts or OUs works as follows:

1. Sign in to the account that you want to configure as the Automation central account.

1. Use the [Setting up management account permissions for multi-Region and multi-account automation](#setup-management-account-iam-roles) procedure in this topic to create the following IAM roles:
   + `AWS-SystemsManager-AutomationAdministrationRole` - This role gives the user permission to run automations in multiple accounts and OUs.
   + `AWS-SystemsManager-AutomationExecutionRole` - This role gives the user permission to run automations in the targeted accounts.

1. Choose the runbook, Regions, and accounts or OUs where you want to run the automation.
**Note**  
Be sure that the target OU contains the desired accounts. If you choose a custom runbook, the runbook must be shared with all of the target accounts. For information about sharing runbooks, see [Sharing SSM documents](documents-ssm-sharing.md). For information about using shared runbooks, see [Using shared SSM documents](documents-ssm-sharing.md#using-shared-documents).

1. Run the automation.

1. Use the [GetAutomationExecution](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetAutomationExecution.html), [DescribeAutomationStepExecutions](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeAutomationStepExecutions.html), and [DescribeAutomationExecutions](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeAutomationExecutions.html) API operations from the AWS Systems Manager console or the AWS CLI to monitor automation progress. The output of the steps for the automation in your primary account will be the `AutomationExecutionId` of the child automations. To view the output of the child automations created in your target accounts, be sure to specify the appropriate account, Region, and `AutomationExecutionId` in your request.

## Setting up management account permissions for multi-Region and multi-account automation


Use the following procedure to create the required IAM roles for Systems Manager Automation multi-Region and multi-account automation by using AWS CloudFormation. This procedure describes how to create the `AWS-SystemsManager-AutomationAdministrationRole` role. You only need to create this role in the Automation central account. This procedure also describes how to create the `AWS-SystemsManager-AutomationExecutionRole` role. You must create this role in *every* account that you want to target to run multi-Region and multi-account automations. We recommend using CloudFormation StackSets to create the `AWS-SystemsManager-AutomationExecutionRole` role in the accounts you want to target to run multi-Region and multi-account automations.

**To create the required IAM administration role for multi-Region and multi-account automations by using CloudFormation**

1. Download and unzip the [https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWS-SystemsManager-AutomationAdministrationRole.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWS-SystemsManager-AutomationAdministrationRole.zip).

   -or-

   If your accounts are managed by AWS Organizations, download and unzip the [https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWS-SystemsManager-AutomationAdministrationRole (org).zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWS-SystemsManager-AutomationAdministrationRole (org).zip).

   These files contain the `AWS-SystemsManager-AutomationAdministrationRole.yaml` and `AWS-SystemsManager-AutomationAdministrationRole (org).yaml` CloudFormation template files, respectively.

1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. Choose **Create stack**.

1. In the **Specify template** section, choose **Upload a template**.

1. Choose **Choose file**, and then choose the `AWS-SystemsManager-AutomationAdministrationRole.yaml` or `AWS-SystemsManager-AutomationAdministrationRole (org).yaml` CloudFormation template file, depending on your selection in step 1.

1. Choose **Next**.

1. On the **Specify stack details** page, in the **Stack name** field, enter a name. 

1. Choose **Next**.

1. On the **Configure stack options** page, enter values for any options you want to use. Choose **Next**.

1. On the **Review** page, scroll down and choose the **I acknowledge that CloudFormation might create IAM resources with custom names** option.

1. Choose **Create stack**.

CloudFormation shows the **CREATE\_IN\_PROGRESS** status for approximately three minutes. The status changes to **CREATE\_COMPLETE**.

You must repeat the following procedure in *every* account that you want to target to run multi-Region and multi-account automations.

**To create the required IAM automation role for multi-Region and multi-account automations by using CloudFormation**

1. Download the [https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWS-SystemsManager-AutomationExecutionRole.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWS-SystemsManager-AutomationExecutionRole.zip).

   -or-

   If your accounts are managed by AWS Organizations, download the [https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWS-SystemsManager-AutomationExecutionRole (org).zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWS-SystemsManager-AutomationExecutionRole (org).zip).

   These files contain the `AWS-SystemsManager-AutomationExecutionRole.yaml` and `AWS-SystemsManager-AutomationExecutionRole (org).yaml` CloudFormation template files, respectively.

1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. Choose **Create stack**.

1. In the **Specify template** section, choose **Upload a template**.

1. Choose **Choose file**, and then choose the `AWS-SystemsManager-AutomationExecutionRole.yaml` or `AWS-SystemsManager-AutomationExecutionRole (org).yaml` CloudFormation template file, depending on your selection in step 1.

1. Choose **Next**.

1. On the **Specify stack details** page, in the **Stack name** field, enter a name. 

1. In the **Parameters** section, in the **AdminAccountId** field, enter the ID for the Automation central account.

1. If you are setting up this role for an AWS Organizations environment, there is another field in the section called **OrganizationID**. Enter the ID of your AWS organization.

1. Choose **Next**.

1. On the **Configure stack options** page, enter values for any options you want to use. Choose **Next**.

1. On the **Review** page, scroll down and choose the **I acknowledge that CloudFormation might create IAM resources with custom names** option.

1. Choose **Create stack**.

CloudFormation shows the **CREATE\_IN\_PROGRESS** status for approximately three minutes. The status changes to **CREATE\_COMPLETE**.

## Run an automation in multiple Regions and accounts (console)


The following procedure describes how to use the Systems Manager console to run an automation in multiple Regions and accounts from the Automation central account.

**Before you begin**  
Before you complete the following procedure, note the following information:
+ The user or role you use to run a multi-Region or multi-account automation must have the `iam:PassRole` permission for the `AWS-SystemsManager-AutomationAdministrationRole` role.
+ AWS account IDs or OUs where you want to run the automation.
+ [Regions supported by Systems Manager](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) where you want to run the automation.
+ The tag key and the tag value, or the name of the resource group, where you want to run the automation.

**To run an automation in multiple Regions and accounts**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**, and then choose **Execute automation**.

1. In the **Automation document** list, choose a runbook. Choose one or more options in the **Document categories** pane to filter SSM documents according to their purpose. To view a runbook that you own, choose the **Owned by me** tab. To view a runbook that is shared with your account, choose the **Shared with me** tab. To view all runbooks, choose the **All documents** tab.
**Note**  
You can view information about a runbook by choosing the runbook name.

1. In the **Document details** section, verify that **Document version** is set to the version that you want to run. The system includes the following version options: 
   + **Default version at runtime** – Choose this option if the Automation runbook is updated periodically and a new default version is assigned.
   + **Latest version at runtime** – Choose this option if the Automation runbook is updated periodically, and you want to run the version that was most recently updated.
   + **1 (Default)** – Choose this option to run the first version of the document, which is the default.

1. Choose **Next**.

1. On the **Execute automation document** page, choose **Multi-account and Region**.

1. In the **Target accounts and Regions** section, use the **Accounts, organizational units (OUs), and roots** field to specify the different AWS accounts or AWS organizational units (OUs) where you want to run the automation. Separate multiple accounts or OUs with a comma. 

   1. (Optional) Select the **Include child OUs** checkbox to include all child organizational units within the specified OUs.

   1. (Optional) In the **Exclude accounts and organizational units (OUs)** field, enter a comma-separated list of account IDs and OU IDs that you want to exclude from the expanded entities entered above.

1. Use the **Regions** list to choose one or more Regions where you want to run the automation.

1. Use the **Multi-Region and account rate control** options to restrict the automation to a limited number of accounts running in a limited number of Regions. These options don't restrict the number of AWS resources that can run the automations. 

   1. In the **Location (account-Region pair) concurrency** section, choose an option to restrict the number of automations that can run in multiple accounts and Regions at the same time. For example, if you choose to run an automation in five (5) AWS accounts across four (4) AWS Regions, then Systems Manager runs automations in a total of 20 account-Region pairs. You can use this option to specify an absolute number, such as **2**, so that the automation only runs in two account-Region pairs at the same time. Or you can specify a percentage of the account-Region pairs that can run at the same time. For example, with 20 account-Region pairs, if you specify 25%, then the automation simultaneously runs in a maximum of five (5) account-Region pairs. 
      + Choose **targets** to enter an absolute number of account-Region pairs that can run the automation simultaneously.
      + Choose **percent** to enter a percentage of the total number of account-Region pairs that can run the automation simultaneously.

   1. In the **Error threshold** section, choose an option:
      + Choose **errors** to enter an absolute number of errors allowed before Automation stops sending the automation to other resources.
      + Choose **percent** to enter a percentage of errors allowed before Automation stops sending the automation to other resources.

1. In the **Targets** section, choose how you want to target the AWS resources where you want to run the Automation. These options are required.

   1. Use the **Parameter** list to choose a parameter. The items in the **Parameter** list are determined by the parameters in the Automation runbook that you selected at the start of this procedure. By choosing a parameter, you define the type of resource on which the Automation workflow runs. 

   1. Use the **Targets** list to choose how you want to target resources.

      1. If you chose to target resources by using parameter values, then enter the parameter value for the parameter you chose in the **Input parameters** section.

      1. If you chose to target resources by using AWS Resource Groups, then choose the name of the group from the **Resource Group** list.

      1. If you chose to target resources by using tags, then enter the tag key and (optionally) the tag value in the fields provided. Choose **Add**.

      1. If you want to run an Automation runbook on all instances in the current AWS account and AWS Region, then choose **All instances**.

1. In the **Input parameters** section, specify the required inputs. Choose the `AWS-SystemsManager-AutomationAdministrationRole` IAM service role from the **AutomationAssumeRole** list.
**Note**  
Depending on how you targeted resources, you might not need to specify some options in the **Input parameters** section. For example, if you chose the `AWS-RestartEC2Instance` runbook and targeted resources in multiple Regions and accounts by using tags or a resource group, you don't need to specify or choose instance IDs in the **Input parameters** section. The automation locates the instances to restart by using the tags you specified. 

1. (Optional) Choose a CloudWatch alarm to apply to your automation for monitoring. To attach a CloudWatch alarm to your automation, the IAM principal that starts the automation must have permission for the `iam:createServiceLinkedRole` action. For more information about CloudWatch alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html). Note that if your alarm activates, the automation is canceled and any `OnCancel` steps you have defined run. If you use AWS CloudTrail, you will see the API call in your trail.

1. Use the options in the **Rate control** section to restrict the number of AWS resources that can run the Automation within each account-Region pair. 

   In the **Concurrency** section, choose an option: 
   + Choose **targets** to enter an absolute number of targets that can run the Automation workflow simultaneously.
   + Choose **percentage** to enter a percentage of the target set that can run the Automation workflow simultaneously.

1. In the **Error threshold** section, choose an option:
   + Choose **errors** to enter an absolute number of errors allowed before Automation stops sending the workflow to other resources.
   + Choose **percentage** to enter a percentage of errors allowed before Automation stops sending the workflow to other resources.

1. Choose **Execute**.

After an automation execution completes, you can rerun the execution with the same or modified parameters. For more information, see [Rerunning automation executions](automation-rerun-executions.md).
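The account-Region pair arithmetic behind the rate-control options in the procedure above can be sketched as follows. This is a minimal illustration only; the exact rounding Systems Manager applies to percentage values, and the floor of one pair, are assumptions made for this sketch:

```python
import math

def max_concurrent_pairs(accounts, regions, concurrency):
    """Resolve a location concurrency setting (absolute count or
    percentage string) into the number of account-Region pairs
    that may run the automation at the same time."""
    total_pairs = accounts * regions
    if isinstance(concurrency, str) and concurrency.endswith("%"):
        pct = float(concurrency.rstrip("%"))
        # Assumed: percentages round down, with a floor of one pair.
        return max(1, math.floor(total_pairs * pct / 100))
    # An absolute value can never exceed the total number of pairs.
    return min(int(concurrency), total_pairs)

# 5 accounts x 4 Regions = 20 account-Region pairs
print(max_concurrent_pairs(5, 4, "25%"))  # 5
print(max_concurrent_pairs(5, 4, 2))      # 2
```

The same shape of calculation applies to the error threshold: an absolute number is used as-is, while a percentage is taken against the total number of account-Region pairs.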

## Run an automation in multiple Regions and accounts (command line)


The following procedure describes how to use the AWS CLI (on Linux, macOS, or Windows) or AWS Tools for PowerShell to run an automation in multiple Regions and accounts from the Automation central account.

**Before you begin**  
Before you complete the following procedure, note the following information:
+ AWS account IDs or OUs where you want to run the automation.
+ [Regions supported by Systems Manager](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) where you want to run the automation.
+ The tag key and the tag value, or the name of the resource group, where you want to run the automation.

**To run an automation in multiple Regions and accounts**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Use the following format to create a command to run an automation in multiple Regions and accounts. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
           --document-name runbook name \
           --parameters AutomationAssumeRole=arn:aws:iam::management account ID:role/AWS-SystemsManager-AutomationAdministrationRole \
           --target-parameter-name parameter name \
           --targets Key=tag key,Values=value \
           --target-locations Accounts=account ID,account ID 2,Regions=Region,Region 2,ExecutionRoleName=AWS-SystemsManager-AutomationExecutionRole
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
           --document-name runbook name ^
           --parameters AutomationAssumeRole=arn:aws:iam::management account ID:role/AWS-SystemsManager-AutomationAdministrationRole ^
           --target-parameter-name parameter name ^
           --targets Key=tag key,Values=value ^
           --target-locations Accounts=account ID,account ID 2,Regions=Region,Region 2,ExecutionRoleName=AWS-SystemsManager-AutomationExecutionRole
   ```

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
       $Targets.Key = "tag key"
       $Targets.Values = "value"
       
       Start-SSMAutomationExecution `
           -DocumentName "runbook name" `
           -Parameter @{
           "AutomationAssumeRole"="arn:aws:iam::management account ID:role/AWS-SystemsManager-AutomationAdministrationRole" } `
           -TargetParameterName "parameter name" `
           -Target $Targets `
           -TargetLocation @{
           "Accounts"="account ID","account ID 2";
           "Regions"="Region","Region 2";
           "ExecutionRoleName"="AWS-SystemsManager-AutomationExecutionRole" }
   ```

------

**Examples: Running an automation in multiple Regions and accounts**  
The following are examples demonstrating how to use the AWS CLI and PowerShell to run automations in multiple accounts and Regions with a single command.

   **Example 1**: This example restarts EC2 instances in three Regions across an entire AWS Organizations organization. This is achieved by targeting the root ID of the organization and including its child OUs.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
           --document-name "AWS-RestartEC2Instance" \
           --target-parameter-name InstanceId \
           --targets '[{"Key":"AWS::EC2::Instance","Values":["*"]}]' \
           --target-locations '[{
               "Accounts": ["r-example"],
               "IncludeChildOrganizationUnits": true,
               "Regions": ["us-east-1", "us-east-2", "us-west-2"]
           }]'
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
           --document-name "AWS-RestartEC2Instance" ^
           --target-parameter-name InstanceId ^
           --targets '[{"Key":"AWS::EC2::Instance","Values":["*"]}]' ^
           --target-locations '[{
               "Accounts": ["r-example"],
               "IncludeChildOrganizationUnits": true,
               "Regions": ["us-east-1", "us-east-2", "us-west-2"]
           }]'
   ```

------
#### [ PowerShell ]

   ```
    $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
        $Targets.Key = "AWS::EC2::Instance"
        $Targets.Values = "*"
        
        Start-SSMAutomationExecution `
            -DocumentName "AWS-RestartEC2Instance" `
            -TargetParameterName "InstanceId" `
            -Target $Targets `
            -TargetLocation @{
                "Accounts"="r-example";
                "Regions"="us-east-1", "us-east-2", "us-west-2";
                "IncludeChildOrganizationUnits"=$true}
   ```

------

   **Example 2**: This example restarts specific EC2 instances in different accounts and Regions.
**Note**  
The `TargetLocationMaxConcurrency` option is available using the AWS CLI and AWS SDKs.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
           --document-name "AWS-RestartEC2Instance" \
           --target-parameter-name InstanceId \
           --target-locations '[{
               "Accounts": ["123456789012"],
               "Targets": [{
                   "Key":"ParameterValues",
                   "Values":["i-02573cafcfEXAMPLE", "i-0471e04240EXAMPLE"]
               }],
               "TargetLocationMaxConcurrency": "100%",
               "Regions": ["us-east-1"]
           }, {
               "Accounts": ["987654321098"],
               "Targets": [{
                   "Key":"ParameterValues",
                   "Values":["i-07782c72faEXAMPLE"]
               }],
               "TargetLocationMaxConcurrency": "100%",
               "Regions": ["us-east-2"]
           }]'
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
           --document-name "AWS-RestartEC2Instance" ^
           --target-parameter-name InstanceId ^
           --target-locations '[{
               "Accounts": ["123456789012"],
               "Targets": [{
                   "Key":"ParameterValues",
                   "Values":["i-02573cafcfEXAMPLE", "i-0471e04240EXAMPLE"]
               }],
               "TargetLocationMaxConcurrency": "100%",
               "Regions": ["us-east-1"]
           }, {
               "Accounts": ["987654321098"],
               "Targets": [{
                   "Key":"ParameterValues",
                   "Values":["i-07782c72faEXAMPLE"]
               }],
               "TargetLocationMaxConcurrency": "100%",
               "Regions": ["us-east-2"]
           }]'
   ```

------
#### [ PowerShell ]

   ```
    Start-SSMAutomationExecution `
            -DocumentName "AWS-RestartEC2Instance" `
            -TargetParameterName "InstanceId" `
            -TargetLocation @(
                @{
                    "Accounts"=@("123456789012");
                    "Targets"=@(@{
                        "Key"="ParameterValues";
                        "Values"=@("i-02573cafcfEXAMPLE", "i-0471e04240EXAMPLE")
                    });
                    "TargetLocationMaxConcurrency"="100%";
                    "Regions"=@("us-east-1")
                },
                @{
                    "Accounts"=@("987654321098");
                    "Targets"=@(@{
                        "Key"="ParameterValues";
                        "Values"=@("i-07782c72faEXAMPLE")
                    });
                    "TargetLocationMaxConcurrency"="100%";
                    "Regions"=@("us-east-2")
                }
            )
   ```

------

   **Example 3**: This example demonstrates specifying multiple AWS accounts and Regions where the automation should run using the `--target-locations-url` option. The value for this option must be a JSON file in a publicly accessible [presigned Amazon S3 URL](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html).
**Note**  
`--target-locations-url` is available when using the AWS CLI and AWS SDKs.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
       --document-name "MyCustomAutomationRunbook" \
       --target-locations-url "https://amzn-s3-demo-bucket.s3.amazonaws.com/target-locations.json"
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
       --document-name "MyCustomAutomationRunbook" ^
       --target-locations-url "https://amzn-s3-demo-bucket.s3.amazonaws.com/target-locations.json"
   ```

------
#### [ PowerShell ]

   ```
   Start-SSMAutomationExecution `
       -DocumentName "MyCustomAutomationRunbook" `
       -TargetLocationsUrl "https://amzn-s3-demo-bucket.s3.amazonaws.com/target-locations.json"
   ```

------

   Sample content for the JSON file:

   ```
   [
   { 
            "Accounts": [ "123456789012", "987654321098", "456789123012" ],
            "ExcludeAccounts": [ "111222333444", "999888444666" ],
            "ExecutionRoleName": "MyAutomationExecutionRole",
            "IncludeChildOrganizationUnits": true,
            "Regions": [ "us-east-1", "us-west-2", "ap-south-1", "ap-northeast-1" ],
            "Targets": ["Key": "AWS::EC2::Instance", "Values": ["i-2"]],
            "TargetLocationMaxConcurrency": "50%",
            "TargetLocationMaxErrors": "10",
            "TargetsMaxConcurrency": "20",
            "TargetsMaxErrors": "12"
    }
   ]
   ```
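Because a malformed file causes the execution to fail, it can help to sanity-check the JSON before uploading it to Amazon S3. The following is a minimal sketch; the helper name and the specific checks are illustrative, not part of the API:

```python
import json

def check_target_locations(text):
    """Basic structural checks for a target-locations JSON file:
    a top-level array of objects, each with list-valued Accounts
    and Regions fields."""
    locations = json.loads(text)
    assert isinstance(locations, list), "top level must be a JSON array"
    for loc in locations:
        assert isinstance(loc.get("Accounts"), list), "Accounts must be an array"
        assert isinstance(loc.get("Regions"), list), "Regions must be an array"
    return locations

sample = """
[
  {
    "Accounts": ["123456789012"],
    "Regions": ["us-east-1", "us-west-2"],
    "TargetLocationMaxConcurrency": "50%"
  }
]
"""
print(len(check_target_locations(sample)))  # 1
```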

   **Example 4**: This example restarts EC2 instances in the `123456789012` and `987654321098` accounts, which are located in the `us-east-2` and `us-west-1` Regions. The instances must be tagged with the tag key `Env` and the tag value `PROD`.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
           --document-name AWS-RestartEC2Instance \
           --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/AWS-SystemsManager-AutomationAdministrationRole \
           --target-parameter-name InstanceId \
           --targets Key=tag:Env,Values=PROD \
           --target-locations Accounts=123456789012,987654321098,Regions=us-east-2,us-west-1,ExecutionRoleName=AWS-SystemsManager-AutomationExecutionRole
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
           --document-name AWS-RestartEC2Instance ^
           --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/AWS-SystemsManager-AutomationAdministrationRole ^
           --target-parameter-name InstanceId ^
           --targets Key=tag:Env,Values=PROD ^
           --target-locations Accounts=123456789012,987654321098,Regions=us-east-2,us-west-1,ExecutionRoleName=AWS-SystemsManager-AutomationExecutionRole
   ```

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
       $Targets.Key = "tag:Env"
       $Targets.Values = "PROD"
       
       Start-SSMAutomationExecution `
           -DocumentName "AWS-RestartEC2Instance" `
           -Parameter @{
           "AutomationAssumeRole"="arn:aws:iam::123456789012:role/AWS-SystemsManager-AutomationAdministrationRole" } `
           -TargetParameterName "InstanceId" `
           -Target $Targets `
           -TargetLocation @{
           "Accounts"="123456789012","987654321098";
           "Regions"="us-east-2","us-west-1";
           "ExecutionRoleName"="AWS-SystemsManager-AutomationExecutionRole" }
   ```

------

   **Example 5**: This example restarts EC2 instances in the `123456789012` and `987654321098` accounts, which are located in the `eu-central-1` Region. The instances must be members of the `prod-instances` AWS resource group.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
           --document-name AWS-RestartEC2Instance \
           --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/AWS-SystemsManager-AutomationAdministrationRole \
           --target-parameter-name InstanceId \
           --targets Key=ResourceGroup,Values=prod-instances \
           --target-locations Accounts=123456789012,987654321098,Regions=eu-central-1,ExecutionRoleName=AWS-SystemsManager-AutomationExecutionRole
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
           --document-name AWS-RestartEC2Instance ^
           --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/AWS-SystemsManager-AutomationAdministrationRole ^
           --target-parameter-name InstanceId ^
           --targets Key=ResourceGroup,Values=prod-instances ^
           --target-locations Accounts=123456789012,987654321098,Regions=eu-central-1,ExecutionRoleName=AWS-SystemsManager-AutomationExecutionRole
   ```

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
       $Targets.Key = "ResourceGroup"
       $Targets.Values = "prod-instances"
       
       Start-SSMAutomationExecution `
           -DocumentName "AWS-RestartEC2Instance" `
           -Parameter @{
           "AutomationAssumeRole"="arn:aws:iam::123456789012:role/AWS-SystemsManager-AutomationAdministrationRole" } `
           -TargetParameterName "InstanceId" `
           -Target $Targets `
           -TargetLocation @{
           "Accounts"="123456789012","987654321098";
           "Regions"="eu-central-1";
           "ExecutionRoleName"="AWS-SystemsManager-AutomationExecutionRole" }
   ```

------

   **Example 6**: This example restarts EC2 instances in the `ou-1a2b3c-4d5e6c` AWS organizational unit (OU). The instances are located in the `us-west-1` and `us-west-2` Regions. The instances must be members of the `WebServices` AWS resource group.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
           --document-name AWS-RestartEC2Instance \
           --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/AWS-SystemsManager-AutomationAdministrationRole \
           --target-parameter-name InstanceId \
           --targets Key=ResourceGroup,Values=WebServices \
           --target-locations Accounts=ou-1a2b3c-4d5e6c,Regions=us-west-1,us-west-2,ExecutionRoleName=AWS-SystemsManager-AutomationExecutionRole
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
           --document-name AWS-RestartEC2Instance ^
           --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/AWS-SystemsManager-AutomationAdministrationRole ^
           --target-parameter-name InstanceId ^
           --targets Key=ResourceGroup,Values=WebServices ^
           --target-locations Accounts=ou-1a2b3c-4d5e6c,Regions=us-west-1,us-west-2,ExecutionRoleName=AWS-SystemsManager-AutomationExecutionRole
   ```

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
       $Targets.Key = "ResourceGroup"
       $Targets.Values = "WebServices"
       
       Start-SSMAutomationExecution `
           -DocumentName "AWS-RestartEC2Instance" `
           -Parameter @{
           "AutomationAssumeRole"="arn:aws:iam::123456789012:role/AWS-SystemsManager-AutomationAdministrationRole" } `
           -TargetParameterName "InstanceId" `
           -Target $Targets `
           -TargetLocation @{
           "Accounts"="ou-1a2b3c-4d5e6c";
           "Regions"="us-west-1";
           "ExecutionRoleName"="AWS-SystemsManager-AutomationExecutionRole" }
   ```

------

   The system returns information similar to the following.

------
#### [ Linux & macOS ]

   ```
    {
        "AutomationExecutionId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE"
    }
   ```

------
#### [ Windows ]

   ```
    {
        "AutomationExecutionId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE"
    }
   ```

------
#### [ PowerShell ]

   ```
   4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE
   ```

------

1. Run the following command to view details for the automation. Replace *automation execution ID* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-automation-executions \
           --filters Key=ExecutionId,Values=automation execution ID
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-automation-executions ^
           --filters Key=ExecutionId,Values=automation execution ID
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAutomationExecutionList | `
           Where {$_.AutomationExecutionId -eq "automation execution ID"}
   ```

------

1. Run the following command to view details about the automation progress.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-automation-execution \
           --automation-execution-id 4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE
   ```

------
#### [ Windows ]

   ```
   aws ssm get-automation-execution ^
           --automation-execution-id 4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAutomationExecution `
            -AutomationExecutionId 4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE
   ```

------
**Note**  
You can also monitor the status of the automation in the console. In the **Automation executions** list, choose the automation you just ran and then choose the **Execution steps** tab. This tab shows the status of the automation actions.

**More info**  
[Centralized multi-account and multi-Region patching with AWS Systems Manager Automation](https://aws.amazon.com/blogs/mt/centralized-multi-account-and-multi-region-patching-with-aws-systems-manager-automation/)

# Run automations based on EventBridge events


You can start an automation by specifying a runbook as the target of an Amazon EventBridge event. You can start automations according to a schedule, or when a specific AWS system event occurs. For example, let's say you create a runbook named *BootStrapInstances* that installs software on an instance when an instance starts. To specify the *BootStrapInstances* runbook (and corresponding automation) as a target of an EventBridge event, you first create a new EventBridge rule. (Here's an example rule: **Service name**: EC2, **Event Type**: EC2 Instance State-change Notification, **Specific state(s)**: running, **Any instance**.) Then you use the following procedures to specify the *BootStrapInstances* runbook as the target of the event using the EventBridge console and AWS Command Line Interface (AWS CLI). When a new instance starts, the system runs the automation and installs software.

For information about creating runbooks, see [Creating your own runbooks](automation-documents.md).
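As a sketch, the example rule described above corresponds to an event pattern like the following. The field values follow the EC2 state-change event schema and are shown here only to illustrate what the console rule matches:

```python
import json

# Event pattern matching the example rule: EC2 instance state-change
# notifications for any instance entering the "running" state.
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["running"]},
}
print(json.dumps(pattern, indent=2))
```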

## Creating an EventBridge event that uses a runbook (console)


Use the following procedure to configure a runbook as the target of an EventBridge event.

**To configure a runbook as a target of an EventBridge event rule**

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**.

1. Choose **Create rule**.

1. Enter a name and description for the rule.

   A rule can't have the same name as another rule in the same Region and on the same event bus.

1. For **Event bus**, choose the event bus that you want to associate with this rule. If you want this rule to respond to matching events that come from your own AWS account, select **default**. When an AWS service in your account emits an event, it always goes to your account’s default event bus.

1. Choose how the rule is triggered.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/running-automations-event-bridge.html)

1. Choose **Next**.

1. For **Target types**, choose **AWS service**.

1. For **Select a target**, choose **Systems Manager Automation**. 

1. For **Document**, choose a runbook to use when your target is invoked.

1. In the **Configure automation parameter(s)** section, either keep the default parameter values (if available) or enter your own values. 
**Note**  
To create a target, you must specify a value for each required parameter. If you don't, the system creates the rule, but the rule won't run.

1. For many target types, EventBridge needs permissions to send events to the target. In these cases, EventBridge can create the IAM role needed for your rule to run. Do one of the following:
   + To create an IAM role automatically, choose **Create a new role for this specific resource**.
   + To use an IAM role that you created earlier, choose **Use existing role** and select the existing role from the dropdown. Note that you might need to update the trust policy for your IAM role to include EventBridge. The following is an example:

------
#### [ JSON ]

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "",
               "Effect": "Allow",
               "Principal": {
                   "Service": [
                       "events.amazonaws.com",
                       "ssm.amazonaws.com"
                   ]
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```

------
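
If you manage the role from the AWS CLI instead, a sketch like the following saves the policy above to a file and applies it to an existing role. The role name is a placeholder, and the `update-assume-role-policy` call requires AWS credentials and an existing role.

```shell
# Save the trust policy shown above to a file.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": ["events.amazonaws.com", "ssm.amazonaws.com"]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Confirm the policy is well-formed JSON.
python3 -m json.tool trust-policy.json > /dev/null && echo "policy OK"

# Apply it to an existing role (role name is a placeholder; requires credentials):
# aws iam update-assume-role-policy --role-name MyEventBridgeRole \
#     --policy-document file://trust-policy.json
```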

1. Choose **Next**.

1. (Optional) Enter one or more tags for the rule. For more information, see [Tagging Your Amazon EventBridge Resources](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-tagging.html) in the *Amazon EventBridge User Guide*.

1. Choose **Next**.

1. Review the details of the rule and choose **Create rule**.

## Creating an EventBridge event that uses a runbook (command line)


The following procedure describes how to use the AWS CLI (on Linux, macOS, or Windows) or AWS Tools for PowerShell to create an EventBridge event rule and configure a runbook as the target.

**To configure a runbook as a target of an EventBridge event rule**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Create a command to specify a new EventBridge event rule. Replace each *example resource placeholder* with your own information.

   *Triggers based on a schedule*

------
#### [ Linux & macOS ]

   ```
   aws events put-rule \
   --name "rule name" \
   --schedule-expression "cron or rate expression"
   ```

------
#### [ Windows ]

   ```
   aws events put-rule ^
   --name "rule name" ^
   --schedule-expression "cron or rate expression"
   ```

------
#### [ PowerShell ]

   ```
   Write-CWERule `
   -Name "rule name" `
   -ScheduleExpression "cron or rate expression"
   ```

------

   The following example creates an EventBridge event rule that runs every day at 9:00 AM (UTC).

------
#### [ Linux & macOS ]

   ```
   aws events put-rule \
   --name "DailyAutomationRule" \
   --schedule-expression "cron(0 9 * * ? *)"
   ```

------
#### [ Windows ]

   ```
   aws events put-rule ^
   --name "DailyAutomationRule" ^
   --schedule-expression "cron(0 9 * * ? *)"
   ```

------
#### [ PowerShell ]

   ```
   Write-CWERule `
   -Name "DailyAutomationRule" `
   -ScheduleExpression "cron(0 9 * * ? *)"
   ```

------

   *Triggers based on an event*

------
#### [ Linux & macOS ]

   ```
   aws events put-rule \
   --name "rule name" \
   --event-pattern "{\"source\":[\"aws.service\"],\"detail-type\":[\"service event detail type\"]}"
   ```

------
#### [ Windows ]

   ```
   aws events put-rule ^
   --name "rule name" ^
   --event-pattern "{\"source\":[\"aws.service\"],\"detail-type\":[\"service event detail type\"]}"
   ```

------
#### [ PowerShell ]

   ```
   Write-CWERule `
   -Name "rule name" `
   -EventPattern '{"source":["aws.service"],"detail-type":["service event detail type"]}'
   ```

------

   The following example creates an EventBridge event rule that runs when any EC2 instance in the Region changes state.

------
#### [ Linux & macOS ]

   ```
   aws events put-rule \
   --name "EC2InstanceStateChanges" \
   --event-pattern "{\"source\":[\"aws.ec2\"],\"detail-type\":[\"EC2 Instance State-change Notification\"]}"
   ```

------
#### [ Windows ]

   ```
   aws events put-rule ^
   --name "EC2InstanceStateChanges" ^
   --event-pattern "{\"source\":[\"aws.ec2\"],\"detail-type\":[\"EC2 Instance State-change Notification\"]}"
   ```

------
#### [ PowerShell ]

   ```
   Write-CWERule `
   -Name "EC2InstanceStateChanges" `
   -EventPattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"]}'
   ```

------

   The command returns details for the new EventBridge rule similar to the following.

------
#### [ Linux & macOS ]

   ```
   {
   "RuleArn": "arn:aws:events:us-east-1:123456789012:rule/automationrule"
   }
   ```

------
#### [ Windows ]

   ```
   {
   "RuleArn": "arn:aws:events:us-east-1:123456789012:rule/automationrule"
   }
   ```

------
#### [ PowerShell ]

   ```
   arn:aws:events:us-east-1:123456789012:rule/EC2InstanceStateChanges
   ```

------

1. Create a command to specify a runbook as a target of the EventBridge event rule you created in step 2. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws events put-targets \
   --rule rule name \
   --targets '{"Arn": "arn:aws:ssm:region:account ID:automation-definition/runbook name","Input":"{\"Message\":[\"{\\\"Key\\\":\\\"key name\\\",\\\"Values\\\":[\\\"value\\\"]}\"]}","Id": "target ID","RoleArn": "arn:aws:iam::123456789012:role/service-role/EventBridge service role"}'
   ```

------
#### [ Windows ]

   ```
   aws events put-targets ^
   --rule rule name ^
   --targets '{"Arn": "arn:aws:ssm:region:account ID:automation-definition/runbook name","Input":"{\"Message\":[\"{\\\"Key\\\":\\\"key name\\\",\\\"Values\\\":[\\\"value\\\"]}\"]}","Id": "target ID","RoleArn": "arn:aws:iam::123456789012:role/service-role/EventBridge service role"}'
   ```

------
#### [ PowerShell ]

   ```
   $Target = New-Object Amazon.CloudWatchEvents.Model.Target
   $Target.Id = "target ID"
   $Target.Arn = "arn:aws:ssm:region:account ID:automation-definition/runbook name"
   $Target.RoleArn = "arn:aws:iam::123456789012:role/service-role/EventBridge service role"
   $Target.Input = '{"input parameter":["value"],"AutomationAssumeRole":["arn:aws:iam::123456789012:role/AutomationServiceRole"]}'
   
   Write-CWETarget `
   -Rule "rule name" `
   -Target $Target
   ```

------

   The following example creates an EventBridge event target that starts the instance with the specified ID using the runbook `AWS-StartEC2Instance`.

------
#### [ Linux & macOS ]

   ```
   aws events put-targets \
   --rule DailyAutomationRule \
   --targets '{"Arn": "arn:aws:ssm:region:*:automation-definition/AWS-StartEC2Instance","Input":"{\"InstanceId\":[\"i-02573cafcfEXAMPLE\"],\"AutomationAssumeRole\":[\"arn:aws:iam::123456789012:role/AutomationServiceRole\"]}","Id": "Target1","RoleArn": "arn:aws:iam::123456789012:role/service-role/AWS_Events_Invoke_Start_Automation_Execution_1213609520"}'
   ```

------
#### [ Windows ]

   ```
   aws events put-targets ^
   --rule DailyAutomationRule ^
   --targets '{"Arn": "arn:aws:ssm:region:*:automation-definition/AWS-StartEC2Instance","Input":"{\"InstanceId\":[\"i-02573cafcfEXAMPLE\"],\"AutomationAssumeRole\":[\"arn:aws:iam::123456789012:role/AutomationServiceRole\"]}","Id": "Target1","RoleArn": "arn:aws:iam::123456789012:role/service-role/AWS_Events_Invoke_Start_Automation_Execution_1213609520"}'
   ```

------
#### [ PowerShell ]

   ```
   $Target = New-Object Amazon.CloudWatchEvents.Model.Target
   $Target.Id = "Target1"
   $Target.Arn = "arn:aws:ssm:region:*:automation-definition/AWS-StartEC2Instance"
   $Target.RoleArn = "arn:aws:iam::123456789012:role/service-role/AWS_Events_Invoke_Start_Automation_Execution_1213609520"
   $Target.Input = '{"InstanceId":["i-02573cafcfEXAMPLE"],"AutomationAssumeRole":["arn:aws:iam::123456789012:role/AutomationServiceRole"]}'
   
   Write-CWETarget `
   -Rule "DailyAutomationRule" `
   -Target $Target
   ```

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
   "FailedEntries": [],
   "FailedEntryCount": 0
   }
   ```

------
#### [ Windows ]

   ```
   {
   "FailedEntries": [],
   "FailedEntryCount": 0
   }
   ```

------
#### [ PowerShell ]

   For PowerShell, there is no output if the command succeeds.

------
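
As an alternative to the triple-escaped `Input` string in the examples above, you can keep the target definition in a JSON file and pass it with the `file://` prefix. The ARNs and role name below are placeholders, and the `put-targets` call itself requires AWS credentials.

```shell
# Target definition for put-targets, kept in a file. Note that "Input" is a
# JSON *string*, so its inner quotes still need one level of escaping.
cat > targets.json <<'EOF'
[
  {
    "Id": "Target1",
    "Arn": "arn:aws:ssm:us-east-1:123456789012:automation-definition/AWS-StartEC2Instance",
    "RoleArn": "arn:aws:iam::123456789012:role/service-role/EventBridgeAutomationRole",
    "Input": "{\"InstanceId\":[\"i-02573cafcfEXAMPLE\"],\"AutomationAssumeRole\":[\"arn:aws:iam::123456789012:role/AutomationServiceRole\"]}"
  }
]
EOF

# Confirm the file is well-formed JSON.
python3 -m json.tool targets.json > /dev/null && echo "targets OK"

# Attach the target to the rule (requires AWS credentials):
# aws events put-targets --rule DailyAutomationRule --targets file://targets.json
```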

# Run an automation step by step


The following procedures describe how to use the AWS Systems Manager console and AWS Command Line Interface (AWS CLI) to run an automation in the manual execution mode. In manual execution mode, the automation starts in a *Waiting* status and pauses in that status between each step. This allows you to control when the automation proceeds, which is useful if you need to review the result of a step before continuing.

The automation runs in the context of the current user. This means that you don't need to configure additional IAM permissions as long as you have permission to use the runbook, and any actions called by the runbook. If you have administrator permissions in IAM, then you already have permission to run this automation.

## Running an automation step by step (console)


The following procedure shows how to use the Systems Manager console to manually run an automation step by step.

**To run an automation step by step**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**, and then choose **Execute automation**.

1. In the **Automation document** list, choose a runbook. Choose one or more options in the **Document categories** pane to filter SSM documents according to their purpose. To view a runbook that you own, choose the **Owned by me** tab. To view a runbook that is shared with your account, choose the **Shared with me** tab. To view all runbooks, choose the **All documents** tab.
**Note**  
You can view information about a runbook by choosing the runbook name.

1. In the **Document details** section, verify that **Document version** is set to the version that you want to run. The system includes the following version options: 
   + **Default version at runtime** – Choose this option if the Automation runbook is updated periodically and a new default version is assigned.
   + **Latest version at runtime** – Choose this option if the Automation runbook is updated periodically, and you want to run the version that was most recently updated.
   + **1 (Default)** – Choose this option to run the first version of the document, which is the default.

1. Choose **Next**.

1. In the **Execution Mode** section, choose **Manual execution**.

1. In the **Input parameters** section, specify the required inputs. Optionally, you can choose an IAM service role from the **AutomationAssumeRole** list.

1. Choose **Execute**. 

1. Choose **Execute this step** when you're ready to start the first step of the automation. The automation proceeds with step one and pauses before running any subsequent steps specified in the runbook you chose in step 3 of this procedure. If the runbook has multiple steps, you must choose **Execute this step** for each step for the automation to proceed. Each time you choose **Execute this step**, the action runs.
**Note**  
The console displays the status of the automation. If the automation fails to run a step, see [Troubleshooting Systems Manager Automation](automation-troubleshooting.md).

1. After you complete all steps specified in the runbook, choose **Complete and view results** to finish the automation and view the results.

After an automation execution completes, you can rerun the execution with the same or modified parameters. For more information, see [Rerunning automation executions](automation-rerun-executions.md).

## Running an automation step by step (command line)


The following procedure describes how to use the AWS CLI (on Linux, macOS, or Windows) or AWS Tools for PowerShell to manually run an automation step by step.

**To run an automation step by step**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Run the following command to start a manual automation. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
       --document-name runbook name \
       --mode Interactive \
       --parameters runbook parameters
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
       --document-name runbook name ^
       --mode Interactive ^
       --parameters runbook parameters
   ```

------
#### [ PowerShell ]

   ```
   Start-SSMAutomationExecution `
       -DocumentName runbook name `
       -Mode Interactive `
       -Parameter runbook parameters
   ```

------

   Here is an example using the runbook `AWS-RestartEC2Instance` to restart the specified EC2 instance.

------
#### [ Linux & macOS ]

   ```
   aws ssm start-automation-execution \
       --document-name "AWS-RestartEC2Instance" \
       --mode Interactive \
       --parameters "InstanceId=i-02573cafcfEXAMPLE"
   ```

------
#### [ Windows ]

   ```
   aws ssm start-automation-execution ^
       --document-name "AWS-RestartEC2Instance" ^
       --mode Interactive ^
       --parameters "InstanceId=i-02573cafcfEXAMPLE"
   ```

------
#### [ PowerShell ]

   ```
   Start-SSMAutomationExecution `
       -DocumentName AWS-RestartEC2Instance `
       -Mode Interactive `
       -Parameter @{"InstanceId"="i-02573cafcfEXAMPLE"}
   ```

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
       "AutomationExecutionId": "ba9cd881-1b36-4d31-a698-0123456789ab"
   }
   ```

------
#### [ Windows ]

   ```
   {
       "AutomationExecutionId": "ba9cd881-1b36-4d31-a698-0123456789ab"
   }
   ```

------
#### [ PowerShell ]

   ```
   ba9cd881-1b36-4d31-a698-0123456789ab
   ```

------

1. Run the following command when you're ready to start the first step of the automation. Replace each *example resource placeholder* with your own information. The automation proceeds with step one and pauses before running any subsequent steps specified in the runbook you chose in step 2 of this procedure. If the runbook has multiple steps, you must run the following command for each step for the automation to proceed.

------
#### [ Linux & macOS ]

   ```
   aws ssm send-automation-signal \
       --automation-execution-id ba9cd881-1b36-4d31-a698-0123456789ab \
       --signal-type StartStep \
       --payload StepName="stopInstances"
   ```

------
#### [ Windows ]

   ```
   aws ssm send-automation-signal ^
       --automation-execution-id ba9cd881-1b36-4d31-a698-0123456789ab ^
       --signal-type StartStep ^
       --payload StepName="stopInstances"
   ```

------
#### [ PowerShell ]

   ```
   Send-SSMAutomationSignal `
       -AutomationExecutionId ba9cd881-1b36-4d31-a698-0123456789ab `
       -SignalType StartStep `
       -Payload @{"StepName"="stopInstances"}
   ```

------

   There is no output if the command succeeds.

1. Run the following command to retrieve the status of each step execution in the automation.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-automation-step-executions \
       --automation-execution-id ba9cd881-1b36-4d31-a698-0123456789ab
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-automation-step-executions ^
       --automation-execution-id ba9cd881-1b36-4d31-a698-0123456789ab
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMAutomationStepExecution `
       -AutomationExecutionId ba9cd881-1b36-4d31-a698-0123456789ab
   ```

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
       "StepExecutions": [
           {
               "StepName": "stopInstances",
               "Action": "aws:changeInstanceState",
               "ExecutionStartTime": 1557167178.42,
               "ExecutionEndTime": 1557167220.617,
               "StepStatus": "Success",
               "Inputs": {
                   "DesiredState": "\"stopped\"",
                   "InstanceIds": "[\"i-02573cafcfEXAMPLE\"]"
               },
               "Outputs": {
                   "InstanceStates": [
                       "stopped"
                   ]
               },
               "StepExecutionId": "654243ba-71e3-4771-b04f-0123456789ab",
               "OverriddenParameters": {},
               "ValidNextSteps": [
                   "startInstances"
               ]
           },
           {
               "StepName": "startInstances",
               "Action": "aws:changeInstanceState",
               "ExecutionStartTime": 1557167273.754,
               "ExecutionEndTime": 1557167480.73,
               "StepStatus": "Success",
               "Inputs": {
                   "DesiredState": "\"running\"",
                   "InstanceIds": "[\"i-02573cafcfEXAMPLE\"]"
               },
               "Outputs": {
                   "InstanceStates": [
                       "running"
                   ]
               },
               "StepExecutionId": "8a4a1e0d-dc3e-4039-a599-0123456789ab",
               "OverriddenParameters": {}
           }
       ]
   }
   ```

------
#### [ Windows ]

   ```
   {
       "StepExecutions": [
           {
               "StepName": "stopInstances",
               "Action": "aws:changeInstanceState",
               "ExecutionStartTime": 1557167178.42,
               "ExecutionEndTime": 1557167220.617,
               "StepStatus": "Success",
               "Inputs": {
                   "DesiredState": "\"stopped\"",
                   "InstanceIds": "[\"i-02573cafcfEXAMPLE\"]"
               },
               "Outputs": {
                   "InstanceStates": [
                       "stopped"
                   ]
               },
               "StepExecutionId": "654243ba-71e3-4771-b04f-0123456789ab",
               "OverriddenParameters": {},
               "ValidNextSteps": [
                   "startInstances"
               ]
           },
           {
               "StepName": "startInstances",
               "Action": "aws:changeInstanceState",
               "ExecutionStartTime": 1557167273.754,
               "ExecutionEndTime": 1557167480.73,
               "StepStatus": "Success",
               "Inputs": {
                   "DesiredState": "\"running\"",
                   "InstanceIds": "[\"i-02573cafcfEXAMPLE\"]"
               },
               "Outputs": {
                   "InstanceStates": [
                       "running"
                   ]
               },
               "StepExecutionId": "8a4a1e0d-dc3e-4039-a599-0123456789ab",
               "OverriddenParameters": {}
           }
       ]
   }
   ```

------
#### [ PowerShell ]

   ```
   Action               : aws:changeInstanceState
   ExecutionEndTime     : 5/6/2019 19:45:46
   ExecutionStartTime   : 5/6/2019 19:45:03
   FailureDetails       : 
   FailureMessage       : 
   Inputs               : {[DesiredState, "stopped"], [InstanceIds, ["i-02573cafcfEXAMPLE"]]}
   IsCritical           : False
   IsEnd                : False
   MaxAttempts          : 0
   NextStep             : 
   OnFailure            : 
   Outputs              : {[InstanceStates, Amazon.Runtime.Internal.Util.AlwaysSendList`1[System.String]]}
   OverriddenParameters : {}
   Response             : 
   ResponseCode         : 
   StepExecutionId      : 8fcc9641-24b7-40b3-a9be-0123456789ab
   StepName             : stopInstances
   StepStatus           : Success
   TimeoutSeconds       : 0
   ValidNextSteps       : {startInstances}
   ```

------

1. Run the following command to complete the automation after all steps specified within the chosen runbook have finished. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm stop-automation-execution \
       --automation-execution-id ba9cd881-1b36-4d31-a698-0123456789ab \
       --type Complete
   ```

------
#### [ Windows ]

   ```
   aws ssm stop-automation-execution ^
       --automation-execution-id ba9cd881-1b36-4d31-a698-0123456789ab ^
       --type Complete
   ```

------
#### [ PowerShell ]

   ```
   Stop-SSMAutomationExecution `
       -AutomationExecutionId ba9cd881-1b36-4d31-a698-0123456789ab `
       -Type Complete
   ```

------

   There is no output if the command succeeds.
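
The command-line steps above can be scripted as a loop. This is a sketch with a `DRY_RUN` guard so you can preview the commands before running them against your account; the execution ID and step names match the `AWS-RestartEC2Instance` example.

```shell
# Drive a two-step interactive automation: signal each step, then complete.
DRY_RUN=1                                      # set to 0 to actually call AWS
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

EXEC_ID="ba9cd881-1b36-4d31-a698-0123456789ab" # from start-automation-execution

for step in stopInstances startInstances; do
    run aws ssm send-automation-signal \
        --automation-execution-id "$EXEC_ID" \
        --signal-type StartStep \
        --payload StepName="$step"
done

run aws ssm stop-automation-execution \
    --automation-execution-id "$EXEC_ID" \
    --type Complete
```

In a real script, you would also poll `describe-automation-step-executions` between signals and wait for each step's `StepStatus` to reach `Success` before signaling the next step.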

# Scheduling automations with State Manager associations


You can start an automation by creating a State Manager association with a runbook. State Manager is a tool in AWS Systems Manager. An association that uses a runbook can target different types of AWS resources and enforce a desired state on them. For example, you can create associations that do the following:
+ Attach a Systems Manager role to Amazon Elastic Compute Cloud (Amazon EC2) instances to make them *managed instances*.
+ Enforce desired ingress and egress rules for a security group.
+ Create or delete Amazon DynamoDB backups.
+ Create or delete Amazon Elastic Block Store (Amazon EBS) snapshots.
+ Turn off read and write permissions on Amazon Simple Storage Service (Amazon S3) buckets.
+ Start, restart, or stop managed instances and Amazon Relational Database Service (Amazon RDS) instances.
+ Apply patches to Linux, macOS, and Windows AMIs.

Use the following procedures to create a State Manager association that runs an automation using the AWS Systems Manager console and AWS Command Line Interface (AWS CLI). For general information about associations and information about creating an association that uses an SSM `Command` document or `Policy` document, see [Creating associations](state-manager-associations-creating.md).

**Before you begin**  
Be aware of the following important details before you run an automation by using State Manager:
+ Before you can create an association that uses a runbook, verify that you configured permissions for Automation, a tool in AWS Systems Manager. For more information, see [Setting up Automation](automation-setup.md).
+ State Manager associations that use runbooks contribute to the maximum number of concurrently running automations in your AWS account. You can have a maximum of 100 concurrent automations running. For information, see [Systems Manager service quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#limits_ssm) in the *Amazon Web Services General Reference*.
+ When running an automation, State Manager does not log the API operations initiated by the automation in AWS CloudTrail.
+ Systems Manager automatically creates a service-linked role so that State Manager has permission to call Systems Manager Automation API operations. If you want, you can create the service-linked role yourself by running the following command from the AWS CLI or AWS Tools for PowerShell.

------
#### [ Linux & macOS ]

  ```
  aws iam create-service-linked-role \
  --aws-service-name ssm.amazonaws.com
  ```

------
#### [ Windows ]

  ```
  aws iam create-service-linked-role ^
  --aws-service-name ssm.amazonaws.com
  ```

------
#### [ PowerShell ]

  ```
  New-IAMServiceLinkedRole `
  -AWSServiceName ssm.amazonaws.com
  ```

------

  For more information about service-linked roles, see [Using service-linked roles for Systems Manager](using-service-linked-roles.md).

## Creating an association that runs an automation (console)


The following procedure describes how to use the Systems Manager console to create a State Manager association that runs an automation.

**To create a State Manager association that runs an automation**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **State Manager**, and then choose **Create association**.

1. In the **Name** field, specify a name. This is optional, but recommended.

1. In the **Document** list, choose a runbook. Use the search bar to filter the list by **Document type : Equal : Automation**. To view more runbooks, use the numbers to the right of the search bar. 
**Note**  
You can view information about a runbook by choosing the runbook name.

1. Choose **Simple execution** to run the automation on one or more targets by specifying the resource ID for those targets. Choose **Rate control** to run the automation across a fleet of AWS resources by specifying a targeting option such as tags or AWS Resource Groups. You can also control the operation of the automation across your resources by specifying concurrency and error thresholds.

   If you chose **Rate control**, the **Targets** section is displayed.

1. In the **Targets** section, choose a method for targeting resources.

   1. (Required) In the **Parameter** list, choose a parameter. The items in the **Parameter** list are determined by the parameters in the runbook that you selected at the start of this procedure. By choosing a parameter, you define the type of resource on which the automation runs. 

   1. (Required) In the **Targets** list, choose a method for targeting the resources.
      + **Resource Group**: Choose the name of the group from the **Resource Group** list. For more information about targeting AWS Resource Groups in runbooks, see [Targeting AWS Resource Groups](running-automations-map-targets.md#target-resource-groups).
      + **Tags**: Enter the tag key and (optionally) the tag value in the fields provided. Choose **Add**. For more information about targeting tags in runbooks, see [Targeting a tag](running-automations-map-targets.md#target-tags).
      + **Parameter Values**: Enter values in the **Input parameters** section. If you specify multiple values, Systems Manager runs a child automation on each value specified.

        For example, say that your runbook includes an **InstanceID** parameter. If you target the values of the **InstanceID** parameter when you run the automation, then Systems Manager runs a child automation for each instance ID value specified. The parent automation is complete when the automation finishes running on each specified instance, or if the automation fails. You can target a maximum of 50 parameter values. For more information about targeting parameter values in runbooks, see [Targeting parameter values](running-automations-map-targets.md#target-parameter-values).

1. In the **Input parameters** section, specify the required input parameters.

   If you chose to target resources by using tags or a resource group, then you might not need to choose some of the options in the **Input parameters** section. For example, if you chose the `AWS-RestartEC2Instance` runbook, and you chose to target instances by using tags, then you don't need to specify or choose instance IDs in the **Input parameters** section. The automation locates the instances to restart by using the tags you specified. 
**Important**  
You must specify a role ARN in the **AutomationAssumeRole** field. State Manager uses this role to call the AWS services specified in the runbook and run the automation on your behalf.

1. In the **Specify schedule** section, choose **On Schedule** if you want to run the association at regular intervals. If you choose this option, then use the options provided to create the schedule using Cron or Rate expressions. For more information about Cron and Rate expressions for State Manager, see [Cron and rate expressions for associations](reference-cron-and-rate-expressions.md#reference-cron-and-rate-expressions-association). 
**Note**  
Rate expressions are the preferred scheduling mechanism for State Manager associations that use runbooks. Rate expressions allow more flexibility for running associations in the event that you reach the maximum number of concurrently running automations. With a rate schedule, Systems Manager can retry the automation shortly after receiving notification that concurrent automations have reached their maximum and have been throttled.

   Choose **No schedule** if you want to run the association one time. 

1. (Optional) In the **Rate Control** section, choose **Concurrency** and **Error threshold** options to control the automation deployment across your AWS resources.

   1. In the **Concurrency** section, choose an option: 
      + Choose **targets** to enter an absolute number of targets that can run the automation simultaneously.
      + Choose **percentage** to enter a percentage of the target set that can run the automation simultaneously.

   1. In the **Error threshold** section, choose an option:
      + Choose **errors** to enter an absolute number of errors allowed before Automation stops sending the automation to other resources.
      + Choose **percentage** to enter a percentage of errors allowed before Automation stops sending the automation to other resources.

   For more information about using targets and rate controls with Automation, see [Run automated operations at scale](running-automations-scale.md).

1. Choose **Create Association**. 
**Important**  
When you create an association, the association immediately runs against the specified targets. The association then runs based on the cron or rate expression you chose. If you chose **No schedule**, the association doesn't run again.

## Creating an association that runs an automation (command line)


The following procedure describes how to use the AWS CLI (on Linux, macOS, or Windows) or AWS Tools for PowerShell to create a State Manager association that runs an automation.

**Before you begin**  
Before you complete the following procedure, make sure you have created an IAM service role that contains the permissions necessary to run the runbook, and configured a trust relationship for Automation, a tool in AWS Systems Manager. For more information, see [Task 1: Create a service role for Automation](automation-setup-iam.md#create-service-role).

**To create an association that runs an automation**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Run the following command to view a list of documents.

------
#### [ Linux & macOS ]

   ```
   aws ssm list-documents
   ```

------
#### [ Windows ]

   ```
   aws ssm list-documents
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMDocumentList
   ```

------

   Note the name of the runbook that you want to use for the association.

1. Run the following command to view details about the runbook. In the following command, replace *runbook name* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-document \
   --name runbook name
   ```

   Note a parameter name (for example, `InstanceId`) that you want to use for the `--automation-target-parameter-name` option. This parameter determines the type of resource on which the automation runs.

------
#### [ Windows ]

   ```
   aws ssm describe-document ^
   --name runbook name
   ```

   Note a parameter name (for example, `InstanceId`) that you want to use for the `--automation-target-parameter-name` option. This parameter determines the type of resource on which the automation runs.

------
#### [ PowerShell ]

   ```
   Get-SSMDocumentDescription `
   -Name runbook name
   ```

   Note a parameter name (for example, `InstanceId`) that you want to use for the `AutomationTargetParameterName` option. This parameter determines the type of resource on which the automation runs.

------

1. Create a command that runs an automation using a State Manager association. Replace each *example resource placeholder* with your own information.

   *Targeting using tags*

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --association-name association name \
   --targets Key=tag:key name,Values=value \
   --name runbook name \
   --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/RunbookAssumeRole \
   --automation-target-parameter-name target parameter \
   --schedule "cron or rate expression"
   ```

**Note**  
If you create an association by using the AWS CLI, use the `--targets` parameter to target instances for the association. Don't use the `--instance-id` parameter. The `--instance-id` parameter is a legacy parameter. 

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --association-name association name ^
   --targets Key=tag:key name,Values=value ^
   --name runbook name ^
   --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/RunbookAssumeRole ^
   --automation-target-parameter-name target parameter ^
   --schedule "cron or rate expression"
   ```

**Note**  
If you create an association by using the AWS CLI, use the `--targets` parameter to target instances for the association. Don't use the `--instance-id` parameter. The `--instance-id` parameter is a legacy parameter. 

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
   $Targets.Key = "tag:key name"
   $Targets.Values = "value"
   
   New-SSMAssociation `
   -AssociationName "association name" `
   -Target $Targets `
   -Name "runbook name" `
   -Parameters @{
   "AutomationAssumeRole"="arn:aws:iam::123456789012:role/RunbookAssumeRole" } `
   -AutomationTargetParameterName "target parameter" `
   -ScheduleExpression "cron or rate expression"
   ```

**Note**  
If you create an association by using the AWS Tools for PowerShell, use the `Target` parameter to target instances for the association. Don't use the `InstanceId` parameter. The `InstanceId` parameter is a legacy parameter. 

------

   *Targeting using parameter values*

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --association-name association name \
   --targets Key=ParameterValues,Values=value1,value2,value3 \
   --name runbook name \
   --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/RunbookAssumeRole \
   --automation-target-parameter-name target parameter \
   --schedule "cron or rate expression"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --association-name association name ^
   --targets Key=ParameterValues,Values=value1,value2,value3 ^
   --name runbook name ^
   --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/RunbookAssumeRole ^
   --automation-target-parameter-name target parameter ^
   --schedule "cron or rate expression"
   ```

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
   $Targets.Key = "ParameterValues"
   $Targets.Values = "value1","value2","value3"
   
   New-SSMAssociation `
   -AssociationName "association name" `
   -Target $Targets `
   -Name "runbook name" `
   -Parameters @{
   "AutomationAssumeRole"="arn:aws:iam::123456789012:role/RunbookAssumeRole"} `
   -AutomationTargetParameterName "target parameter" `
   -ScheduleExpression "cron or rate expression"
   ```

------

   *Targeting using AWS Resource Groups*

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --association-name association name \
   --targets Key=ResourceGroup,Values=resource group name \
   --name runbook name \
   --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/RunbookAssumeRole \
   --automation-target-parameter-name target parameter \
   --schedule "cron or rate expression"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --association-name association name ^
   --targets Key=ResourceGroup,Values=resource group name ^
   --name runbook name ^
   --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/RunbookAssumeRole ^
   --automation-target-parameter-name target parameter ^
   --schedule "cron or rate expression"
   ```

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
   $Targets.Key = "ResourceGroup"
   $Targets.Values = "resource group name"
   
   New-SSMAssociation `
   -AssociationName "association name" `
   -Target $Targets `
   -Name "runbook name" `
   -Parameters @{
   "AutomationAssumeRole"="arn:aws:iam::123456789012:role/RunbookAssumeRole"} `
   -AutomationTargetParameterName "target parameter" `
   -ScheduleExpression "cron or rate expression"
   ```

------

   *Targeting multiple accounts and Regions*

------
#### [ Linux & macOS ]

   ```
   aws ssm create-association \
   --association-name association name \
   --targets Key=ResourceGroup,Values=resource group name \
   --name runbook name \
   --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/RunbookAssumeRole \
   --automation-target-parameter-name target parameter \
   --schedule "cron or rate expression" \
   --target-locations Accounts=111122223333,444455556666,Regions=region,region
   ```

------
#### [ Windows ]

   ```
   aws ssm create-association ^
   --association-name association name ^
   --targets Key=ResourceGroup,Values=resource group name ^
   --name runbook name ^
   --parameters AutomationAssumeRole=arn:aws:iam::123456789012:role/RunbookAssumeRole ^
   --automation-target-parameter-name target parameter ^
   --schedule "cron or rate expression" ^
   --target-locations Accounts=111122223333,444455556666,Regions=region,region
   ```

------
#### [ PowerShell ]

   ```
   $Targets = New-Object Amazon.SimpleSystemsManagement.Model.Target
   $Targets.Key = "ResourceGroup"
   $Targets.Values = "resource group name"
   
   New-SSMAssociation `
   -AssociationName "association name" `
   -Target $Targets `
   -Name "runbook name" `
   -Parameters @{
   "AutomationAssumeRole"="arn:aws:iam::123456789012:role/RunbookAssumeRole"} `
   -AutomationTargetParameterName "target parameter" `
   -ScheduleExpression "cron or rate expression" `
   -TargetLocations @{
       "Accounts"=@("111122223333","444455556666")
       "Regions"=@("region")
   }
   ```

------

   The command returns details for the new association similar to the following.

------
#### [ Linux & macOS ]

   ```
   {
   "AssociationDescription": {
       "ScheduleExpression": "cron(0 7 ? * MON *)",
       "Name": "AWS-StartEC2Instance",
       "Parameters": {
           "AutomationAssumeRole": [
               "arn:aws:iam::123456789012:role/RunbookAssumeRole"
           ]
       },
       "Overview": {
           "Status": "Pending",
           "DetailedStatus": "Creating"
       },
       "AssociationId": "1450b4b7-bea2-4e4b-b340-01234EXAMPLE",
       "DocumentVersion": "$DEFAULT",
       "AutomationTargetParameterName": "InstanceId",
       "LastUpdateAssociationDate": 1564686638.498,
       "Date": 1564686638.498,
       "AssociationVersion": "1",
       "AssociationName": "CLI",
       "Targets": [
           {
               "Values": [
                   "DEV"
               ],
               "Key": "tag:ENV"
           }
       ]
   }
   }
   ```

------
#### [ Windows ]

   ```
   {
   "AssociationDescription": {
       "ScheduleExpression": "cron(0 7 ? * MON *)",
       "Name": "AWS-StartEC2Instance",
       "Parameters": {
           "AutomationAssumeRole": [
               "arn:aws:iam::123456789012:role/RunbookAssumeRole"
           ]
       },
       "Overview": {
           "Status": "Pending",
           "DetailedStatus": "Creating"
       },
       "AssociationId": "1450b4b7-bea2-4e4b-b340-01234EXAMPLE",
       "DocumentVersion": "$DEFAULT",
       "AutomationTargetParameterName": "InstanceId",
       "LastUpdateAssociationDate": 1564686638.498,
       "Date": 1564686638.498,
       "AssociationVersion": "1",
       "AssociationName": "CLI",
       "Targets": [
           {
               "Values": [
                   "DEV"
               ],
               "Key": "tag:ENV"
           }
       ]
   }
   }
   ```

------
#### [ PowerShell ]

   ```
   Name                  : AWS-StartEC2Instance
   InstanceId            : 
   Date                  : 8/1/2019 7:31:38 PM
   Status.Name           : 
   Status.Date           : 
   Status.Message        : 
   Status.AdditionalInfo :
   ```

------

**Note**  
If you use tags to create an association on one or more target instances, and then you remove the tags from an instance, that instance no longer runs the association. The instance is disassociated from the State Manager document. 

## Troubleshooting automations run by State Manager associations


Systems Manager Automation enforces a limit of 100 concurrent automations and 1,000 queued automations per AWS account, per Region. If a State Manager association that uses a runbook shows a status of **Failed** and a detailed status of **AutomationExecutionLimitExceeded**, then your automation might have reached the limit. As a result, Systems Manager throttles the automations. To resolve this issue, do the following:
+ Use a different rate or cron expression for your association. For example, if the association is scheduled to run every 30 minutes, then change the expression so that it runs every hour or two.
+ Delete existing automations that have a status of **Pending**. By deleting these automations, you clear the current queue.
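The quota behavior can be pictured with a toy model. This sketch is illustrative only; the constants match the quotas stated above, but the function and its return values are hypothetical, not a Systems Manager API:

```python
MAX_CONCURRENT = 100   # concurrent automations per account, per Region
MAX_QUEUED = 1000      # queued automations per account, per Region

def submit(running, queued):
    """Decide what happens when an association triggers one more automation."""
    if running < MAX_CONCURRENT:
        return "InProgress"
    if queued < MAX_QUEUED:
        return "Pending"  # queued until a concurrency slot frees up
    return "AutomationExecutionLimitExceeded"  # throttled
```

This is why deleting **Pending** automations or spacing out a rate schedule resolves the error: both reduce the number of automations competing for the same concurrency slots.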

# Schedule automations with maintenance windows


You can start an automation by configuring a runbook as a registered task for a maintenance window. When you register the runbook as a task, the maintenance window runs the automation during its scheduled maintenance period. 

For example, let's say you create a runbook named `CreateAMI` that creates an Amazon Machine Image (AMI) of instances registered as targets to the maintenance window. To specify the `CreateAMI` runbook (and corresponding automation) as a registered task of a maintenance window, you first create a maintenance window and register targets. Then you use the following procedure to specify the `CreateAMI` document as a registered task within the maintenance window. When the maintenance window starts during the scheduled period, the system runs the automation and creates an AMI of the registered targets.

For information about creating Automation runbooks, see [Creating your own runbooks](automation-documents.md). Automation is a tool in AWS Systems Manager.

Use the following procedures to configure an automation as a registered task for a maintenance window using the AWS Systems Manager console, AWS Command Line Interface (AWS CLI), or AWS Tools for Windows PowerShell.

## Registering an automation task to a maintenance window (console)


The following procedure describes how to use the Systems Manager console to configure an automation as a registered task for a maintenance window.

**Before you begin**  
Before you complete the following procedure, you must create a maintenance window and register at least one target. For more information, see the following procedures: 
+ [Create a maintenance window using the console](sysman-maintenance-create-mw.md).
+ [Assign targets to a maintenance window using the console](sysman-maintenance-assign-targets.md)

**To configure an automation as a registered task for a maintenance window**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the left navigation pane, choose **Maintenance Windows**, and then choose the maintenance window you want to register an Automation task with.

1. Choose **Actions**, and then choose **Register Automation task** to run an automation of your choice on targets by using a runbook.

1. For **Name**, enter a name for the task.

1. For **Description**, enter a description.

1. For **Document**, choose the runbook that defines the tasks to run.

1. For **Document version**, choose the runbook version to use.

1. For **Task priority**, specify a priority for this task. `1` is the highest priority. Tasks in a maintenance window are scheduled in priority order; tasks that have the same priority are scheduled in parallel.

1. In the **Targets** section, if the runbook you chose is one that runs tasks on resources, identify the targets on which you want to run this automation by specifying tags or by selecting instances manually.
**Note**  
If you want to pass the resources through input parameters instead of targets, you don't need to specify a maintenance window target.  
In many cases, you don't need to explicitly specify a target for an automation task. For example, say that you're creating an Automation-type task to update an Amazon Machine Image (AMI) for Linux using the `AWS-UpdateLinuxAmi` runbook. When the task runs, the AMI is updated with the latest available Linux distribution packages and Amazon software. New instances created from the AMI already have these updates installed. Because the ID of the AMI to be updated is specified in the input parameters for the runbook, there is no need to specify a target again in the maintenance window task.

   For information about maintenance window tasks that don't require targets, see [Registering maintenance window tasks without targets](maintenance-windows-targetless-tasks.md).

1. (Optional) For **Rate control**:
**Note**  
If the task you're running doesn't specify targets, you don't need to specify rate controls.
   + For **Concurrency**, specify either a number or a percentage of targets on which to run the automation at the same time.

     If you selected targets by choosing tag key-value pairs, and you aren't certain how many targets use the selected tags, then limit the number of automations that can run at the same time by specifying a percentage.

     When the maintenance window runs, a new automation is initiated per target. There is a limit of 100 concurrent automations per AWS account. If you specify a concurrency rate greater than 100, concurrent automations greater than 100 are automatically added to the automation queue. For information, see [Systems Manager service quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#limits_ssm) in the *Amazon Web Services General Reference*. 
   + For **Error threshold**, specify when to stop running the automation on other targets after it fails on either a number or a percentage of targets. For example, if you specify three errors, then Systems Manager stops running automations when the fourth error is received. Targets still processing the automation might also send errors.

1. In the **Input Parameters** section, specify parameters for the runbook. For runbooks, the system auto-populates some of the values. You can keep or replace these values.
**Important**  
For runbooks, you can optionally specify an Automation Assume Role. If you don't specify a role for this parameter, then the automation assumes the maintenance window service role you choose in the next step. As such, you must ensure that the maintenance window service role you choose has the appropriate AWS Identity and Access Management (IAM) permissions to perform the actions defined within the runbook.  
For example, the service-linked role for Systems Manager doesn't have the IAM permission `ec2:CreateSnapshot`, which is required to use the runbook `AWS-CopySnapshot`. In this scenario, you must either use a custom maintenance window service role or specify an Automation assume role that has `ec2:CreateSnapshot` permissions. For information, see [Setting up Automation](automation-setup.md).

1. In the **IAM service role** area, choose a role to provide permissions for Systems Manager to start the automation.

   To create a service role for maintenance window tasks, see [Setting up Maintenance Windows](setting-up-maintenance-windows.md).

1. Choose **Register Automation task**.
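The priority ordering described in the procedure above (lower numbers first, equal priorities in parallel) can be sketched as grouping tasks into batches. This is a hypothetical illustration of the scheduling rule, not maintenance window code; the task names are made up:

```python
from itertools import groupby

def schedule_batches(tasks):
    """Group maintenance window tasks into batches by priority.

    tasks: list of (name, priority) pairs. Lower numbers run first;
    tasks that share a priority run in parallel within one batch.
    """
    ordered = sorted(tasks, key=lambda t: t[1])
    return [[name for name, _ in group]
            for _, group in groupby(ordered, key=lambda t: t[1])]
```

For example, a priority-1 task runs alone in the first batch, and two priority-2 tasks run together in the second.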

## Registering an Automation task to a maintenance window (command line)


The following procedure describes how to use the AWS CLI (on Linux or Windows Server) or AWS Tools for PowerShell to configure an automation as a registered task for a maintenance window.

**Before you begin**  
Before you complete the following procedure, you must create a maintenance window and register at least one target. For more information, see the following procedures:
+ [Step 1: Create the maintenance window using the AWS CLI](mw-cli-tutorial-create-mw.md).
+ [Step 2: Register a target node with the maintenance window using the AWS CLI](mw-cli-tutorial-targets.md)

**To configure an automation as a registered task for a maintenance window**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Create a command to configure an automation as a registered task for a maintenance window. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm register-task-with-maintenance-window \
   --window-id window ID \
   --name task name \
   --task-arn runbook name \
   --targets Key=targets,Values=value \
   --service-role-arn IAM role arn \
   --task-type AUTOMATION \
   --task-invocation-parameters task parameters \
   --priority task priority \
   --max-concurrency 10% \
   --max-errors 5
   ```

**Note**  
If you configure an automation as a registered task by using the AWS CLI, use the `--task-invocation-parameters` parameter to specify parameters to pass to a task when it runs. Don't use the `--task-parameters` parameter. The `--task-parameters` parameter is a legacy parameter.  
For maintenance window tasks without a target specified, you can't supply values for `--max-errors` and `--max-concurrency`. Instead, the system inserts a placeholder value of `1`, which might be reported in the response to commands such as [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-maintenance-window-tasks.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-maintenance-window-tasks.html) and [https://docs.aws.amazon.com/cli/latest/reference/ssm/get-maintenance-window-task.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-maintenance-window-task.html). These values don't affect the running of your task and can be ignored.  
For information about maintenance window tasks that don't require targets, see [Registering maintenance window tasks without targets](maintenance-windows-targetless-tasks.md).

------
#### [ Windows ]

   ```
   aws ssm register-task-with-maintenance-window ^
   --window-id window ID ^
   --name task name ^
   --task-arn runbook name ^
   --targets Key=targets,Values=value ^
   --service-role-arn IAM role arn ^
   --task-type AUTOMATION ^
   --task-invocation-parameters task parameters ^
   --priority task priority ^
   --max-concurrency 10% ^
   --max-errors 5
   ```

**Note**  
If you configure an automation as a registered task by using the AWS CLI, use the `--task-invocation-parameters` parameter to specify parameters to pass to a task when it runs. Don't use the `--task-parameters` parameter. The `--task-parameters` parameter is a legacy parameter.  
For maintenance window tasks without a target specified, you can't supply values for `--max-errors` and `--max-concurrency`. Instead, the system inserts a placeholder value of `1`, which might be reported in the response to commands such as [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-maintenance-window-tasks.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-maintenance-window-tasks.html) and [https://docs.aws.amazon.com/cli/latest/reference/ssm/get-maintenance-window-task.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-maintenance-window-task.html). These values don't affect the running of your task and can be ignored.  
For information about maintenance window tasks that don't require targets, see [Registering maintenance window tasks without targets](maintenance-windows-targetless-tasks.md).

------
#### [ PowerShell ]

   ```
   Register-SSMTaskWithMaintenanceWindow `
   -WindowId window ID `
   -Name "task name" `
   -TaskArn "runbook name" `
   -Target @{ Key="targets";Values="value" } `
   -ServiceRoleArn "IAM role arn" `
   -TaskType "AUTOMATION" `
   -Automation_Parameter @{ "task parameter"="task parameter value"} `
   -Priority task priority `
   -MaxConcurrency 10% `
   -MaxError 5
   ```

**Note**  
If you configure an automation as a registered task by using the AWS Tools for PowerShell, use the `-Automation_Parameter` parameter to specify parameters to pass to a task when the task runs. Don't use the `-TaskParameters` parameter. The `-TaskParameters` parameter is a legacy parameter.  
For maintenance window tasks without a target specified, you can't supply values for `-MaxError` and `-MaxConcurrency`. Instead, the system inserts a placeholder value of 1, which might be reported in the response to commands such as `Get-SSMMaintenanceWindowTaskList` and `Get-SSMMaintenanceWindowTask`. These values don't affect the running of your task and can be ignored.  
For information about maintenance window tasks that don't require targets, see [Registering maintenance window tasks without targets](maintenance-windows-targetless-tasks.md).

------

   The following example configures an automation as a registered task to a maintenance window with priority 1. It also demonstrates omitting the `--targets`, `--max-errors`, and `--max-concurrency` options for a targetless maintenance window task. The automation uses the `AWS-StartEC2Instance` runbook and the specified Automation assume role to start the EC2 instance specified in the task invocation parameters.

------
#### [ Linux & macOS ]

   ```
   aws ssm register-task-with-maintenance-window \
   --window-id mw-0c50858d01EXAMPLE \
   --name StartEC2Instances \
   --task-arn AWS-StartEC2Instance \
   --service-role-arn arn:aws:iam::123456789012:role/MaintenanceWindowRole \
   --task-type AUTOMATION \
   --task-invocation-parameters "{\"Automation\":{\"Parameters\":{\"InstanceId\":[\"{{TARGET_ID}}\"],\"AutomationAssumeRole\":[\"arn:aws:iam::123456789012:role/AutomationAssumeRole\"]}}}" \
   --priority 1
   ```

------
#### [ Windows ]

   ```
   aws ssm register-task-with-maintenance-window ^
   --window-id mw-0c50858d01EXAMPLE ^
   --name StartEC2Instances ^
   --task-arn AWS-StartEC2Instance ^
   --service-role-arn arn:aws:iam::123456789012:role/MaintenanceWindowRole ^
   --task-type AUTOMATION ^
   --task-invocation-parameters "{\"Automation\":{\"Parameters\":{\"InstanceId\":[\"{{TARGET_ID}}\"],\"AutomationAssumeRole\":[\"arn:aws:iam::123456789012:role/AutomationAssumeRole\"]}}}" ^
   --priority 1
   ```

------
#### [ PowerShell ]

   ```
   Register-SSMTaskWithMaintenanceWindow `
   -WindowId mw-0c50858d01EXAMPLE `
   -Name "StartEC2" `
   -TaskArn "AWS-StartEC2Instance" `
   -ServiceRoleArn "arn:aws:iam::123456789012:role/MaintenanceWindowRole" `
   -TaskType "AUTOMATION" `
   -Automation_Parameter @{ "InstanceId"="{{TARGET_ID}}";"AutomationAssumeRole"="arn:aws:iam::123456789012:role/AutomationAssumeRole" } `
   -Priority 1
   ```

------

   The command returns details for the new registered task similar to the following.

------
#### [ Linux & macOS ]

   ```
   {
   "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE"
   }
   ```

------
#### [ Windows ]

   ```
   {
   "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE"
   }
   ```

------
#### [ PowerShell ]

   ```
   4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE
   ```

------

1. To view the registered task, run the following command. Replace *maintenance window ID* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-maintenance-window-tasks \
   --window-id maintenance window ID
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-maintenance-window-tasks ^
   --window-id maintenance window ID
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMMaintenanceWindowTaskList `
   -WindowId maintenance window ID
   ```

------

   The system returns information like the following.

------
#### [ Linux & macOS ]

   ```
   {
   "Tasks": [
       {
           "ServiceRoleArn": "arn:aws:iam::123456789012:role/MaintenanceWindowRole",
           "MaxErrors": "1",
           "TaskArn": "AWS-StartEC2Instance",
           "MaxConcurrency": "1",
           "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
           "TaskParameters": {},
           "Priority": 1,
           "WindowId": "mw-0c50858d01EXAMPLE",
           "Type": "AUTOMATION",
           "Targets": [
           ],
           "Name": "StartEC2"
       }
   ]
   }
   ```

------
#### [ Windows ]

   ```
   {
   "Tasks": [
       {
           "ServiceRoleArn": "arn:aws:iam::123456789012:role/MaintenanceWindowRole",
           "MaxErrors": "1",
           "TaskArn": "AWS-StartEC2Instance",
           "MaxConcurrency": "1",
           "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
           "TaskParameters": {},
           "Priority": 1,
           "WindowId": "mw-0c50858d01EXAMPLE",
           "Type": "AUTOMATION",
           "Targets": [
           ],
           "Name": "StartEC2"
       }
   ]
   }
   ```

------
#### [ PowerShell ]

   ```
   Description    : 
   LoggingInfo    : 
   MaxConcurrency : 1
   MaxErrors      : 1
   Name           : StartEC2
   Priority       : 1
   ServiceRoleArn : arn:aws:iam::123456789012:role/MaintenanceWindowRole
   Targets        : {}
   TaskArn        : AWS-StartEC2Instance
   TaskParameters : {}
   Type           : AUTOMATION
   WindowId       : mw-0c50858d01EXAMPLE
   WindowTaskId   : 4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE
   ```

------

# Systems Manager Automation actions reference

This reference describes the Automation actions that you can specify in an Automation runbook. Automation is a tool in AWS Systems Manager. These actions can't be used in other types of Systems Manager (SSM) documents. For information about plugins for other types of SSM documents, see [Command document plugin reference](documents-command-ssm-plugin-reference.md).

Systems Manager Automation runs steps defined in Automation runbooks. Each step is associated with a particular action. The action determines the inputs, behavior, and outputs of the step. Steps are defined in the `mainSteps` section of your runbook.

You don't need to specify the outputs of an action or step. The outputs are predetermined by the action associated with the step. When you specify step inputs in your runbooks, you can reference one or more outputs from an earlier step. For example, you can make the output of `aws:runInstances` available for a subsequent `aws:runCommand` action. You can also reference outputs from earlier steps in the `Output` section of the runbook. 
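For example, a runbook's `mainSteps` section can pass the instance IDs returned by an `aws:runInstances` step to a later `aws:runCommand` step by referencing the earlier step's name. The following is a minimal sketch; the step names and AMI ID are placeholders:

```yaml
schemaVersion: '0.3'
mainSteps:
  - name: launchInstance
    action: aws:runInstances
    inputs:
      ImageId: ami-0abcd1234example
      MinInstanceCount: 1
      MaxInstanceCount: 1
  - name: updatePackages
    action: aws:runCommand
    inputs:
      DocumentName: AWS-RunShellScript
      # Reference the output of the earlier step by step name.
      InstanceIds: '{{ launchInstance.InstanceIds }}'
      Parameters:
        commands:
          - sudo yum update -y
outputs:
  - launchInstance.InstanceIds
```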

**Important**  
If you run an automation workflow that invokes other services by using an AWS Identity and Access Management (IAM) service role, be aware that the service role must be configured with permission to invoke those services. This requirement applies to all AWS Automation runbooks (`AWS-*` runbooks) such as the `AWS-ConfigureS3BucketLogging`, `AWS-CreateDynamoDBBackup`, and `AWS-RestartEC2Instance` runbooks, to name a few. This requirement also applies to any custom Automation runbooks you create that invoke other AWS services by using actions that call other services. For example, if you use the `aws:executeAwsApi`, `aws:createStack`, or `aws:copyImage` actions, configure the service role with permission to invoke those services. You can give permissions to other AWS services by adding an IAM inline policy to the role. For more information, see [(Optional) Add an Automation inline policy or customer managed policy to invoke other AWS services](automation-setup-iam.md#add-inline-policy).

**Topics**
+ [Properties shared by all actions](#automation-common)
+ [`aws:approve` – Pause an automation for manual approval](automation-action-approve.md)
+ [`aws:assertAwsResourceProperty` – Assert an AWS resource state or event state](automation-action-assertAwsResourceProperty.md)
+ [`aws:branch` – Run conditional automation steps](automation-action-branch.md)
+ [`aws:changeInstanceState` – Change or assert instance state](automation-action-changestate.md)
+ [`aws:copyImage` – Copy or encrypt an Amazon Machine Image](automation-action-copyimage.md)
+ [`aws:createImage` – Create an Amazon Machine Image](automation-action-create.md)
+ [`aws:createStack` – Create a CloudFormation stack](automation-action-createstack.md)
+ [`aws:createTags` – Create tags for AWS resources](automation-action-createtag.md)
+ [`aws:deleteImage` – Delete an Amazon Machine Image](automation-action-delete.md)
+ [`aws:deleteStack` – Delete a CloudFormation stack](automation-action-deletestack.md)
+ [`aws:executeAutomation` – Run another automation](automation-action-executeAutomation.md)
+ [`aws:executeAwsApi` – Call and run AWS API operations](automation-action-executeAwsApi.md)
+ [`aws:executeScript` – Run a script](automation-action-executeScript.md)
+ [`aws:executeStateMachine` – Run an AWS Step Functions state machine](automation-action-executeStateMachine.md)
+ [`aws:invokeWebhook` – Invoke an Automation webhook integration](invoke-webhook.md)
+ [`aws:invokeLambdaFunction` – Invoke an AWS Lambda function](automation-action-lamb.md)
+ [`aws:loop` – Iterate over steps in an automation](automation-action-loop.md)
+ [`aws:pause` – Pause an automation](automation-action-pause.md)
+ [`aws:runCommand` – Run a command on a managed instance](automation-action-runcommand.md)
+ [`aws:runInstances` – Launch an Amazon EC2 instance](automation-action-runinstance.md)
+ [`aws:sleep` – Delay an automation](automation-action-sleep.md)
+ [`aws:updateVariable` – Updates a value for a runbook variable](automation-action-update-variable.md)
+ [`aws:waitForAwsResourceProperty` – Wait on an AWS resource property](automation-action-waitForAwsResourceProperty.md)
+ [Automation system variables](automation-variables.md)

## Properties shared by all actions


Common properties are parameters or options that are found in all actions. Some options define behavior for a step, such as how long to wait for a step to complete and what to do if the step fails. The following properties are common to all actions.

[description](#descriptProp)  
Information you provide to describe the purpose of a runbook or a step.  
Type: String  
Required: No

[name](#nameProp)  
An identifier that must be unique across all step names in the runbook.  
Type: String  
Allowed pattern: `[a-zA-Z0-9_\-.]+`  
Required: Yes

[action](#actProp)  
The name of the action the step is to run. [`aws:runCommand` – Run a command on a managed instance](automation-action-runcommand.md) is an example of an action you can specify here. This document provides detailed information about all available actions.  
Type: String  
Required: Yes

[maxAttempts](#maxProp)  
The number of times the step should be retried in case of failure. If the value is greater than 1, the step isn't considered to have failed until all retry attempts have failed. The default value is 1.  
Type: Integer  
Required: No

[timeoutSeconds](#timeProp)  
The timeout value for the step. If the timeout is reached and the value of `maxAttempts` is greater than 1, then the step isn't considered to have timed out until all retries have been attempted.  
Type: Integer  
Required: No
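
The two properties above combine as follows: each attempt is subject to the timeout, and the step isn't considered failed or timed out until every retry has been exhausted. A minimal sketch (the step name and inputs are illustrative, not from a published runbook):

```yaml
- name: describeInstancesWithRetry   # illustrative step name
  action: aws:executeAwsApi
  maxAttempts: 3          # retry up to 3 times; the step fails only after all attempts fail
  timeoutSeconds: 120     # with retries, the step times out only after all attempts time out
  onFailure: Abort
  inputs:
    Service: ec2
    Api: DescribeInstances
```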

[onFailure](#failProp)  
Indicates whether the automation should stop, continue, or go to a different step on failure. The default value for this option is `Abort`.  
Type: String  
Valid values: Abort | Continue | step:*step_name*  
Required: No

[onCancel](#canProp)  
Indicates which step the automation should go to in the event that a user cancels the automation. Automation runs the cancellation workflow for a maximum of two minutes.  
Type: String  
Valid values: Abort | step:*step_name*  
Required: No  
The `onCancel` property doesn't support moving to the following actions:  
+ `aws:approve`
+ `aws:copyImage`
+ `aws:createImage`
+ `aws:createStack`
+ `aws:createTags`
+ `aws:loop`
+ `aws:pause`
+ `aws:runInstances`
+ `aws:sleep`

[isEnd](#endProp)  
This option stops an automation at the end of a specific step. The automation stops if the step failed or succeeded. The default value is false.  
Type: Boolean  
Valid values: true | false  
Required: No

[nextStep](#nextProp)  
Specifies which step in an automation to process next after successfully completing a step.  
Type: String  
Required: No

[isCritical](#critProp)  
Designates a step as critical for the successful completion of the Automation. If a step with this designation fails, then Automation reports the final status of the Automation as Failed. This property is only evaluated if you explicitly define it in your step. If the `onFailure` property is set to `Continue` in a step, the value defaults to false. Otherwise, the default value for this option is true.  
Type: Boolean  
Valid values: true | false  
Required: No

[inputs](#inProp)  
The properties specific to the action.  
Type: Map  
Required: Yes

### Example


```
---
description: "Custom Automation Example"
schemaVersion: '0.3'
assumeRole: "{{ AutomationAssumeRole }}"
parameters:
  AutomationAssumeRole:
    type: String
    description: "(Required) The ARN of the role that allows Automation to perform
      the actions on your behalf. If no role is specified, Systems Manager Automation
      uses your IAM permissions to run this runbook."
    default: ''
  InstanceId:
      type: String
      description: "(Required) The Instance Id whose root EBS volume you want to restore the latest Snapshot."
      default: ''
mainSteps:
- name: getInstanceDetails
  action: aws:executeAwsApi
  onFailure: Abort
  inputs:
    Service: ec2
    Api: DescribeInstances
    InstanceIds:
    - "{{ InstanceId }}"
  outputs:
    - Name: availabilityZone
      Selector: "$.Reservations[0].Instances[0].Placement.AvailabilityZone"
      Type: String
    - Name: rootDeviceName
      Selector: "$.Reservations[0].Instances[0].RootDeviceName"
      Type: String
  nextStep: getRootVolumeId
- name: getRootVolumeId
  action: aws:executeAwsApi
  maxAttempts: 3
  onFailure: Abort
  inputs:
    Service: ec2
    Api: DescribeVolumes
    Filters:
    -  Name: attachment.device
       Values: ["{{ getInstanceDetails.rootDeviceName }}"]
    -  Name: attachment.instance-id
       Values: ["{{ InstanceId }}"]
  outputs:
    - Name: rootVolumeId
      Selector: "$.Volumes[0].VolumeId"
      Type: String
  nextStep: getSnapshotsByStartTime
- name: getSnapshotsByStartTime
  action: aws:executeScript
  timeoutSeconds: 45
  onFailure: Abort
  inputs:
    Runtime: python3.8
    Handler: getSnapshotsByStartTime
    InputPayload:
      rootVolumeId : "{{ getRootVolumeId.rootVolumeId }}"
    Script: |-
      def getSnapshotsByStartTime(events,context):
        import boto3

        #Initialize client
        ec2 = boto3.client('ec2')
        rootVolumeId = events['rootVolumeId']
        snapshotsQuery = ec2.describe_snapshots(
          Filters=[
            {
              "Name": "volume-id",
              "Values": [rootVolumeId]
            }
          ]
        )
        if not snapshotsQuery['Snapshots']:
          noSnapshotFoundString = "NoSnapshotFound"
          return { 'noSnapshotFound' : noSnapshotFoundString }
        else:
          jsonSnapshots = snapshotsQuery['Snapshots']
          sortedSnapshots = sorted(jsonSnapshots, key=lambda k: k['StartTime'], reverse=True)
          latestSortedSnapshotId = sortedSnapshots[0]['SnapshotId']
          return { 'latestSnapshotId' : latestSortedSnapshotId }
  outputs:
  - Name: Payload
    Selector: $.Payload
    Type: StringMap
  - Name: latestSnapshotId
    Selector: $.Payload.latestSnapshotId
    Type: String
  - Name: noSnapshotFound
    Selector: $.Payload.noSnapshotFound
    Type: String 
  nextStep: branchFromResults
- name: branchFromResults
  action: aws:branch
  onFailure: Abort
  onCancel: step:startInstance
  inputs:
    Choices:
    - NextStep: createNewRootVolumeFromSnapshot
      Not:
        Variable: "{{ getSnapshotsByStartTime.noSnapshotFound }}"
        StringEquals: "NoSnapshotFound"
  isEnd: true
- name: createNewRootVolumeFromSnapshot
  action: aws:executeAwsApi
  onFailure: Abort
  inputs:
    Service: ec2
    Api: CreateVolume
    AvailabilityZone: "{{ getInstanceDetails.availabilityZone }}"
    SnapshotId: "{{ getSnapshotsByStartTime.latestSnapshotId }}"
  outputs:
    - Name: newRootVolumeId
      Selector: "$.VolumeId"
      Type: String
  nextStep: stopInstance
- name: stopInstance
  action: aws:executeAwsApi
  onFailure: Abort
  inputs:
    Service: ec2
    Api: StopInstances
    InstanceIds:
    - "{{ InstanceId }}"
  nextStep: verifyVolumeAvailability
- name: verifyVolumeAvailability
  action: aws:waitForAwsResourceProperty
  timeoutSeconds: 120
  inputs:
    Service: ec2
    Api: DescribeVolumes
    VolumeIds:
    - "{{ createNewRootVolumeFromSnapshot.newRootVolumeId }}"
    PropertySelector: "$.Volumes[0].State"
    DesiredValues:
    - "available"
  nextStep: verifyInstanceStopped
- name: verifyInstanceStopped
  action: aws:waitForAwsResourceProperty
  timeoutSeconds: 120
  inputs:
    Service: ec2
    Api: DescribeInstances
    InstanceIds:
    - "{{ InstanceId }}"
    PropertySelector: "$.Reservations[0].Instances[0].State.Name"
    DesiredValues:
    - "stopped"
  nextStep: detachRootVolume
- name: detachRootVolume
  action: aws:executeAwsApi
  onFailure: Abort
  isCritical: true
  inputs:
    Service: ec2
    Api: DetachVolume
    VolumeId: "{{ getRootVolumeId.rootVolumeId }}"
  nextStep: verifyRootVolumeDetached
- name: verifyRootVolumeDetached
  action: aws:waitForAwsResourceProperty
  timeoutSeconds: 30
  inputs:
    Service: ec2
    Api: DescribeVolumes
    VolumeIds:
    - "{{ getRootVolumeId.rootVolumeId }}"
    PropertySelector: "$.Volumes[0].State"
    DesiredValues:
    - "available"
  nextStep: attachNewRootVolume
- name: attachNewRootVolume
  action: aws:executeAwsApi
  onFailure: Abort
  inputs:
    Service: ec2
    Api: AttachVolume
    Device: "{{ getInstanceDetails.rootDeviceName }}"
    InstanceId: "{{ InstanceId }}"
    VolumeId: "{{ createNewRootVolumeFromSnapshot.newRootVolumeId }}"
  nextStep: verifyNewRootVolumeAttached
- name: verifyNewRootVolumeAttached
  action: aws:waitForAwsResourceProperty
  timeoutSeconds: 30
  inputs:
    Service: ec2
    Api: DescribeVolumes
    VolumeIds:
    - "{{ createNewRootVolumeFromSnapshot.newRootVolumeId }}"
    PropertySelector: "$.Volumes[0].Attachments[0].State"
    DesiredValues:
    - "attached"
  nextStep: startInstance
- name: startInstance
  action: aws:executeAwsApi
  onFailure: Abort
  inputs:
    Service: ec2
    Api: StartInstances
    InstanceIds:
    - "{{ InstanceId }}"
```
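
The `getSnapshotsByStartTime` step in the example sorts the snapshots by `StartTime` in descending order and returns the newest snapshot ID, or a `NoSnapshotFound` marker when the volume has no snapshots. The same selection logic can be exercised standalone with a stubbed API response in place of the `boto3` call (the snapshot IDs and dates below are illustrative):

```python
from datetime import datetime

def latest_snapshot_id(snapshots):
    """Mirror of the snapshot-selection logic in getSnapshotsByStartTime."""
    if not snapshots:
        # No snapshots exist for the volume; signal this to the branch step.
        return {"noSnapshotFound": "NoSnapshotFound"}
    # Sort newest-first by StartTime and take the most recent snapshot.
    newest = sorted(snapshots, key=lambda s: s["StartTime"], reverse=True)[0]
    return {"latestSnapshotId": newest["SnapshotId"]}

# Stubbed stand-in for ec2.describe_snapshots(...)["Snapshots"]
stubbed = [
    {"SnapshotId": "snap-old", "StartTime": datetime(2023, 1, 1)},
    {"SnapshotId": "snap-new", "StartTime": datetime(2024, 6, 1)},
]

print(latest_snapshot_id(stubbed)["latestSnapshotId"])  # snap-new
```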

# `aws:approve` – Pause an automation for manual approval


Temporarily pauses an automation until designated principals either approve or reject the action. After the required number of approvals is reached, the automation resumes. You can insert the approval step any place in the `mainSteps` section of your runbook. 

**Note**  
This action doesn't support multi-account and multi-Region automations. The default timeout for this action is 7 days (604800 seconds) and the maximum value is 30 days (2592000 seconds). You can limit or extend the timeout by specifying the `timeoutSeconds` parameter for an `aws:approve` step.

In the following example, the `aws:approve` action temporarily pauses the automation until one approver either accepts or rejects the automation. Upon approval, the automation runs a simple PowerShell command. 

------
#### [ YAML ]

```
---
description: RunInstancesDemo1
schemaVersion: '0.3'
assumeRole: "{{ assumeRole }}"
parameters:
  assumeRole:
    type: String
  message:
    type: String
mainSteps:
- name: approve
  action: aws:approve
  timeoutSeconds: 1000
  onFailure: Abort
  inputs:
    NotificationArn: arn:aws:sns:us-east-2:12345678901:AutomationApproval
    Message: "{{ message }}"
    MinRequiredApprovals: 1
    Approvers:
    - arn:aws:iam::12345678901:user/AWS-User-1
- name: run
  action: aws:runCommand
  inputs:
    InstanceIds:
    - i-1a2b3c4d5e6f7g
    DocumentName: AWS-RunPowerShellScript
    Parameters:
      commands:
      - date
```

------
#### [ JSON ]

```
{
   "description":"RunInstancesDemo1",
   "schemaVersion":"0.3",
   "assumeRole":"{{ assumeRole }}",
   "parameters":{
      "assumeRole":{
         "type":"String"
      },
      "message":{
         "type":"String"
      }
   },
   "mainSteps":[
      {
         "name":"approve",
         "action":"aws:approve",
         "timeoutSeconds":1000,
         "onFailure":"Abort",
         "inputs":{
            "NotificationArn":"arn:aws:sns:us-east-2:12345678901:AutomationApproval",
            "Message":"{{ message }}",
            "MinRequiredApprovals":1,
            "Approvers":[
               "arn:aws:iam::12345678901:user/AWS-User-1"
            ]
         }
      },
      {
         "name":"run",
         "action":"aws:runCommand",
         "inputs":{
            "InstanceIds":[
               "i-1a2b3c4d5e6f7g"
            ],
            "DocumentName":"AWS-RunPowerShellScript",
            "Parameters":{
               "commands":[
                  "date"
               ]
            }
         }
      }
   ]
}
```

------

You can approve or deny Automations that are waiting for approval in the console.

**To approve or deny waiting Automations**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**.

1. Choose the option next to an Automation with a status of **Waiting**.  
![\[Accessing the Approve/Deny Automation page\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/automation-approve-action-aws.png)

1. Choose **Approve/Deny**.

1. Review the details of the Automation.

1. Choose either **Approve** or **Deny**, type an optional comment, and then choose **Submit**.

**Input example**

------
#### [ YAML ]

```
NotificationArn: arn:aws:sns:us-west-1:12345678901:Automation-ApprovalRequest
Message: Please approve this step of the Automation.
MinRequiredApprovals: 3
Approvers:
- IamUser1
- IamUser2
- arn:aws:iam::12345678901:user/IamUser3
- arn:aws:iam::12345678901:role/IamRole
```

------
#### [ JSON ]

```
{
   "NotificationArn":"arn:aws:sns:us-west-1:12345678901:Automation-ApprovalRequest",
   "Message":"Please approve this step of the Automation.",
   "MinRequiredApprovals":3,
   "Approvers":[
      "IamUser1",
      "IamUser2",
      "arn:aws:iam::12345678901:user/IamUser3",
      "arn:aws:iam::12345678901:role/IamRole"
   ]
}
```

------

NotificationArn  
The Amazon Resource Name (ARN) of an Amazon Simple Notification Service (Amazon SNS) topic for Automation approvals. When you specify an `aws:approve` step in a runbook, Automation sends a message to this topic letting principals know that they must either approve or reject an Automation step. The title of the Amazon SNS topic must be prefixed with "Automation".  
Type: String  
Required: No

Message  
The information you want to include in the Amazon SNS topic when the approval request is sent. The maximum message length is 4096 characters.   
Type: String  
Required: No

MinRequiredApprovals  
The minimum number of approvals required to resume the automation. If you don't specify a value, the system defaults to one. The value for this parameter must be a positive number. The value for this parameter can't exceed the number of approvers defined by the `Approvers` parameter.   
Type: Integer  
Required: No

Approvers  
A list of AWS authenticated principals who are able to either approve or reject the action. The maximum number of approvers is 10. You can specify principals by using any of the following formats:  
+ A user name
+ A user ARN
+ An IAM role ARN
+ An IAM assume role ARN
Type: StringList  
Required: Yes

EnhancedApprovals  
This input is only used for Change Manager templates. A list of AWS authenticated principals who are able to either approve or reject the action, the type of IAM principal, and the minimum number of approvers. The following is an example:  

```
schemaVersion: "0.3"
emergencyChange: false
autoApprovable: false
mainSteps:
  - name: ApproveAction1
    action: aws:approve
    timeoutSeconds: 604800
    inputs:
      Message: Please approve this change request
      MinRequiredApprovals: 3
      EnhancedApprovals:
        Approvers:
          - approver: John Stiles
            type: IamUser
            minRequiredApprovals: 0
          - approver: Ana Carolina Silva
            type: IamUser
            minRequiredApprovals: 0
          - approver: GroupOfThree
            type: IamGroup
            minRequiredApprovals: 0
          - approver: RoleOfTen
            type: IamRole
            minRequiredApprovals: 0
```
Type: StringList  
Required: Yes

**Output**

ApprovalStatus  
The approval status of the step. The status can be one of the following: Approved, Rejected, or Waiting. Waiting means that Automation is waiting for input from approvers.  
Type: String

ApproverDecisions  
A JSON map that includes the approval decision of each approver.  
Type: MapList

# `aws:assertAwsResourceProperty` – Assert an AWS resource state or event state


The `aws:assertAwsResourceProperty` action allows you to assert a specific resource state or event state for a specific Automation step.

**Note**  
The `aws:assertAwsResourceProperty` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

For more examples of how to use this action, see [Additional runbook examples](automation-document-examples.md).

**Input**  
Inputs are defined by the API operation that you choose. 

------
#### [ YAML ]

```
action: aws:assertAwsResourceProperty
inputs:
  Service: The official namespace of the service
  Api: The API operation or method name
  API operation inputs or parameters: A value
  PropertySelector: Response object
  DesiredValues:
  - Desired property values
```

------
#### [ JSON ]

```
{
  "action": "aws:assertAwsResourceProperty",
  "inputs": {
    "Service":"The official namespace of the service",
    "Api":"The API operation or method name",
    "API operation inputs or parameters":"A value",
    "PropertySelector": "Response object",
    "DesiredValues": [
      "Desired property values"
    ]
  }
}
```

------

Service  
The AWS service namespace that contains the API operation that you want to run. For example, the namespace for Systems Manager is `ssm`. The namespace for Amazon EC2 is `ec2`. You can view a list of supported AWS service namespaces in the [Available Services](https://docs.aws.amazon.com/cli/latest/reference/#available-services) section of the *AWS CLI Command Reference*.  
Type: String  
Required: Yes

Api  
The name of the API operation that you want to run. You can view the API operations (also called methods) by choosing a service in the left navigation on the following [Services Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) page. Choose a method in the **Client** section for the service that you want to invoke. For example, all API operations (methods) for Amazon Relational Database Service (Amazon RDS) are listed on the following page: [Amazon RDS methods](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html).  
Type: String  
Required: Yes

API operation inputs  
One or more API operation inputs. You can view the available inputs (also called parameters) by choosing a service in the left navigation on the following [Services Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) page. Choose a method in the **Client** section for the service that you want to invoke. For example, all methods for Amazon RDS are listed on the following page: [Amazon RDS methods](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html). Choose the [describe_db_instances](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.describe_db_instances) method and scroll down to see the available parameters, such as **DBInstanceIdentifier**, **Name**, and **Values**. Use the following format to specify more than one input.  

```
inputs:
  Service: The official namespace of the service
  Api: The API operation name
  API input 1: A value
  API Input 2: A value
  API Input 3: A value
```

```
"inputs":{
      "Service":"The official namespace of the service",
      "Api":"The API operation name",
      "API input 1":"A value",
      "API Input 2":"A value",
      "API Input 3":"A value"
}
```
Type: Determined by chosen API operation  
Required: Yes

PropertySelector  
The JSONPath to a specific attribute in the response object. You can view the response objects by choosing a service in the left navigation on the following [Services Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) page. Choose a method in the **Client** section for the service that you want to invoke. For example, all methods for Amazon RDS are listed on the following page: [Amazon RDS methods](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html). Choose the [describe_db_instances](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.describe_db_instances) method and scroll down to the **Response Structure** section. **DBInstances** is listed as a response object.  
Type: String  
Required: Yes

DesiredValues  
The expected status or state on which to continue the automation. If you specify a Boolean value, capitalize the first letter, as in True or False.  
Type: StringList  
Required: Yes
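
Putting the placeholders together, the following sketch asserts that an EC2 instance is stopped; it follows the same `DescribeInstances` selector pattern used elsewhere in this guide (the step name and the `InstanceId` runbook parameter are illustrative):

```yaml
- name: assertInstanceStopped        # illustrative step name
  action: aws:assertAwsResourceProperty
  onFailure: Abort
  inputs:
    Service: ec2
    Api: DescribeInstances
    InstanceIds:
    - "{{ InstanceId }}"             # assumes an InstanceId runbook parameter
    PropertySelector: "$.Reservations[0].Instances[0].State.Name"
    DesiredValues:
    - stopped
```

If the instance isn't in the `stopped` state, the step fails and the automation responds according to the step's `onFailure` setting.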

# `aws:branch` – Run conditional automation steps


The `aws:branch` action allows you to create a dynamic automation that evaluates different choices in a single step and then jumps to a different step in the runbook based on the results of that evaluation. 

When you specify the `aws:branch` action for a step, you specify `Choices` that the automation must evaluate. The `Choices` can be based on either a value that you specified in the `Parameters` section of the runbook, or a dynamic value generated as the output from the previous step. The automation evaluates each choice by using a Boolean expression. If the first choice is true, then the automation jumps to the step designated for that choice. If the first choice is false, the automation evaluates the next choice. The automation continues evaluating each choice until it processes a true choice. The automation then jumps to the designated step for the true choice.

If none of the choices are true, the automation checks to see if the step contains a `default` value. A default value defines a step that the automation should jump to if none of the choices are true. If no `default` value is specified for the step, then the automation processes the next step in the runbook.

The `aws:branch` action supports complex choice evaluations by using a combination of `And`, `Not`, and `Or` operators. For more information about how to use `aws:branch`, including example runbooks and examples that use different operators, see [Using conditional statements in runbooks](automation-branch-condition.md).

**Input**  
Specify one or more `Choices` in a step. The `Choices` can be based on either a value that you specified in the `Parameters` section of the runbook, or a dynamic value generated as the output from the previous step. Here is a YAML sample that evaluates a parameter.

```
mainSteps:
- name: chooseOS
  action: aws:branch
  inputs:
    Choices:
    - NextStep: runWindowsCommand
      Variable: "{{Name of a parameter defined in the Parameters section. For example: OS_name}}"
      StringEquals: windows
    - NextStep: runLinuxCommand
      Variable: "{{Name of a parameter defined in the Parameters section. For example: OS_name}}"
      StringEquals: linux
    Default:
      sleep3
```

Here is a YAML sample that evaluates output from a previous step.

```
mainSteps:
- name: chooseOS
  action: aws:branch
  inputs:
    Choices:
    - NextStep: runPowerShellCommand
      Variable: "{{Name of a response object. For example: GetInstance.platform}}"
      StringEquals: Windows
    - NextStep: runShellCommand
      Variable: "{{Name of a response object. For example: GetInstance.platform}}"
      StringEquals: Linux
    Default:
      sleep3
```

Choices  
One or more expressions that the Automation should evaluate when determining the next step to process. Choices are evaluated by using a Boolean expression. Each choice must define the following options:  
+ **NextStep**: The next step in the runbook to process if the designated choice is true.
+ **Variable**: Specify either the name of a parameter that is defined in the `Parameters` section of the runbook. Or specify an output object from a previous step in the runbook. For more information about creating variables for `aws:branch`, see [About creating the output variable](automation-branch-condition.md#branch-action-output).
+ **Operation**: The criteria used to evaluate the choice. The `aws:branch` action supports the following operations:

**String operations**
  + StringEquals
  + EqualsIgnoreCase
  + StartsWith
  + EndsWith
  + Contains

**Numeric operations**
  + NumericEquals
  + NumericGreater
  + NumericLesser
  + NumericGreaterOrEquals
  + NumericLesserOrEquals

**Boolean operation**
  + BooleanEquals
**Important**  
When you create a runbook, the system validates each operation in the runbook. If an operation isn't supported, the system returns an error when you try to create the runbook.

Default  
The name of a step the automation should jump to if none of the `Choices` are true.  
Type: String  
Required: No

**Note**  
The `aws:branch` action supports `And`, `Or`, and `Not` operators. For examples of `aws:branch` that use operators, see [Using conditional statements in runbooks](automation-branch-condition.md).
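
For example, the operators can be combined so that a choice is true only when every condition matches. The step names and variables below are illustrative:

```yaml
Choices:
- NextStep: patchProdWindows          # illustrative step name
  And:
  - Variable: "{{ OS_name }}"         # illustrative runbook parameters
    StringEquals: windows
  - Variable: "{{ Environment }}"
    StringEquals: production
```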

# `aws:changeInstanceState` – Change or assert instance state


Changes or asserts the state of the instance.

This action can be used in assert mode (it doesn't call the API to change the state, but verifies that the instance is in the desired state). To use assert mode, set the `CheckStateOnly` parameter to true. This mode is useful when running the Sysprep command on Windows Server, which is an asynchronous command that can run in the background for a long time. You can ensure that the instance is stopped before you create an Amazon Machine Image (AMI).

**Note**  
The default timeout value for this action is 3600 seconds (one hour). You can limit or extend the timeout by specifying the `timeoutSeconds` parameter for an `aws:changeInstanceState` step.

**Note**  
The `aws:changeInstanceState` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Input**

------
#### [ YAML ]

```
name: stopMyInstance
action: aws:changeInstanceState
maxAttempts: 3
timeoutSeconds: 3600
onFailure: Abort
inputs:
  InstanceIds:
  - i-1234567890abcdef0
  CheckStateOnly: true
  DesiredState: stopped
```

------
#### [ JSON ]

```
{
    "name":"stopMyInstance",
    "action": "aws:changeInstanceState",
    "maxAttempts": 3,
    "timeoutSeconds": 3600,
    "onFailure": "Abort",
    "inputs": {
        "InstanceIds": ["i-1234567890abcdef0"],
        "CheckStateOnly": true,
        "DesiredState": "stopped"
    }
}
```

------

InstanceIds  
The IDs of the instances.  
Type: StringList  
Required: Yes

CheckStateOnly  
If false, sets the instance state to the desired state. If true, asserts the desired state using polling.  
Default: `false`  
Type: Boolean  
Required: No

DesiredState  
The desired state. When set to `running`, this action waits for the Amazon EC2 state to be `Running`, the Instance Status to be `OK`, and the System Status to be `OK` before completing.  
Type: String  
Valid values: `running` | `stopped` | `terminated`  
Required: Yes

Force  
If set, forces the instances to stop. The instances don't have an opportunity to flush file system caches or file system metadata. If you use this option, you must perform file system check and repair procedures. This option isn't recommended for EC2 instances for Windows Server.  
Type: Boolean  
Required: No

AdditionalInfo  
Reserved.  
Type: String  
Required: No

**Output**  
None

# `aws:copyImage` – Copy or encrypt an Amazon Machine Image


Copies an Amazon Machine Image (AMI) from any AWS Region into the current Region. This action can also encrypt the new AMI.

**Note**  
The `aws:copyImage` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Input**  
This action supports most `CopyImage` parameters. For more information, see [CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CopyImage.html).

The following example creates a copy of an AMI in the Seoul Region (`SourceImageId`: ami-0fe10819, `SourceRegion`: ap-northeast-2). The new AMI is copied to the Region where you initiated the Automation action. The copied AMI will be encrypted because the optional `Encrypted` flag is set to `true`.

------
#### [ YAML ]

```
name: createEncryptedCopy
action: aws:copyImage
maxAttempts: 3
onFailure: Abort
inputs:
  SourceImageId: ami-0fe10819
  SourceRegion: ap-northeast-2
  ImageName: Encrypted Copy of LAMP base AMI in ap-northeast-2
  Encrypted: true
```

------
#### [ JSON ]

```
{   
    "name": "createEncryptedCopy",
    "action": "aws:copyImage",
    "maxAttempts": 3,
    "onFailure": "Abort",
    "inputs": {
        "SourceImageId": "ami-0fe10819",
        "SourceRegion": "ap-northeast-2",
        "ImageName": "Encrypted Copy of LAMP base AMI in ap-northeast-2",
        "Encrypted": true
    }   
}
```

------

SourceRegion  
The Region where the source AMI exists.  
Type: String  
Required: Yes

SourceImageId  
The AMI ID to copy from the source Region.  
Type: String  
Required: Yes

ImageName  
The name for the new image.  
Type: String  
Required: Yes

ImageDescription  
A description for the target image.  
Type: String  
Required: No

Encrypted  
Encrypt the target AMI.  
Type: Boolean  
Required: No

KmsKeyId  
The full Amazon Resource Name (ARN) of the AWS KMS key to use when encrypting the snapshots of an image during a copy operation. For more information, see [CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/api_copyimage.html).  
Type: String  
Required: No

ClientToken  
A unique, case-sensitive identifier that you provide to ensure request idempotency. For more information, see [CopyImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/api_copyimage.html).  
Type: String  
Required: No

**Output**

ImageId  
The ID of the copied image.

ImageState  
The state of the copied image.  
Valid values: `available` | `pending` | `failed`

# `aws:createImage` – Create an Amazon Machine Image


Creates an Amazon Machine Image (AMI) from an instance that is either running, stopping, or stopped, and polls for the `ImageState` to be `available`.

**Note**  
The `aws:createImage` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Input**  
This action supports the following `CreateImage` parameters. For more information, see [CreateImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateImage.html).

------
#### [ YAML ]

```
name: createMyImage
action: aws:createImage
maxAttempts: 3
onFailure: Abort
inputs:
  InstanceId: i-1234567890abcdef0
  ImageName: AMI Created on{{global:DATE_TIME}}
  NoReboot: true
  ImageDescription: My newly created AMI
```

------
#### [ JSON ]

```
{
    "name": "createMyImage",
    "action": "aws:createImage",
    "maxAttempts": 3,
    "onFailure": "Abort",
    "inputs": {
        "InstanceId": "i-1234567890abcdef0",
        "ImageName": "AMI Created on{{global:DATE_TIME}}",
        "NoReboot": true,
        "ImageDescription": "My newly created AMI"
    }
}
```

------

InstanceId  
The ID of the instance.  
Type: String  
Required: Yes

ImageName  
The name for the image.  
Type: String  
Required: Yes

ImageDescription  
A description of the image.  
Type: String  
Required: No

NoReboot  
A Boolean literal.  
By default, Amazon Elastic Compute Cloud (Amazon EC2) attempts to shut down and reboot the instance before creating the image. If the **No Reboot** option is set to `true`, Amazon EC2 doesn't shut down the instance before creating the image. When this option is used, file system integrity on the created image can't be guaranteed.   
If you don't want the instance to run after you create an AMI from it, first use the [`aws:changeInstanceState` – Change or assert instance state](automation-action-changestate.md) action to stop the instance, and then use this `aws:createImage` action with the **NoReboot** option set to `true`.  
Type: Boolean  
Required: No

BlockDeviceMappings  
The block devices for the instance.  
Type: Map  
Required: No

**Output**

ImageId  
The ID of the newly created image.  
Type: String

ImageState  
The current state of the image. If the state is available, the image is successfully registered and can be used to launch an instance.  
Type: String
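
The stop-then-capture pattern described for `NoReboot` can be sketched as the following two-step sequence; the instance ID is a placeholder.

```
- name: stopInstance
  action: aws:changeInstanceState
  onFailure: Abort
  inputs:
    InstanceIds:
    - i-1234567890abcdef0
    DesiredState: stopped
- name: createMyImage
  action: aws:createImage
  onFailure: Abort
  inputs:
    InstanceId: i-1234567890abcdef0
    ImageName: AMI Created on {{global:DATE_TIME}}
    NoReboot: true
```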

# `aws:createStack` – Create a CloudFormation stack


Creates an AWS CloudFormation stack from a template.

**Note**  
The `aws:createStack` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

For supplemental information about creating CloudFormation stacks, see [CreateStack](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html) in the *AWS CloudFormation API Reference*. 

**Input**

------
#### [ YAML ]

```
name: makeStack
action: aws:createStack
maxAttempts: 1
onFailure: Abort
inputs:
  Capabilities:
  - CAPABILITY_IAM
  StackName: myStack
  TemplateURL: http://s3.amazonaws.com/amzn-s3-demo-bucket/myStackTemplate
  TimeoutInMinutes: 5
  Parameters:
    - ParameterKey: LambdaRoleArn
      ParameterValue: "{{LambdaAssumeRole}}"
    - ParameterKey: createdResource
      ParameterValue: createdResource-{{automation:EXECUTION_ID}}
```

------
#### [ JSON ]

```
{
    "name": "makeStack",
    "action": "aws:createStack",
    "maxAttempts": 1,
    "onFailure": "Abort",
    "inputs": {
        "Capabilities": [
            "CAPABILITY_IAM"
        ],
        "StackName": "myStack",
        "TemplateURL": "http://s3.amazonaws.com/amzn-s3-demo-bucket/myStackTemplate",
        "TimeoutInMinutes": 5,
        "Parameters": [
          {
            "ParameterKey": "LambdaRoleArn",
            "ParameterValue": "{{LambdaAssumeRole}}"
          },
          {
            "ParameterKey": "createdResource",
            "ParameterValue": "createdResource-{{automation:EXECUTION_ID}}"
          }
        ]
    }
}
```

------

Capabilities  
A list of values that you specify before CloudFormation can create certain stacks. Some stack templates include resources that can affect permissions in your AWS account. For those stacks, you must explicitly acknowledge their capabilities by specifying this parameter.   
Valid values include `CAPABILITY_IAM`, `CAPABILITY_NAMED_IAM`, and `CAPABILITY_AUTO_EXPAND`.   
**CAPABILITY_IAM and CAPABILITY_NAMED_IAM**  
If you have IAM resources, you can specify either capability. If you have IAM resources with custom names, you must specify `CAPABILITY_NAMED_IAM`. If you don't specify this parameter, this action returns an `InsufficientCapabilities` error. The following resources require you to specify either `CAPABILITY_IAM` or `CAPABILITY_NAMED_IAM`.
+ [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-accesskey.html)
+ [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-group.html)
+ [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html)
+ [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html)
+ [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html)
+ [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-user.html)
+ [https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-addusertogroup.html)
If your stack template contains these resources, we recommend that you review all permissions associated with them and edit their permissions, if necessary.   
For more information, see [Acknowledging IAM Resources in CloudFormation Templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html#capabilities).   
**CAPABILITY_AUTO_EXPAND**  
Some templates contain macros. Macros perform custom processing on templates, ranging from simple actions like find-and-replace operations to extensive transformations of entire templates. Because of this, users typically create a change set from the processed template so that they can review the changes resulting from the macros before actually creating the stack. If your stack template contains one or more macros, and you choose to create a stack directly from the processed template without first reviewing the resulting changes in a change set, you must acknowledge this capability. 
For more information, see [Using AWS CloudFormation Macros to Perform Custom Processing on Templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-macros.html) in the *AWS CloudFormation User Guide*.  
Type: array of Strings  
Valid Values: `CAPABILITY_IAM | CAPABILITY_NAMED_IAM | CAPABILITY_AUTO_EXPAND`  
Required: No

ClientRequestToken  
A unique identifier for this CreateStack request. Specify this token if you set maxAttempts in this step to a value greater than 1. By specifying this token, CloudFormation knows that you aren't attempting to create a new stack with the same name.  
Type: String  
Required: No  
Length Constraints: Minimum length of 1. Maximum length of 128.  
Pattern: [a-zA-Z0-9][-a-zA-Z0-9]*

DisableRollback  
Set to `true` to turn off rollback of the stack if stack creation failed.  
Conditional: You can specify either the `DisableRollback` parameter or the `OnFailure` parameter, but not both.   
Default: `false`  
Type: Boolean  
Required: No

NotificationARNs  
The Amazon Simple Notification Service (Amazon SNS) topic ARNs for publishing stack-related events. You can find SNS topic ARNs using the Amazon SNS console, [https://console.aws.amazon.com/sns/v3/home](https://console.aws.amazon.com/sns/v3/home).   
Type: array of Strings  
Array Members: Maximum number of 5 items.  
Required: No

OnFailure  
Determines the action to take if stack creation failed. You must specify `DO_NOTHING`, `ROLLBACK`, or `DELETE`.  
Conditional: You can specify either the `OnFailure` parameter or the `DisableRollback` parameter, but not both.   
Default: `ROLLBACK`  
Type: String  
Valid Values: `DO_NOTHING | ROLLBACK | DELETE`  
Required: No

Parameters  
A list of `Parameter` structures that specify input parameters for the stack. For more information, see the [Parameter](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Parameter.html) data type.   
Type: array of [Parameter](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Parameter.html) objects   
Required: No

ResourceTypes  
The template resource types that you have permissions to work with for this create stack action. For example: `AWS::EC2::Instance`, `AWS::EC2::*`, or `Custom::MyCustomInstance`. Use the following syntax to describe template resource types.  
+ For all AWS resources:

  ```
  AWS::*
  ```
+ For all custom resources:

  ```
  Custom::*
  ```
+ For a specific custom resource:

  ```
  Custom::logical_ID
  ```
+ For all resources of a particular AWS service:

  ```
  AWS::service_name::*
  ```
+ For a specific AWS resource:

  ```
  AWS::service_name::resource_logical_ID
  ```
If the list of resource types doesn't include a resource that you're creating, the stack creation fails. By default, CloudFormation grants permissions to all resource types. IAM uses this parameter for CloudFormation-specific condition keys in IAM policies. For more information, see [Controlling Access with AWS Identity and Access Management](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html).   
Type: array of Strings  
Length Constraints: Minimum length of 1. Maximum length of 256.  
Required: No

RoleARN  
The Amazon Resource Name (ARN) of an IAM role that CloudFormation assumes to create the stack. CloudFormation uses the role's credentials to make calls on your behalf. CloudFormation always uses this role for all future operations on the stack. As long as users have permission to operate on the stack, CloudFormation uses this role even if the users don't have permission to pass it. Ensure that the role grants the least amount of privileges.   
If you don't specify a value, CloudFormation uses the role that was previously associated with the stack. If no role is available, CloudFormation uses a temporary session that is generated from your user credentials.   
Type: String  
Length Constraints: Minimum length of 20. Maximum length of 2048.  
Required: No

StackName  
The name that is associated with the stack. The name must be unique in the Region in which you're creating the stack.  
A stack name can contain only alphanumeric characters (case sensitive) and hyphens. It must start with an alphabetic character and can't be longer than 128 characters. 
Type: String  
Required: Yes

StackPolicyBody  
Structure containing the stack policy body. For more information, see [Prevent Updates to Stack Resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html).  
Conditional: You can specify either the `StackPolicyBody` parameter or the `StackPolicyURL` parameter, but not both.   
Type: String  
Length Constraints: Minimum length of 1. Maximum length of 16384.  
Required: No

StackPolicyURL  
Location of a file containing the stack policy. The URL must point to a policy located in an S3 bucket in the same region as the stack. The maximum file size allowed for the stack policy is 16 KB.  
Conditional: You can specify either the `StackPolicyBody` parameter or the `StackPolicyURL` parameter, but not both.   
Type: String  
Length Constraints: Minimum length of 1. Maximum length of 1350.  
Required: No

Tags  
Key-value pairs to associate with this stack. CloudFormation also propagates these tags to the resources created in the stack. You can specify a maximum of 10 tags.   
Type: array of [Tag](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Tag.html) objects   
Required: No

TemplateBody  
Structure containing the template body with a minimum length of 1 byte and a maximum length of 51,200 bytes. For more information, see [Template Anatomy](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html).   
Conditional: You can specify either the `TemplateBody` parameter or the `TemplateURL` parameter, but not both.   
Type: String  
Length Constraints: Minimum length of 1.  
Required: No

TemplateURL  
Location of a file containing the template body. The URL must point to a template that is located in an S3 bucket. The maximum size allowed for the template is 460,800 bytes. For more information, see [Template Anatomy](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html).   
Conditional: You can specify either the `TemplateBody` parameter or the `TemplateURL` parameter, but not both.   
Type: String  
Length Constraints: Minimum length of 1. Maximum length of 1024.  
Required: No

TimeoutInMinutes  
The amount of time that can pass before the stack status becomes `CREATE_FAILED`. If `DisableRollback` isn't set or is set to `false`, the stack will be rolled back.   
Type: Integer  
Valid Range: Minimum value of 1.  
Required: No

## Outputs


StackId  
Unique identifier of the stack.  
Type: String

StackStatus  
Current status of the stack.  
Type: String  
Valid Values: `CREATE_IN_PROGRESS | CREATE_FAILED | CREATE_COMPLETE | ROLLBACK_IN_PROGRESS | ROLLBACK_FAILED | ROLLBACK_COMPLETE | DELETE_IN_PROGRESS | DELETE_FAILED | DELETE_COMPLETE | UPDATE_IN_PROGRESS | UPDATE_COMPLETE_CLEANUP_IN_PROGRESS | UPDATE_COMPLETE | UPDATE_ROLLBACK_IN_PROGRESS | UPDATE_ROLLBACK_FAILED | UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS | UPDATE_ROLLBACK_COMPLETE | REVIEW_IN_PROGRESS`  
Required: Yes

StackStatusReason  
Success or failure message associated with the stack status.  
Type: String  
Required: No  
For more information, see [CreateStack](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html).
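
For small templates, you can pass the template inline through `TemplateBody` instead of `TemplateURL`. The following sketch creates a stack containing a single SNS topic; the stack and step names are placeholders.

```
name: makeInlineStack
action: aws:createStack
onFailure: Abort
inputs:
  StackName: myInlineStack
  TemplateBody: |
    Resources:
      MyTopic:
        Type: AWS::SNS::Topic
```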

## Security considerations


Before you can use the `aws:createStack` action, you must assign the following policy to the IAM Automation assume role. For more information about the assume role, see [Task 1: Create a service role for Automation](automation-setup-iam.md#create-service-role). 

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "sqs:*",
            "cloudformation:CreateStack",
            "cloudformation:DescribeStacks"
         ],
         "Resource":"*"
      }
   ]
}
```

------

# `aws:createTags` – Create tags for AWS resources


Creates new tags for Amazon Elastic Compute Cloud (Amazon EC2) instances or AWS Systems Manager managed instances.

**Note**  
The `aws:createTags` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Input**  
This action supports most Amazon EC2 `CreateTags` and Systems Manager `AddTagsToResource` parameters. For more information, see [CreateTags](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateTags.html) and [AddTagsToResource](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_AddTagsToResource.html).

The following example shows how to tag an Amazon Machine Image (AMI) and an instance as production resources for a particular department.

------
#### [ YAML ]

```
name: createTags
action: aws:createTags
maxAttempts: 3
onFailure: Abort
inputs:
  ResourceType: EC2
  ResourceIds:
  - ami-9a3768fa
  - i-02951acd5111a8169
  Tags:
  - Key: production
    Value: ''
  - Key: department
    Value: devops
```

------
#### [ JSON ]

```
{
    "name": "createTags",
    "action": "aws:createTags",
    "maxAttempts": 3,
    "onFailure": "Abort",
    "inputs": {
        "ResourceType": "EC2",
        "ResourceIds": [
            "ami-9a3768fa",
            "i-02951acd5111a8169"
        ],
        "Tags": [
            {
                "Key": "production",
                "Value": ""
            },
            {
                "Key": "department",
                "Value": "devops"
            }
        ]
    }
}
```

------

ResourceIds  
The IDs of the resource(s) to be tagged. If the resource type isn't `EC2`, this field can contain only a single item.  
Type: String List  
Required: Yes

Tags  
The tags to associate with the resource(s).  
Type: List of Maps  
Required: Yes

ResourceType  
The type of resource(s) to be tagged. If not supplied, the default value of `EC2` is used.  
Type: String  
Required: No  
Valid Values: `EC2` | `ManagedInstance` | `MaintenanceWindow` | `Parameter`

**Output**  
None
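
Because resource types other than `EC2` accept only a single resource ID, a step that tags a managed instance might be sketched as follows; the instance ID is a placeholder.

```
name: tagManagedInstance
action: aws:createTags
maxAttempts: 3
onFailure: Abort
inputs:
  ResourceType: ManagedInstance
  ResourceIds:
  - mi-0123456789abcdef0
  Tags:
  - Key: environment
    Value: production
```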

# `aws:deleteImage` – Delete an Amazon Machine Image


Deletes the specified Amazon Machine Image (AMI) and all related snapshots.

**Note**  
The `aws:deleteImage` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Input**  
This action supports only one parameter. For more information, see the documentation for [DeregisterImage](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DeregisterImage.html) and [DeleteSnapshot](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DeleteSnapshot.html).

------
#### [ YAML ]

```
name: deleteMyImage
action: aws:deleteImage
maxAttempts: 3
timeoutSeconds: 180
onFailure: Abort
inputs:
  ImageId: ami-12345678
```

------
#### [ JSON ]

```
{
    "name": "deleteMyImage",
    "action": "aws:deleteImage",
    "maxAttempts": 3,
    "timeoutSeconds": 180,
    "onFailure": "Abort",
    "inputs": {
        "ImageId": "ami-12345678"
    }
}
```

------

ImageId  
The ID of the image to be deleted.  
Type: String  
Required: Yes

**Output**  
None

# `aws:deleteStack` – Delete a CloudFormation stack


Deletes an AWS CloudFormation stack.

**Note**  
The `aws:deleteStack` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Input**

------
#### [ YAML ]

```
name: deleteStack
action: aws:deleteStack
maxAttempts: 1
onFailure: Abort
inputs:
  StackName: "{{stackName}}"
```

------
#### [ JSON ]

```
{
   "name":"deleteStack",
   "action":"aws:deleteStack",
   "maxAttempts":1,
   "onFailure":"Abort",
   "inputs":{
      "StackName":"{{stackName}}"
   }
}
```

------

ClientRequestToken  
A unique identifier for this `DeleteStack` request. Specify this token if you plan to retry requests so that CloudFormation knows that you aren't attempting to delete a stack with the same name. You can retry `DeleteStack` requests to verify that CloudFormation received them.  
Type: String  
Length Constraints: Minimum length of 1. Maximum length of 128.  
Pattern: [a-zA-Z][-a-zA-Z0-9]*  
Required: No

RetainResources.member.N  
This input applies only to stacks that are in a `DELETE_FAILED` state. A list of logical resource IDs for the resources you want to retain. During deletion, CloudFormation deletes the stack, but doesn't delete the retained resources.  
Retaining resources is useful when you can't delete a resource, such as a non-empty S3 bucket, but you want to delete the stack.  
Type: array of strings  
Required: No

RoleARN  
The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that CloudFormation assumes to create the stack. CloudFormation uses the role's credentials to make calls on your behalf. CloudFormation always uses this role for all future operations on the stack. As long as users have permission to operate on the stack, CloudFormation uses this role even if the users don't have permission to pass it. Ensure that the role grants the least amount of privileges.   
If you don't specify a value, CloudFormation uses the role that was previously associated with the stack. If no role is available, CloudFormation uses a temporary session that is generated from your user credentials.   
Type: String  
Length Constraints: Minimum length of 20. Maximum length of 2048.  
Required: No

StackName  
The name or the unique stack ID that is associated with the stack.  
Type: String  
Required: Yes
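
For example, a retry of a deletion that previously entered the `DELETE_FAILED` state might retain a non-empty bucket as follows. The logical resource ID is hypothetical, and the list form shown for `RetainResources` is assumed.

```
name: retryDeleteStack
action: aws:deleteStack
maxAttempts: 1
onFailure: Abort
inputs:
  StackName: myStack
  RetainResources:
  - myBucketLogicalId
```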

## Security considerations


Before you can use the `aws:deleteStack` action, you must assign the following policy to the IAM Automation assume role. For more information about the assume role, see [Task 1: Create a service role for Automation](automation-setup-iam.md#create-service-role). 

------
#### [ JSON ]

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "sqs:*",
            "cloudformation:DeleteStack",
            "cloudformation:DescribeStacks"
         ],
         "Resource":"*"
      }
   ]
}
```

------

# `aws:executeAutomation` – Run another automation


Runs a secondary automation by calling a secondary runbook. With this action, you can create runbooks for your most common operations, and reference those runbooks during an automation. This action can simplify your runbooks by removing the need to duplicate steps across similar runbooks.

The secondary automation runs in the context of the user who initiated the primary automation. This means that the secondary automation uses the same AWS Identity and Access Management (IAM) role or user as the user who started the first automation.

**Important**  
If you specify parameters in a secondary automation that use an assume role (a role that uses the iam:passRole policy), then the user or role that initiated the primary automation must have permission to pass the assume role specified in the secondary automation. For more information about setting up an assume role for Automation, see [Create the service roles for Automation using the console](automation-setup-iam.md).

**Input**

------
#### [ YAML ]

```
name: Secondary_Automation
action: aws:executeAutomation
maxAttempts: 3
timeoutSeconds: 3600
onFailure: Abort
inputs:
  DocumentName: secondaryAutomation
  RuntimeParameters:
    instanceIds:
    - i-1234567890abcdef0
```

------
#### [ JSON ]

```
{
   "name":"Secondary_Automation",
   "action":"aws:executeAutomation",
   "maxAttempts":3,
   "timeoutSeconds":3600,
   "onFailure":"Abort",
   "inputs":{
      "DocumentName":"secondaryAutomation",
      "RuntimeParameters":{
         "instanceIds":[
            "i-1234567890abcdef0"
         ]
      }
   }
}
```

------

DocumentName  
The name of the secondary runbook to run during the step. For runbooks in the same AWS account, specify the runbook name. For runbooks shared from a different AWS account, specify the Amazon Resource Name (ARN) of the runbook. For information about using shared runbooks, see [Using shared SSM documents](documents-ssm-sharing.md#using-shared-documents).  
Type: String  
Required: Yes

DocumentVersion  
The version of the secondary runbook to run. If not specified, Automation runs the default runbook version.  
Type: String  
Required: No

MaxConcurrency  
The maximum number of targets allowed to run this task in parallel. You can specify a number, such as 10, or a percentage, such as 10%.  
Type: String  
Required: No

MaxErrors  
The number of errors that are allowed before the system stops running the automation on additional targets. You can specify either an absolute number of errors, for example 10, or a percentage of the target set, for example 10%. If you specify 3, for example, the system stops running the automation when the fourth error is received. If you specify 0, then the system stops running the automation on additional targets after the first error result is returned. If you run an automation on 50 resources and set `MaxErrors` to 10%, then the system stops running the automation on additional targets when the sixth error is received.  
Automations that are already running when the `MaxErrors` threshold is reached are allowed to complete, but some of these automations may fail as well. If you need to ensure that there won't be more failed automations than the specified `MaxErrors`, set `MaxConcurrency` to 1 so the automations proceed one at a time.  
Type: String  
Required: No

RuntimeParameters  
Required parameters for the secondary runbook. The mapping uses the following format: {"parameter1" : "value1", "parameter2" : "value2"}  
Type: Map  
Required: No

Tags  
Optional metadata that you assign to a resource. You can specify a maximum of five tags for an automation.  
Type: MapList  
Required: No

TargetLocations  
A location is a combination of AWS Regions and/or AWS accounts where you want to run the automation. You must specify a minimum of 1 item and can specify a maximum of 100 items. When specifying a value for this parameter, outputs aren't returned to the parent automation. If needed, you must make subsequent calls to API operations to retrieve the output from child automations.  
Type: MapList  
Required: No

TargetMaps  
A list of key-value mappings of document parameters to target resources. Both `Targets` and `TargetMaps` can't be specified together.   
Type: MapList  
Required: No

TargetParameterName  
The name of the parameter used as the target resource for the rate-controlled automation. Required if you specify `Targets`.  
Type: String  
Required: No

Targets  
A list of key-value mappings to target resources. Required if you specify `TargetParameterName`.  
Type: MapList  
Required: No

**Output**

Output  
The output generated by the secondary automation. You can reference the output by using the following format: *Secondary_Automation_Step_Name*.Output  
Type: StringList  
Here is an example:  

```
- name: launchNewWindowsInstance
  action: 'aws:executeAutomation'
  onFailure: Abort
  inputs:
    DocumentName: launchWindowsInstance
  nextStep: getNewInstanceRootVolume
- name: getNewInstanceRootVolume
  action: 'aws:executeAwsApi'
  onFailure: Abort
  inputs:
    Service: ec2
    Api: DescribeVolumes
    Filters:
    - Name: attachment.device
      Values:
      - /dev/sda1
    - Name: attachment.instance-id
      Values:
      - '{{launchNewWindowsInstance.Output}}'
  outputs:
  - Name: rootVolumeId
    Selector: '$.Volumes[0].VolumeId'
    Type: String
  nextStep: snapshotRootVolume
- name: snapshotRootVolume
  action: 'aws:executeAutomation'
  onFailure: Abort
  inputs:
    DocumentName: AWS-CreateSnapshot
    RuntimeParameters:
      VolumeId:
      - '{{getNewInstanceRootVolume.rootVolumeId}}'
      Description:
      - 'Initial root snapshot for {{launchNewWindowsInstance.Output}}'
```

ExecutionId  
The ID of the secondary automation.  
Type: String

Status  
The status of the secondary automation.  
Type: String
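
A rate-controlled invocation that runs the secondary runbook against each tagged instance might be sketched as follows; the tag key and values are placeholders.

```
name: fanOutAutomation
action: aws:executeAutomation
onFailure: Abort
inputs:
  DocumentName: secondaryAutomation
  TargetParameterName: InstanceId
  Targets:
  - Key: tag:Environment
    Values:
    - production
  MaxConcurrency: '10%'
  MaxErrors: '1'
```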

# `aws:executeAwsApi` – Call and run AWS API operations


Calls and runs AWS API operations. Most API operations are supported, although not all API operations have been tested. Streaming API operations, such as the [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html) operation, aren't supported. If you're not sure if an API operation you want to use is a streaming operation, review the [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) documentation for the service to determine if an API requires streaming inputs or outputs. We regularly update the Boto3 version used by this action. However, following the release of a new Boto3 version it can take up to a few weeks for changes to be reflected in this action. Each `aws:executeAwsApi` action can run up to a maximum duration of 25 seconds. For more examples of how to use this action, see [Additional runbook examples](automation-document-examples.md).

**Note**  
The `aws:executeAwsApi` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Inputs**  
Inputs are defined by the API operation that you choose. 

------
#### [ YAML ]

```
action: aws:executeAwsApi
inputs:
  Service: The official namespace of the service
  Api: The API operation or method name
  API operation inputs or parameters: A value
outputs: # These are user-specified outputs
- Name: The name for a user-specified output key
  Selector: A response object specified by using JSONPath format
  Type: The data type
```

------
#### [ JSON ]

```
{
   "action":"aws:executeAwsApi",
   "inputs":{
      "Service":"The official namespace of the service",
      "Api":"The API operation or method name",
      "API operation inputs or parameters":"A value"
   },
   "outputs":[
      {
         "Name":"The name for a user-specified output key",
         "Selector":"A response object specified by using JSONPath format",
         "Type":"The data type"
      }
   ]
}
```

------

Service  
The AWS service namespace that contains the API operation that you want to run. You can view a list of supported AWS service namespaces in [Available services](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) of the AWS SDK for Python (Boto3). The namespace can be found in the **Client** section. For example, the namespace for Systems Manager is `ssm`. The namespace for Amazon Elastic Compute Cloud (Amazon EC2) is `ec2`.  
Type: String  
Required: Yes

Api  
The name of the API operation that you want to run. You can view the API operations (also called methods) by choosing a service in the left navigation on the following [Services Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) page. Choose a method in the **Client** section for the service that you want to invoke. For example, all API operations (methods) for Amazon Relational Database Service (Amazon RDS) are listed on the following page: [Amazon RDS methods](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html).  
Type: String  
Required: Yes

API operation inputs  
One or more API operation inputs. You can view the available inputs (also called parameters) by choosing a service in the left navigation on the following [Services Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) page. Choose a method in the **Client** section for the service that you want to invoke. For example, all methods for Amazon RDS are listed on the following page: [Amazon RDS methods](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html). Choose the [describe_db_instances](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.describe_db_instances) method and scroll down to see the available parameters, such as **DBInstanceIdentifier**, **Name**, and **Values**.  

```
inputs:
  Service: The official namespace of the service
  Api: The API operation name
  API input 1: A value
  API Input 2: A value
  API Input 3: A value
```

```
"inputs":{
      "Service":"The official namespace of the service",
      "Api":"The API operation name",
      "API input 1":"A value",
      "API Input 2":"A value",
      "API Input 3":"A value"
}
```
Type: Determined by chosen API operation  
Required: Yes

**Outputs**  
Outputs are specified by the user based on the response from the chosen API operation.

Name  
A name for the output.  
Type: String  
Required: Yes

Selector  
The JSONPath to a specific attribute in the response object. You can view the response objects by choosing a service in the left navigation on the following [Services Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) page. Choose a method in the **Client** section for the service that you want to invoke. For example, all methods for Amazon RDS are listed on the following page: [Amazon RDS methods](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html). Choose the [describe_db_instances](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.describe_db_instances) method and scroll down to the **Response Structure** section. **DBInstances** is listed as a response object.  
Type: String  
Required: Yes

Type  
The data type for the response element.  
Type: Varies  
Valid values: Integer | Boolean | String | StringList | StringMap | MapList  
Required: Yes
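To illustrate how a `Selector` resolves against a response object, the following sketch applies a simplified JSONPath to a hypothetical response shaped like the `DescribeDBInstances` output. The `select` helper and the sample data are illustrative only, not part of the Automation service.

```python
# Hypothetical response shaped like the DescribeDBInstances structure;
# shows what a Selector such as $.DBInstances[0].DBInstanceIdentifier resolves to.
response = {
    "DBInstances": [
        {"DBInstanceIdentifier": "database-1", "DBInstanceStatus": "available"},
        {"DBInstanceIdentifier": "database-2", "DBInstanceStatus": "stopped"},
    ]
}

def select(obj, path):
    """Resolve a simple JSONPath like $.DBInstances[0].DBInstanceIdentifier."""
    value = obj
    for part in path.lstrip("$.").replace("]", "").split("."):
        for token in part.split("["):
            value = value[int(token)] if token.isdigit() else value[token]
    return value

print(select(response, "$.DBInstances[0].DBInstanceIdentifier"))  # database-1
```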

# `aws:executeScript` – Run a script


Runs the Python or PowerShell script provided using the specified runtime and handler. Each `aws:executeScript` action can run up to a maximum duration of 600 seconds (10 minutes). You can limit the timeout by specifying the `timeoutSeconds` parameter for an `aws:executeScript` step.

Use return statements in your function to add outputs to your output payload. For examples of defining outputs for your `aws:executeScript` action, see [Example 2: Scripted runbook](automation-authoring-runbooks-scripted-example.md). You can also send the output from `aws:executeScript` actions in your runbooks to the Amazon CloudWatch Logs log group you specify. For more information, see [Logging Automation action output with CloudWatch Logs](automation-action-logging.md).

If you want to send output from `aws:executeScript` actions to CloudWatch Logs, or if the scripts you specify for `aws:executeScript` actions call AWS API operations, an AWS Identity and Access Management (IAM) service role (or assume role) is always required to run the runbook.

**Note**  
The `aws:executeScript` action does not support automatic throttling retry. If your script makes AWS API calls that might be throttled, you must implement your own retry logic in your script code.
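Because the action doesn't retry throttled calls for you, scripts that call AWS APIs need their own backoff logic. The following Python sketch shows one way to do this; `make_call` and `is_throttled` are placeholders for your own boto3 call and error check.

```python
import random
import time

def call_with_retry(make_call, is_throttled, max_attempts=5, base_delay=1.0):
    """Retry make_call with exponential backoff and jitter while calls are throttled."""
    for attempt in range(max_attempts):
        try:
            return make_call()
        except Exception as err:
            # Give up on the last attempt or on non-throttling errors.
            if attempt == max_attempts - 1 or not is_throttled(err):
                raise
            # Full jitter: sleep a random fraction of the capped exponential delay.
            time.sleep(random.uniform(0, base_delay * (2 ** attempt)))
```

With boto3, `is_throttled` would typically inspect `err.response["Error"]["Code"]` for values such as `Throttling` or `ThrottlingException`.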

The `aws:executeScript` action contains the following preinstalled PowerShell Core modules:
+ Microsoft.PowerShell.Host
+ Microsoft.PowerShell.Management
+ Microsoft.PowerShell.Security
+ Microsoft.PowerShell.Utility
+ PackageManagement
+ PowerShellGet

To use PowerShell Core modules that aren't preinstalled, your script must install the module with the `-Force` flag, as shown in the following command. The `AWSPowerShell.NetCore` module isn't supported. Replace *ModuleName* with the module you want to install.

```
Install-Module ModuleName -Force
```

To use PowerShell Core cmdlets in your script, we recommend using the `AWS.Tools` modules, as shown in the following commands. Replace each *example resource placeholder* with your own information.
+ Amazon S3 cmdlets.

  ```
  Install-Module AWS.Tools.S3 -Force
  Get-S3Bucket -BucketName amzn-s3-demo-bucket
  ```
+ Amazon EC2 cmdlets.

  ```
  Install-Module AWS.Tools.EC2 -Force
  Get-EC2InstanceStatus -InstanceId instance-id
  ```
+ Common, or service-independent, AWS Tools for Windows PowerShell cmdlets.

  ```
  Install-Module AWS.Tools.Common -Force
  Get-AWSRegion
  ```

If your script initializes new objects in addition to using PowerShell Core cmdlets, you must also import the module as shown in the following command.

```
Install-Module AWS.Tools.EC2 -Force
Import-Module AWS.Tools.EC2

$tag = New-Object Amazon.EC2.Model.Tag
$tag.Key = "Tag"
$tag.Value = "TagValue"

New-EC2Tag -Resource i-02573cafcfEXAMPLE -Tag $tag
```

For examples of installing and importing `AWS.Tools` modules, and using PowerShell Core cmdlets in runbooks, see [Visual design experience for Automation runbooks](automation-visual-designer.md).

**Input**  
Provide the information required to run your script. Replace each *example resource placeholder* with your own information.

**Note**  
The attachment for a Python script can be a .py file or a .zip file that contains the script. PowerShell scripts must be stored in .zip files.

------
#### [ YAML ]

```
action: "aws:executeScript"
inputs:
  Runtime: runtime
  Handler: "functionName"
  InputPayload:
    scriptInput: '{{parameterValue}}'
  Script: |-
    def functionName(events, context):
      ...
  Attachment: "scriptAttachment.zip"
```

------
#### [ JSON ]

```
{
    "action": "aws:executeScript",
    "inputs": {
        "Runtime": "runtime",
        "Handler": "functionName",
        "InputPayload": {
            "scriptInput": "{{parameterValue}}"
        },
        "Attachment": "scriptAttachment.zip"
    }
}
```

------

Runtime  
The runtime language to be used for running the provided script. `aws:executeScript` supports the runtimes in the following table.      
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/automation-action-executeScript.html)
Type: String  
Required: Yes  
For Python runtimes, the environment provides 512 MB of memory and 512 MB of disk space. For PowerShell runtimes, the environment provides 1,024 MB of memory and 512 MB of disk space.

Handler  
The name of your function. The function defined in the handler must accept two parameters: `events` and `context`. The PowerShell runtime doesn't support this parameter.  
Type: String  
Required: Yes (Python) | Not supported (PowerShell)

InputPayload  
A JSON or YAML object that is passed to the first parameter of the handler. You can use this input to pass data to the script.  
Type: StringMap  
Required: No

The following example runbooks demonstrate how to provide input data to the Python and PowerShell runtimes.

```
description: Tag an instance
schemaVersion: '0.3'
assumeRole: '{{AutomationAssumeRole}}'
parameters:
    AutomationAssumeRole:
        type: String
        description: '(Required) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook.'
    InstanceId:
        type: String
        description: (Required) The ID of the EC2 instance you want to tag.
mainSteps:
  - name: tagInstance
    action: 'aws:executeScript'
    inputs:
        Runtime: "python3.11"
        Handler: tagInstance
        InputPayload:
            instanceId: '{{InstanceId}}'
        Script: |-
          def tagInstance(events,context):
            import boto3

            #Initialize client
            ec2 = boto3.client('ec2')
            instanceId = events['instanceId']
            tag = {
                "Key": "Env",
                "Value": "ExamplePython"
            }
            print(f"Adding tag {tag} to instance id {instanceId}")
            ec2.create_tags(
                Resources=[instanceId],
                Tags=[tag]
            )
            return tag
    outputs:
      - Type: String
        Name: TagKey
        Selector: $.Payload.Key
outputs:
  - tagInstance.TagKey
```

```
description: Tag an instance
schemaVersion: '0.3'
assumeRole: '{{AutomationAssumeRole}}'
parameters:
  AutomationAssumeRole:
    type: String
    description: (Required) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook.
  InstanceId:
    type: String
    description: (Required) The ID of the EC2 instance you want to tag.
mainSteps:
  - name: tagInstance
    action: aws:executeScript
    isEnd: true
    inputs:
      Runtime: PowerShell 7.4
      InputPayload:
        instanceId: '{{InstanceId}}'
      Script: |-
        Install-Module AWS.Tools.EC2 -Force
        Import-Module AWS.Tools.EC2

        $payload = $env:InputPayload | ConvertFrom-Json

        $tag = New-Object Amazon.EC2.Model.Tag
        $tag.Key = "Env"
        $tag.Value = "ExamplePowerShell"

        Write-Information "Adding tag key: $($tag.Key) and value: $($tag.Value) to instance id $($payload.instanceId)"
        New-EC2Tag -Resource $payload.instanceId -Tag $tag

        return $tag
    outputs:
      - Type: String
        Name: TagKey
        Selector: $.Payload.Key
outputs:
  - tagInstance.TagKey
```

Script  
An embedded script that you want to run during the automation.  
Type: String  
Required: No (Python) | Yes (PowerShell)

Attachment  
The name of a standalone script file or .zip file that can be invoked by the action. Specify the same value as the `Name` of the document attachment file you specify in the `Attachments` request parameter. For more information, see [Attachments](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateDocument.html#systemsmanager-CreateDocument-request-Attachments) in the *AWS Systems Manager API Reference*. If you're providing a script using an attachment, you must also define a `files` section in the top-level elements of your runbook. For more information, see [Schema version 0.3](documents-schemas-features.md#automation-doc-syntax-examples).  
To invoke a file for Python, use the `filename.method_name` format in `Handler`.   
The attachment for a Python script can be a .py file or a .zip file that contains the script. PowerShell scripts must be stored in .zip files.
When including Python libraries in your attachment, we recommend adding an empty `__init__.py` file in each module directory. This allows you to import the modules from the library in your attachment within your script content. For example: `from library import module`  
Type: String  
Required: No

**Output**

Payload  
The JSON representation of the object returned by your function. Up to 100KB is returned. If you output a list, a maximum of 100 items is returned.
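Because of these limits, it can help to bound what your function returns before Automation truncates it. A minimal sketch, assuming you prefer to fail fast rather than lose data silently:

```python
import json

def bounded_payload(result, max_items=100, max_bytes=100 * 1024):
    """Trim a return value so it fits within the documented Payload limits."""
    if isinstance(result, list):
        result = result[:max_items]  # Automation returns at most 100 list items
    if len(json.dumps(result).encode("utf-8")) > max_bytes:
        raise ValueError("payload exceeds 100 KB; return a summary instead")
    return result
```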

## Using attachments with aws:executeScript


Attachments provide a powerful way to package and reuse complex scripts, multiple modules, and external dependencies with your `aws:executeScript` actions. Use attachments when you need to:
+ Package multiple Python modules or PowerShell scripts together.
+ Reuse the same script logic across multiple runbooks.
+ Include external libraries or dependencies with your scripts.
+ Keep your runbook definition clean by separating complex script logic.
+ Share script packages across teams or automation workflows.

### Attachment structure and packaging


You can attach either single files or zip packages containing multiple files. The structure depends on your use case:

**Single file attachments**  
For simple scripts, you can attach a single `.py` file (Python) or a `.zip` file containing a single PowerShell script.

**Multi-module packages**  
For complex automation that requires multiple modules, create a zip package with the following recommended structure:

```
my-automation-package.zip
├── main.py                    # Entry point script
├── utils/
│   ├── __init__.py           # Required for Python module imports
│   ├── helper_functions.py   # Utility functions
│   └── aws_operations.py     # AWS-specific operations
├── config/
│   ├── __init__.py
│   └── settings.py           # Configuration settings
└── requirements.txt          # Optional: document dependencies
```

**Important**  
For Python packages, you must include an empty `__init__.py` file in each directory that contains Python modules. This allows you to import modules using standard Python import syntax like `from utils import helper_functions`.
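The following Python sketch demonstrates why the `__init__.py` files matter: it builds a miniature package on disk, mirroring the layout above, and imports it the same way a script imports modules from an unpacked attachment. The file and function names are illustrative.

```python
import os
import sys
import tempfile

# Build a miniature attachment package on disk. The utils/helper_functions
# names mirror the recommended layout above and are illustrative.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "utils"))
open(os.path.join(root, "utils", "__init__.py"), "w").close()  # marks utils as a package
with open(os.path.join(root, "utils", "helper_functions.py"), "w") as f:
    f.write("def greet(name):\n    return f'hello {name}'\n")

# At run time the unpacked attachment root is on the module search path, so
# the standard import below works only because __init__.py is present.
sys.path.insert(0, root)
from utils import helper_functions

print(helper_functions.greet("automation"))  # hello automation
```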

**PowerShell package structure**  
PowerShell attachments must be packaged in zip files with the following structure:

```
my-powershell-package.zip
├── Main.ps1                  # Entry point script
├── Modules/
│   ├── HelperFunctions.ps1   # Utility functions
│   └── AWSOperations.ps1     # AWS-specific operations
└── Config/
    └── Settings.ps1          # Configuration settings
```

### Creating runbooks with attachments


Follow these steps to create runbooks that use attachments:

1. **Upload your attachment to Amazon S3**

   Upload your script file or zip package to an S3 bucket that your automation role can access. Note the S3 URI for use in the next step.

   ```
   aws s3 cp my-automation-package.zip s3://my-automation-bucket/scripts/
   ```

1. **Calculate the attachment checksum**

   Calculate the SHA-256 checksum of your attachment file for security verification:

   ```
   # Linux/macOS
   shasum -a 256 my-automation-package.zip
   
   # Windows PowerShell
   Get-FileHash -Algorithm SHA256 my-automation-package.zip
   ```

1. **Define the files section in your runbook**

   Add a `files` section at the top level of your runbook to reference your attachment:

   ```
   files:
     my-automation-package.zip:
       checksums:
         sha256: "your-calculated-checksum-here"
   ```

1. **Reference the attachment in your executeScript step**

   Use the `Attachment` parameter to reference your uploaded file:

   ```
   - name: runMyScript
     action: aws:executeScript
     inputs:
       Runtime: python3.11
       Handler: main.process_data
       Attachment: my-automation-package.zip
       InputPayload:
         inputData: "{{InputParameter}}"
   ```
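If you generate the `files` section programmatically, the SHA-256 checksum from step 2 can also be computed with the Python standard library; a sketch:

```python
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Stream the file so large zip packages don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The result matches the output of `shasum -a 256` and `Get-FileHash -Algorithm SHA256` for the same file.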

## aws:executeScript attachment examples


The following examples demonstrate different ways to use attachments with the `aws:executeScript` action.

### Example 1: Single file attachment


This example shows how to use a single Python file as an attachment to process EC2 instance data.

**Attachment file: process_instance.py**  
Create a Python file with the following content:

```
import boto3
import json

def process_instance_data(events, context):
    """Process EC2 instance data and return formatted results."""
    try:
        instance_id = events.get('instanceId')
        if not instance_id:
            raise ValueError("instanceId is required")
        
        ec2 = boto3.client('ec2')
        
        # Get instance details
        response = ec2.describe_instances(InstanceIds=[instance_id])
        instance = response['Reservations'][0]['Instances'][0]
        
        # Format the response
        result = {
            'instanceId': instance_id,
            'instanceType': instance['InstanceType'],
            'state': instance['State']['Name'],
            'availabilityZone': instance['Placement']['AvailabilityZone'],
            'tags': {tag['Key']: tag['Value'] for tag in instance.get('Tags', [])}
        }
        
        print(f"Successfully processed instance {instance_id}")
        return result
        
    except Exception as e:
        print(f"Error processing instance: {str(e)}")
        raise
```

**Complete runbook**  
Here's the complete runbook that uses the single file attachment:

```
description: Process EC2 instance data using single file attachment
schemaVersion: '0.3'
assumeRole: '{{AutomationAssumeRole}}'
parameters:
  AutomationAssumeRole:
    type: String
    description: (Required) IAM role for automation execution
  InstanceId:
    type: String
    description: (Required) EC2 instance ID to process

files:
  process_instance.py:
    checksums:
      sha256: "abc123def456..."

mainSteps:
  - name: processInstance
    action: aws:executeScript
    inputs:
      Runtime: python3.11
      Handler: process_instance.process_instance_data
      Attachment: process_instance.py
      InputPayload:
        instanceId: '{{InstanceId}}'
    outputs:
      - Type: StringMap
        Name: InstanceData
        Selector: $.Payload

outputs:
  - processInstance.InstanceData
```

### Example 2: Multi-module package


This example demonstrates using a zip package containing multiple Python modules for complex S3 bucket operations.

**Package structure**  
Create a zip package with the following structure:

```
s3-operations.zip
├── main.py
├── utils/
│   ├── __init__.py
│   ├── s3_helper.py
│   └── validation.py
└── config/
    ├── __init__.py
    └── settings.py
```

**main.py (entry point)**  
The main script that orchestrates the operations:

```
from utils.s3_helper import S3Operations
from utils.validation import validate_bucket_name
from config.settings import get_default_settings

def cleanup_s3_bucket(events, context):
    """Clean up S3 bucket based on specified criteria."""
    try:
        bucket_name = events.get('bucketName')
        max_age_days = events.get('maxAgeDays', 30)
        
        # Validate inputs
        if not validate_bucket_name(bucket_name):
            raise ValueError(f"Invalid bucket name: {bucket_name}")
        
        # Initialize S3 operations
        s3_ops = S3Operations()
        settings = get_default_settings()
        
        # Perform cleanup
        deleted_objects = s3_ops.delete_old_objects(
            bucket_name, 
            max_age_days,
            settings['dry_run']
        )
        
        result = {
            'bucketName': bucket_name,
            'deletedCount': len(deleted_objects),
            'deletedObjects': deleted_objects[:10],  # Return first 10 for brevity
            'dryRun': settings['dry_run']
        }
        
        print(f"Cleanup completed for bucket {bucket_name}")
        return result
        
    except Exception as e:
        print(f"Error during S3 cleanup: {str(e)}")
        raise
```

## Troubleshooting aws:executeScript attachments


Use the following guidance to resolve common issues with `aws:executeScript` attachments:

**Module import errors**  
If you receive import errors when using multi-module packages:
+ Ensure you have included an empty `__init__.py` file in each directory containing Python modules.
+ Verify that your import statements match the actual file and directory structure in your zip package.
+ Use relative imports (e.g., `from .utils import helper`) or absolute imports (e.g., `from utils import helper`) consistently.

**Attachment not found errors**  
If your automation fails to find the attachment:
+ Verify that the `Attachment` parameter value exactly matches the key in your `files` section.
+ Check that your S3 bucket path and file name are correct in the `files` section.
+ Ensure your automation role has `s3:GetObject` permission for the attachment S3 location.
+ Verify that the checksum in your runbook matches the actual file checksum.

**Handler function errors**  
If you receive handler-related errors:
+ For Python: Use the format `filename.function_name` in the `Handler` parameter (e.g., `main.process_data`).
+ Ensure your handler function accepts exactly two parameters: `events` and `context`.
+ For PowerShell: Do not specify a `Handler` parameter; the script runs directly.

**Script execution failures**  
If your script fails during execution:
+ Check the automation execution history for detailed error messages and stack traces.
+ Use `print()` statements (Python) or `Write-Information` (PowerShell) to add debugging output.
+ Verify that all required AWS permissions are granted to your automation role.
+ Test your script logic locally before packaging it as an attachment.

**Exit codes and error handling**  
To properly handle errors and return exit codes:
+ In Python: Use `raise Exception("error message")` to indicate script failure.
+ In PowerShell: Use `throw "error message"` or `Write-Error` to indicate failure.
+ Return structured data from your functions to provide detailed success/failure information.
+ Use try-catch blocks to handle exceptions gracefully and provide meaningful error messages.
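The points above can be sketched as a minimal handler skeleton; the `target` input and the returned fields are illustrative, not a fixed contract:

```python
def handler(events, context):
    """Hypothetical step handler showing the recommended error pattern."""
    try:
        target = events.get("target")
        if not target:
            raise ValueError("target is required")
        # Return structured data so later steps can select fields via JSONPath.
        return {"status": "succeeded", "target": target}
    except Exception as err:
        print(f"step failed: {err}")  # surfaced in the execution history
        raise  # re-raising marks the Automation step as Failed
```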

# `aws:executeStateMachine` – Run an AWS Step Functions state machine


Runs an AWS Step Functions state machine.

**Note**  
The `aws:executeStateMachine` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Input**

This action supports most parameters for the Step Functions [StartExecution](https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html) API operation.

**Required AWS Identity and Access Management (IAM) permissions**
+ `states:DescribeExecution`
+ `states:StartExecution`
+ `states:StopExecution`

------
#### [ YAML ]

```
name: executeTheStateMachine
action: aws:executeStateMachine
inputs:
  stateMachineArn: StateMachine_ARN
  input: '{"parameters":"values"}'
  name: name
```

------
#### [ JSON ]

```
{
    "name": "executeTheStateMachine",
    "action": "aws:executeStateMachine",
    "inputs": {
        "stateMachineArn": "StateMachine_ARN",
        "input": "{\"parameters\":\"values\"}",
        "name": "name"
    }
}
```

------

stateMachineArn  
The Amazon Resource Name (ARN) of the Step Functions state machine.  
Type: String  
Required: Yes

name  
The name of the execution.  
Type: String  
Required: No

input  
A string that contains the JSON input data for the execution.  
Type: String  
Required: No
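Because `input` is a JSON string rather than a map, build it with a serializer instead of hand-escaping quotes when you generate runbook content programmatically. A Python sketch, with illustrative parameter names:

```python
import json

# The input field of aws:executeStateMachine is a JSON *string*;
# json.dumps produces correctly escaped output in one step.
execution_input = json.dumps({"parameters": "values", "retryCount": 3})
print(execution_input)
```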

**Outputs**  
The following outputs are predefined for this action.

executionArn  
The ARN of the execution.  
Type: String

input  
The string that contains the JSON input data of the execution. Length constraints apply to the payload size, and are expressed as bytes in UTF-8 encoding.  
Type: String

name  
The name of the execution.  
Type: String

output  
The JSON output data of the execution. Length constraints apply to the payload size, and are expressed as bytes in UTF-8 encoding.  
Type: String

startDate  
The date the execution started.  
Type: String

stateMachineArn  
The ARN of the executed state machine.  
Type: String

status  
The current status of the execution.  
Type: String

stopDate  
If the execution has already ended, the date the execution stopped.  
Type: String

# `aws:invokeWebhook` – Invoke an Automation webhook integration


Invokes the specified Automation webhook integration. For information about creating Automation integrations, see [Creating webhook integrations for Automation](creating-webhook-integrations.md).

**Note**  
The `aws:invokeWebhook` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Note**  
To use the `aws:invokeWebhook` action, your user or service role must allow the following actions:  
`ssm:GetParameter`  
`kms:Decrypt`  
Permission for the AWS Key Management Service (AWS KMS) `Decrypt` operation is only required if you use a customer managed key to encrypt the parameter for your integration.

**Input**  
Provide the information for the Automation integration you want to invoke.

------
#### [ YAML ]

```
action: "aws:invokeWebhook"
inputs: 
 IntegrationName: "exampleIntegration"
 Body: "Request body"
```

------
#### [ JSON ]

```
{
    "action": "aws:invokeWebhook",
    "inputs": {
        "IntegrationName": "exampleIntegration",
        "Body": "Request body"
    }
}
```

------

IntegrationName  
The name of the Automation integration. For example, `exampleIntegration`. The integration you specify must already exist.  
Type: String  
Required: Yes

Body  
The payload you want to send when your webhook integration is invoked.  
Type: String  
Required: No

**Output**

Response  
The text received from the webhook provider response.

ResponseCode  
The HTTP status code received from the webhook provider response.

# `aws:invokeLambdaFunction` – Invoke an AWS Lambda function


Invokes the specified AWS Lambda function.

**Note**  
Each `aws:invokeLambdaFunction` action can run up to a maximum duration of 300 seconds (5 minutes). You can limit the timeout by specifying the `timeoutSeconds` parameter for an `aws:invokeLambdaFunction` step.

**Note**  
The `aws:invokeLambdaFunction` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Input**  
This action supports most parameters for the Lambda [Invoke](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html) API operation.

------
#### [ YAML ]

```
name: invokeMyLambdaFunction
action: aws:invokeLambdaFunction
maxAttempts: 3
timeoutSeconds: 120
onFailure: Abort
inputs:
  FunctionName: MyLambdaFunction
```

------
#### [ JSON ]

```
{
    "name": "invokeMyLambdaFunction",
    "action": "aws:invokeLambdaFunction",
    "maxAttempts": 3,
    "timeoutSeconds": 120,
    "onFailure": "Abort",
    "inputs": {
        "FunctionName": "MyLambdaFunction"
    }
}
```

------

FunctionName  
The name of the Lambda function. This function must exist.  
Type: String  
Required: Yes

Qualifier  
The function version or alias name.  
Type: String  
Required: No

InvocationType  
The invocation type. The default value is `RequestResponse`.  
Type: String  
Valid values: `Event` | `RequestResponse` | `DryRun`  
Required: No

LogType  
If you set this parameter to `Tail`, the invocation type must be `RequestResponse`. Lambda returns the last 4 KB of log data produced by your Lambda function, base64-encoded.  
Type: String  
Valid values: `None` | `Tail`  
Required: No

ClientContext  
The client-specific information.  
Required: No

InputPayload  
A YAML or JSON object that is passed to the first parameter of the handler. You can use this input to pass data to the function. This input provides more flexibility and support than the legacy `Payload` input. If you define both `InputPayload` and `Payload` for the action, `InputPayload` takes precedence and the `Payload` value is not used.  
Type: StringMap  
Required: No

Payload  
A JSON string that's passed to the first parameter of the handler. This can be used to pass input data to the function. We recommend using `InputPayload` input for added functionality.  
Type: String  
Required: No

**Output**

StatusCode  
The HTTP status code.

FunctionError  
If present, it indicates that an error occurred while executing the function. Error details are included in the response payload.

LogResult  
The base64-encoded logs for the Lambda function invocation. Logs are present only if the invocation type is `RequestResponse`, and the logs were requested.

Payload  
The JSON representation of the object returned by the Lambda function. Payload is present only if the invocation type is `RequestResponse`.
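The round trip can be pictured in Python: whatever object the function returns is serialized, and the step's `Payload` output is that JSON representation. The function body and volume ID below are illustrative:

```python
import json

def lambda_handler(event, context):
    # Illustrative only: a real function would look up the root volume here.
    return {"VolumeId": "vol-0abcd1234example"}

# Automation stores the JSON representation of the return value as the
# step's Payload output, which later steps reference as {{stepName.Payload}}.
payload = json.dumps(lambda_handler({"InstanceId": "i-02573cafcfEXAMPLE"}, None))
print(json.loads(payload)["VolumeId"])  # vol-0abcd1234example
```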

The following is a portion from the `AWS-PatchInstanceWithRollback` runbook demonstrating how to reference outputs from the `aws:invokeLambdaFunction` action.

------
#### [ YAML ]

```
- name: IdentifyRootVolume
  action: aws:invokeLambdaFunction
  inputs:
    FunctionName: "IdentifyRootVolumeLambda-{{automation:EXECUTION_ID}}"
    Payload: '{"InstanceId": "{{InstanceId}}"}'
- name: PrePatchSnapshot
  action: aws:executeAutomation
  inputs:
    DocumentName: "AWS-CreateSnapshot"
    RuntimeParameters:
      VolumeId: "{{IdentifyRootVolume.Payload}}"
      Description: "ApplyPatchBaseline restoration case contingency"
```

------
#### [ JSON ]

```
{
    "name": "IdentifyRootVolume",
    "action": "aws:invokeLambdaFunction",
    "inputs": {
      "FunctionName": "IdentifyRootVolumeLambda-{{automation:EXECUTION_ID}}",
      "Payload": "{\"InstanceId\": \"{{InstanceId}}\"}"
    }
  },
  {
    "name": "PrePatchSnapshot",
    "action": "aws:executeAutomation",
    "inputs": {
      "DocumentName": "AWS-CreateSnapshot",
      "RuntimeParameters": {
        "VolumeId": "{{IdentifyRootVolume.Payload}}",
        "Description": "ApplyPatchBaseline restoration case contingency"
      }
    }
  }
```

------

# `aws:loop` – Iterate over steps in an automation


This action iterates over a subset of steps in an automation runbook. You can choose a `do while` or `for each` style loop. To construct a `do while` loop, use the `LoopCondition` input parameter. To construct a `for each` loop, use the `Iterators` and `IteratorDataType` input parameters. When using an `aws:loop` action, specify either the `Iterators` or the `LoopCondition` input parameter, but not both. The maximum number of iterations is 100.

The `onCancel` property can only be used for steps defined within a loop. The `onCancel` property isn't supported for the `aws:loop` action. The `onFailure` property can be used for an `aws:loop` action; however, it's only used if an unexpected error causes the step to fail. If you define `onFailure` properties for the steps within a loop, the `aws:loop` action inherits those properties and reacts accordingly when a failure occurs.

**Examples**  
The following are examples of how to construct the different types of loop actions.

------
#### [ do while ]

```
name: RepeatMyLambdaFunctionUntilOutputIsReturned
action: aws:loop
inputs:
  Steps:
    - name: invokeMyLambda
      action: aws:invokeLambdaFunction
      inputs:
        FunctionName: LambdaFunctionName
      outputs:
        - Name: ShouldRetry
          Selector: $.Retry
          Type: Boolean
  LoopCondition:
    Variable: "{{ invokeMyLambda.ShouldRetry }}"
    BooleanEquals: true
  MaxIterations: 3
```

------
#### [ for each ]

```
name: stopAllInstancesWithWaitTime
action: aws:loop
inputs:
  Iterators: "{{ DescribeInstancesStep.InstanceIds }}"
  IteratorDataType: "String"
  Steps:
    - name: stopOneInstance
      action: aws:changeInstanceState
      inputs:
        InstanceIds:
          - "{{stopAllInstancesWithWaitTime.CurrentIteratorValue}}"
        CheckStateOnly: false
        DesiredState: stopped
    - name: wait10Seconds
      action: aws:sleep
      inputs:
        Duration: PT10S
```

------

**Input**  
The input is as follows.

Iterators  
The list of items for the steps to iterate over. The maximum number of iterators is 100.  
Type: StringList  
Required: No

IteratorDataType  
An optional parameter to specify the data type of the items in `Iterators`. You can provide a value for this parameter along with the `Iterators` input parameter. If you don't specify the `Iterators` parameter, then you must specify a value for the `LoopCondition` parameter.  
Type: String  
Valid values: Boolean | Integer | String | StringMap  
Default: String  
Required: No

LoopCondition  
Consists of a `Variable` and an operator condition to evaluate. If you don’t specify a value for this parameter, then you must specify values for the `Iterators` and `IteratorDataType` parameters. You can use complex operator evaluations by using a combination of `And`, `Not`, and `Or` operators. The condition is evaluated after the steps in the loop complete. If the condition is `true` and the `MaxIterations` value has not been reached, the steps in the loop run again. The operator conditions are as follows:  

**String operations**
+ StringEquals
+ EqualsIgnoreCase
+ StartsWith
+ EndsWith
+ Contains

**Numeric operations**
+ NumericEquals
+ NumericGreater
+ NumericGreaterOrEquals
+ NumericLesser
+ NumericLesserOrEquals

**Boolean operation**
+ BooleanEquals

Type: StringMap  
Required: No

MaxIterations  
The maximum number of times the steps in the loop run. Once the value specified for this input is reached, the loop stops running even if the `LoopCondition` is still `true` or if there are objects remaining in the `Iterators` parameter.  
Type: Integer  
Valid values: 1 - 100  
Required: No

Steps  
The list of steps to run in the loop. These function like a nested runbook. Within these steps you can access the current iterator value for a `for each` loop using the syntax `{{loopStepName.CurrentIteratorValue}}`. You can also access an integer value of the current iteration for both loop types using the syntax `{{loopStepName.CurrentIteration}}`.  
Type: List of steps  
Required: Yes

**Output**

CurrentIteration  
The current loop iteration as an integer. Iteration values start at 1.  
Type: Integer

CurrentIteratorValue  
The value of the current iterator as a string. This output is only present in `for each` loops.  
Type: String
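The two loop styles can be modeled in plain Python; a hedged sketch of the semantics described above, with `steps` standing in for the nested step list:

```python
def run_for_each(iterators, steps, max_iterations=100):
    """for each: run steps once per iterator value, capped by MaxIterations."""
    results = []
    for i, value in enumerate(iterators[:max_iterations], start=1):
        # i plays the role of CurrentIteration, value of CurrentIteratorValue.
        results.append(steps(value, i))
    return results

def run_do_while(steps, loop_condition, max_iterations=100):
    """do while: run steps, then evaluate LoopCondition; repeat while true."""
    iteration = 0
    while iteration < max_iterations:
        iteration += 1
        result = steps(iteration)
        if not loop_condition(result):  # condition checked after the steps run
            break
    return iteration
```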

# `aws:pause` – Pause an automation


This action pauses the automation. Once paused, the automation status is *Waiting*. To continue the automation, use the [SendAutomationSignal](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendAutomationSignal.html) API operation with the `Resume` signal type. We recommend using the `aws:sleep` or `aws:approve` action for more granular control of your workflows.

**Note**  
The default timeout for this action is 7 days (604800 seconds) and the maximum value is 30 days (2592000 seconds). You can limit or extend the timeout by specifying the `timeoutSeconds` parameter for an `aws:pause` step.

**Input**  
The input is as follows.

------
#### [ YAML ]

```
name: pauseThis
action: aws:pause
timeoutSeconds: 1209600
inputs: {}
```

------
#### [ JSON ]

```
{
    "name": "pauseThis",
    "action": "aws:pause",
    "timeoutSeconds": 1209600,
    "inputs": {}
}
```

------

**Output**

None  
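
To resume a paused automation programmatically, you call `SendAutomationSignal` with the `Resume` signal type. The following sketch builds the request parameters and checks a step's `timeoutSeconds` against the bounds described above; the helper names are illustrative and not part of any AWS SDK:

```python
def build_resume_signal(execution_id: str) -> dict:
    # Request parameters for the SendAutomationSignal API operation
    # (for example, passed to boto3's ssm_client.send_automation_signal).
    return {"AutomationExecutionId": execution_id, "SignalType": "Resume"}


def valid_pause_timeout(timeout_seconds: int) -> bool:
    # aws:pause defaults to 7 days (604800 s); the maximum is 30 days (2592000 s).
    return 0 < timeout_seconds <= 2592000
```

With boto3, the built parameters would be unpacked into the client call, for example `ssm_client.send_automation_signal(**build_resume_signal(execution_id))`.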


# `aws:runCommand` – Run a command on a managed instance


Runs the specified commands.

**Note**  
Automation only supports *output* of one AWS Systems Manager Run Command action. A runbook can include multiple Run Command actions, but output is supported for only one action at a time.

**Input**  
This action supports most parameters of the `SendCommand` API operation. For more information, see [SendCommand](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html).

------
#### [ YAML ]

```
- name: checkMembership
  action: 'aws:runCommand'
  inputs:
    DocumentName: AWS-RunPowerShellScript
    InstanceIds:
      - '{{InstanceIds}}'
    Parameters:
      commands:
        - (Get-WmiObject -Class Win32_ComputerSystem).PartOfDomain
```

------
#### [ JSON ]

```
{
    "name": "checkMembership",
    "action": "aws:runCommand",
    "inputs": {
        "DocumentName": "AWS-RunPowerShellScript",
        "InstanceIds": [
            "{{InstanceIds}}"
        ],
        "Parameters": {
            "commands": [
                "(Get-WmiObject -Class Win32_ComputerSystem).PartOfDomain"
            ]
        }
    }
}
```

------

DocumentName  
If the Command type document is owned by you or AWS, specify the name of the document. If you're using a document shared with you by a different AWS account, specify the Amazon Resource Name (ARN) of the document. For more information about using shared documents, see [Using shared SSM documents](documents-ssm-sharing.md#using-shared-documents).  
Type: String  
Required: Yes

InstanceIds  
The instance IDs where you want the command to run. You can specify a maximum of 50 IDs.   
You can also use the pseudo parameter `{{RESOURCE_ID}}` in place of instance IDs to run the command on all instances in the target group. For more information about pseudo parameters, see [Using pseudo parameters when registering maintenance window tasks](maintenance-window-tasks-pseudo-parameters.md).  
Alternatively, you can send commands to a fleet of instances by using the `Targets` parameter. The `Targets` parameter accepts Amazon Elastic Compute Cloud (Amazon EC2) tags. For more information about how to use the `Targets` parameter, see [Run commands at scale](send-commands-multiple.md).  
Type: StringList  
Required: No (If you don't specify InstanceIds or use the `{{RESOURCE_ID}}` pseudo parameter, then you must specify the `Targets` parameter.)

Targets  
An array of search criteria that targets instances by using a Key,Value combination that you specify. `Targets` is required if you don't provide one or more instance IDs in the call. For more information about how to use the `Targets` parameter, see [Run commands at scale](send-commands-multiple.md).  
Type: MapList (The schema of the map in the list must match the object.) For information, see [Target](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_Target.html) in the *AWS Systems Manager API Reference*.  
Required: No (If you don't specify `Targets`, then you must specify InstanceIds or use the `{{RESOURCE_ID}}` pseudo parameter.)  
Following is an example.  

```
- name: checkMembership
  action: aws:runCommand
  inputs:
    DocumentName: AWS-RunPowerShellScript
    Targets:
      - Key: tag:Stage
        Values:
          - Gamma
          - Beta
      - Key: tag:Application
        Values:
          - Suite
    Parameters:
      commands:
        - (Get-WmiObject -Class Win32_ComputerSystem).PartOfDomain
```

```
{
    "name": "checkMembership",
    "action": "aws:runCommand",
    "inputs": {
        "DocumentName": "AWS-RunPowerShellScript",
        "Targets": [                   
            {
                "Key": "tag:Stage",
                "Values": [
                    "Gamma", "Beta"
                ]
            },
            {
                "Key": "tag:Application",
                "Values": [
                    "Suite"
                ]
            }
        ],
        "Parameters": {
            "commands": [
                "(Get-WmiObject -Class Win32_ComputerSystem).PartOfDomain"
            ]
        }
    }
}
```

Parameters  
The required and optional parameters specified in the document.  
Type: Map  
Required: No

CloudWatchOutputConfig  
Configuration options for sending command output to Amazon CloudWatch Logs. For more information about sending command output to CloudWatch Logs, see [Configuring Amazon CloudWatch Logs for Run Command](sysman-rc-setting-up-cwlogs.md).  
Type: StringMap (The schema of the map must match the object. For more information, see [CloudWatchOutputConfig](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CloudWatchOutputConfig.html) in the *AWS Systems Manager API Reference*).  
Required: No  
Following is an example.  

```
- name: checkMembership
  action: aws:runCommand
  inputs:
    DocumentName: AWS-RunPowerShellScript
    InstanceIds:
      - "{{InstanceIds}}"
    Parameters:
      commands:
        - "(Get-WmiObject -Class Win32_ComputerSystem).PartOfDomain"
    CloudWatchOutputConfig:
      CloudWatchLogGroupName: CloudWatchGroupForSSMAutomationService
      CloudWatchOutputEnabled: true
```

```
{
    "name": "checkMembership",
    "action": "aws:runCommand",
    "inputs": {
        "DocumentName": "AWS-RunPowerShellScript",
        "InstanceIds": [
            "{{InstanceIds}}"
        ],
        "Parameters": {
            "commands": [
                "(Get-WmiObject -Class Win32_ComputerSystem).PartOfDomain"
            ]
        },
        "CloudWatchOutputConfig" : { 
                "CloudWatchLogGroupName": "CloudWatchGroupForSSMAutomationService",
                "CloudWatchOutputEnabled": true
        }
    }
}
```

Comment  
User-defined information about the command.  
Type: String  
Required: No

DocumentHash  
The hash for the document.  
Type: String  
Required: No

DocumentHashType  
The type of the hash.  
Type: String  
Valid values: `Sha256` | `Sha1`  
Required: No

NotificationConfig  
The configurations for sending notifications.  
Required: No

OutputS3BucketName  
The name of the S3 bucket for command output responses. Your managed node must have permissions for the S3 bucket to successfully log output.  
Type: String  
Required: No

OutputS3KeyPrefix  
The S3 key prefix for command output responses.  
Type: String  
Required: No

ServiceRoleArn  
The ARN of the AWS Identity and Access Management (IAM) service role to use to publish Amazon Simple Notification Service (Amazon SNS) notifications for the command.  
Type: String  
Required: No

TimeoutSeconds  
The amount of time in seconds to wait for the command to be delivered to the AWS Systems Manager SSM Agent on an instance. If the SSM Agent on the instance doesn't receive the command before the specified value is reached, the status of the command changes to `Delivery Timed Out`.  
Type: Integer  
Required: No  
Valid values: 30-2592000

**Output**

CommandId  
The ID of the command.

Status  
The status of the command.

ResponseCode  
The response code of the command. If the document you run has more than one step, a value isn't returned for this output.

Output  
The output of the command. If you target a tag or multiple instances with your command, no output value is returned. You can use the `GetCommandInvocation` and `ListCommandInvocations` API operations to retrieve output for individual instances.
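
The `GetCommandInvocation` operation returns per-instance output even when the Run Command step targeted a tag or many instances. The sketch below accepts any client object that exposes that operation (for example, a boto3 SSM client); the helper name is illustrative:

```python
def fetch_instance_output(ssm_client, command_id: str, instance_id: str) -> str:
    # GetCommandInvocation returns the output of a single instance, even
    # when the command itself targeted a tag or many instances.
    response = ssm_client.get_command_invocation(
        CommandId=command_id, InstanceId=instance_id
    )
    return response.get("StandardOutputContent", "")
```

To collect output for an entire fleet, you would call this once per instance ID returned by `ListCommandInvocations`.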

# `aws:runInstances` – Launch an Amazon EC2 instance


Launches a new Amazon Elastic Compute Cloud (Amazon EC2) instance.

**Note**  
The `aws:runInstances` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Input**  
The action supports most API parameters. For more information, see the [RunInstances](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html) API documentation.

------
#### [ YAML ]

```
name: launchInstance
action: aws:runInstances
maxAttempts: 3
timeoutSeconds: 1200
onFailure: Abort
inputs:
  ImageId: ami-12345678
  InstanceType: t2.micro
  MinInstanceCount: 1
  MaxInstanceCount: 1
  IamInstanceProfileName: myRunCmdRole
  TagSpecifications:
  - ResourceType: instance
    Tags:
    - Key: LaunchedBy
      Value: SSMAutomation
    - Key: Category
      Value: HighAvailabilityFleetHost
```

------
#### [ JSON ]

```
{
   "name":"launchInstance",
   "action":"aws:runInstances",
   "maxAttempts":3,
   "timeoutSeconds":1200,
   "onFailure":"Abort",
   "inputs":{
      "ImageId":"ami-12345678",
      "InstanceType":"t2.micro",
      "MinInstanceCount":1,
      "MaxInstanceCount":1,
      "IamInstanceProfileName":"myRunCmdRole",
      "TagSpecifications":[
         {
            "ResourceType":"instance",
            "Tags":[
               {
                  "Key":"LaunchedBy",
                  "Value":"SSMAutomation"
               },
               {
                  "Key":"Category",
                  "Value":"HighAvailabilityFleetHost"
               }
            ]
         }
      ]
   }
}
```

------

AdditionalInfo  
Reserved.  
Type: String  
Required: No

BlockDeviceMappings  
The block devices for the instance.  
Type: MapList  
Required: No

ClientToken  
The identifier to ensure idempotency of the request.  
Type: String  
Required: No

DisableApiTermination  
Turns on or turns off instance API termination.  
Type: Boolean  
Required: No

EbsOptimized  
Turns on or turns off Amazon Elastic Block Store (Amazon EBS) optimization.  
Type: Boolean  
Required: No

IamInstanceProfileArn  
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) instance profile for the instance.  
Type: String  
Required: No

IamInstanceProfileName  
The name of the IAM instance profile for the instance.  
Type: String  
Required: No

ImageId  
The ID of the Amazon Machine Image (AMI).  
Type: String  
Required: Yes

InstanceInitiatedShutdownBehavior  
Indicates whether the instance stops or terminates on system shutdown.  
Type: String  
Required: No

InstanceType  
The instance type.  
If an instance type value isn't provided, the `m1.small` instance type is used.  
Type: String  
Required: No

KernelId  
The ID of the kernel.  
Type: String  
Required: No

KeyName  
The name of the key pair.  
Type: String  
Required: No

MaxInstanceCount  
The maximum number of instances to be launched.  
Type: String  
Required: No

MetadataOptions  
The metadata options for the instance. For more information, see [InstanceMetadataOptionsRequest](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_InstanceMetadataOptionsRequest.html).  
Type: StringMap  
Required: No

MinInstanceCount  
The minimum number of instances to be launched.  
Type: String  
Required: No

Monitoring  
Turns on or turns off detailed monitoring.  
Type: Boolean  
Required: No

NetworkInterfaces  
The network interfaces.  
Type: MapList  
Required: No

Placement  
The placement for the instance.  
Type: StringMap  
Required: No

PrivateIpAddress  
The primary IPv4 address.  
Type: String  
Required: No

RamdiskId  
The ID of the RAM disk.  
Type: String  
Required: No

SecurityGroupIds  
The IDs of the security groups for the instance.  
Type: StringList  
Required: No

SecurityGroups  
The names of the security groups for the instance.  
Type: StringList  
Required: No

SubnetId  
The subnet ID.  
Type: String  
Required: No

TagSpecifications  
The tags to apply to the resources during launch. You can only tag instances and volumes at launch. The specified tags are applied to all instances or volumes that are created during launch. To tag an instance after it has been launched, use the [`aws:createTags` – Create tags for AWS resources](automation-action-createtag.md) action.  
Type: MapList (For more information, see [TagSpecification](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_TagSpecification.html).)  
Required: No

UserData  
A script provided as a string literal value. If a literal value is entered, then it must be Base64-encoded.  
Type: String  
Required: No

**Output**

InstanceIds  
The IDs of the instances.

InstanceStates  
The current state of the instance.

# `aws:sleep` – Delay an automation


Delays an automation for a specified amount of time. This action uses the International Organization for Standardization (ISO) 8601 date and time format. For more information about this date and time format, see [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html).

**Input**  
You can delay an automation for a specified duration. 

------
#### [ YAML ]

```
name: sleep
action: aws:sleep
inputs:
  Duration: PT10M
```

------
#### [ JSON ]

```
{
   "name":"sleep",
   "action":"aws:sleep",
   "inputs":{
      "Duration":"PT10M"
   }
}
```

------

You can also delay an automation until a specified date and time. If the specified date and time has passed, the action proceeds immediately. 

------
#### [ YAML ]

```
name: sleep
action: aws:sleep
inputs:
  Timestamp: '2020-01-01T01:00:00Z'
```

------
#### [ JSON ]

```
{
    "name": "sleep",
    "action": "aws:sleep",
    "inputs": {
        "Timestamp": "2020-01-01T01:00:00Z"
    }
}
```

------

**Note**  
Automation supports a maximum delay of 604799 seconds (7 days).

Duration  
An ISO 8601 duration. You can't specify a negative duration.   
Type: String  
Required: No

Timestamp  
An ISO 8601 timestamp. If you don't specify a value for this parameter, then you must specify a value for the `Duration` parameter.   
Type: String  
Required: No

**Output**

None  
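
Durations such as `PT10M` follow the ISO 8601 time format. The following sketch parses the common `PT…H…M…S` form and defines the 604799-second maximum delay as a constant for comparison; it's an illustration of the format, not something Automation requires you to compute:

```python
import re

MAX_SLEEP_SECONDS = 604799  # Automation's maximum supported delay (7 days)


def duration_to_seconds(duration: str) -> int:
    # Handles time-only ISO 8601 durations such as PT10M, PT1H30M, or PT45S.
    match = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", duration)
    if not match or not any(match.groups()):
        raise ValueError(f"unsupported duration: {duration}")
    hours, minutes, seconds = (int(g or 0) for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds
```

For example, `PT10M` from the YAML input above corresponds to 600 seconds, well under the maximum delay.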


# `aws:updateVariable` – Updates a value for a runbook variable


This action updates a value for a runbook variable. The data type of the value must match the data type of the variable you want to update. Data type conversions aren't supported. The `onCancel` property isn't supported for the `aws:updateVariable` action.

**Input**  
The input is as follows.

------
#### [ YAML ]

```
name: updateStringList
action: aws:updateVariable
inputs:
    Name: variable:variable name
    Value:
    - "1"
    - "2"
```

------
#### [ JSON ]

```
{
    "name": "updateStringList",
    "action": "aws:updateVariable",
    "inputs": {
        "Name": "variable:variable name",
        "Value": ["1","2"]
    }
}
```

------

Name  
The name of the variable whose value you want to update. You must use the format `variable:variable name`.  
Type: String  
Required: Yes

Value  
The new value to assign to the variable. The value must match the data type of the variable. Data type conversions aren't supported.  
Type: Boolean | Integer | MapList | String | StringList | StringMap  
Required: Yes  
Constraints:  
+ MapList can contain a maximum of 200 items.
+ Key lengths can be a minimum of 1 and a maximum of 50 characters.
+ StringList can contain a minimum of 0 and a maximum of 50 items.
+ String lengths can be a minimum of 1 and a maximum of 512 characters.

**Output**

None  
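
The StringList constraints above can be expressed as a quick validation sketch; the helper is hypothetical and simply mirrors the listed limits, it isn't an AWS API:

```python
def check_string_list(values) -> bool:
    # StringList: a minimum of 0 and a maximum of 50 items;
    # each string must be 1-512 characters long.
    if not 0 <= len(values) <= 50:
        return False
    return all(1 <= len(v) <= 512 for v in values)
```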


# `aws:waitForAwsResourceProperty` – Wait on an AWS resource property


The `aws:waitForAwsResourceProperty` action allows your automation to wait for a specific resource state or event state before continuing the automation. For more examples of how to use this action, see [Additional runbook examples](automation-document-examples.md).

**Note**  
The default timeout value for this action is 3600 seconds (one hour). You can limit or extend the timeout by specifying the `timeoutSeconds` parameter for an `aws:waitForAwsResourceProperty` step. For more information and examples of how to use this action, see [Handling timeouts in runbooks](automation-handling-timeouts.md).

**Note**  
The `aws:waitForAwsResourceProperty` action supports automatic throttling retry. For more information, see [Configuring automatic retry for throttled operations](automation-throttling-retry.md).

**Input**  
Inputs are defined by the API operation that you choose.

------
#### [ YAML ]

```
action: aws:waitForAwsResourceProperty
inputs:
  Service: The official namespace of the service
  Api: The API operation or method name
  API operation inputs or parameters: A value
  PropertySelector: Response object
  DesiredValues:
  - Desired property value
```

------
#### [ JSON ]

```
{
  "action": "aws:waitForAwsResourceProperty",
  "inputs": {
    "Service":"The official namespace of the service",
    "Api":"The API operation or method name",
    "API operation inputs or parameters":"A value",
    "PropertySelector": "Response object",
    "DesiredValues": [
      "Desired property value"
    ]
  }
}
```

------

Service  
The AWS service namespace that contains the API operation that you want to run. For example, the namespace for AWS Systems Manager is `ssm`. The namespace for Amazon Elastic Compute Cloud (Amazon EC2) is `ec2`. You can view a list of supported AWS service namespaces in the [Available Services](https://docs.aws.amazon.com/cli/latest/reference/#available-services) section of the *AWS CLI Command Reference*.  
Type: String  
Required: Yes

Api  
The name of the API operation that you want to run. You can view the API operations (also called methods) by choosing a service in the left navigation on the following [Services Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) page. Choose a method in the **Client** section for the service that you want to invoke. For example, all API operations (methods) for Amazon Relational Database Service (Amazon RDS) are listed on the following page: [Amazon RDS methods](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html).  
Type: String  
Required: Yes

API operation inputs  
One or more API operation inputs. You can view the available inputs (also called parameters) by choosing a service in the left navigation on the following [Services Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) page. Choose a method in the **Client** section for the service that you want to invoke. For example, all methods for Amazon RDS are listed on the following page: [Amazon RDS methods](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html). Choose the [describe_db_instances](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.describe_db_instances) method and scroll down to see the available parameters, such as **DBInstanceIdentifier**, **Name**, and **Values**.  

```
inputs:
  Service: The official namespace of the service
  Api: The API operation name
  API input 1: A value
  API input 2: A value
  API input 3: A value
```

```
"inputs":{
      "Service":"The official namespace of the service",
      "Api":"The API operation name",
      "API input 1":"A value",
      "API input 2":"A value",
      "API input 3":"A value"
}
```
Type: Determined by chosen API operation  
Required: Yes

PropertySelector  
The JSONPath to a specific attribute in the response object. You can view the response objects by choosing a service in the left navigation on the following [Services Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) page. Choose a method in the **Client** section for the service that you want to invoke. For example, all methods for Amazon RDS are listed on the following page: [Amazon RDS methods](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html). Choose the [describe_db_instances](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.describe_db_instances) method and scroll down to the **Response Structure** section. **DBInstances** is listed as a response object.  
Type: String  
Required: Yes

DesiredValues  
The expected status or state on which to continue the automation.  
Type: MapList, StringList  
Required: Yes
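
The behavior of this action can be modeled as a poll loop: call the API, select a property from the response, and stop when it matches a desired value. The sketch below uses an injected `fetch` callable and a simplified dot-path selector instead of full JSONPath; all names are illustrative, not part of Systems Manager:

```python
import time


def select(response, selector: str):
    # Simplified stand-in for a JSONPath PropertySelector; supports dot
    # paths with numeric indexes, such as "$.DBInstances.0.DBInstanceStatus".
    value = response
    for part in selector.lstrip("$.").split("."):
        value = value[int(part)] if part.isdigit() else value[part]
    return value


def wait_for_property(fetch, selector, desired_values, timeout=3600, interval=0.0):
    # Poll until the selected property matches one of the desired values,
    # or the timeout elapses (mirrors the action's default 3600 s timeout).
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if select(fetch(), selector) in desired_values:
            return True
        time.sleep(interval)
    return False
```

In the real action, `fetch` corresponds to the `Service` and `Api` inputs, and `desired_values` to `DesiredValues`.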

# Automation system variables


AWS Systems Manager Automation runbooks use the following variables. For an example of how these variables are used, view the JSON source of the `AWS-UpdateWindowsAmi` runbook. 

**To view the JSON source of the `AWS-UpdateWindowsAmi` runbook**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. In the document list, use the search bar, or the page numbers to the right of it, to locate the runbook `AWS-UpdateWindowsAmi`, and then choose it.

1. Choose the **Content** tab. 

**System variables**  
Automation runbooks support the following system variables.


****  

| Variable | Details | 
| --- | --- | 
|  `global:ACCOUNT_ID`  |  The AWS account ID of the user or role in which Automation runs.  | 
|  `global:DATE`  |  The date (at run time) in the format yyyy-MM-dd.  | 
|  `global:DATE_TIME`  |  The date and time (at run time) in the format yyyy-MM-dd_HH.mm.ss.  | 
|  `global:AWS_PARTITION`  |  The partition that the resource is in. For standard AWS Regions, the partition is `aws`. For resources in other partitions, the partition is `aws-partitionname`. For example, the partition for resources in the AWS GovCloud (US-West) Region is `aws-us-gov`.  | 
|  `global:REGION`  |  The Region that the runbook is run in. For example, us-east-2.  | 

**Automation variables**  
Automation runbooks support the following automation variables.


****  

| Variable | Details | 
| --- | --- | 
|  `automation:EXECUTION_ID`  |  The unique identifier assigned to the current automation. For example, `1a2b3c-1a2b3c-1a2b3c-1a2b3c1a2b3c1a2b3c`.  | 
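
The substitution of `{{…}}` variables can be illustrated with a small sketch — a regex-based stand-in for what the Automation service does at run time, not an AWS API:

```python
import re


def substitute(text: str, variables: dict) -> str:
    # Replace each {{name}} placeholder with its value; unresolved
    # placeholders are left intact, as a rough model of run-time substitution.
    def replace(match):
        name = match.group(1).strip()
        return str(variables.get(name, match.group(0)))

    return re.sub(r"\{\{([^{}]+)\}\}", replace, text)
```

For example, substituting `{"global:DATE": "2024-06-01"}` into `"Created on {{global:DATE}}"` yields `"Created on 2024-06-01"`.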

**Topics**
+ [Terminology](#automation-terms)
+ [Supported scenarios](#automation-variables-support)
+ [Unsupported scenarios](#automation-variables-unsupported)

## Terminology


The following terms describe how variables and parameters are resolved.


****  

| Term | Definition | Example | 
| --- | --- | --- | 
|  Constant ARN  |  A valid Amazon Resource Name (ARN) without variables.  |  `arn:aws:iam::123456789012:role/roleName`  | 
|  Runbook parameter  |  A parameter defined at the runbook level (for example, `instanceId`). The parameter is used in a basic string replace. Its value is supplied at Start Execution time.  |  <pre>{ <br />   "description": "Create Image Demo",<br />   "version": "0.3",<br />   "assumeRole": "Your_Automation_Assume_Role_ARN",<br />   "parameters":{ <br />      "instanceId": { <br />         "type": "String",<br />         "description": "Instance to create image from"<br />   }<br />}</pre>  | 
|  System variable  |  A general variable substituted into the runbook when any part of the runbook is evaluated.  |  <pre>"activities": [ <br />   { <br />      "id": "copyImage",<br />      "activityType": "AWS-CopyImage",<br />      "maxAttempts": 1,<br />      "onFailure": "Continue",<br />      "inputs": { <br />         "ImageName": "{{imageName}}",<br />         "SourceImageId": "{{sourceImageId}}",<br />         "SourceRegion": "{{sourceRegion}}",<br />         "Encrypted": true,<br />         "ImageDescription": "Test CopyImage Description created on {{global:DATE}}"<br />      }<br />   }<br />]</pre>  | 
|  Automation variable  |  A variable relating to the automation substituted into the runbook when any part of the runbook is evaluated.  |  <pre>{ <br />   "name": "runFixedCmds",<br />   "action": "aws:runCommand",<br />   "maxAttempts": 1,<br />   "onFailure": "Continue",<br />   "inputs": { <br />      "DocumentName": "AWS-RunPowerShellScript",<br />      "InstanceIds": [ <br />         "{{LaunchInstance.InstanceIds}}"<br />      ],<br />      "Parameters": { <br />         "commands": [ <br />            "dir",<br />            "date",<br />            "“{{outputFormat}}” -f “left”,”right”,”{{global:DATE}}”,”{{automation:EXECUTION_ID}}”<br />         ]<br />      }<br />   }<br />}</pre>  | 
|  Systems Manager Parameter  |  A variable defined within AWS Systems Manager Parameter Store. It can't be directly referenced in step input. Permissions might be required to access the parameter.  |  <pre><br />description: Launch new Windows test instance<br />schemaVersion: '0.3'<br />assumeRole: '{{AutomationAssumeRole}}'<br />parameters:<br />  AutomationAssumeRole:<br />    type: String<br />    default: ''<br />    description: >-<br />      (Required) The ARN of the role that allows Automation to perform the<br />      actions on your behalf. If no role is specified, Systems Manager<br />      Automation uses your IAM permissions to run this runbook.<br />  LatestAmi:<br />    type: String<br />    default: >-<br />      {{ssm:/aws/service/ami-windows-latest/Windows_Server-2016-English-Full-Base}}<br />    description: The latest Windows Server 2016 AMI queried from the public parameter.<br />mainSteps:<br />  - name: launchInstance<br />    action: 'aws:runInstances'<br />    maxAttempts: 3<br />    timeoutSeconds: 1200<br />    onFailure: Abort<br />    inputs:<br />      ImageId: '{{LatestAmi}}'<br />...</pre>  | 

## Supported scenarios



****  

| Scenario | Comments | Example | 
| --- | --- | --- | 
|  Constant ARN `assumeRole` at creation.  |  An authorization check is performed to verify that the calling user is permitted to pass the given `assumeRole`.  |  <pre>{<br />  "description": "Test all Automation resolvable parameters",<br />  "schemaVersion": "0.3",<br />  "assumeRole": "arn:aws:iam::123456789012:role/roleName",<br />  "parameters": { <br />  ...</pre>  | 
|  Runbook parameter supplied for `AssumeRole` when the automation is started.  |  Must be defined in the parameter list of the runbook.  |  <pre>{<br />  "description": "Test all Automation resolvable parameters",<br />  "schemaVersion": "0.3",<br />  "assumeRole": "{{dynamicARN}}",<br />  "parameters": {<br /> ...</pre>  | 
|  Value supplied for runbook parameter at start.  |  Customer supplies the value to use for a parameter. Any inputs supplied at start time need to be defined in the parameter list of the runbook.  |  <pre>...<br />"parameters": {<br />    "amiId": {<br />      "type": "String",<br />      "default": "ami-12345678",<br />      "description": "list of commands to run as part of first step"<br />    },<br />...</pre> Inputs to Start Automation Execution include : `{"amiId" : ["ami-12345678"] }`  | 
|  Systems Manager Parameter referenced within runbook content.  |  The variable exists within the customer's account, or is a publicly accessible parameter, and the `AssumeRole` for the runbook has access to the variable. A check is performed at create time to confirm the `AssumeRole` has access. The parameter can't be directly referenced in step input.  |  <pre><br />...<br />parameters:<br />    LatestAmi:<br />    type: String<br />    default: >-<br />      {{ssm:/aws/service/ami-windows-latest/Windows_Server-2016-English-Full-Base}}<br />    description: The latest Windows Server 2016 AMI queried from the public parameter.<br />mainSteps:<br />  - name: launchInstance<br />    action: 'aws:runInstances'<br />    maxAttempts: 3<br />    timeoutSeconds: 1200<br />    onFailure: Abort<br />    inputs:<br />      ImageId: '{{LatestAmi}}'<br />...</pre>  | 
|  System variable referenced within step definition  |  A system variable is substituted into the runbook when the automation is started. The value injected into the runbook is relative to when the substitution occurs. That is, the value of a time variable injected at step 1 is different from the value injected at step 3 because of the time it takes to run the steps between. System variables don't need to be set in the parameter list of the runbook.  |  <pre>...<br />  "mainSteps": [<br />    {<br />      "name": "RunSomeCommands",<br />      "action": "aws:runCommand",<br />      "maxAttempts": 1,<br />      "onFailure": "Continue",<br />      "inputs": {<br />        "DocumentName": "AWS:RunPowerShell",<br />        "InstanceIds": ["{{LaunchInstance.InstanceIds}}"],<br />        "Parameters": {<br />            "commands" : [<br />                "echo {The time is now {{global:DATE_TIME}}}"<br />            ]<br />        }<br />    }<br />}, ... </pre>  | 
|  Automation variable referenced within step definition.  |  Automation variables don't need to be set in the parameter list of the runbook. The only supported Automation variable is **automation:EXECUTION_ID**.  |  <pre>...<br />"mainSteps": [<br />    {<br />      "name": "invokeLambdaFunction",<br />      "action": "aws:invokeLambdaFunction",<br />      "maxAttempts": 1,<br />      "onFailure": "Continue",<br />      "inputs": {<br />        "FunctionName": "Hello-World-LambdaFunction",<br /><br />"Payload" : "{ "executionId" : "{{automation:EXECUTION_ID}}" }"<br />      }<br />    }<br />... </pre>  | 
|  Refer to output from previous step within next step definition.  |  This is parameter redirection. The output of a previous step is referenced using the syntax `{{stepName.OutputName}}`. This syntax can't be used by the customer for runbook parameters. This is resolved when the referring step runs. The parameter isn't listed in the parameters of the runbook.  |  <pre>...<br />"mainSteps": [<br />    {<br />      "name": "LaunchInstance",<br />      "action": "aws:runInstances",<br />      "maxAttempts": 1,<br />      "onFailure": "Continue",<br />      "inputs": {<br />        "ImageId": "{{amiId}}",<br />        "MinInstanceCount": 1,<br />        "MaxInstanceCount": 2<br />      }<br />    },<br />    {<br />      "name":"changeState",<br />      "action": "aws:changeInstanceState",<br />      "maxAttempts": 1,<br />      "onFailure": "Continue",<br />      "inputs": {<br />        "InstanceIds": ["{{LaunchInstance.InstanceIds}}"],<br />        "DesiredState": "terminated"<br />      }<br />    }<br /><br />... </pre>  | 

## Unsupported scenarios



****  

| Scenario | Comment | Example | 
| --- | --- | --- | 
|  Systems Manager Parameter supplied for `assumeRole` at create  |  Not supported.  |  <pre>...<br /><br />{<br />  "description": "Test all Automation resolvable parameters",<br />  "schemaVersion": "0.3",<br />  "assumeRole": "{{ssm:administratorRoleARN}}",<br />  "parameters": {<br /><br />... </pre>  | 
|  Systems Manager Parameter directly referenced in step input.  |  Returns `InvalidDocumentContent` exception at create time.  |  <pre><br />...<br />mainSteps:<br />  - name: launchInstance<br />    action: 'aws:runInstances'<br />    maxAttempts: 3<br />    timeoutSeconds: 1200<br />    onFailure: Abort<br />    inputs:<br />      ImageId: '{{ssm:/aws/service/ami-windows-latest/Windows_Server-2016-English-Full-Base}}'<br />...</pre>  | 
|  Variable step definition  |  The definition of a step in the runbook is constructed by variables.  |  <pre>...<br /><br />"mainSteps": [<br />    {<br />      "name": "LaunchInstance",<br />      "action": "aws:runInstances",<br />      "{{attemptModel}}": 1,<br />      "onFailure": "Continue",<br />      "inputs": {<br />        "ImageId": "ami-12345678",<br />        "MinInstanceCount": 1,<br />        "MaxInstanceCount": 2<br />      }<br /><br />...<br /><br />User supplies input : { "attemptModel" : "minAttempts" } </pre>  | 
|  Cross referencing runbook parameters  |  The user supplies an input parameter at start time, which is a reference to another parameter in the runbook.  |  <pre>...<br />"parameters": {<br />    "amiId": {<br />      "type": "String",<br />      "default": "ami-7f2e6015",<br />      "description": "list of commands to run as part of first step"<br />    },<br />    "alternateAmiId": {<br />      "type": "String",<br />      "description": "The alternate AMI to try if this first fails".<br /><br />"default" : "{{amiId}}"<br />    },<br /><br />... </pre>  | 
|  Multi-level expansion  |  The runbook defines a variable that evaluates to the name of another variable. This sits within the variable delimiters (that is, `{{ }}`) and is expanded to the value of that variable or parameter.  |  <pre>...<br />  "parameters": {<br />    "firstParameter": {<br />      "type": "String",<br />      "default": "param2",<br />      "description": "The parameter to reference"<br />    },<br />    "secondParameter": {<br />      "type": "String",<br />      "default" : "echo {Hello world}",<br />      "description": "What to run"<br />    }<br />  },<br />  "mainSteps": [{<br />      "name": "runFixedCmds",<br />      "action": "aws:runCommand",<br />      "maxAttempts": 1,<br />      "onFailure": "Continue",<br />      "inputs": {<br />        "DocumentName": "AWS-RunPowerShellScript",<br /><br />"InstanceIds" : "{{LaunchInstance.InstanceIds}}",<br />        "Parameters": {<br />          "commands": [ "{{ {{firstParameter}} }}"]<br /><br />}<br /><br />...<br /><br />Note: The customer intention here would be to run a command of "echo {Hello world}" </pre>  | 
|  Referencing output from a runbook step that is a different variable type  |  The user references the output from a preceding runbook step within a subsequent step. The output is a variable type that doesn't meet the requirements of the action in the subsequent step.  |  <pre>...<br />mainSteps:<br />- name: getImageId<br />  action: aws:executeAwsApi<br />  inputs:<br />    Service: ec2<br />    Api: DescribeImages<br />    Filters:  <br />    - Name: "name"<br />      Values: <br />      - "{{ImageName}}"<br />  outputs:<br />  - Name: ImageIdList<br />    Selector: "$.Images"<br />    Type: "StringList"<br />- name: copyMyImages<br />  action: aws:copyImage<br />  maxAttempts: 3<br />  onFailure: Abort<br />  inputs:<br />    SourceImageId: {{getImageId.ImageIdList}}<br />    SourceRegion: ap-northeast-2<br />    ImageName: Encrypted Copies of LAMP base AMI in ap-northeast-2<br />    Encrypted: true <br />... <br />Note: You must provide the type required by the Automation action. <br />In this case, aws:copyImage requires a "String" type variable but the preceding step outputs a "StringList" type variable.<br />                                        </pre>  | 

# Creating your own runbooks


An Automation runbook defines the *actions* that Systems Manager performs on your managed instances and other AWS resources when an automation runs. Automation is a tool in AWS Systems Manager. A runbook contains one or more steps that run in sequential order. Each step is built around a single action. Output from one step can be used as input in a later step. 

The process of running these actions and their steps is called the *automation*.

Action types supported for runbooks let you automate a wide variety of operations in your AWS environment. For example, using the `executeScript` action type, you can embed a Python or PowerShell script directly in your runbook. (When you create a custom runbook, you can add your script inline, or attach it from an S3 bucket or from your local machine.) You can automate management of your AWS CloudFormation resources by using the `createStack` and `deleteStack` action types. In addition, using the `executeAwsApi` action type, a step can run *any* API operation in any AWS service, including creating or deleting AWS resources, starting other processes, initiating notifications, and many more. 
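As an illustrative sketch, a runbook step that uses the `aws:executeScript` action to run an inline Python function might look like the following. The step name, runtime value, handler name, and script body are hypothetical:

```yaml
mainSteps:
  - name: getGreeting              # hypothetical step name
    action: aws:executeScript
    inputs:
      Runtime: python3.8           # one of the runtimes the action supports
      Handler: script_handler      # the function must accept events and context
      Script: |-
        def script_handler(events, context):
            # events contains the InputPayload passed to the step
            return {"message": "Hello, " + events["name"]}
      InputPayload:
        name: "{{UserName}}"       # hypothetical runbook parameter
```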

For a list of all 20 supported action types for Automation, see [Systems Manager Automation actions reference](automation-actions.md).

AWS Systems Manager Automation provides several runbooks with pre-defined steps that you can use to perform common tasks like restarting one or more Amazon Elastic Compute Cloud (Amazon EC2) instances or creating an Amazon Machine Image (AMI). You can also create your own runbooks and share them with other AWS accounts, or make them public for all Automation users.

Runbooks are written in YAML or JSON. However, by using the **Document Builder** in the Systems Manager Automation console, you can create a runbook without authoring in native JSON or YAML.

**Important**  
If you run an automation workflow that invokes other services by using an AWS Identity and Access Management (IAM) service role, be aware that the service role must be configured with permission to invoke those services. This requirement applies to all AWS Automation runbooks (`AWS-*` runbooks) such as the `AWS-ConfigureS3BucketLogging`, `AWS-CreateDynamoDBBackup`, and `AWS-RestartEC2Instance` runbooks, to name a few. This requirement also applies to any custom Automation runbooks you create that invoke other AWS services by using actions that call other services. For example, if you use the `aws:executeAwsApi`, `aws:createStack`, or `aws:copyImage` actions, configure the service role with permission to invoke those services. You can give permissions to other AWS services by adding an IAM inline policy to the role. For more information, see [(Optional) Add an Automation inline policy or customer managed policy to invoke other AWS services](automation-setup-iam.md#add-inline-policy).

For information about the actions that you can specify in a runbook, see [Systems Manager Automation actions reference](automation-actions.md).

For information about using the AWS Toolkit for Visual Studio Code to create runbooks, see [Working with Systems Manager Automation documents](https://docs.aws.amazon.com/toolkit-for-vscode/latest/userguide/systems-manager-automation-docs.html) in the *AWS Toolkit for Visual Studio Code User Guide*.

For information about using the visual designer to create a custom runbook, see [Visual design experience for Automation runbooks](automation-visual-designer.md). 

**Contents**
+ [Visual design experience for Automation runbooks](automation-visual-designer.md)
  + [Overview of the visual design experience interface](visual-designer-interface-overview.md)
    + [Actions browser](visual-designer-interface-overview.md#visual-designer-actions)
    + [Canvas](visual-designer-interface-overview.md#visual-designer-canvas)
    + [Form](visual-designer-interface-overview.md#visual-designer-form)
    + [Keyboard shortcuts](visual-designer-interface-overview.md#visual-designer-keyboard-shortcuts)
  + [Using the visual design experience](visual-designer-use.md)
    + [Create a runbook workflow](visual-designer-use.md#visual-designer-create-runbook-workflow)
    + [Design a runbook](visual-designer-use.md#visual-designer-build)
    + [Update your runbook](visual-designer-use.md#visual-designer-update-runbook)
    + [Export your runbook](visual-designer-use.md#visual-designer-export-runbook)
  + [Configuring inputs and outputs for your actions](visual-designer-action-inputs-outputs.md)
    + [Provide input data for an action](visual-designer-action-inputs-outputs.md#providing-input)
    + [Define output data for an action](visual-designer-action-inputs-outputs.md#defining-output)
  + [Error handling with the visual design experience](visual-designer-error-handling.md)
    + [Retry action on error](visual-designer-error-handling.md#retry-actions)
    + [Timeouts](visual-designer-error-handling.md#timeout-seconds)
    + [Failed actions](visual-designer-error-handling.md#failure-actions)
    + [Canceled actions](visual-designer-error-handling.md#cancel-actions)
    + [Critical actions](visual-designer-error-handling.md#critical-actions)
    + [Ending actions](visual-designer-error-handling.md#end-actions)
  + [Tutorial: Create a runbook using the visual design experience](visual-designer-tutorial.md)
    + [Step 1: Navigate to the visual design experience](visual-designer-tutorial.md#navigate-console)
    + [Step 2: Create a workflow](visual-designer-tutorial.md#create-workflow)
    + [Step 3: Review the auto-generated code](visual-designer-tutorial.md#view-generated-code)
    + [Step 4: Run your new runbook](visual-designer-tutorial.md#use-tutorial-runbook)
    + [Step 5: Clean up](visual-designer-tutorial.md#cleanup-tutorial-runbook)
+ [Authoring Automation runbooks](automation-authoring-runbooks.md)
  + [Identify your use case](automation-authoring-runbooks.md#automation-authoring-runbooks-use-case)
  + [Set up your development environment](automation-authoring-runbooks.md#automation-authoring-runbooks-environment)
  + [Develop runbook content](automation-authoring-runbooks.md#automation-authoring-runbooks-developing-content)
  + [Example 1: Creating parent-child runbooks](automation-authoring-runbooks-parent-child-example.md)
    + [Create the child runbook](automation-authoring-runbooks-parent-child-example.md#automation-authoring-runbooks-child-runbook)
    + [Create the parent runbook](automation-authoring-runbooks-parent-child-example.md#automation-authoring-runbooks-parent-runbook)
  + [Example 2: Scripted runbook](automation-authoring-runbooks-scripted-example.md)
  + [Additional runbook examples](automation-document-examples.md)
    + [Deploy VPC architecture and Microsoft Active Directory domain controllers](automation-document-architecture-deployment-example.md)
    + [Restore a root volume from the latest snapshot](automation-document-instance-recovery-example.md)
    + [Create an AMI and cross-Region copy](automation-document-backup-maintenance-example.md)
+ [Creating input parameters that populate AWS resources](populating-input-parameters.md)
+ [Using Document Builder to create runbooks](automation-document-builder.md)
  + [Create a runbook using Document Builder](automation-document-builder.md#create-runbook)
  + [Create a runbook that runs scripts](automation-document-builder.md#create-runbook-scripts)
+ [Using scripts in runbooks](automation-document-script-considerations.md)
  + [Permissions for using runbooks](automation-document-script-considerations.md#script-permissions)
  + [Adding scripts to runbooks](automation-document-script-considerations.md#adding-scripts)
  + [Script constraints for runbooks](automation-document-script-considerations.md#script-constraints)
+ [Using conditional statements in runbooks](automation-branch-condition.md)
  + [Working with the `aws:branch` action](automation-branch-condition.md#branch-action-explained)
    + [Creating an `aws:branch` step in a runbook](automation-branch-condition.md#create-branch-action)
      + [About creating the output variable](automation-branch-condition.md#branch-action-output)
    + [Example `aws:branch` runbooks](automation-branch-condition.md#branch-runbook-examples)
    + [Creating complex branching automations with operators](automation-branch-condition.md#branch-operators)
  + [Examples of how to use conditional options](automation-branch-condition.md#conditional-examples)
+ [Using action outputs as inputs](automation-action-outputs-inputs.md)
  + [Using JSONPath in runbooks](automation-action-outputs-inputs.md#automation-action-json-path)
+ [Creating webhook integrations for Automation](creating-webhook-integrations.md)
  + [Creating integrations (console)](creating-webhook-integrations.md#creating-integrations-console)
  + [Creating integrations (command line)](creating-webhook-integrations.md#creating-integrations-commandline)
  + [Creating webhooks for integrations](creating-webhook-integrations.md#creating-webhooks)
+ [Handling timeouts in runbooks](automation-handling-timeouts.md)

# Visual design experience for Automation runbooks


AWS Systems Manager Automation provides a low-code visual design experience that helps you create automation runbooks. The visual design experience provides a drag-and-drop interface with the option to add your own code so you can create and edit runbooks more easily. With the visual design experience, you can do the following:
+ Control conditional statements.
+ Control how input and output is filtered or transformed for each action.
+ Configure error handling.
+ Prototype new runbooks.
+ Use your prototype runbooks as the starting point for local development with the AWS Toolkit for Visual Studio Code.

When you create or edit a runbook, you can access the visual design experience from the [Automation console](https://console.aws.amazon.com/systems-manager/automation/home?region=us-east-1#/). As you create a runbook, the visual design experience validates your work and auto-generates code. You can review the generated code, or export it for local development. When you're finished, you can save your runbook, run it, and examine the results in the Systems Manager Automation console. 

**Topics**
+ [Interface overview](visual-designer-interface-overview.md)
+ [Using the visual design experience](visual-designer-use.md)
+ [Configure inputs and outputs](visual-designer-action-inputs-outputs.md)
+ [Error handling with the visual design experience](visual-designer-error-handling.md)
+ [Tutorial: Create a runbook using the visual design experience](visual-designer-tutorial.md)

# Overview of the visual design experience interface

The visual design experience for Systems Manager Automation is a low-code visual workflow designer that helps you create automation runbooks.

Get to know the visual design experience with an overview of the interface components:

![\[Visual design experience components\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_overview.png)

+ The **Actions** browser contains the **Actions**, **AWS APIs**, and **Runbooks** tabs.
+ The *canvas* is where you drag and drop actions into your workflow graph, change the order of actions, and select actions to configure or view.
+ The **Form** panel is where you can view and edit the properties of any action that you selected on the canvas. Select the **Content** toggle to view the YAML or JSON for your runbook, with the currently selected action highlighted. 

**Info** links open a panel with contextual information when you need help. These panels also include links to related topics in the Systems Manager Automation documentation. 

## Actions browser


From the **Actions** browser, you can select actions to drag and drop into your workflow graph. You can search all actions using the search field at the top of the **Actions** browser. The **Actions** browser contains the following tabs:
+ The **Actions** tab provides a list of automation actions that you can drag and drop into your runbook's workflow graph in the canvas.
+ The **AWS APIs** tab provides a list of AWS APIs that you can drag and drop into your runbook's workflow graph in the canvas.
+ The **Runbooks** tab provides several ready-to-use, reusable runbooks as building blocks that you can use for a variety of use cases. For example, you can use runbooks to perform common remediation tasks on Amazon EC2 instances in your workflow without having to re-create the same actions.

![\[Visual design experience actions browser\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_actions_multi_view.png)


## Canvas


After you choose an action to add to your automation, drag it to the canvas and drop it into your workflow graph. You can also drag and drop actions to move them to different places in your runbook's workflow. If your workflow is complex, you might not be able to view all of it in the canvas panel. Use the controls at the top of the canvas to zoom in or out. To view different parts of a workflow, you can drag the workflow graph in the canvas. 

Drag an action from the **Actions** browser, and drop it into your runbook's workflow graph. A line shows where the action will be placed in your workflow. To change the order of an action, drag it to a different place in your workflow. When you drop an action, it's added to your workflow and its code is auto-generated.

![\[Visual design experience canvas\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_canvas.png)


## Form


After you add an action to your runbook workflow, you can configure it to meet your use case. Choose the action that you want to configure, and you will see its parameters and options in the **Form** panel. You can also see the YAML or JSON code by choosing the **Content** toggle. The code associated with the action you have selected is highlighted.

![\[Visual design experience form panel\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_form.png)


![\[Visual design experience content panel\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_content.png)


## Keyboard shortcuts


The visual design experience supports the keyboard shortcuts shown in the following table.


| Keyboard shortcut | Function | 
| --- | --- | 
| Ctrl+Z | Undo the last operation. | 
| Ctrl+Shift+Z | Redo the last operation. | 
| Alt+C | Center the workflow in the canvas. | 
| Backspace | Remove all selected states. | 
| Delete | Remove all selected states. | 
| Ctrl+D | Duplicate the selected state. | 

# Using the visual design experience


Learn to create, edit, and run runbook workflows using the visual design experience. After your workflow is ready, you can save it or export it. You can also use the visual design experience for rapid prototyping. 

## Create a runbook workflow


1. Sign in to the [Systems Manager Automation console](https://console.aws.amazon.com/systems-manager/automation/home?region=us-east-1#/).

1. Choose **Create runbook**.

1. In the **Name** box, enter a name for your runbook, for example, `MyNewRunbook`.

1. Next to the **Design** and **Code** toggle, select the pencil icon and enter a name for your runbook.

You can now design a workflow for your new runbook.

## Design a runbook


 To design a runbook workflow using the visual design experience, you drag an automation action from the **Actions** browser into the canvas, placing it where you want it in your runbook's workflow. You can also re-order actions in your workflow by dragging them to a different location. As you drag an action onto the canvas, a line appears wherever you can drop the action in your workflow. After an action is dropped onto the canvas, its code is auto-generated and added inside your runbook's content.

If you know the name of the action you want to add, use the search box at the top of the **Actions** browser to find the action.

After you drop an action onto the canvas, configure it using the **Form** panel on the right. This panel contains the **General**, **Inputs**, **Outputs**, and **Configuration** tabs for each automation action or API action that you place on the canvas. For example, the **General** tab consists of the following sections:
+ The **Step name** identifies the step. Specify a unique value for the step name.
+ The **Description** helps you describe what the action is doing in your runbook's workflow.

The **Inputs** tab contains fields that vary based on the action. For example, the `aws:executeScript` automation action consists of the following sections:
+ The **Runtime** is the language to use for running the provided script.
+ The **Handler** is the name of your function. The function defined in the handler must accept two parameters: `events` and `context`. The PowerShell runtime doesn't support the **Handler** parameter. 
+ The **Script** is an embedded script that you want to run during the workflow.
+ (Optional) The **Attachment** is for standalone scripts or .zip files that can be invoked by the action. This parameter is required for JSON runbooks.

The **Outputs** tab helps you specify the values that you want to output from an action. You can reference output values in later actions of your workflow, or generate output from actions for logging purposes. Not all actions will have an **Outputs** tab because not all actions support outputs. For example, the `aws:pause` action doesn't support outputs. For actions that do support outputs, the **Outputs** tab consists of the following sections:
+ The **Name** is the name to be used for the output value. You can reference outputs in later actions of your workflow.
+ The **Selector** is a JSONPath expression string beginning with `"$."` that is used to select one or more components within a JSON element.
+ The **Type** is the data type for the output value. For example, a `String` or `Integer` data type.
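As an illustrative sketch, the generated content for an `aws:executeAwsApi` step that calls the Amazon EC2 `DescribeInstances` operation and defines an `InstanceState` output might look like the following. The step name and the `InstanceId` runbook parameter are hypothetical:

```yaml
mainSteps:
  - name: GetInstanceState         # hypothetical step name
    action: aws:executeAwsApi
    inputs:
      Service: ec2
      Api: DescribeInstances
      InstanceIds:
        - "{{InstanceId}}"         # hypothetical runbook parameter
    outputs:
      - Name: InstanceState        # the Name field from the Outputs tab
        Selector: "$.Reservations[0].Instances[0].State.Name"
        Type: String               # the data type for the output value
```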

The **Configuration** tab contains properties and options that all automation actions can use. The tab consists of the following sections:
+ The **Max attempts** property is the number of times an action retries if it fails.
+ The **Timeout seconds** property specifies the timeout value for an action.
+ The **Is critical** property determines if the action failure stops the entire automation.
+ The **Next step** property determines which action the automation goes to next in the runbook.
+ The **On failure** property determines which action the automation goes to next in the runbook if the action fails.
+ The **On cancel** property determines which action the automation goes to next in the runbook if the action is canceled by a user.
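In the generated runbook content, these options map to step-level properties. The following sketch uses illustrative step names and values:

```yaml
mainSteps:
  - name: stopInstance             # illustrative step name
    action: aws:changeInstanceState
    maxAttempts: 3                 # Max attempts
    timeoutSeconds: 300            # Timeout seconds
    isCritical: true               # Is critical
    nextStep: verifyStopped        # Next step on success
    onFailure: step:notifyAdmin    # On failure, jump to the named step
    onCancel: step:cleanup         # On cancel, jump to the named step
    inputs:
      InstanceIds:
        - "{{InstanceId}}"         # hypothetical runbook parameter
      DesiredState: stopped
```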

To delete an action, you can use backspace, the toolbar above the canvas, or right-click and choose **Delete action**.

As your workflow grows, it might not fit in the canvas. To help make the workflow fit in the canvas, try one of the following options: 
+ Use the controls on the side panels to resize or close the panels.
+ Use the toolbar at the top of the canvas to zoom the workflow graph in or out.

## Update your runbook


You can update an existing runbook workflow by creating a new version of your runbook. Updates to your runbooks can be made by using the visual design experience, or by editing the code directly. To update an existing runbook, use the following procedure:

1. Sign in to the [Systems Manager Automation console](https://console.aws.amazon.com/systems-manager/automation/home?region=us-east-1#/).

1. Choose the runbook that you want to update.

1. Choose **Create new version**.

1. The visual design experience has two panes: A code pane and a visual workflow pane. Choose **Design** in the visual workflow pane to edit your workflow with the visual design experience. When you're done, choose **Create new version** to save your changes and exit.

1. (Optional) Use the code pane to edit the runbook content in YAML or JSON.

## Export your runbook


To export your runbook's YAML or JSON code, or a graph of your workflow, use the following procedure: 

1. Choose your runbook in the **Documents** console.

1. Choose **Create new version**.

1. In the **Actions** dropdown, choose whether you want to export the graph or runbook, and which format you prefer.

# Configuring inputs and outputs for your actions

Each automation action responds based on input that it receives. In most cases, you then pass output to the subsequent actions. In the visual design experience, you can configure an action's input and output data in the **Inputs** and **Outputs** tabs of the **Form** panel.

For detailed information about how to define and use output for automation actions, see [Using action outputs as inputs](automation-action-outputs-inputs.md). 

## Provide input data for an action


Each automation action has one or more inputs that you must provide a value for. The value you provide for an action's input is determined by the data type and format that's accepted by the action. For example, the `aws:sleep` action requires an ISO 8601 formatted string value for the `Duration` input.
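For example, a sleep step with a 10-minute duration uses the ISO 8601 duration string `PT10M`. The step name below is illustrative:

```yaml
mainSteps:
  - name: sleepTenMinutes          # illustrative step name
    action: aws:sleep
    inputs:
      Duration: PT10M              # ISO 8601 duration: 10 minutes
```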

Generally, you use actions in your runbook's workflow that return output you want to use in subsequent actions. Make sure your input values are correct to avoid errors in your runbook's workflow. Input values also determine whether the action returns the expected output. For example, when using the `aws:executeAwsApi` action, make sure that you're providing the right value for the API operation.

## Define output data for an action


Some automation actions return output after performing their defined operations. Actions that return output either have predefined outputs, or allow you to define the outputs yourself. For example, the `aws:createImage` action has predefined outputs that return an `ImageId` and `ImageState`. Comparatively, with the `aws:executeAwsApi` action, you can define the outputs that you want from the specified API operation. As a result, you can return one or more values from a single API operation to use in subsequent actions.

Defining your own outputs for an automation action requires that you specify a name of the output, the data type, and the output value. To continue using the `aws:executeAwsApi` action as an example, let's say you're calling the `DescribeInstances` API operation from Amazon EC2. In this example, you want to return, or output, the `State` of an Amazon EC2 instance and branch your runbook's workflow based on the output. You choose to name the output **InstanceState**, and use the **String** data type. 

The process to define the actual value of the output differs, depending on the action. For example, if you're using the `aws:executeScript` action, you must use `return` statements in your functions to provide data to your outputs. With other actions like `aws:executeAwsApi`, `aws:waitForAwsResourceProperty`, and `aws:assertAwsResourceProperty`, a `Selector` is required. The `Selector`, or `PropertySelector` as some actions refer to it, is a JSONPath string that is used to process the JSON response from an API operation. It's important to understand how the JSON response object from an API operation is structured so you can select the correct value for your output. Using the `DescribeInstances` API operation mentioned earlier, see the following example JSON response:

```
{
  "reservationSet": {
    "item": {
      "reservationId": "r-1234567890abcdef0",
      "ownerId": 123456789012,
      "groupSet": "",
      "instancesSet": {
        "item": {
          "instanceId": "i-1234567890abcdef0",
          "imageId": "ami-bff32ccc",
          "instanceState": {
            "code": 16,
            "name": "running"
          },
          "privateDnsName": "ip-192-168-1-88.eu-west-1.compute.internal",
          "dnsName": "ec2-54-194-252-215.eu-west-1.compute.amazonaws.com",
          "reason": "",
          "keyName": "my_keypair",
          "amiLaunchIndex": 0,
          "productCodes": "",
          "instanceType": "t2.micro",
          "launchTime": "2018-05-08T16:46:19.000Z",
          "placement": {
            "availabilityZone": "eu-west-1c",
            "groupName": "",
            "tenancy": "default"
          },
          "monitoring": {
            "state": "disabled"
          },
          "subnetId": "subnet-56f5f000",
          "vpcId": "vpc-11112222",
          "privateIpAddress": "192.168.1.88",
          "ipAddress": "54.194.252.215",
          "sourceDestCheck": true,
          "groupSet": {
            "item": {
              "groupId": "sg-e4076000",
              "groupName": "SecurityGroup1"
            }
          },
          "architecture": "x86_64",
          "rootDeviceType": "ebs",
          "rootDeviceName": "/dev/xvda",
          "blockDeviceMapping": {
            "item": {
              "deviceName": "/dev/xvda",
              "ebs": {
                "volumeId": "vol-1234567890abcdef0",
                "status": "attached",
                "attachTime": "2015-12-22T10:44:09.000Z",
                "deleteOnTermination": true
              }
            }
          },
          "virtualizationType": "hvm",
          "clientToken": "xMcwG14507example",
          "tagSet": {
            "item": {
              "key": "Name",
              "value": "Server_1"
            }
          },
          "hypervisor": "xen",
          "networkInterfaceSet": {
            "item": {
              "networkInterfaceId": "eni-551ba000",
              "subnetId": "subnet-56f5f000",
              "vpcId": "vpc-11112222",
              "description": "Primary network interface",
              "ownerId": 123456789012,
              "status": "in-use",
              "macAddress": "02:dd:2c:5e:01:69",
              "privateIpAddress": "192.168.1.88",
              "privateDnsName": "ip-192-168-1-88.eu-west-1.compute.internal",
              "sourceDestCheck": true,
              "groupSet": {
                "item": {
                  "groupId": "sg-e4076000",
                  "groupName": "SecurityGroup1"
                }
              },
              "attachment": {
                "attachmentId": "eni-attach-39697adc",
                "deviceIndex": 0,
                "status": "attached",
                "attachTime": "2018-05-08T16:46:19.000Z",
                "deleteOnTermination": true
              },
              "association": {
                "publicIp": "54.194.252.215",
                "publicDnsName": "ec2-54-194-252-215.eu-west-1.compute.amazonaws.com",
                "ipOwnerId": "amazon"
              },
              "privateIpAddressesSet": {
                "item": {
                  "privateIpAddress": "192.168.1.88",
                  "privateDnsName": "ip-192-168-1-88.eu-west-1.compute.internal",
                  "primary": true,
                  "association": {
                    "publicIp": "54.194.252.215",
                    "publicDnsName": "ec2-54-194-252-215.eu-west-1.compute.amazonaws.com",
                    "ipOwnerId": "amazon"
                  }
                }
              },
              "ipv6AddressesSet": {
                "item": {
                  "ipv6Address": "2001:db8:1234:1a2b::123"
                }
              }
            }
          },
          "iamInstanceProfile": {
            "arn": "arn:aws:iam::123456789012:instance-profile/AdminRole",
            "id": "ABCAJEDNCAA64SSD123AB"
          },
          "ebsOptimized": false,
          "cpuOptions": {
            "coreCount": 1,
            "threadsPerCore": 1
          }
        }
      }
    }
  }
}
```

In the JSON response object, the instance `State` is nested in an `Instances` object, which is nested in the `Reservations` object. To return the value of the instance `State`, use the following string for the `Selector` so the value can be used in your output: `$.Reservations[0].Instances[0].State.Name`.

To reference an output value in subsequent actions of your runbook's workflow, use the following format: `{{ StepName.NameOfOutput }}`. For example, `{{ GetInstanceState.InstanceState }}`. In the visual design experience, you can choose output values to use in subsequent actions using the dropdown for the input. When using outputs in subsequent actions, the data type of the output must match the data type of the input. In this example, the `InstanceState` output is a `String`. Therefore, to use the value in a subsequent action's input, the input must accept a `String`.
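Continuing this example, a later `aws:branch` step could reference the output to choose the next step based on the instance state. The step names in this sketch are hypothetical:

```yaml
  - name: chooseNextStep           # hypothetical branching step
    action: aws:branch
    inputs:
      Choices:
        - NextStep: stopInstance   # hypothetical step to jump to
          Variable: "{{ GetInstanceState.InstanceState }}"
          StringEquals: running
      Default: endOfWorkflow       # hypothetical fallback step
```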

# Error handling with the visual design experience


By default, when an action reports an error, Automation stops the runbook's workflow entirely. This is because the default value for the `onFailure` property on all actions is `Abort`. You can configure how Automation handles errors in your runbook's workflow. Even if you have configured error handling, some errors might still cause an automation to fail. For more information, see [Troubleshooting Systems Manager Automation](automation-troubleshooting.md). In the visual design experience, you configure error handling in the **Configuration** panel.

![Error handling options](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_error_handling.png)


## Retry action on error


To retry an action in case of an error, specify a value for the **Max attempts** property. The default value is 1. If you specify a value greater than 1, the action isn't considered to have failed until all of the retry attempts have failed.

## Timeouts


You can configure a timeout to set the maximum number of seconds an action can run before it fails. To configure a timeout, enter the number of seconds in the **Timeout seconds** property. If the timeout is reached and the action's **Max attempts** value is greater than 1, the step isn't considered to have timed out until all of the retry attempts have completed.
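
In the generated runbook content, the **Max attempts** and **Timeout seconds** properties correspond to the `maxAttempts` and `timeoutSeconds` step properties. The following sketch shows both on a single step (the values are illustrative):

```
- name: GetInstanceState
  action: 'aws:executeAwsApi'
  maxAttempts: 3       # retry up to 3 times before the step is considered failed
  timeoutSeconds: 60   # maximum run time, in seconds, before the attempt fails
  inputs:
    Service: ec2
    Api: DescribeInstances
    InstanceIds:
      - '{{ InstanceId }}'
```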

## Failed actions


By default, when an action fails, Automation stops the runbook's workflow entirely. You can modify this behavior by specifying an alternative value for the **On failure** property of the actions in your runbook. If you want the workflow to continue to the next step in the runbook, choose **Continue**. If you want the workflow to jump to a different subsequent step in the runbook, choose **Step** and then enter the name of the step.

## Canceled actions


By default, when an action is canceled by a user, Automation stops the runbook's workflow entirely. You can modify this behavior by specifying an alternative value for the **On cancel** property of the actions in your runbook. If you want the workflow to jump to a different subsequent step in the runbook, choose **Step** and then enter the name of the step.

## Critical actions


You can designate an action as *critical*, meaning it determines the overall reporting status of your automation. If a step with this designation fails, Automation reports the final status as `Failed` regardless of the success of other actions. To configure an action as critical, leave the default value as **True** for the **Is critical** property.
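
In the generated runbook content, the **On failure**, **On cancel**, and **Is critical** options map to the `onFailure`, `onCancel`, and `isCritical` step properties. The following is a sketch with hypothetical step names:

```
- name: PatchInstance
  action: 'aws:runCommand'
  onFailure: 'step:NotifyFailure'   # jump to the NotifyFailure step instead of aborting
  onCancel: 'step:NotifyFailure'    # jump to the same step if a user cancels the automation
  isCritical: true                  # a failure here reports the automation as Failed
  inputs:
    DocumentName: AWS-RunPatchBaseline
    InstanceIds:
      - '{{ InstanceId }}'
```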

## Ending actions


The **Is end** property stops an automation at the end of the specified action. The default value for this property is `false`. If you configure this property for an action, the automation stops whether the action succeeds or fails. This property is most often used with `aws:branch` actions to handle unexpected or undefined input values. The following example shows a runbook that is expecting an instance state of either `running`, `stopping`, or `stopped`. If an instance is in a different state, the automation ends.

![Visual design experience is end example](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_is_end_example.png)
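
In the runbook content, this behavior corresponds to `isEnd: true` on the `aws:branch` step: if none of the choice rules match the instance state, the automation ends. The following is a sketch with illustrative step names:

```
- name: BranchOnInstanceState
  action: 'aws:branch'
  inputs:
    Choices:
      - NextStep: StartInstance
        Variable: '{{ GetInstanceState.InstanceState }}'
        StringEquals: stopped
      - NextStep: WaitForInstanceStop
        Variable: '{{ GetInstanceState.InstanceState }}'
        StringEquals: stopping
      - NextStep: SayHello
        Variable: '{{ GetInstanceState.InstanceState }}'
        StringEquals: running
  isEnd: true   # no matching choice, so the automation ends here
```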


# Tutorial: Create a runbook using the visual design experience


In this tutorial, you will learn the basics of working with the visual design experience provided by Systems Manager Automation. In the visual design experience, you can create a runbook that uses multiple actions. You use the drag-and-drop feature to arrange actions on the canvas, and you search for, select, and configure these actions. Then, you can view the auto-generated YAML code for your runbook's workflow, exit the visual design experience, run the runbook, and review the execution details.

This tutorial also shows you how to update the runbook and view the new version. At the end of the tutorial, you perform a clean-up step and delete your runbook.

After you complete this tutorial, you'll know how to use the visual design experience to create a runbook. You'll also know how to update, run, and delete your runbook.

**Note**  
Before you start this tutorial, make sure to complete [Setting up Automation](automation-setup.md).

**Topics**
+ [Step 1: Navigate to the visual design experience](#navigate-console)
+ [Step 2: Create a workflow](#create-workflow)
+ [Step 3: Review the auto-generated code](#view-generated-code)
+ [Step 4: Run your new runbook](#use-tutorial-runbook)
+ [Step 5: Clean up](#cleanup-tutorial-runbook)

## Step 1: Navigate to the visual design experience


1. Sign in to the [Systems Manager Automation console](https://console.aws.amazon.com/systems-manager/automation/home?region=us-east-1#/).

1. Choose **Create automation runbook**.

## Step 2: Create a workflow


In the visual design experience, a workflow is a graphical representation of your runbook on the canvas. You can use the visual design experience to define, configure, and examine the individual actions of your runbook.

**To create a workflow**

1. Next to the **Design** and **Code** toggle, select the pencil icon and enter a name for your runbook. For this tutorial, enter **VisualDesignExperienceTutorial**.  
![Visual design experience name your runbook](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_tutorial_name.png)

1. In the **Document attributes** section of the **Form** panel, expand the **Input parameters** dropdown, and select **Add a parameter**.

   1. In the **Parameter name** field, enter **InstanceId**.

   1. In the **Type** dropdown, choose **AWS::EC2::Instance**.

   1. Select the **Required** toggle.  
![Create a parameter for your runbook](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_actions_tutorial_parameter.png)

1. In the **AWS APIs** browser, enter **DescribeInstances** in the search bar.

1. Drag an **Amazon EC2 – DescribeInstances** action to the empty canvas.

1. For **Step name**, enter a value. For this tutorial, you can use the name **GetInstanceState**.  
![Choose an Amazon EC2 describe instances API action.](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_tutorial_api_action.png)

   1. Expand the **Additional inputs** dropdown, and in the **Input name** field, enter **InstanceIds**.

   1. Choose the **Inputs** tab.

   1. In the **Input value** field, choose the **InstanceId** document input. This references the value of the input parameter that you created at the beginning of the procedure. Because the **InstanceIds** input for the `DescribeInstances` action accepts `StringList` values, you must wrap the **InstanceId** input in square brackets. The YAML for the **Input value** should match the following: **['{{ InstanceId }}']**.

   1. In the **Outputs** tab, select **Add an output** and enter **InstanceState** in the **Name** field.

   1. In the **Selector** field, enter **$.Reservations[0].Instances[0].State.Name**.

   1. In the **Type** dropdown, choose **String**.

1. Drag a **Branch** action from the **Actions** browser, and drop it below the **`GetInstanceState`** step. 

1. For **Step name**, enter a value. For this tutorial, use the name `BranchOnInstanceState`.

   To define the branching logic, do the following:

   1. Choose the **`Branch`** state on the canvas. Then, under **Inputs** and **Choices**, select the pencil icon to edit **Rule #1**.

   1. Choose **Add conditions**.

   1. In the **Conditions for rule #1** dialog box, choose the **GetInstanceState.InstanceState** step output from the **Variable** dropdown.

   1. For **Operator**, choose **is equal to**.

   1. For **Value**, choose **String** from the dropdown list. Enter **stopped**.  
![Define a condition for a branch action.](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_tutorial_condition.png)

   1. Select **Save conditions**.

   1. Choose **Add new choice rule**.

   1. Choose **Add conditions** for **Rule #2**.

   1. In the **Conditions for rule #2** dialog box, choose the **GetInstanceState.InstanceState** step output from the **Variable** dropdown.

   1. For **Operator**, choose **is equal to**.

   1. For **Value**, choose **String** from the dropdown list. Enter **stopping**.

   1. Select **Save conditions**.

   1. Choose **Add new choice rule**.

   1. For **Rule #3**, choose **Add conditions**.

   1. In the **Conditions for rule #3** dialog box, choose the **GetInstanceState.InstanceState** step output from the **Variable** dropdown.

   1. For **Operator**, choose **is equal to**.

   1. For **Value**, choose **String** from the dropdown list. Enter **running**.

   1. Select **Save conditions**.

   1. In the **Default rule**, choose **Go to end** for the **Default step**.

1. Drag a **Change an instance state** action to the empty **Drag action here** box under the **{{ GetInstanceState.InstanceState }} == "stopped"** condition.

   1. For the **Step name**, enter **StartInstance**.

   1. In the **Inputs** tab, under **Instance IDs**, choose the **InstanceId** document input value from the dropdown.

   1. For the **Desired state**, specify **`running`**.

1. Drag a **Wait on AWS resource** action to the empty **Drag action here** box under the **{{ GetInstanceState.InstanceState }} == "stopping"** condition.

1. For **Step name**, enter a value. For this tutorial, use the name `WaitForInstanceStop`.

   1. For the **Service** field, choose **Amazon EC2**.

   1. For the **API** field, choose **DescribeInstances**.

   1. For the **Property selector** field, enter **$.Reservations[0].Instances[0].State.Name**.

   1. For the **Desired values** parameter, enter **`["stopped"]`**.

   1. In the **Configuration** tab of the **WaitForInstanceStop** action, choose **StartInstance** from the **Next step** dropdown.

1. Drag a **Run command on instances** action to the empty **Drag action here** box under the **{{ GetInstanceState.InstanceState }} == "running"** condition.

1. For the **Step name**, enter **SayHello**.

   1. In the **Inputs** tab, enter **AWS-RunShellScript** for the **Document name** parameter.

   1. For **InstanceIds**, choose the **InstanceId** document input value from the dropdown.

   1. Expand the **Additional inputs** dropdown, and in the **Input name** dropdown, choose **Parameters**.

   1. In the **Input value** field, enter **`{"commands": "echo 'Hello World'"}`**.

1. Review the completed runbook in the canvas and select **Create runbook** to save the tutorial runbook.  
![Review and create the runbook.](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/visual_designer_tutorial_complete.png)

## Step 3: Review the auto-generated code


As you drag and drop actions from the **Actions** browser onto the canvas, the visual design experience automatically composes the YAML or JSON content of your runbook in real-time. You can view and edit this code. To view the auto-generated code, select **Code** for the **Design** and **Code** toggle.

## Step 4: Run your new runbook


After creating your runbook, you can run the automation.

**To run your new automation runbook**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**, and then choose **Execute automation**.

1. In the **Automation document** list, choose a runbook. Choose one or more options in the **Document categories** pane to filter SSM documents according to their purpose. To view a runbook that you own, choose the **Owned by me** tab. To view a runbook that is shared with your account, choose the **Shared with me** tab. To view all runbooks, choose the **All documents** tab.
**Note**  
You can view information about a runbook by choosing the runbook name.

1. In the **Document details** section, verify that **Document version** is set to the version that you want to run. The system includes the following version options: 
   + **Default version at runtime** – Choose this option if the Automation runbook is updated periodically and a new default version is assigned.
   + **Latest version at runtime** – Choose this option if the Automation runbook is updated periodically, and you want to run the version that was most recently updated.
   + **1 (Default)** – Choose this option to run the first version of the document, which is the default.

1. Choose **Next**.

1. In the **Execute automation runbook** section, choose **Simple execution**.

1. In the **Input parameters** section, specify the required inputs. Optionally, you can choose an IAM service role from the **AutomationAssumeRole** list.

1. (Optional) Choose an Amazon CloudWatch alarm to apply to your automation for monitoring. To attach a CloudWatch alarm to your automation, the IAM principal that starts the automation must have permission for the `iam:createServiceLinkedRole` action. For more information about CloudWatch alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html). If your alarm activates, the automation is stopped. If you use AWS CloudTrail, you will see the API call in your trail. 

1. Choose **Execute**. 

## Step 5: Clean up


**To delete your runbook**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Select the **Owned by me** tab.

1. Locate the **VisualDesignExperienceTutorial** runbook.

1. Select the runbook's card, and then choose **Delete document** from the **Actions** dropdown.

# Authoring Automation runbooks


Each runbook in Automation, a tool in AWS Systems Manager, defines an automation. In the runbook content, you define the input parameters, outputs, and actions that Systems Manager performs on your managed instances and AWS resources.

Automation includes several pre-defined runbooks that you can use to perform common tasks like restarting one or more Amazon Elastic Compute Cloud (Amazon EC2) instances or creating an Amazon Machine Image (AMI). However, your use cases might extend beyond the capabilities of the pre-defined runbooks. If this is the case, you can create your own runbooks and modify them to your needs.

A runbook consists of automation actions, parameters for those actions, and input parameters that you specify. A runbook's content is written in either YAML or JSON. If you're not familiar with either YAML or JSON, we recommend using the visual designer, or learning more about either markup language before attempting to author your own runbook. For more information about the visual designer, see [Visual design experience for Automation runbooks](automation-visual-designer.md).
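
At a high level, YAML runbook content for schema version 0.3 has the following shape (a minimal sketch, not a complete runbook):

```
schemaVersion: '0.3'
description: 'What this runbook does.'
assumeRole: '{{ AutomationAssumeRole }}'
parameters:
  AutomationAssumeRole:
    type: String
    description: '(Optional) The ARN of the role that allows Automation to perform the actions on your behalf.'
    default: ''
mainSteps:
  - name: getInstanceState
    action: 'aws:executeAwsApi'   # one of the Automation action types
    inputs:
      Service: ec2
      Api: DescribeInstances
```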

The following sections will help you author your first runbook.

## Identify your use case


The first step in authoring a runbook is identifying your use case. For example, you scheduled the `AWS-CreateImage` runbook to run daily on all of your production Amazon EC2 instances. At the end of the month, you decide you have more images than are necessary for recovery points. Going forward, you want to automatically delete the oldest AMI of an Amazon EC2 instance when a new AMI is created. To accomplish this, you create a new runbook that does the following:

1. Runs the `aws:createImage` action and specifies the instance ID in the image description.

1. Runs the `aws:waitForAwsResourceProperty` action to poll the state of the image until it's `available`.

1. After the image state is `available`, the `aws:executeScript` action runs a custom Python script that gathers the IDs of all images associated with your Amazon EC2 instance. The script does this by filtering, using the instance ID in the image description you specified at creation. Then, the script sorts the list of image IDs based on the `creationDate` of the image and outputs the ID of the oldest AMI.

1. Lastly, the `aws:deleteImage` action runs to delete the oldest AMI using the ID from the output of the previous step.
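
Steps 3 and 4 of this workflow might be sketched as follows. This is an illustration rather than a tested runbook: the handler name, description filter, and runtime are assumptions, and the runtimes available for `aws:executeScript` vary.

```
- name: getOldestImage
  action: 'aws:executeScript'
  inputs:
    Runtime: python3.11
    Handler: get_oldest_image
    InputPayload:
      InstanceId: '{{ InstanceId }}'
    Script: |
      import boto3

      def get_oldest_image(events, context):
          ec2 = boto3.client('ec2')
          # Find AMIs whose description contains the instance ID set at creation
          images = ec2.describe_images(
              Owners=['self'],
              Filters=[{'Name': 'description',
                        'Values': ['*{}*'.format(events['InstanceId'])]}],
          )['Images']
          # CreationDate is an ISO 8601 string, so a lexicographic sort is chronological
          images.sort(key=lambda image: image['CreationDate'])
          return {'OldestImageId': images[0]['ImageId']}
  outputs:
    - Name: OldestImageId
      Selector: '$.Payload.OldestImageId'
      Type: String
- name: deleteOldestImage
  action: 'aws:deleteImage'
  inputs:
    ImageId: '{{ getOldestImage.OldestImageId }}'
```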

In this scenario, you were already using the `AWS-CreateImage` runbook but found that your use case required greater flexibility. This is a common situation because there can be overlap between runbooks and automation actions. As a result, you might have to adjust which runbooks or actions you use to address your use case.

For example, the `aws:executeScript` and `aws:invokeLambdaFunction` actions both allow you to run custom scripts as part of your automation. To choose between them, you might prefer `aws:invokeLambdaFunction` because of the additional supported runtime languages. However, you might prefer `aws:executeScript` because it allows you to author your script content directly in YAML runbooks and provide script content as attachments for JSON runbooks. You might also consider `aws:executeScript` to be simpler in terms of AWS Identity and Access Management (IAM) setup. Because it uses the permissions provided in the `AutomationAssumeRole`, `aws:executeScript` doesn't require an additional AWS Lambda function execution role.

In any given scenario, one action might provide more flexibility, or added functionality, over another. Therefore, we recommend that you review the available input parameters for the runbook or action you want to use to determine which best fits your use case and preferences.

## Set up your development environment


After you've identified your use case and the pre-defined runbooks or automation actions you want to use in your runbook, it's time to set up your development environment for the content of your runbook. To develop your runbook content, we recommend using the AWS Toolkit for Visual Studio Code instead of the Systems Manager Documents console. 

The Toolkit for VS Code is an open-source extension for Visual Studio Code (VS Code) that offers more features than the Systems Manager Documents console. Helpful features include schema validation for both YAML and JSON, snippets for automation action types, and auto-complete support for various options in both YAML and JSON. 

For more information about installing the Toolkit for VS Code, see [Installing the AWS Toolkit for Visual Studio Code](https://docs.aws.amazon.com/toolkit-for-vscode/latest/userguide/setup-toolkit.html). For more information about using the Toolkit for VS Code to develop runbooks, see [Working with Systems Manager Automation documents](https://docs.aws.amazon.com/toolkit-for-vscode/latest/userguide/systems-manager-automation-docs.html) in the *AWS Toolkit for Visual Studio Code User Guide*.

## Develop runbook content


With your use case identified and environment set up, you're ready to develop the content for your runbook. Your use case and preferences will largely dictate the automation actions or runbooks you use in your runbook content. Some actions support only a subset of input parameters when compared to another action that allows you to accomplish a similar task. Other actions have specific outputs, such as `aws:createImage`, where some actions allow you to define your own outputs, such as `aws:executeAwsApi`. 

If you're unsure how to use a particular action in your runbook, we recommend reviewing the corresponding entry for the action in the [Systems Manager Automation actions reference](automation-actions.md). We also recommend reviewing the content of pre-defined runbooks to see real-world examples of how these actions are used. For more examples of real-world applications of runbooks, see [Additional runbook examples](automation-document-examples.md).

To demonstrate the differences in simplicity and flexibility that runbook content provides, the following tutorials provide an example of how to patch groups of Amazon EC2 instances in stages:
+ [Example 1: Creating parent-child runbooks](automation-authoring-runbooks-parent-child-example.md) – In this example, two runbooks are used in a parent-child relationship. The parent runbook initiates a rate control automation of the child runbook. 
+ [Example 2: Scripted runbook](automation-authoring-runbooks-scripted-example.md) – This example demonstrates how you can accomplish the same tasks of Example 1 by condensing the content into a single runbook and using scripts in your runbook.

# Example 1: Creating parent-child runbooks


The following example demonstrates how to create two runbooks that patch tagged groups of Amazon Elastic Compute Cloud (Amazon EC2) instances in stages. These runbooks are used in a parent-child relationship with the parent runbook used to initiate a rate control automation of the child runbook. For more information about rate control automations, see [Run automated operations at scale](running-automations-scale.md). For more information about the automation actions used in this example, see the [Systems Manager Automation actions reference](automation-actions.md).

## Create the child runbook


This example runbook addresses the following scenario. Emily is a Systems Engineer at AnyCompany Consultants, LLC. She needs to configure patching for groups of Amazon Elastic Compute Cloud (Amazon EC2) instances that host primary and secondary databases. Applications access these databases 24 hours a day, so one of the database instances must always be available. 

She decides that patching the instances in stages is the best approach. The primary group of database instances will be patched first, followed by the secondary group of database instances. Also, to avoid incurring additional costs by leaving instances running that were previously stopped, Emily wants the patched instances to be returned to their original state before the patching occurred. 

Emily identifies the primary and secondary groups of database instances by the tags associated with the instances. She decides to create a parent runbook that starts a rate control automation of a child runbook. By doing that, she can target the tags associated with the primary and secondary groups of database instances and manage the concurrency of the child automations. After reviewing the available Systems Manager (SSM) documents for patching, she chooses the `AWS-RunPatchBaseline` document. By using this SSM document, her colleagues can review the associated patch compliance information after the patching operation completes.

To start creating her runbook content, Emily reviews the available automation actions and begins authoring the content for the child runbook as follows:

1. First, she provides values for the schema and description of the runbook, and defines the input parameters for the child runbook.

   By using the `AutomationAssumeRole` parameter, Emily and her colleagues can use an existing IAM role that allows Automation to perform the actions in the runbook on their behalf. Emily uses the `InstanceId` parameter to determine the instance that should be patched. Optionally, the `Operation`, `RebootOption`, and `SnapshotId` parameters can be used to provide values to document parameters for `AWS-RunPatchBaseline`. To prevent invalid values from being provided to those document parameters, she defines the `allowedValues` as needed.

------
#### [ YAML ]

   ```
   schemaVersion: '0.3'
   description: 'An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.'
   assumeRole: '{{AutomationAssumeRole}}'
   parameters:
     AutomationAssumeRole:
       type: String
       description: >-
         '(Optional) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the
         actions on your behalf. If no role is specified, Systems Manager
         Automation uses your IAM permissions to operate this runbook.'
       default: ''
     InstanceId:
       type: String
       description: >-
         '(Required) The instance you want to patch.'
     SnapshotId:
       type: String
       description: '(Optional) The snapshot ID to use to retrieve a patch baseline snapshot.'
       default: ''
     RebootOption:
       type: String
       description: '(Optional) Reboot behavior after a patch Install operation. If you choose NoReboot and patches are installed, the instance is marked as non-compliant until a subsequent reboot and scan.'
       allowedValues:
         - NoReboot
         - RebootIfNeeded
       default: RebootIfNeeded
     Operation:
       type: String
       description: '(Optional) The update or configuration to perform on the instance. The system checks if patches specified in the patch baseline are installed on the instance. The install operation installs patches missing from the baseline.'
       allowedValues:
         - Install
         - Scan
       default: Install
   ```

------
#### [ JSON ]

   ```
   {
      "schemaVersion":"0.3",
      "description":"An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.",
      "assumeRole":"{{AutomationAssumeRole}}",
      "parameters":{
         "AutomationAssumeRole":{
            "type":"String",
            "description":"(Optional) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook.",
            "default":""
         },
         "InstanceId":{
            "type":"String",
            "description":"(Required) The instance you want to patch."
         },
         "SnapshotId":{
            "type":"String",
            "description":"(Optional) The snapshot ID to use to retrieve a patch baseline snapshot.",
            "default":""
         },
         "RebootOption":{
            "type":"String",
            "description":"(Optional) Reboot behavior after a patch Install operation. If you choose NoReboot and patches are installed, the instance is marked as non-compliant until a subsequent reboot and scan.",
            "allowedValues":[
               "NoReboot",
               "RebootIfNeeded"
            ],
            "default":"RebootIfNeeded"
         },
         "Operation":{
            "type":"String",
            "description":"(Optional) The update or configuration to perform on the instance. The system checks if patches specified in the patch baseline are installed on the instance. The install operation installs patches missing from the baseline.",
            "allowedValues":[
               "Install",
               "Scan"
            ],
            "default":"Install"
         }
      }
   },
   ```

------

1. With the top-level elements defined, Emily proceeds with authoring the actions that make up the `mainSteps` of the runbook. The first step outputs the current state of the target instance specified in the `InstanceId` input parameter using the `aws:executeAwsApi` action. The output of this action is used in later actions.

------
#### [ YAML ]

   ```
   mainSteps:
     - name: getInstanceState
       action: 'aws:executeAwsApi'
       onFailure: Abort
       inputs:
         Service: ec2
         Api: DescribeInstances
         InstanceIds:
           - '{{InstanceId}}'
       outputs:
         - Name: instanceState
           Selector: '$.Reservations[0].Instances[0].State.Name'
           Type: String
       nextStep: branchOnInstanceState
   ```

------
#### [ JSON ]

   ```
   "mainSteps":[
         {
            "name":"getInstanceState",
            "action":"aws:executeAwsApi",
            "onFailure":"Abort",
            "inputs":{
               "Service":"ec2",
               "Api":"DescribeInstances",
               "InstanceIds":[
                  "{{InstanceId}}"
               ]
            },
            "outputs":[
               {
                  "Name":"instanceState",
                  "Selector":"$.Reservations[0].Instances[0].State.Name",
                  "Type":"String"
               }
            ],
            "nextStep":"branchOnInstanceState"
         },
   ```

------

1. Rather than manually starting and keeping track of the original state of every instance that needs to be patched, Emily uses the output from the previous action to branch the automation based on the state of the target instance. This allows the automation to run different steps depending on the conditions defined in the `aws:branch` action and improves the overall efficiency of the automation without manual intervention.

   If the instance state is already `running`, the automation proceeds with patching the instance with the `AWS-RunPatchBaseline` document using the `aws:runCommand` action.

   If the instance state is `stopping`, the automation polls for the instance to reach the `stopped` state using the `aws:waitForAwsResourceProperty` action, starts the instance using the `aws:executeAwsApi` action, and polls for the instance to reach a `running` state before patching the instance.

   If the instance state is `stopped`, the automation starts the instance and polls for the instance to reach a `running` state before patching the instance using the same actions.

------
#### [ YAML ]

   ```
     - name: branchOnInstanceState
       action: 'aws:branch'
       onFailure: Abort
       inputs:
         Choices:
           - NextStep: startInstance
             Variable: '{{getInstanceState.instanceState}}'
             StringEquals: stopped
           - NextStep: verifyInstanceStopped
             Variable: '{{getInstanceState.instanceState}}'
             StringEquals: stopping
           - NextStep: patchInstance
             Variable: '{{getInstanceState.instanceState}}'
             StringEquals: running
       isEnd: true
     - name: startInstance
       action: 'aws:executeAwsApi'
       onFailure: Abort
       inputs:
         Service: ec2
         Api: StartInstances
         InstanceIds:
           - '{{InstanceId}}'
       nextStep: verifyInstanceRunning
     - name: verifyInstanceRunning
       action: 'aws:waitForAwsResourceProperty'
       timeoutSeconds: 120
       inputs:
         Service: ec2
         Api: DescribeInstances
         InstanceIds:
           - '{{InstanceId}}'
         PropertySelector: '$.Reservations[0].Instances[0].State.Name'
         DesiredValues:
           - running
       nextStep: patchInstance
     - name: verifyInstanceStopped
       action: 'aws:waitForAwsResourceProperty'
       timeoutSeconds: 120
       inputs:
         Service: ec2
         Api: DescribeInstances
         InstanceIds:
           - '{{InstanceId}}'
         PropertySelector: '$.Reservations[0].Instances[0].State.Name'
         DesiredValues:
           - stopped
       nextStep: startInstance
     - name: patchInstance
       action: 'aws:runCommand'
       onFailure: Abort
       timeoutSeconds: 5400
       inputs:
         DocumentName: 'AWS-RunPatchBaseline'
         InstanceIds: 
         - '{{InstanceId}}'
         Parameters:
           SnapshotId: '{{SnapshotId}}'
           RebootOption: '{{RebootOption}}'
           Operation: '{{Operation}}'
   ```

------
#### [ JSON ]

   ```
   {
            "name":"branchOnInstanceState",
            "action":"aws:branch",
            "onFailure":"Abort",
            "inputs":{
               "Choices":[
                  {
                     "NextStep":"startInstance",
                     "Variable":"{{getInstanceState.instanceState}}",
                     "StringEquals":"stopped"
                  },
                  {
                     "Or":[
                        {
                           "Variable":"{{getInstanceState.instanceState}}",
                           "StringEquals":"stopping"
                        }
                     ],
                     "NextStep":"verifyInstanceStopped"
                  },
                  {
                     "NextStep":"patchInstance",
                     "Variable":"{{getInstanceState.instanceState}}",
                     "StringEquals":"running"
                  }
               ]
            },
            "isEnd":true
         },
         {
            "name":"startInstance",
            "action":"aws:executeAwsApi",
            "onFailure":"Abort",
            "inputs":{
               "Service":"ec2",
               "Api":"StartInstances",
               "InstanceIds":[
                  "{{InstanceId}}"
               ]
            },
            "nextStep":"verifyInstanceRunning"
         },
         {
            "name":"verifyInstanceRunning",
            "action":"aws:waitForAwsResourceProperty",
            "timeoutSeconds":120,
            "inputs":{
               "Service":"ec2",
               "Api":"DescribeInstances",
               "InstanceIds":[
                  "{{InstanceId}}"
               ],
               "PropertySelector":"$.Reservations[0].Instances[0].State.Name",
               "DesiredValues":[
                  "running"
               ]
            },
            "nextStep":"patchInstance"
         },
         {
            "name":"verifyInstanceStopped",
            "action":"aws:waitForAwsResourceProperty",
            "timeoutSeconds":120,
            "inputs":{
               "Service":"ec2",
               "Api":"DescribeInstances",
               "InstanceIds":[
                  "{{InstanceId}}"
               ],
               "PropertySelector":"$.Reservations[0].Instances[0].State.Name",
               "DesiredValues":[
                  "stopped"
                ]
             },
             "nextStep":"startInstance"
          },
         {
            "name":"patchInstance",
            "action":"aws:runCommand",
            "onFailure":"Abort",
            "timeoutSeconds":5400,
            "inputs":{
               "DocumentName":"AWS-RunPatchBaseline",
               "InstanceIds":[
                  "{{InstanceId}}"
               ],
               "Parameters":{
                  "SnapshotId":"{{SnapshotId}}",
                  "RebootOption":"{{RebootOption}}",
                  "Operation":"{{Operation}}"
               }
            }
         },
   ```

------

1. After the patching operation completes, Emily wants the automation to return the target instance to the same state it was in before the automation started. She does this by reusing the output from the first action. Using the `aws:branch` action, the automation branches based on the original state of the target instance. If the instance was previously in any state other than `running`, the instance is stopped. Otherwise, the automation ends.

------
#### [ YAML ]

   ```
   - name: branchOnOriginalInstanceState
       action: 'aws:branch'
       onFailure: Abort
       inputs:
         Choices:
           - NextStep: stopInstance
             Not: 
               Variable: '{{getInstanceState.instanceState}}'
               StringEquals: running
       isEnd: true
     - name: stopInstance
       action: 'aws:executeAwsApi'
       onFailure: Abort
       inputs:
         Service: ec2
         Api: StopInstances
         InstanceIds:
           - '{{InstanceId}}'
   ```

------
#### [ JSON ]

   ```
   {
            "name":"branchOnOriginalInstanceState",
            "action":"aws:branch",
            "onFailure":"Abort",
            "inputs":{
               "Choices":[
                  {
                     "NextStep":"stopInstance",
                     "Not":{
                        "Variable":"{{getInstanceState.instanceState}}",
                        "StringEquals":"running"
                     }
                  }
               ]
            },
            "isEnd":true
         },
         {
            "name":"stopInstance",
            "action":"aws:executeAwsApi",
            "onFailure":"Abort",
            "inputs":{
               "Service":"ec2",
               "Api":"StopInstances",
               "InstanceIds":[
                  "{{InstanceId}}"
               ]
            }
         }
      ]
   }
   ```

------
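The `Not` choice above can be read as a simple predicate. The following is a minimal sketch, in Python, of how that choice evaluates (the function name is illustrative, not part of the Automation service):

```python
def not_choice_matches(variable_value: str, string_equals: str) -> bool:
    """Evaluate an aws:branch 'Not' choice: the branch is taken when the
    inner comparison is false, i.e. the variable does NOT equal the value."""
    return variable_value != string_equals

# branchOnOriginalInstanceState goes to stopInstance only when the
# original state was anything other than 'running':
for state in ("stopped", "stopping", "running"):
    target = "stopInstance" if not_choice_matches(state, "running") else "end"
    print(state, "->", target)
```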

1. Emily reviews the completed child runbook content and creates the runbook in the same AWS account and AWS Region as the target instances. Now she's ready to continue with the creation of the parent runbook's content. The following is the completed child runbook content.

------
#### [ YAML ]

   ```
   schemaVersion: '0.3'
   description: 'An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.'
   assumeRole: '{{AutomationAssumeRole}}'
   parameters:
     AutomationAssumeRole:
       type: String
       description: >-
          (Optional) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the
          actions on your behalf. If no role is specified, Systems Manager
          Automation uses your IAM permissions to operate this runbook.
       default: ''
     InstanceId:
       type: String
        description: '(Required) The instance you want to patch.'
     SnapshotId:
       type: String
       description: '(Optional) The snapshot ID to use to retrieve a patch baseline snapshot.'
       default: ''
     RebootOption:
       type: String
       description: '(Optional) Reboot behavior after a patch Install operation. If you choose NoReboot and patches are installed, the instance is marked as non-compliant until a subsequent reboot and scan.'
       allowedValues:
         - NoReboot
         - RebootIfNeeded
       default: RebootIfNeeded
     Operation:
       type: String
       description: '(Optional) The update or configuration to perform on the instance. The system checks if patches specified in the patch baseline are installed on the instance. The install operation installs patches missing from the baseline.'
       allowedValues:
         - Install
         - Scan
       default: Install
   mainSteps:
     - name: getInstanceState
       action: 'aws:executeAwsApi'
       onFailure: Abort
        inputs:
         Service: ec2
         Api: DescribeInstances
         InstanceIds:
           - '{{InstanceId}}'
       outputs:
         - Name: instanceState
           Selector: '$.Reservations[0].Instances[0].State.Name'
           Type: String
       nextStep: branchOnInstanceState
     - name: branchOnInstanceState
       action: 'aws:branch'
       onFailure: Abort
       inputs:
         Choices:
           - NextStep: startInstance
             Variable: '{{getInstanceState.instanceState}}'
             StringEquals: stopped
           - Or:
               - Variable: '{{getInstanceState.instanceState}}'
                 StringEquals: stopping
             NextStep: verifyInstanceStopped
           - NextStep: patchInstance
             Variable: '{{getInstanceState.instanceState}}'
             StringEquals: running
       isEnd: true
     - name: startInstance
       action: 'aws:executeAwsApi'
       onFailure: Abort
       inputs:
         Service: ec2
         Api: StartInstances
         InstanceIds:
           - '{{InstanceId}}'
       nextStep: verifyInstanceRunning
     - name: verifyInstanceRunning
       action: 'aws:waitForAwsResourceProperty'
       timeoutSeconds: 120
       inputs:
         Service: ec2
         Api: DescribeInstances
         InstanceIds:
           - '{{InstanceId}}'
         PropertySelector: '$.Reservations[0].Instances[0].State.Name'
         DesiredValues:
           - running
       nextStep: patchInstance
     - name: verifyInstanceStopped
       action: 'aws:waitForAwsResourceProperty'
       timeoutSeconds: 120
       inputs:
         Service: ec2
         Api: DescribeInstances
         InstanceIds:
           - '{{InstanceId}}'
         PropertySelector: '$.Reservations[0].Instances[0].State.Name'
         DesiredValues:
           - stopped
        nextStep: startInstance
     - name: patchInstance
       action: 'aws:runCommand'
       onFailure: Abort
       timeoutSeconds: 5400
       inputs:
         DocumentName: 'AWS-RunPatchBaseline'
         InstanceIds: 
         - '{{InstanceId}}'
         Parameters:
           SnapshotId: '{{SnapshotId}}'
           RebootOption: '{{RebootOption}}'
           Operation: '{{Operation}}'
     - name: branchOnOriginalInstanceState
       action: 'aws:branch'
       onFailure: Abort
       inputs:
         Choices:
           - NextStep: stopInstance
             Not: 
               Variable: '{{getInstanceState.instanceState}}'
               StringEquals: running
       isEnd: true
     - name: stopInstance
       action: 'aws:executeAwsApi'
       onFailure: Abort
       inputs:
         Service: ec2
         Api: StopInstances
         InstanceIds:
           - '{{InstanceId}}'
   ```

------
#### [ JSON ]

   ```
   {
      "schemaVersion":"0.3",
      "description":"An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.",
      "assumeRole":"{{AutomationAssumeRole}}",
      "parameters":{
         "AutomationAssumeRole":{
            "type":"String",
            "description":"(Optional) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook.",
            "default":""
         },
         "InstanceId":{
            "type":"String",
            "description":"(Required) The instance you want to patch."
         },
         "SnapshotId":{
            "type":"String",
            "description":"(Optional) The snapshot ID to use to retrieve a patch baseline snapshot.",
            "default":""
         },
         "RebootOption":{
            "type":"String",
            "description":"(Optional) Reboot behavior after a patch Install operation. If you choose NoReboot and patches are installed, the instance is marked as non-compliant until a subsequent reboot and scan.",
            "allowedValues":[
               "NoReboot",
               "RebootIfNeeded"
            ],
            "default":"RebootIfNeeded"
         },
         "Operation":{
            "type":"String",
            "description":"(Optional) The update or configuration to perform on the instance. The system checks if patches specified in the patch baseline are installed on the instance. The install operation installs patches missing from the baseline.",
            "allowedValues":[
               "Install",
               "Scan"
            ],
            "default":"Install"
         }
      },
      "mainSteps":[
         {
            "name":"getInstanceState",
            "action":"aws:executeAwsApi",
            "onFailure":"Abort",
            "inputs":{
               "Service":"ec2",
               "Api":"DescribeInstances",
               "InstanceIds":[
                  "{{InstanceId}}"
               ]
            },
            "outputs":[
               {
                  "Name":"instanceState",
                  "Selector":"$.Reservations[0].Instances[0].State.Name",
                  "Type":"String"
               }
            ],
            "nextStep":"branchOnInstanceState"
         },
         {
            "name":"branchOnInstanceState",
            "action":"aws:branch",
            "onFailure":"Abort",
            "inputs":{
               "Choices":[
                  {
                     "NextStep":"startInstance",
                     "Variable":"{{getInstanceState.instanceState}}",
                     "StringEquals":"stopped"
                  },
                  {
                     "Or":[
                        {
                           "Variable":"{{getInstanceState.instanceState}}",
                           "StringEquals":"stopping"
                        }
                     ],
                     "NextStep":"verifyInstanceStopped"
                  },
                  {
                     "NextStep":"patchInstance",
                     "Variable":"{{getInstanceState.instanceState}}",
                     "StringEquals":"running"
                  }
               ]
            },
            "isEnd":true
         },
         {
            "name":"startInstance",
            "action":"aws:executeAwsApi",
            "onFailure":"Abort",
            "inputs":{
               "Service":"ec2",
               "Api":"StartInstances",
               "InstanceIds":[
                  "{{InstanceId}}"
               ]
            },
            "nextStep":"verifyInstanceRunning"
         },
         {
            "name":"verifyInstanceRunning",
            "action":"aws:waitForAwsResourceProperty",
            "timeoutSeconds":120,
            "inputs":{
               "Service":"ec2",
               "Api":"DescribeInstances",
               "InstanceIds":[
                  "{{InstanceId}}"
               ],
               "PropertySelector":"$.Reservations[0].Instances[0].State.Name",
               "DesiredValues":[
                  "running"
               ]
            },
            "nextStep":"patchInstance"
         },
         {
            "name":"verifyInstanceStopped",
            "action":"aws:waitForAwsResourceProperty",
            "timeoutSeconds":120,
            "inputs":{
               "Service":"ec2",
               "Api":"DescribeInstances",
               "InstanceIds":[
                  "{{InstanceId}}"
               ],
               "PropertySelector":"$.Reservations[0].Instances[0].State.Name",
               "DesiredValues":[
                  "stopped"
                ]
             },
             "nextStep":"startInstance"
          },
         {
            "name":"patchInstance",
            "action":"aws:runCommand",
            "onFailure":"Abort",
            "timeoutSeconds":5400,
            "inputs":{
               "DocumentName":"AWS-RunPatchBaseline",
               "InstanceIds":[
                  "{{InstanceId}}"
               ],
               "Parameters":{
                  "SnapshotId":"{{SnapshotId}}",
                  "RebootOption":"{{RebootOption}}",
                  "Operation":"{{Operation}}"
               }
            }
         },
         {
            "name":"branchOnOriginalInstanceState",
            "action":"aws:branch",
            "onFailure":"Abort",
            "inputs":{
               "Choices":[
                  {
                     "NextStep":"stopInstance",
                     "Not":{
                        "Variable":"{{getInstanceState.instanceState}}",
                        "StringEquals":"running"
                     }
                  }
               ]
            },
            "isEnd":true
         },
         {
            "name":"stopInstance",
            "action":"aws:executeAwsApi",
            "onFailure":"Abort",
            "inputs":{
               "Service":"ec2",
               "Api":"StopInstances",
               "InstanceIds":[
                  "{{InstanceId}}"
               ]
            }
         }
      ]
   }
   ```

------

For more information about the automation actions used in this example, see the [Systems Manager Automation actions reference](automation-actions.md).
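To create the child runbook programmatically instead of through the console, the content can be registered as an Automation document with the SSM `CreateDocument` API. The following is a minimal boto3 sketch; the file path is an assumption, and the document name matches the `RunbookTutorialChildAutomation` name the parent runbook references later:

```python
def build_create_document_request(content: str,
                                  name: str = "RunbookTutorialChildAutomation") -> dict:
    """Build the request for ssm.create_document so it can be inspected
    (or unit tested) before any API call is made."""
    return {
        "Name": name,
        "Content": content,
        "DocumentType": "Automation",
        "DocumentFormat": "YAML",
    }

def create_child_runbook(path: str = "child-runbook.yaml"):
    # Hypothetical helper: path and error handling are assumptions.
    import boto3  # imported lazily so the builder above stays importable offline
    with open(path) as f:
        request = build_create_document_request(f.read())
    # Create the document in the same AWS account and Region as the target instances.
    ssm = boto3.client("ssm")
    return ssm.create_document(**request)
```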

## Create the parent runbook


This example runbook continues the scenario described in the previous section. Now that Emily has created the child runbook, she begins authoring the content for the parent runbook as follows:

1. First, she provides values for the schema and description of the runbook, and defines the input parameters for the parent runbook.

   By using the `AutomationAssumeRole` parameter, Emily and her colleagues can use an existing IAM role that allows Automation to perform the actions in the runbook on their behalf. Emily uses the `PatchGroupPrimaryKey` and `PatchGroupPrimaryValue` parameters to specify the tag associated with the primary group of database instances that will be patched. She uses the `PatchGroupSecondaryKey` and `PatchGroupSecondaryValue` parameters to specify the tag associated with the secondary group of database instances that will be patched.

------
#### [ YAML ]

   ```
   description: 'An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.'
   schemaVersion: '0.3'
   assumeRole: '{{AutomationAssumeRole}}'
   parameters:
     AutomationAssumeRole:
       type: String
       description: '(Optional) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook.'
       default: ''
     PatchGroupPrimaryKey:
       type: String
        description: '(Required) The key of the tag for the primary group of instances you want to patch.'
     PatchGroupPrimaryValue:
       type: String
       description: '(Required) The value of the tag for the primary group of instances you want to patch.'
     PatchGroupSecondaryKey:
       type: String
       description: '(Required) The key of the tag for the secondary group of instances you want to patch.'
     PatchGroupSecondaryValue:
       type: String
       description: '(Required) The value of the tag for the secondary group of instances you want to patch.'
   ```

------
#### [ JSON ]

   ```
   {
      "schemaVersion": "0.3",
      "description": "An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.",
      "assumeRole": "{{AutomationAssumeRole}}",
      "parameters": {
         "AutomationAssumeRole": {
            "type": "String",
            "description": "(Optional) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook.",
            "default": ""
         },
         "PatchGroupPrimaryKey": {
            "type": "String",
            "description": "(Required) The key of the tag for the primary group of instances you want to patch."
         },
         "PatchGroupPrimaryValue": {
            "type": "String",
            "description": "(Required) The value of the tag for the primary group of instances you want to patch."
         },
         "PatchGroupSecondaryKey": {
            "type": "String",
            "description": "(Required) The key of the tag for the secondary group of instances you want to patch."
         },
         "PatchGroupSecondaryValue": {
            "type": "String",
            "description": "(Required) The value of the tag for the secondary group of instances you want to patch."
         }
      }
   },
   ```

------

1. With the top-level elements defined, Emily proceeds with authoring the actions that make up the `mainSteps` of the runbook. 

   The first action starts a rate control automation using the child runbook she just created that targets instances associated with the tag specified in the `PatchGroupPrimaryKey` and `PatchGroupPrimaryValue` input parameters. She uses the values provided to the input parameters to specify the key and value of the tag associated with the primary group of database instances she wants to patch.

   After the first automation completes, the second action starts another rate control automation using the child runbook that targets instances associated with the tag specified in the `PatchGroupSecondaryKey` and `PatchGroupSecondaryValue` input parameters. She uses the values provided to the input parameters to specify the key and value of the tag associated with the secondary group of database instances she wants to patch.

------
#### [ YAML ]

   ```
   mainSteps:
     - name: patchPrimaryTargets
       action: 'aws:executeAutomation'
       onFailure: Abort
       timeoutSeconds: 7200
       inputs:
         DocumentName: RunbookTutorialChildAutomation
         Targets:
           - Key: 'tag:{{PatchGroupPrimaryKey}}'
             Values:
               - '{{PatchGroupPrimaryValue}}'
         TargetParameterName: 'InstanceId'
     - name: patchSecondaryTargets
       action: 'aws:executeAutomation'
       onFailure: Abort
       timeoutSeconds: 7200
       inputs:
         DocumentName: RunbookTutorialChildAutomation
         Targets:
           - Key: 'tag:{{PatchGroupSecondaryKey}}'
             Values:
               - '{{PatchGroupSecondaryValue}}'
         TargetParameterName: 'InstanceId'
   ```

------
#### [ JSON ]

   ```
   "mainSteps":[
         {
            "name":"patchPrimaryTargets",
            "action":"aws:executeAutomation",
            "onFailure":"Abort",
            "timeoutSeconds":7200,
            "inputs":{
               "DocumentName":"RunbookTutorialChildAutomation",
               "Targets":[
                  {
                     "Key":"tag:{{PatchGroupPrimaryKey}}",
                     "Values":[
                        "{{PatchGroupPrimaryValue}}"
                     ]
                  }
               ],
               "TargetParameterName":"InstanceId"
            }
         },
         {
            "name":"patchSecondaryTargets",
            "action":"aws:executeAutomation",
            "onFailure":"Abort",
            "timeoutSeconds":7200,
            "inputs":{
               "DocumentName":"RunbookTutorialChildAutomation",
               "Targets":[
                  {
                     "Key":"tag:{{PatchGroupSecondaryKey}}",
                     "Values":[
                        "{{PatchGroupSecondaryValue}}"
                     ]
                  }
               ],
               "TargetParameterName":"InstanceId"
            }
         }
      ]
   }
   ```

------

1. Emily reviews the completed parent runbook content and creates the runbook in the same AWS account and AWS Region as the target instances. Now she's ready to test her runbooks to make sure the automation operates as desired before deploying them to her production environment. The following is the completed parent runbook content.

------
#### [ YAML ]

   ```
   description: An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.
   schemaVersion: '0.3'
   assumeRole: '{{AutomationAssumeRole}}'
   parameters:
     AutomationAssumeRole:
       type: String
       description: '(Optional) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook.'
       default: ''
     PatchGroupPrimaryKey:
       type: String
       description: (Required) The key of the tag for the primary group of instances you want to patch.
     PatchGroupPrimaryValue:
       type: String
        description: '(Required) The value of the tag for the primary group of instances you want to patch.'
     PatchGroupSecondaryKey:
       type: String
       description: (Required) The key of the tag for the secondary group of instances you want to patch.
     PatchGroupSecondaryValue:
       type: String
        description: '(Required) The value of the tag for the secondary group of instances you want to patch.'
   mainSteps:
     - name: patchPrimaryTargets
       action: 'aws:executeAutomation'
       onFailure: Abort
       timeoutSeconds: 7200
       inputs:
         DocumentName: RunbookTutorialChildAutomation
         Targets:
           - Key: 'tag:{{PatchGroupPrimaryKey}}'
             Values:
               - '{{PatchGroupPrimaryValue}}'
         TargetParameterName: 'InstanceId'
     - name: patchSecondaryTargets
       action: 'aws:executeAutomation'
       onFailure: Abort
       timeoutSeconds: 7200
       inputs:
         DocumentName: RunbookTutorialChildAutomation
         Targets:
           - Key: 'tag:{{PatchGroupSecondaryKey}}'
             Values:
               - '{{PatchGroupSecondaryValue}}'
         TargetParameterName: 'InstanceId'
   ```

------
#### [ JSON ]

   ```
   {
      "description":"An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.",
      "schemaVersion":"0.3",
      "assumeRole":"{{AutomationAssumeRole}}",
      "parameters":{
         "AutomationAssumeRole":{
            "type":"String",
            "description":"(Optional) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook.",
            "default":""
         },
         "PatchGroupPrimaryKey":{
            "type":"String",
            "description":"(Required) The key of the tag for the primary group of instances you want to patch."
         },
         "PatchGroupPrimaryValue":{
            "type":"String",
             "description":"(Required) The value of the tag for the primary group of instances you want to patch."
         },
         "PatchGroupSecondaryKey":{
            "type":"String",
            "description":"(Required) The key of the tag for the secondary group of instances you want to patch."
         },
         "PatchGroupSecondaryValue":{
            "type":"String",
             "description":"(Required) The value of the tag for the secondary group of instances you want to patch."
         }
      },
      "mainSteps":[
         {
            "name":"patchPrimaryTargets",
            "action":"aws:executeAutomation",
            "onFailure":"Abort",
            "timeoutSeconds":7200,
            "inputs":{
               "DocumentName":"RunbookTutorialChildAutomation",
               "Targets":[
                  {
                     "Key":"tag:{{PatchGroupPrimaryKey}}",
                     "Values":[
                        "{{PatchGroupPrimaryValue}}"
                     ]
                  }
               ],
               "TargetParameterName":"InstanceId"
            }
         },
         {
            "name":"patchSecondaryTargets",
            "action":"aws:executeAutomation",
            "onFailure":"Abort",
            "timeoutSeconds":7200,
            "inputs":{
               "DocumentName":"RunbookTutorialChildAutomation",
               "Targets":[
                  {
                     "Key":"tag:{{PatchGroupSecondaryKey}}",
                     "Values":[
                        "{{PatchGroupSecondaryValue}}"
                     ]
                  }
               ],
               "TargetParameterName":"InstanceId"
            }
         }
      ]
   }
   ```

------

For more information about the automation actions used in this example, see the [Systems Manager Automation actions reference](automation-actions.md).
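Once both runbooks exist, the parent automation can be started with the `StartAutomationExecution` API. The following is a minimal boto3 sketch; the parent runbook name `RunbookTutorialParentAutomation` and the tag keys and values are assumptions for illustration:

```python
def build_start_request(document_name: str = "RunbookTutorialParentAutomation") -> dict:
    """Build the StartAutomationExecution request for the parent runbook.
    Per the SSM API, every parameter value is passed as a list of strings."""
    return {
        "DocumentName": document_name,
        "Parameters": {
            # Hypothetical tag key/values identifying the two database groups
            "PatchGroupPrimaryKey": ["db-group"],
            "PatchGroupPrimaryValue": ["primary"],
            "PatchGroupSecondaryKey": ["db-group"],
            "PatchGroupSecondaryValue": ["secondary"],
        },
    }

def start_parent_automation() -> str:
    import boto3  # lazy import keeps the builder testable offline
    ssm = boto3.client("ssm")
    response = ssm.start_automation_execution(**build_start_request())
    return response["AutomationExecutionId"]
```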

# Example 2: Scripted runbook


This example runbook addresses the following scenario. Emily is a Systems Engineer at AnyCompany Consultants, LLC. She previously created two runbooks that are used in a parent-child relationship to patch groups of Amazon Elastic Compute Cloud (Amazon EC2) instances that host primary and secondary databases. Applications access these databases 24 hours a day, so one of the database instances must always be available. 

Based on this requirement, she built a solution that patches the instances in stages using the `AWS-RunPatchBaseline` Systems Manager (SSM) document. By using this SSM document, her colleagues can review the associated patch compliance information after the patching operation completes. 

The primary group of database instances is patched first, followed by the secondary group. Also, to avoid incurring additional costs by leaving previously stopped instances running, Emily made sure that the automation returns each patched instance to the state it was in before patching. Emily used tags associated with the primary and secondary groups of database instances to identify which instances to patch, and in which order.

Her existing automated solution works, but she wants to improve it. To simplify maintenance of the runbook content and to ease troubleshooting, she would like to condense the automation into a single runbook and reduce the number of input parameters. She would also like to avoid creating multiple child automations. 

After Emily reviews the available automation actions, she determines that she can improve her solution by using the `aws:executeScript` action to run her custom Python scripts. She now begins authoring the content for the runbook as follows:

1. First, she provides values for the schema and description of the runbook, and defines the input parameters for the parent runbook.

   By using the `AutomationAssumeRole` parameter, Emily and her colleagues can use an existing IAM role that allows Automation to perform the actions in the runbook on their behalf. Unlike [Example 1](automation-authoring-runbooks-parent-child-example.md), the `AutomationAssumeRole` parameter is now required rather than optional. Because this runbook includes `aws:executeScript` actions, an AWS Identity and Access Management (IAM) service role (or assume role) is always required. This requirement is necessary because some of the Python scripts specified for the actions call AWS API operations.

   Emily uses the `PrimaryPatchGroupTag` and `SecondaryPatchGroupTag` parameters to specify the tags associated with the primary and secondary group of database instances that will be patched. To simplify the required input parameters, she decides to use `StringMap` parameters rather than using multiple `String` parameters as she used in the Example 1 runbook. Optionally, the `Operation`, `RebootOption`, and `SnapshotId` parameters can be used to provide values to document parameters for `AWS-RunPatchBaseline`. To prevent invalid values from being provided to those document parameters, she defines the `allowedValues` as needed.

------
#### [ YAML ]

   ```
   description: 'An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.'
   schemaVersion: '0.3'
   assumeRole: '{{AutomationAssumeRole}}'
   parameters:
     AutomationAssumeRole:
       type: String
       description: '(Required) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook.'
     PrimaryPatchGroupTag:
       type: StringMap
       description: '(Required) The tag for the primary group of instances you want to patch. Specify a key-value pair. Example: {"key" : "value"}'
     SecondaryPatchGroupTag:
       type: StringMap
       description: '(Required) The tag for the secondary group of instances you want to patch. Specify a key-value pair. Example: {"key" : "value"}'
     SnapshotId:
       type: String
       description: '(Optional) The snapshot ID to use to retrieve a patch baseline snapshot.'
       default: ''
     RebootOption:
       type: String
       description: '(Optional) Reboot behavior after a patch Install operation. If you choose NoReboot and patches are installed, the instance is marked as non-compliant until a subsequent reboot and scan.'
       allowedValues:
         - NoReboot
         - RebootIfNeeded
       default: RebootIfNeeded
     Operation:
       type: String
       description: '(Optional) The update or configuration to perform on the instance. The system checks if patches specified in the patch baseline are installed on the instance. The install operation installs patches missing from the baseline.'
       allowedValues:
         - Install
         - Scan
       default: Install
   ```

------
#### [ JSON ]

   ```
   {
      "description":"An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.",
      "schemaVersion":"0.3",
      "assumeRole":"{{AutomationAssumeRole}}",
      "parameters":{
         "AutomationAssumeRole":{
            "type":"String",
            "description":"(Required) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook."
         },
         "PrimaryPatchGroupTag":{
            "type":"StringMap",
            "description":"(Required) The tag for the primary group of instances you want to patch. Specify a key-value pair. Example: {\"key\" : \"value\"}"
         },
         "SecondaryPatchGroupTag":{
            "type":"StringMap",
            "description":"(Required) The tag for the secondary group of instances you want to patch. Specify a key-value pair. Example: {\"key\" : \"value\"}"
         },
         "SnapshotId":{
            "type":"String",
            "description":"(Optional) The snapshot ID to use to retrieve a patch baseline snapshot.",
            "default":""
         },
         "RebootOption":{
            "type":"String",
            "description":"(Optional) Reboot behavior after a patch Install operation. If you choose NoReboot and patches are installed, the instance is marked as non-compliant until a subsequent reboot and scan.",
            "allowedValues":[
               "NoReboot",
               "RebootIfNeeded"
            ],
            "default":"RebootIfNeeded"
         },
         "Operation":{
            "type":"String",
            "description":"(Optional) The update or configuration to perform on the instance. The system checks if patches specified in the patch baseline are installed on the instance. The install operation installs patches missing from the baseline.",
            "allowedValues":[
               "Install",
               "Scan"
            ],
            "default":"Install"
         }
      }
   },
   ```

------

1. With the top-level elements defined, Emily proceeds with authoring the actions that make up the `mainSteps` of the runbook. The first step gathers the IDs of all instances associated with the tag specified in the `PrimaryPatchGroupTag` parameter and outputs a `StringMap` containing each instance ID and that instance's current state. The output of this action is used by later actions. 

   Note that the `script` input parameter isn't supported for JSON runbooks. JSON runbooks must provide script content using the `attachment` input parameter. For this reason, the JSON examples that follow abbreviate the script content as `"..."`.

------
#### [ YAML ]

   ```
   mainSteps:
     - name: getPrimaryInstanceState
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: getInstanceStates
         InputPayload:
           primaryTag: '{{PrimaryPatchGroupTag}}'
         Script: |-
           def getInstanceStates(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             tag = events['primaryTag']
             tagKey, tagValue = list(tag.items())[0]
             instanceQuery = ec2.describe_instances(
             Filters=[
                 {
                     "Name": "tag:" + tagKey,
                     "Values": [tagValue]
                 }]
             )
             if not instanceQuery['Reservations']:
                 noInstancesForTagString = "No instances found for specified tag."
                 return({ 'noInstancesFound' : noInstancesForTagString })
             else:
                 queryResponse = instanceQuery['Reservations']
                 originalInstanceStates = {}
                 for results in queryResponse:
                     instanceSet = results['Instances']
                     for instance in instanceSet:
                         instanceId = instance['InstanceId']
                         originalInstanceStates[instanceId] = instance['State']['Name']
                 return originalInstanceStates
       outputs:
         - Name: originalInstanceStates
           Selector: $.Payload
           Type: StringMap
       nextStep: verifyPrimaryInstancesRunning
   ```

------
#### [ JSON ]

   ```
   "mainSteps":[
         {
            "name":"getPrimaryInstanceState",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"getInstanceStates",
               "InputPayload":{
                  "primaryTag":"{{PrimaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"originalInstanceStates",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               }
            ],
            "nextStep":"verifyPrimaryInstancesRunning"
         },
   ```

------
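To sanity-check this step without AWS credentials, the dictionary-building portion of `getInstanceStates` can be reduced to a plain function and run against a hand-built `describe_instances`-style payload. The function name and sample values below are illustrative, not part of the runbook:

```python
# Mirror of the flattening logic in getInstanceStates; no AWS calls are made.
def flatten_instance_states(instance_query):
    if not instance_query['Reservations']:
        return {'noInstancesFound': 'No instances found for specified tag.'}
    original_instance_states = {}
    for reservation in instance_query['Reservations']:
        for instance in reservation['Instances']:
            original_instance_states[instance['InstanceId']] = instance['State']['Name']
    return original_instance_states

# Hand-built stand-in for an ec2.describe_instances response
sample_response = {
    'Reservations': [
        {'Instances': [
            {'InstanceId': 'i-0aaaaaaaaaaaaaaaa', 'State': {'Name': 'running'}},
            {'InstanceId': 'i-0bbbbbbbbbbbbbbbb', 'State': {'Name': 'stopped'}},
        ]}
    ]
}
print(flatten_instance_states(sample_response))
# → {'i-0aaaaaaaaaaaaaaaa': 'running', 'i-0bbbbbbbbbbbbbbbb': 'stopped'}
```

The returned dictionary is what the `$.Payload` selector captures as the `originalInstanceStates` output.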

1. Emily uses the output from the previous action in another `aws:executeScript` action to verify that all instances associated with the tag specified in the `PrimaryPatchGroupTag` parameter are in the `running` state.

   If the instance state is already `running` or `shutting-down`, the script continues to loop through the remaining instances.

   If the instance state is `stopping`, the script polls for the instance to reach the `stopped` state and starts the instance.

   If the instance state is `stopped`, the script starts the instance.

------
#### [ YAML ]

   ```
     - name: verifyPrimaryInstancesRunning
       action: 'aws:executeScript'
       timeoutSeconds: 600
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: verifyInstancesRunning
         InputPayload:
           targetInstances: '{{getPrimaryInstanceState.originalInstanceStates}}'
         Script: |-
           def verifyInstancesRunning(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
               if instanceDict[instance] == 'stopped':
                   print("The target instance " + instance + " is stopped. The instance will now be started.")
                   ec2.start_instances(
                       InstanceIds=[instance]
                       )
               elif instanceDict[instance] == 'stopping':
                   print("The target instance " + instance + " is stopping. Polling for instance to reach stopped state.")
                    # The instance_stopped waiter blocks until the instance has stopped
                    poll = ec2.get_waiter('instance_stopped')
                    poll.wait(
                        InstanceIds=[instance]
                    )
                   ec2.start_instances(
                       InstanceIds=[instance]
                   )
               else:
                 pass
       nextStep: waitForPrimaryRunningInstances
   ```

------
#### [ JSON ]

   ```
   {
            "name":"verifyPrimaryInstancesRunning",
            "action":"aws:executeScript",
            "timeoutSeconds":600,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"verifyInstancesRunning",
               "InputPayload":{
                  "targetInstances":"{{getPrimaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            },
            "nextStep":"waitForPrimaryRunningInstances"
         },
   ```

------
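The per-instance decision made by `verifyInstancesRunning` can be checked locally by separating it from the boto3 calls. The function name and return labels below are illustrative, not part of the runbook:

```python
# What verifyInstancesRunning does for each original instance state.
def action_for_state(state):
    if state == 'stopped':
        return 'start'            # ec2.start_instances
    if state == 'stopping':
        return 'wait_then_start'  # instance_stopped waiter, then start
    return 'none'                 # running and shutting-down are left alone

states = {'i-0aaa': 'running', 'i-0bbb': 'stopped', 'i-0ccc': 'stopping'}
print({i: action_for_state(s) for i, s in states.items()})
# → {'i-0aaa': 'none', 'i-0bbb': 'start', 'i-0ccc': 'wait_then_start'}
```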

1. Emily verifies that all instances associated with the tag specified in the `PrimaryPatchGroupTag` parameter were started or already in a `running` state. Then she uses another script to verify that all instances, including those that were started in the previous action, have reached the `running` state.

------
#### [ YAML ]

   ```
     - name: waitForPrimaryRunningInstances
       action: 'aws:executeScript'
       timeoutSeconds: 300
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: waitForRunningInstances
         InputPayload:
           targetInstances: '{{getPrimaryInstanceState.originalInstanceStates}}'
         Script: |-
           def waitForRunningInstances(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
                 poll = ec2.get_waiter('instance_running')
                 poll.wait(
                     InstanceIds=[instance]
                 )
       nextStep: returnPrimaryTagKey
   ```

------
#### [ JSON ]

   ```
   {
            "name":"waitForPrimaryRunningInstances",
            "action":"aws:executeScript",
            "timeoutSeconds":300,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"waitForRunningInstances",
               "InputPayload":{
                  "targetInstances":"{{getPrimaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            },
            "nextStep":"returnPrimaryTagKey"
         },
   ```

------

1. Emily uses two more scripts to return individual `String` values for the key and value of the tag specified in the `PrimaryPatchGroupTag` parameter. The values returned by these actions allow her to provide values directly to the `Targets` parameter of the `AWS-RunPatchBaseline` document. The automation then patches the instances by running the `AWS-RunPatchBaseline` document with the `aws:runCommand` action.

------
#### [ YAML ]

   ```
     - name: returnPrimaryTagKey
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnTagValues
         InputPayload:
           primaryTag: '{{PrimaryPatchGroupTag}}'
         Script: |-
           def returnTagValues(events,context):
             tag = events['primaryTag']
             tagKey = list(tag)[0]
             stringKey = "tag:" + tagKey
             return {'tagKey' : stringKey}
       outputs:
         - Name: Payload
           Selector: $.Payload
           Type: StringMap
         - Name: primaryPatchGroupKey
           Selector: $.Payload.tagKey
           Type: String
       nextStep: returnPrimaryTagValue
     - name: returnPrimaryTagValue
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnTagValues
         InputPayload:
           primaryTag: '{{PrimaryPatchGroupTag}}'
         Script: |-
           def returnTagValues(events,context):
             tag = events['primaryTag']
             tagKey = list(tag)[0]
             tagValue = tag[tagKey]
             return {'tagValue' : tagValue}
       outputs:
         - Name: Payload
           Selector: $.Payload
           Type: StringMap
         - Name: primaryPatchGroupValue
           Selector: $.Payload.tagValue
           Type: String
       nextStep: patchPrimaryInstances
     - name: patchPrimaryInstances
       action: 'aws:runCommand'
       onFailure: Abort
       timeoutSeconds: 7200
       inputs:
         DocumentName: AWS-RunPatchBaseline
         Parameters:
           SnapshotId: '{{SnapshotId}}'
           RebootOption: '{{RebootOption}}'
           Operation: '{{Operation}}'
         Targets:
           - Key: '{{returnPrimaryTagKey.primaryPatchGroupKey}}'
             Values:
               - '{{returnPrimaryTagValue.primaryPatchGroupValue}}'
         MaxConcurrency: 10%
         MaxErrors: 10%
       nextStep: returnPrimaryToOriginalState
   ```

------
#### [ JSON ]

   ```
   {
            "name":"returnPrimaryTagKey",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnTagValues",
               "InputPayload":{
                  "primaryTag":"{{PrimaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"Payload",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               },
               {
                  "Name":"primaryPatchGroupKey",
                  "Selector":"$.Payload.tagKey",
                  "Type":"String"
               }
            ],
            "nextStep":"returnPrimaryTagValue"
         },
         {
            "name":"returnPrimaryTagValue",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnTagValues",
               "InputPayload":{
                  "primaryTag":"{{PrimaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"Payload",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               },
               {
                  "Name":"primaryPatchGroupValue",
                  "Selector":"$.Payload.tagValue",
                  "Type":"String"
               }
            ],
            "nextStep":"patchPrimaryInstances"
         },
         {
            "name":"patchPrimaryInstances",
            "action":"aws:runCommand",
            "onFailure":"Abort",
            "timeoutSeconds":7200,
            "inputs":{
               "DocumentName":"AWS-RunPatchBaseline",
               "Parameters":{
                  "SnapshotId":"{{SnapshotId}}",
                  "RebootOption":"{{RebootOption}}",
                  "Operation":"{{Operation}}"
               },
               "Targets":[
                  {
                     "Key":"{{returnPrimaryTagKey.primaryPatchGroupKey}}",
                     "Values":[
                        "{{returnPrimaryTagValue.primaryPatchGroupValue}}"
                     ]
                  }
               ],
               "MaxConcurrency":"10%",
               "MaxErrors":"10%"
            },
            "nextStep":"returnPrimaryToOriginalState"
         },
   ```

------
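Because the two tag helpers contain no AWS calls, they can be exercised as plain functions to confirm the transformation from the `StringMap` parameter to the Run Command target key and value. The function names and the sample tag below are illustrative:

```python
# returnTagValues as used by the returnPrimaryTagKey step
def return_tag_key(tag):
    tagKey = list(tag)[0]
    return {'tagKey': 'tag:' + tagKey}

# returnTagValues as used by the returnPrimaryTagValue step
def return_tag_value(tag):
    tagKey = list(tag)[0]
    return {'tagValue': tag[tagKey]}

tag = {'Environment': 'Production'}
print(return_tag_key(tag))    # {'tagKey': 'tag:Environment'}
print(return_tag_value(tag))  # {'tagValue': 'Production'}
```

With this sample tag, the `aws:runCommand` action would target `Key: tag:Environment` with `Values: Production`.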

1. After the patching operation completes, Emily wants the automation to return the target instances associated with the tag specified in the `PrimaryPatchGroupTag` parameter to the state they were in before the automation started. She does this by again using the output from the first action in a script. If a target instance was originally `stopped` or `stopping`, the instance is stopped again. Instances that were already `running` are left as they are, and the script continues to loop through the remaining instances.

------
#### [ YAML ]

   ```
     - name: returnPrimaryToOriginalState
       action: 'aws:executeScript'
       timeoutSeconds: 600
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnToOriginalState
         InputPayload:
           targetInstances: '{{getPrimaryInstanceState.originalInstanceStates}}'
         Script: |-
           def returnToOriginalState(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
               if instanceDict[instance] == 'stopped' or instanceDict[instance] == 'stopping':
                   ec2.stop_instances(
                       InstanceIds=[instance]
                       )
               else:
                 pass
       nextStep: getSecondaryInstanceState
   ```

------
#### [ JSON ]

   ```
   {
            "name":"returnPrimaryToOriginalState",
            "action":"aws:executeScript",
            "timeoutSeconds":600,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnToOriginalState",
               "InputPayload":{
                  "targetInstances":"{{getPrimaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            },
            "nextStep":"getSecondaryInstanceState"
         },
   ```

------
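The selection made by `returnToOriginalState` can be checked locally by asking which instances it would stop, given the state map captured by the first action. The function name below is illustrative, not part of the runbook:

```python
# Instances that returnToOriginalState stops: those whose original state
# was 'stopped' or 'stopping'; everything else is left untouched.
def instances_to_stop(original_states):
    return [instance for instance, state in original_states.items()
            if state in ('stopped', 'stopping')]

states = {'i-0aaa': 'running', 'i-0bbb': 'stopped', 'i-0ccc': 'stopping'}
print(instances_to_stop(states))  # ['i-0bbb', 'i-0ccc']
```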

1. The patching operation is completed for the instances associated with the tag specified in the `PrimaryPatchGroupTag` parameter. Now Emily duplicates all of the previous actions in her runbook content to target the instances associated with the tag specified in the `SecondaryPatchGroupTag` parameter.

------
#### [ YAML ]

   ```
     - name: getSecondaryInstanceState
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: getInstanceStates
         InputPayload:
           secondaryTag: '{{SecondaryPatchGroupTag}}'
         Script: |-
           def getInstanceStates(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             tag = events['secondaryTag']
             tagKey, tagValue = list(tag.items())[0]
             instanceQuery = ec2.describe_instances(
             Filters=[
                 {
                     "Name": "tag:" + tagKey,
                     "Values": [tagValue]
                 }]
             )
             if not instanceQuery['Reservations']:
                 noInstancesForTagString = "No instances found for specified tag."
                 return({ 'noInstancesFound' : noInstancesForTagString })
             else:
                 queryResponse = instanceQuery['Reservations']
                 originalInstanceStates = {}
                 for results in queryResponse:
                     instanceSet = results['Instances']
                     for instance in instanceSet:
                         instanceId = instance['InstanceId']
                         originalInstanceStates[instanceId] = instance['State']['Name']
                 return originalInstanceStates
       outputs:
         - Name: originalInstanceStates
           Selector: $.Payload
           Type: StringMap
       nextStep: verifySecondaryInstancesRunning
     - name: verifySecondaryInstancesRunning
       action: 'aws:executeScript'
       timeoutSeconds: 600
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: verifyInstancesRunning
         InputPayload:
           targetInstances: '{{getSecondaryInstanceState.originalInstanceStates}}'
         Script: |-
           def verifyInstancesRunning(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
               if instanceDict[instance] == 'stopped':
                   print("The target instance " + instance + " is stopped. The instance will now be started.")
                   ec2.start_instances(
                       InstanceIds=[instance]
                       )
               elif instanceDict[instance] == 'stopping':
                   print("The target instance " + instance + " is stopping. Polling for instance to reach stopped state.")
                    # The instance_stopped waiter blocks until the instance has stopped
                    poll = ec2.get_waiter('instance_stopped')
                    poll.wait(
                        InstanceIds=[instance]
                    )
                   ec2.start_instances(
                       InstanceIds=[instance]
                   )
               else:
                 pass
       nextStep: waitForSecondaryRunningInstances
     - name: waitForSecondaryRunningInstances
       action: 'aws:executeScript'
       timeoutSeconds: 300
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: waitForRunningInstances
         InputPayload:
           targetInstances: '{{getSecondaryInstanceState.originalInstanceStates}}'
         Script: |-
           def waitForRunningInstances(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
                 poll = ec2.get_waiter('instance_running')
                 poll.wait(
                     InstanceIds=[instance]
                 )
       nextStep: returnSecondaryTagKey
     - name: returnSecondaryTagKey
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnTagValues
         InputPayload:
           secondaryTag: '{{SecondaryPatchGroupTag}}'
         Script: |-
           def returnTagValues(events,context):
             tag = events['secondaryTag']
             tagKey = list(tag)[0]
             stringKey = "tag:" + tagKey
             return {'tagKey' : stringKey}
       outputs:
         - Name: Payload
           Selector: $.Payload
           Type: StringMap
         - Name: secondaryPatchGroupKey
           Selector: $.Payload.tagKey
           Type: String
       nextStep: returnSecondaryTagValue
     - name: returnSecondaryTagValue
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnTagValues
         InputPayload:
           secondaryTag: '{{SecondaryPatchGroupTag}}'
         Script: |-
           def returnTagValues(events,context):
             tag = events['secondaryTag']
             tagKey = list(tag)[0]
             tagValue = tag[tagKey]
             return {'tagValue' : tagValue}
       outputs:
         - Name: Payload
           Selector: $.Payload
           Type: StringMap
         - Name: secondaryPatchGroupValue
           Selector: $.Payload.tagValue
           Type: String
       nextStep: patchSecondaryInstances
     - name: patchSecondaryInstances
       action: 'aws:runCommand'
       onFailure: Abort
       timeoutSeconds: 7200
       inputs:
         DocumentName: AWS-RunPatchBaseline
         Parameters:
           SnapshotId: '{{SnapshotId}}'
           RebootOption: '{{RebootOption}}'
           Operation: '{{Operation}}'
         Targets:
           - Key: '{{returnSecondaryTagKey.secondaryPatchGroupKey}}'
             Values:
             - '{{returnSecondaryTagValue.secondaryPatchGroupValue}}'
         MaxConcurrency: 10%
         MaxErrors: 10%
       nextStep: returnSecondaryToOriginalState
     - name: returnSecondaryToOriginalState
       action: 'aws:executeScript'
       timeoutSeconds: 600
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnToOriginalState
         InputPayload:
           targetInstances: '{{getSecondaryInstanceState.originalInstanceStates}}'
         Script: |-
           def returnToOriginalState(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
               if instanceDict[instance] == 'stopped' or instanceDict[instance] == 'stopping':
                   ec2.stop_instances(
                       InstanceIds=[instance]
                       )
               else:
                 pass
   ```

------
#### [ JSON ]

   ```
   {
            "name":"getSecondaryInstanceState",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"getInstanceStates",
               "InputPayload":{
                  "secondaryTag":"{{SecondaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"originalInstanceStates",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               }
            ],
            "nextStep":"verifySecondaryInstancesRunning"
         },
         {
            "name":"verifySecondaryInstancesRunning",
            "action":"aws:executeScript",
            "timeoutSeconds":600,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"verifyInstancesRunning",
               "InputPayload":{
                  "targetInstances":"{{getSecondaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            },
            "nextStep":"waitForSecondaryRunningInstances"
         },
         {
            "name":"waitForSecondaryRunningInstances",
            "action":"aws:executeScript",
            "timeoutSeconds":300,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"waitForRunningInstances",
               "InputPayload":{
                  "targetInstances":"{{getSecondaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            },
            "nextStep":"returnSecondaryTagKey"
         },
         {
            "name":"returnSecondaryTagKey",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnTagValues",
               "InputPayload":{
                  "secondaryTag":"{{SecondaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"Payload",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               },
               {
                  "Name":"secondaryPatchGroupKey",
                  "Selector":"$.Payload.tagKey",
                  "Type":"String"
               }
            ],
            "nextStep":"returnSecondaryTagValue"
         },
         {
            "name":"returnSecondaryTagValue",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnTagValues",
               "InputPayload":{
                  "secondaryTag":"{{SecondaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"Payload",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               },
               {
                  "Name":"secondaryPatchGroupValue",
                  "Selector":"$.Payload.tagValue",
                  "Type":"String"
               }
            ],
            "nextStep":"patchSecondaryInstances"
         },
         {
            "name":"patchSecondaryInstances",
            "action":"aws:runCommand",
            "onFailure":"Abort",
            "timeoutSeconds":7200,
            "inputs":{
               "DocumentName":"AWS-RunPatchBaseline",
               "Parameters":{
                  "SnapshotId":"{{SnapshotId}}",
                  "RebootOption":"{{RebootOption}}",
                  "Operation":"{{Operation}}"
               },
               "Targets":[
                  {
                     "Key":"{{returnSecondaryTagKey.secondaryPatchGroupKey}}",
                     "Values":[
                        "{{returnSecondaryTagValue.secondaryPatchGroupValue}}"
                     ]
                  }
               ],
               "MaxConcurrency":"10%",
               "MaxErrors":"10%"
            },
            "nextStep":"returnSecondaryToOriginalState"
         },
         {
            "name":"returnSecondaryToOriginalState",
            "action":"aws:executeScript",
            "timeoutSeconds":600,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnToOriginalState",
               "InputPayload":{
                  "targetInstances":"{{getSecondaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            }
         }
      ]
   }
   ```

------

1. Emily reviews the completed runbook content and creates the runbook in the same AWS account and AWS Region as the target instances. Now she's ready to test her runbook to make sure the automation operates as desired before implementing it in her production environment. The following is the completed runbook content.

------
#### [ YAML ]

   ```
   description: An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.
   schemaVersion: '0.3'
   assumeRole: '{{AutomationAssumeRole}}'
   parameters:
     AutomationAssumeRole:
       type: String
       description: '(Required) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook.'
     PrimaryPatchGroupTag:
       type: StringMap
       description: '(Required) The tag for the primary group of instances you want to patch. Specify a key-value pair. Example: {"key" : "value"}'
     SecondaryPatchGroupTag:
       type: StringMap
       description: '(Required) The tag for the secondary group of instances you want to patch. Specify a key-value pair. Example: {"key" : "value"}'
     SnapshotId:
       type: String
       description: '(Optional) The snapshot ID to use to retrieve a patch baseline snapshot.'
       default: ''
     RebootOption:
       type: String
       description: '(Optional) Reboot behavior after a patch Install operation. If you choose NoReboot and patches are installed, the instance is marked as non-compliant until a subsequent reboot and scan.'
       allowedValues:
         - NoReboot
         - RebootIfNeeded
       default: RebootIfNeeded
     Operation:
       type: String
       description: '(Optional) The update or configuration to perform on the instance. The system checks if patches specified in the patch baseline are installed on the instance. The install operation installs patches missing from the baseline.'
       allowedValues:
         - Install
         - Scan
       default: Install
   mainSteps:
     - name: getPrimaryInstanceState
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: getInstanceStates
         InputPayload:
           primaryTag: '{{PrimaryPatchGroupTag}}'
         Script: |-
           def getInstanceStates(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             tag = events['primaryTag']
             tagKey, tagValue = list(tag.items())[0]
             instanceQuery = ec2.describe_instances(
             Filters=[
                 {
                     "Name": "tag:" + tagKey,
                     "Values": [tagValue]
                 }]
             )
             if not instanceQuery['Reservations']:
                 noInstancesForTagString = "No instances found for specified tag."
                 return({ 'noInstancesFound' : noInstancesForTagString })
             else:
                 queryResponse = instanceQuery['Reservations']
                 originalInstanceStates = {}
                 for results in queryResponse:
                     instanceSet = results['Instances']
                     for instance in instanceSet:
                         instanceId = instance['InstanceId']
                         originalInstanceStates[instanceId] = instance['State']['Name']
                 return originalInstanceStates
       outputs:
         - Name: originalInstanceStates
           Selector: $.Payload
           Type: StringMap
       nextStep: verifyPrimaryInstancesRunning
     - name: verifyPrimaryInstancesRunning
       action: 'aws:executeScript'
       timeoutSeconds: 600
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: verifyInstancesRunning
         InputPayload:
           targetInstances: '{{getPrimaryInstanceState.originalInstanceStates}}'
         Script: |-
           def verifyInstancesRunning(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
               if instanceDict[instance] == 'stopped':
                   print("The target instance " + instance + " is stopped. The instance will now be started.")
                   ec2.start_instances(
                       InstanceIds=[instance]
                       )
                elif instanceDict[instance] == 'stopping':
                    print("The target instance " + instance + " is stopping. Waiting for instance to reach stopped state.")
                    # The waiter polls until the instance is stopped; looping on the
                    # stale instanceDict entry here would never terminate.
                    poll = ec2.get_waiter('instance_stopped')
                    poll.wait(
                        InstanceIds=[instance]
                    )
                    ec2.start_instances(
                        InstanceIds=[instance]
                    )
                else:
                    pass
       nextStep: waitForPrimaryRunningInstances
     - name: waitForPrimaryRunningInstances
       action: 'aws:executeScript'
       timeoutSeconds: 300
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: waitForRunningInstances
         InputPayload:
           targetInstances: '{{getPrimaryInstanceState.originalInstanceStates}}'
         Script: |-
           def waitForRunningInstances(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
                 poll = ec2.get_waiter('instance_running')
                 poll.wait(
                     InstanceIds=[instance]
                 )
       nextStep: returnPrimaryTagKey
     - name: returnPrimaryTagKey
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnTagValues
         InputPayload:
           primaryTag: '{{PrimaryPatchGroupTag}}'
         Script: |-
           def returnTagValues(events,context):
             tag = events['primaryTag']
             tagKey = list(tag)[0]
             stringKey = "tag:" + tagKey
             return {'tagKey' : stringKey}
       outputs:
         - Name: Payload
           Selector: $.Payload
           Type: StringMap
         - Name: primaryPatchGroupKey
           Selector: $.Payload.tagKey
           Type: String
       nextStep: returnPrimaryTagValue
     - name: returnPrimaryTagValue
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnTagValues
         InputPayload:
           primaryTag: '{{PrimaryPatchGroupTag}}'
         Script: |-
           def returnTagValues(events,context):
             tag = events['primaryTag']
             tagKey = list(tag)[0]
             tagValue = tag[tagKey]
             return {'tagValue' : tagValue}
       outputs:
         - Name: Payload
           Selector: $.Payload
           Type: StringMap
         - Name: primaryPatchGroupValue
           Selector: $.Payload.tagValue
           Type: String
       nextStep: patchPrimaryInstances
     - name: patchPrimaryInstances
       action: 'aws:runCommand'
       onFailure: Abort
       timeoutSeconds: 7200
       inputs:
         DocumentName: AWS-RunPatchBaseline
         Parameters:
           SnapshotId: '{{SnapshotId}}'
           RebootOption: '{{RebootOption}}'
           Operation: '{{Operation}}'
         Targets:
           - Key: '{{returnPrimaryTagKey.primaryPatchGroupKey}}'
             Values:
               - '{{returnPrimaryTagValue.primaryPatchGroupValue}}'
         MaxConcurrency: 10%
         MaxErrors: 10%
       nextStep: returnPrimaryToOriginalState
     - name: returnPrimaryToOriginalState
       action: 'aws:executeScript'
       timeoutSeconds: 600
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnToOriginalState
         InputPayload:
           targetInstances: '{{getPrimaryInstanceState.originalInstanceStates}}'
         Script: |-
           def returnToOriginalState(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
               if instanceDict[instance] == 'stopped' or instanceDict[instance] == 'stopping':
                   ec2.stop_instances(
                       InstanceIds=[instance]
                       )
               else:
                 pass
       nextStep: getSecondaryInstanceState
     - name: getSecondaryInstanceState
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: getInstanceStates
         InputPayload:
           secondaryTag: '{{SecondaryPatchGroupTag}}'
         Script: |-
           def getInstanceStates(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             tag = events['secondaryTag']
             tagKey, tagValue = list(tag.items())[0]
             instanceQuery = ec2.describe_instances(
             Filters=[
                 {
                     "Name": "tag:" + tagKey,
                     "Values": [tagValue]
                 }]
             )
             if not instanceQuery['Reservations']:
                 noInstancesForTagString = "No instances found for specified tag."
                 return({ 'noInstancesFound' : noInstancesForTagString })
             else:
                 queryResponse = instanceQuery['Reservations']
                 originalInstanceStates = {}
                 for results in queryResponse:
                     instanceSet = results['Instances']
                     for instance in instanceSet:
                         instanceId = instance['InstanceId']
                         originalInstanceStates[instanceId] = instance['State']['Name']
                 return originalInstanceStates
       outputs:
         - Name: originalInstanceStates
           Selector: $.Payload
           Type: StringMap
       nextStep: verifySecondaryInstancesRunning
     - name: verifySecondaryInstancesRunning
       action: 'aws:executeScript'
       timeoutSeconds: 600
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: verifyInstancesRunning
         InputPayload:
           targetInstances: '{{getSecondaryInstanceState.originalInstanceStates}}'
         Script: |-
           def verifyInstancesRunning(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
               if instanceDict[instance] == 'stopped':
                   print("The target instance " + instance + " is stopped. The instance will now be started.")
                   ec2.start_instances(
                       InstanceIds=[instance]
                       )
                elif instanceDict[instance] == 'stopping':
                    print("The target instance " + instance + " is stopping. Waiting for instance to reach stopped state.")
                    # The waiter polls until the instance is stopped; looping on the
                    # stale instanceDict entry here would never terminate.
                    poll = ec2.get_waiter('instance_stopped')
                    poll.wait(
                        InstanceIds=[instance]
                    )
                    ec2.start_instances(
                        InstanceIds=[instance]
                    )
                else:
                    pass
       nextStep: waitForSecondaryRunningInstances
     - name: waitForSecondaryRunningInstances
       action: 'aws:executeScript'
       timeoutSeconds: 300
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: waitForRunningInstances
         InputPayload:
           targetInstances: '{{getSecondaryInstanceState.originalInstanceStates}}'
         Script: |-
           def waitForRunningInstances(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
                 poll = ec2.get_waiter('instance_running')
                 poll.wait(
                     InstanceIds=[instance]
                 )
       nextStep: returnSecondaryTagKey
     - name: returnSecondaryTagKey
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnTagValues
         InputPayload:
           secondaryTag: '{{SecondaryPatchGroupTag}}'
         Script: |-
           def returnTagValues(events,context):
             tag = events['secondaryTag']
             tagKey = list(tag)[0]
             stringKey = "tag:" + tagKey
             return {'tagKey' : stringKey}
       outputs:
         - Name: Payload
           Selector: $.Payload
           Type: StringMap
         - Name: secondaryPatchGroupKey
           Selector: $.Payload.tagKey
           Type: String
       nextStep: returnSecondaryTagValue
     - name: returnSecondaryTagValue
       action: 'aws:executeScript'
       timeoutSeconds: 120
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnTagValues
         InputPayload:
           secondaryTag: '{{SecondaryPatchGroupTag}}'
         Script: |-
           def returnTagValues(events,context):
             tag = events['secondaryTag']
             tagKey = list(tag)[0]
             tagValue = tag[tagKey]
             return {'tagValue' : tagValue}
       outputs:
         - Name: Payload
           Selector: $.Payload
           Type: StringMap
         - Name: secondaryPatchGroupValue
           Selector: $.Payload.tagValue
           Type: String
       nextStep: patchSecondaryInstances
     - name: patchSecondaryInstances
       action: 'aws:runCommand'
       onFailure: Abort
       timeoutSeconds: 7200
       inputs:
         DocumentName: AWS-RunPatchBaseline
         Parameters:
           SnapshotId: '{{SnapshotId}}'
           RebootOption: '{{RebootOption}}'
           Operation: '{{Operation}}'
         Targets:
           - Key: '{{returnSecondaryTagKey.secondaryPatchGroupKey}}'
             Values:
             - '{{returnSecondaryTagValue.secondaryPatchGroupValue}}'
         MaxConcurrency: 10%
         MaxErrors: 10%
       nextStep: returnSecondaryToOriginalState
     - name: returnSecondaryToOriginalState
       action: 'aws:executeScript'
       timeoutSeconds: 600
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: returnToOriginalState
         InputPayload:
           targetInstances: '{{getSecondaryInstanceState.originalInstanceStates}}'
         Script: |-
           def returnToOriginalState(events,context):
             import boto3
   
             #Initialize client
             ec2 = boto3.client('ec2')
             instanceDict = events['targetInstances']
             for instance in instanceDict:
               if instanceDict[instance] == 'stopped' or instanceDict[instance] == 'stopping':
                   ec2.stop_instances(
                       InstanceIds=[instance]
                       )
               else:
                 pass
   ```

------
#### [ JSON ]

   ```
   {
      "description":"An example of an Automation runbook that patches groups of Amazon EC2 instances in stages.",
      "schemaVersion":"0.3",
      "assumeRole":"{{AutomationAssumeRole}}",
      "parameters":{
         "AutomationAssumeRole":{
            "type":"String",
            "description":"(Required) The Amazon Resource Name (ARN) of the IAM role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to operate this runbook."
         },
         "PrimaryPatchGroupTag":{
            "type":"StringMap",
            "description":"(Required) The tag for the primary group of instances you want to patch. Specify a key-value pair. Example: {\"key\" : \"value\"}"
         },
         "SecondaryPatchGroupTag":{
            "type":"StringMap",
            "description":"(Required) The tag for the secondary group of instances you want to patch. Specify a key-value pair. Example: {\"key\" : \"value\"}"
         },
         "SnapshotId":{
            "type":"String",
            "description":"(Optional) The snapshot ID to use to retrieve a patch baseline snapshot.",
            "default":""
         },
         "RebootOption":{
            "type":"String",
            "description":"(Optional) Reboot behavior after a patch Install operation. If you choose NoReboot and patches are installed, the instance is marked as non-compliant until a subsequent reboot and scan.",
            "allowedValues":[
               "NoReboot",
               "RebootIfNeeded"
            ],
            "default":"RebootIfNeeded"
         },
         "Operation":{
            "type":"String",
            "description":"(Optional) The update or configuration to perform on the instance. The system checks if patches specified in the patch baseline are installed on the instance. The install operation installs patches missing from the baseline.",
            "allowedValues":[
               "Install",
               "Scan"
            ],
            "default":"Install"
         }
      },
      "mainSteps":[
         {
            "name":"getPrimaryInstanceState",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"getInstanceStates",
               "InputPayload":{
                  "primaryTag":"{{PrimaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"originalInstanceStates",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               }
            ],
            "nextStep":"verifyPrimaryInstancesRunning"
         },
         {
            "name":"verifyPrimaryInstancesRunning",
            "action":"aws:executeScript",
            "timeoutSeconds":600,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"verifyInstancesRunning",
               "InputPayload":{
                  "targetInstances":"{{getPrimaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            },
            "nextStep":"waitForPrimaryRunningInstances"
         },
         {
            "name":"waitForPrimaryRunningInstances",
            "action":"aws:executeScript",
            "timeoutSeconds":300,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"waitForRunningInstances",
               "InputPayload":{
                  "targetInstances":"{{getPrimaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            },
            "nextStep":"returnPrimaryTagKey"
         },
         {
            "name":"returnPrimaryTagKey",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnTagValues",
               "InputPayload":{
                  "primaryTag":"{{PrimaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"Payload",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               },
               {
                  "Name":"primaryPatchGroupKey",
                  "Selector":"$.Payload.tagKey",
                  "Type":"String"
               }
            ],
            "nextStep":"returnPrimaryTagValue"
         },
         {
            "name":"returnPrimaryTagValue",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnTagValues",
               "InputPayload":{
                  "primaryTag":"{{PrimaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"Payload",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               },
               {
                  "Name":"primaryPatchGroupValue",
                  "Selector":"$.Payload.tagValue",
                  "Type":"String"
               }
            ],
            "nextStep":"patchPrimaryInstances"
         },
         {
            "name":"patchPrimaryInstances",
            "action":"aws:runCommand",
            "onFailure":"Abort",
            "timeoutSeconds":7200,
            "inputs":{
               "DocumentName":"AWS-RunPatchBaseline",
               "Parameters":{
                  "SnapshotId":"{{SnapshotId}}",
                  "RebootOption":"{{RebootOption}}",
                  "Operation":"{{Operation}}"
               },
               "Targets":[
                  {
                     "Key":"{{returnPrimaryTagKey.primaryPatchGroupKey}}",
                     "Values":[
                        "{{returnPrimaryTagValue.primaryPatchGroupValue}}"
                     ]
                  }
               ],
               "MaxConcurrency":"10%",
               "MaxErrors":"10%"
            },
            "nextStep":"returnPrimaryToOriginalState"
         },
         {
            "name":"returnPrimaryToOriginalState",
            "action":"aws:executeScript",
            "timeoutSeconds":600,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnToOriginalState",
               "InputPayload":{
                  "targetInstances":"{{getPrimaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            },
            "nextStep":"getSecondaryInstanceState"
         },
         {
            "name":"getSecondaryInstanceState",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"getInstanceStates",
               "InputPayload":{
                  "secondaryTag":"{{SecondaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"originalInstanceStates",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               }
            ],
            "nextStep":"verifySecondaryInstancesRunning"
         },
         {
            "name":"verifySecondaryInstancesRunning",
            "action":"aws:executeScript",
            "timeoutSeconds":600,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"verifyInstancesRunning",
               "InputPayload":{
                  "targetInstances":"{{getSecondaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            },
            "nextStep":"waitForSecondaryRunningInstances"
         },
         {
            "name":"waitForSecondaryRunningInstances",
            "action":"aws:executeScript",
            "timeoutSeconds":300,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"waitForRunningInstances",
               "InputPayload":{
                  "targetInstances":"{{getSecondaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            },
            "nextStep":"returnSecondaryTagKey"
         },
         {
            "name":"returnSecondaryTagKey",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnTagValues",
               "InputPayload":{
                  "secondaryTag":"{{SecondaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"Payload",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               },
               {
                  "Name":"secondaryPatchGroupKey",
                  "Selector":"$.Payload.tagKey",
                  "Type":"String"
               }
            ],
            "nextStep":"returnSecondaryTagValue"
         },
         {
            "name":"returnSecondaryTagValue",
            "action":"aws:executeScript",
            "timeoutSeconds":120,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnTagValues",
               "InputPayload":{
                  "secondaryTag":"{{SecondaryPatchGroupTag}}"
               },
               "Script":"..."
            },
            "outputs":[
               {
                  "Name":"Payload",
                  "Selector":"$.Payload",
                  "Type":"StringMap"
               },
               {
                  "Name":"secondaryPatchGroupValue",
                  "Selector":"$.Payload.tagValue",
                  "Type":"String"
               }
            ],
            "nextStep":"patchSecondaryInstances"
         },
         {
            "name":"patchSecondaryInstances",
            "action":"aws:runCommand",
            "onFailure":"Abort",
            "timeoutSeconds":7200,
            "inputs":{
               "DocumentName":"AWS-RunPatchBaseline",
               "Parameters":{
                  "SnapshotId":"{{SnapshotId}}",
                  "RebootOption":"{{RebootOption}}",
                  "Operation":"{{Operation}}"
               },
               "Targets":[
                  {
                     "Key":"{{returnSecondaryTagKey.secondaryPatchGroupKey}}",
                     "Values":[
                        "{{returnSecondaryTagValue.secondaryPatchGroupValue}}"
                     ]
                  }
               ],
               "MaxConcurrency":"10%",
               "MaxErrors":"10%"
            },
            "nextStep":"returnSecondaryToOriginalState"
         },
         {
            "name":"returnSecondaryToOriginalState",
            "action":"aws:executeScript",
            "timeoutSeconds":600,
            "onFailure":"Abort",
            "inputs":{
               "Runtime":"python3.11",
               "Handler":"returnToOriginalState",
               "InputPayload":{
                  "targetInstances":"{{getSecondaryInstanceState.originalInstanceStates}}"
               },
               "Script":"..."
            }
         }
      ]
   }
   ```

------

For more information about the automation actions used in this example, see the [Systems Manager Automation actions reference](automation-actions.md).
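You can start an execution of a runbook like this one with the AWS SDK instead of the console. The following sketch uses boto3 and assumes hypothetical values: `PatchGroupsInStages` as the name the runbook is registered under, `Patch Group` tag pairs, and a placeholder role ARN. Note that Automation parameters are passed as lists of strings, so `StringMap` parameters such as `PrimaryPatchGroupTag` are serialized as JSON strings.

```python
import json

def build_patch_params(role_arn, primary_tag, secondary_tag):
    # Each Automation parameter value is a list of strings;
    # StringMap parameters are passed as serialized JSON.
    return {
        "AutomationAssumeRole": [role_arn],
        "PrimaryPatchGroupTag": [json.dumps(primary_tag)],
        "SecondaryPatchGroupTag": [json.dumps(secondary_tag)],
        "Operation": ["Scan"],
    }

def start_staged_patching(params):
    # Requires AWS credentials; 'PatchGroupsInStages' is a hypothetical
    # name for the runbook shown above.
    import boto3
    ssm = boto3.client("ssm")
    response = ssm.start_automation_execution(
        DocumentName="PatchGroupsInStages",
        Parameters=params,
    )
    return response["AutomationExecutionId"]

params = build_patch_params(
    "arn:aws:iam::123456789012:role/AutomationServiceRole",
    {"Patch Group": "primary-group"},
    {"Patch Group": "secondary-group"},
)
```

Starting with `Operation` set to `Scan` lets you verify the staged targeting before running an actual `Install` operation.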

# Additional runbook examples


The following example runbooks demonstrate how you can use AWS Systems Manager automation actions to automate common deployment, troubleshooting, and maintenance tasks.

**Note**  
The example runbooks in this section are provided to demonstrate how you can create custom runbooks to support your specific operational needs. These runbooks aren't meant for use in production environments as is. However, you can customize them for your own use.

**Topics**
+ [Deploy VPC architecture and Microsoft Active Directory domain controllers](automation-document-architecture-deployment-example.md)
+ [Restore a root volume from the latest snapshot](automation-document-instance-recovery-example.md)
+ [Create an AMI and cross-Region copy](automation-document-backup-maintenance-example.md)

# Deploy VPC architecture and Microsoft Active Directory domain controllers


To increase efficiency and standardize common tasks, you might choose to automate deployments. This is useful if you regularly deploy the same architecture across multiple accounts and AWS Regions. Automating architecture deployments can also reduce the potential for human error that can occur when deploying architecture manually. AWS Systems Manager Automation actions can help you accomplish this. Automation is a tool in AWS Systems Manager.

The following example AWS Systems Manager runbook performs these actions:
+ Retrieves the latest Windows Server 2016 Amazon Machine Image (AMI) ID from Systems Manager Parameter Store to use when launching the EC2 instances that will be configured as domain controllers. Parameter Store is a tool in AWS Systems Manager.
+ Uses the `aws:executeAwsApi` automation action to call several AWS API operations to create the VPC architecture. The domain controller instances are launched in private subnets, and connect to the internet using a NAT gateway. This allows the SSM Agent on the instances to access the requisite Systems Manager endpoints.
+ Uses the `aws:waitForAwsResourceProperty` automation action to confirm that the instances launched by the previous action are reported as `Online` by AWS Systems Manager.
+ Uses the `aws:runCommand` automation action to configure the instances launched as Microsoft Active Directory domain controllers.
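The AMI lookup in the first step can also be performed directly with the SDK. This minimal sketch, assuming default AWS credentials and Region, reads the same public Parameter Store path the runbook queries:

```python
# Public parameter maintained by AWS; its value is the current AMI ID
# for this Windows Server build in the calling Region.
AMI_PARAM = "/aws/service/ami-windows-latest/Windows_Server-2016-English-Full-Base"

def latest_windows_ami(param_name=AMI_PARAM):
    # Requires AWS credentials; returns the AMI ID string, e.g. "ami-..."
    import boto3
    ssm = boto3.client("ssm")
    return ssm.get_parameter(Name=param_name)["Parameter"]["Value"]
```

Because the parameter is resolved at run time, the runbook always launches the most recent AMI without hardcoding an ID per Region.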

------
#### [ YAML ]

```
    ---
    description: Custom Automation Deployment Example
    schemaVersion: '0.3'
    parameters:
      AutomationAssumeRole:
        type: String
        default: ''
        description: >-
          (Optional) The ARN of the role that allows Automation to perform the
          actions on your behalf. If no role is specified, Systems Manager
          Automation uses your IAM permissions to run this runbook.
    mainSteps:
      - name: getLatestWindowsAmi
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ssm
          Api: GetParameter
          Name: >-
            /aws/service/ami-windows-latest/Windows_Server-2016-English-Full-Base
        outputs:
          - Name: amiId
            Selector: $.Parameter.Value
            Type: String
        nextStep: createSSMInstanceRole
      - name: createSSMInstanceRole
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: iam
          Api: CreateRole
          AssumeRolePolicyDocument: >-
            {"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":["ec2.amazonaws.com"]},"Action":["sts:AssumeRole"]}]}
          RoleName: sampleSSMInstanceRole
        nextStep: attachManagedSSMPolicy
      - name: attachManagedSSMPolicy
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: iam
          Api: AttachRolePolicy
          PolicyArn: 'arn:aws:iam::aws:policy/service-role/AmazonSSMManagedInstanceCore'
          RoleName: sampleSSMInstanceRole
        nextStep: createSSMInstanceProfile
      - name: createSSMInstanceProfile
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: iam
          Api: CreateInstanceProfile
          InstanceProfileName: sampleSSMInstanceRole
        outputs:
          - Name: instanceProfileArn
            Selector: $.InstanceProfile.Arn
            Type: String
        nextStep: addSSMInstanceRoleToProfile
      - name: addSSMInstanceRoleToProfile
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: iam
          Api: AddRoleToInstanceProfile
          InstanceProfileName: sampleSSMInstanceRole
          RoleName: sampleSSMInstanceRole
        nextStep: createVpc
      - name: createVpc
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: CreateVpc
          CidrBlock: 10.0.100.0/22
        outputs:
          - Name: vpcId
            Selector: $.Vpc.VpcId
            Type: String
        nextStep: getMainRtb
      - name: getMainRtb
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: DescribeRouteTables
          Filters:
            - Name: vpc-id
              Values:
                - '{{ createVpc.vpcId }}'
        outputs:
          - Name: mainRtbId
            Selector: '$.RouteTables[0].RouteTableId'
            Type: String
        nextStep: verifyMainRtb
      - name: verifyMainRtb
        action: aws:assertAwsResourceProperty
        onFailure: Abort
        inputs:
          Service: ec2
          Api: DescribeRouteTables
          RouteTableIds:
            - '{{ getMainRtb.mainRtbId }}'
          PropertySelector: '$.RouteTables[0].Associations[0].Main'
          DesiredValues:
            - 'True'
        nextStep: createPubSubnet
      - name: createPubSubnet
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: CreateSubnet
          CidrBlock: 10.0.103.0/24
          AvailabilityZone: us-west-2c
          VpcId: '{{ createVpc.vpcId }}'
        outputs:
          - Name: pubSubnetId
            Selector: $.Subnet.SubnetId
            Type: String
        nextStep: createPubRtb
      - name: createPubRtb
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: CreateRouteTable
          VpcId: '{{ createVpc.vpcId }}'
        outputs:
          - Name: pubRtbId
            Selector: $.RouteTable.RouteTableId
            Type: String
        nextStep: createIgw
      - name: createIgw
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: CreateInternetGateway
        outputs:
          - Name: igwId
            Selector: $.InternetGateway.InternetGatewayId
            Type: String
        nextStep: attachIgw
      - name: attachIgw
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: AttachInternetGateway
          InternetGatewayId: '{{ createIgw.igwId }}'
          VpcId: '{{ createVpc.vpcId }}'
        nextStep: allocateEip
      - name: allocateEip
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: AllocateAddress
          Domain: vpc
        outputs:
          - Name: eipAllocationId
            Selector: $.AllocationId
            Type: String
        nextStep: createNatGw
      - name: createNatGw
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: CreateNatGateway
          AllocationId: '{{ allocateEip.eipAllocationId }}'
          SubnetId: '{{ createPubSubnet.pubSubnetId }}'
        outputs:
          - Name: natGwId
            Selector: $.NatGateway.NatGatewayId
            Type: String
        nextStep: verifyNatGwAvailable
      - name: verifyNatGwAvailable
        action: aws:waitForAwsResourceProperty
        timeoutSeconds: 150
        inputs:
          Service: ec2
          Api: DescribeNatGateways
          NatGatewayIds:
            - '{{ createNatGw.natGwId }}'
          PropertySelector: '$.NatGateways[0].State'
          DesiredValues:
            - available
        nextStep: createNatRoute
      - name: createNatRoute
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: CreateRoute
          DestinationCidrBlock: 0.0.0.0/0
          NatGatewayId: '{{ createNatGw.natGwId }}'
          RouteTableId: '{{ getMainRtb.mainRtbId }}'
        nextStep: createPubRoute
      - name: createPubRoute
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: CreateRoute
          DestinationCidrBlock: 0.0.0.0/0
          GatewayId: '{{ createIgw.igwId }}'
          RouteTableId: '{{ createPubRtb.pubRtbId }}'
        nextStep: setPubSubAssoc
      - name: setPubSubAssoc
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: AssociateRouteTable
          RouteTableId: '{{ createPubRtb.pubRtbId }}'
          SubnetId: '{{ createPubSubnet.pubSubnetId }}'
      - name: createDhcpOptions
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: CreateDhcpOptions
          DhcpConfigurations:
            - Key: domain-name-servers
              Values:
                - '10.0.100.50,10.0.101.50'
            - Key: domain-name
              Values:
                - sample.com
        outputs:
          - Name: dhcpOptionsId
            Selector: $.DhcpOptions.DhcpOptionsId
            Type: String
        nextStep: createDCSubnet1
      - name: createDCSubnet1
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: CreateSubnet
          CidrBlock: 10.0.100.0/24
          AvailabilityZone: us-west-2a
          VpcId: '{{ createVpc.vpcId }}'
        outputs:
          - Name: firstSubnetId
            Selector: $.Subnet.SubnetId
            Type: String
        nextStep: createDCSubnet2
      - name: createDCSubnet2
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: CreateSubnet
          CidrBlock: 10.0.101.0/24
          AvailabilityZone: us-west-2b
          VpcId: '{{ createVpc.vpcId }}'
        outputs:
          - Name: secondSubnetId
            Selector: $.Subnet.SubnetId
            Type: String
        nextStep: createDCSecGroup
      - name: createDCSecGroup
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: CreateSecurityGroup
          GroupName: SampleDCSecGroup
          Description: Security Group for Sample Domain Controllers
          VpcId: '{{ createVpc.vpcId }}'
        outputs:
          - Name: dcSecGroupId
            Selector: $.GroupId
            Type: String
        nextStep: authIngressDCTraffic
      - name: authIngressDCTraffic
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: AuthorizeSecurityGroupIngress
          GroupId: '{{ createDCSecGroup.dcSecGroupId }}'
          IpPermissions:
            - FromPort: -1
              IpProtocol: '-1'
              IpRanges:
                - CidrIp: 0.0.0.0/0
                  Description: Allow all traffic between Domain Controllers
        nextStep: verifyInstanceProfile
      - name: verifyInstanceProfile
        action: aws:waitForAwsResourceProperty
        maxAttempts: 5
        onFailure: Abort
        inputs:
          Service: iam
          Api: ListInstanceProfilesForRole
          RoleName: sampleSSMInstanceRole
          PropertySelector: '$.InstanceProfiles[0].Arn'
          DesiredValues:
            - '{{ createSSMInstanceProfile.instanceProfileArn }}'
        nextStep: iamEventualConsistency
      - name: iamEventualConsistency
        action: aws:sleep
        inputs:
          Duration: PT2M
        nextStep: launchDC1
      - name: launchDC1
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: RunInstances
          BlockDeviceMappings:
            - DeviceName: /dev/sda1
              Ebs:
                DeleteOnTermination: true
                VolumeSize: 50
                VolumeType: gp2
            - DeviceName: xvdf
              Ebs:
                DeleteOnTermination: true
                VolumeSize: 100
                VolumeType: gp2
          IamInstanceProfile:
            Arn: '{{ createSSMInstanceProfile.instanceProfileArn }}'
          ImageId: '{{ getLatestWindowsAmi.amiId }}'
          InstanceType: t2.micro
          MaxCount: 1
          MinCount: 1
          PrivateIpAddress: 10.0.100.50
          SecurityGroupIds:
            - '{{ createDCSecGroup.dcSecGroupId }}'
          SubnetId: '{{ createDCSubnet1.firstSubnetId }}'
          TagSpecifications:
            - ResourceType: instance
              Tags:
                - Key: Name
                  Value: SampleDC1
        outputs:
          - Name: pdcInstanceId
            Selector: '$.Instances[0].InstanceId'
            Type: String
        nextStep: launchDC2
      - name: launchDC2
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: RunInstances
          BlockDeviceMappings:
            - DeviceName: /dev/sda1
              Ebs:
                DeleteOnTermination: true
                VolumeSize: 50
                VolumeType: gp2
            - DeviceName: xvdf
              Ebs:
                DeleteOnTermination: true
                VolumeSize: 100
                VolumeType: gp2
          IamInstanceProfile:
            Arn: '{{ createSSMInstanceProfile.instanceProfileArn }}'
          ImageId: '{{ getLatestWindowsAmi.amiId }}'
          InstanceType: t2.micro
          MaxCount: 1
          MinCount: 1
          PrivateIpAddress: 10.0.101.50
          SecurityGroupIds:
            - '{{ createDCSecGroup.dcSecGroupId }}'
          SubnetId: '{{ createDCSubnet2.secondSubnetId }}'
          TagSpecifications:
            - ResourceType: instance
              Tags:
                - Key: Name
                  Value: SampleDC2
        outputs:
          - Name: adcInstanceId
            Selector: '$.Instances[0].InstanceId'
            Type: String
        nextStep: verifyDCInstanceState
      - name: verifyDCInstanceState
        action: aws:waitForAwsResourceProperty
        inputs:
          Service: ec2
          Api: DescribeInstanceStatus
          IncludeAllInstances: true
          InstanceIds:
            - '{{ launchDC1.pdcInstanceId }}'
            - '{{ launchDC2.adcInstanceId }}'
          PropertySelector: '$.InstanceStatuses..InstanceState.Name'
          DesiredValues:
            - running
        nextStep: verifyInstancesOnlineSSM
      - name: verifyInstancesOnlineSSM
        action: aws:waitForAwsResourceProperty
        timeoutSeconds: 600
        inputs:
          Service: ssm
          Api: DescribeInstanceInformation
          InstanceInformationFilterList:
            - key: InstanceIds
              valueSet:
                - '{{ launchDC1.pdcInstanceId }}'
                - '{{ launchDC2.adcInstanceId }}'
          PropertySelector: '$.InstanceInformationList..PingStatus'
          DesiredValues:
            - Online
        nextStep: installADRoles
      - name: installADRoles
        action: aws:runCommand
        inputs:
          DocumentName: AWS-RunPowerShellScript
          InstanceIds:
            - '{{ launchDC1.pdcInstanceId }}'
            - '{{ launchDC2.adcInstanceId }}'
          Parameters:
            commands: |-
              try {
                  Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
              }
              catch {
                  Write-Error "Failed to install ADDS Role."
              }
        nextStep: setAdminPassword
      - name: setAdminPassword
        action: aws:runCommand
        inputs:
          DocumentName: AWS-RunPowerShellScript
          InstanceIds:
            - '{{ launchDC1.pdcInstanceId }}'
          Parameters:
            commands:
              - net user Administrator "sampleAdminPass123!"
        nextStep: createForest
      - name: createForest
        action: aws:runCommand
        inputs:
          DocumentName: AWS-RunPowerShellScript
          InstanceIds:
            - '{{ launchDC1.pdcInstanceId }}'
          Parameters:
            commands: |-
              $dsrmPass = 'sample123!' | ConvertTo-SecureString -asPlainText -Force
              try {
                  Install-ADDSForest -DomainName "sample.com" -DomainMode 6 -ForestMode 6 -InstallDNS -DatabasePath "D:\NTDS" -SysvolPath "D:\SYSVOL" -SafeModeAdministratorPassword $dsrmPass -Force
              }
              catch {
                  Write-Error $_
              }
              try {
                  Add-DnsServerForwarder -IPAddress "10.0.100.2"
              }
              catch {
                  Write-Error $_
              }
        nextStep: associateDhcpOptions
      - name: associateDhcpOptions
        action: aws:executeAwsApi
        onFailure: Abort
        inputs:
          Service: ec2
          Api: AssociateDhcpOptions
          DhcpOptionsId: '{{ createDhcpOptions.dhcpOptionsId }}'
          VpcId: '{{ createVpc.vpcId }}'
        nextStep: waitForADServices
      - name: waitForADServices
        action: aws:sleep
        inputs:
          Duration: PT1M
        nextStep: promoteADC
      - name: promoteADC
        action: aws:runCommand
        inputs:
          DocumentName: AWS-RunPowerShellScript
          InstanceIds:
            - '{{ launchDC2.adcInstanceId }}'
          Parameters:
            commands: |-
              ipconfig /renew
              $dsrmPass = 'sample123!' | ConvertTo-SecureString -asPlainText -Force
              $domAdminUser = "sample\Administrator"
              $domAdminPass = "sampleAdminPass123!" | ConvertTo-SecureString -asPlainText -Force
              $domAdminCred = New-Object System.Management.Automation.PSCredential($domAdminUser,$domAdminPass)
    
              try {
                  Install-ADDSDomainController -DomainName "sample.com" -InstallDNS -DatabasePath "D:\NTDS" -SysvolPath "D:\SYSVOL" -SafeModeAdministratorPassword $dsrmPass -Credential $domAdminCred -Force
              }
              catch {
                  Write-Error $_
              }
```

------
#### [ JSON ]

```
{
      "description": "Custom Automation Deployment Example",
      "schemaVersion": "0.3",
      "assumeRole": "{{ AutomationAssumeRole }}",
      "parameters": {
        "AutomationAssumeRole": {
          "type": "String",
          "description": "(Optional) The ARN of the role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to run this runbook.",
          "default": ""
        }
      },
      "mainSteps": [
        {
          "name": "getLatestWindowsAmi",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ssm",
            "Api": "GetParameter",
            "Name": "/aws/service/ami-windows-latest/Windows_Server-2016-English-Full-Base"
          },
          "outputs": [
            {
              "Name": "amiId",
              "Selector": "$.Parameter.Value",
              "Type": "String"
            }
          ],
          "nextStep": "createSSMInstanceRole"
        },
        {
          "name": "createSSMInstanceRole",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "iam",
            "Api": "CreateRole",
            "AssumeRolePolicyDocument": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":[\"ec2.amazonaws.com\"]},\"Action\":[\"sts:AssumeRole\"]}]}",
            "RoleName": "sampleSSMInstanceRole"
          },
          "nextStep": "attachManagedSSMPolicy"
        },
        {
          "name": "attachManagedSSMPolicy",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "iam",
            "Api": "AttachRolePolicy",
            "PolicyArn": "arn:aws:iam::aws:policy/service-role/AmazonSSMManagedInstanceCore",
            "RoleName": "sampleSSMInstanceRole"
          },
          "nextStep": "createSSMInstanceProfile"
        },
        {
          "name": "createSSMInstanceProfile",
          "action":"aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "iam",
            "Api": "CreateInstanceProfile",
            "InstanceProfileName": "sampleSSMInstanceRole"
          },
          "outputs": [
            {
              "Name": "instanceProfileArn",
              "Selector": "$.InstanceProfile.Arn",
              "Type": "String"
            }
          ],
          "nextStep": "addSSMInstanceRoleToProfile"
        },
        {
          "name": "addSSMInstanceRoleToProfile",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "iam",
            "Api": "AddRoleToInstanceProfile",
            "InstanceProfileName": "sampleSSMInstanceRole",
            "RoleName": "sampleSSMInstanceRole"
          },
          "nextStep": "createVpc"
        },
        {
          "name": "createVpc",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "CreateVpc",
            "CidrBlock": "10.0.100.0/22"
          },
          "outputs": [
            {
              "Name": "vpcId",
              "Selector": "$.Vpc.VpcId",
              "Type": "String"
            }
          ],
          "nextStep": "getMainRtb"
        },
        {
          "name": "getMainRtb",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "DescribeRouteTables",
            "Filters": [
              {
                "Name": "vpc-id",
                "Values": ["{{ createVpc.vpcId }}"]
              }
            ]
          },
          "outputs": [
            {
              "Name": "mainRtbId",
              "Selector": "$.RouteTables[0].RouteTableId",
              "Type": "String"
            }
          ],
          "nextStep": "verifyMainRtb"
        },
        {
          "name": "verifyMainRtb",
          "action": "aws:assertAwsResourceProperty",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "DescribeRouteTables",
            "RouteTableIds": ["{{ getMainRtb.mainRtbId }}"],
            "PropertySelector": "$.RouteTables[0].Associations[0].Main",
            "DesiredValues": ["True"]
          },
          "nextStep": "createPubSubnet"
        },
        {
          "name": "createPubSubnet",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "CreateSubnet",
            "CidrBlock": "10.0.103.0/24",
            "AvailabilityZone": "us-west-2c",
            "VpcId": "{{ createVpc.vpcId }}"
          },
          "outputs":[
            {
              "Name": "pubSubnetId",
              "Selector": "$.Subnet.SubnetId",
              "Type": "String"
            }
          ],
          "nextStep": "createPubRtb"
        },
        {
          "name": "createPubRtb",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "CreateRouteTable",
            "VpcId": "{{ createVpc.vpcId }}"
          },
          "outputs": [
            {
              "Name": "pubRtbId",
              "Selector": "$.RouteTable.RouteTableId",
              "Type": "String"
            }
          ],
          "nextStep": "createIgw"
        },
        {
          "name": "createIgw",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "CreateInternetGateway"
          },
          "outputs": [
            {
              "Name": "igwId",
              "Selector": "$.InternetGateway.InternetGatewayId",
              "Type": "String"
            }
          ],
          "nextStep": "attachIgw"
        },
        {
          "name": "attachIgw",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "AttachInternetGateway",
            "InternetGatewayId": "{{ createIgw.igwId }}",
            "VpcId": "{{ createVpc.vpcId }}"
          },
          "nextStep": "allocateEip"
        },
        {
          "name": "allocateEip",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "AllocateAddress",
            "Domain": "vpc"
          },
          "outputs": [
            {
              "Name": "eipAllocationId",
              "Selector": "$.AllocationId",
              "Type": "String"
            }
          ],
          "nextStep": "createNatGw"
        },
        {
          "name": "createNatGw",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "CreateNatGateway",
            "AllocationId": "{{ allocateEip.eipAllocationId }}",
            "SubnetId": "{{ createPubSubnet.pubSubnetId }}"
          },
          "outputs":[
            {
              "Name": "natGwId",
              "Selector": "$.NatGateway.NatGatewayId",
              "Type": "String"
            }
          ],
          "nextStep": "verifyNatGwAvailable"
        },
        {
          "name": "verifyNatGwAvailable",
          "action": "aws:waitForAwsResourceProperty",
          "timeoutSeconds": 150,
          "inputs": {
            "Service": "ec2",
            "Api": "DescribeNatGateways",
            "NatGatewayIds": [
              "{{ createNatGw.natGwId }}"
            ],
            "PropertySelector": "$.NatGateways[0].State",
            "DesiredValues": [
              "available"
            ]
          },
          "nextStep": "createNatRoute"
        },
        {
          "name": "createNatRoute",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "CreateRoute",
            "DestinationCidrBlock": "0.0.0.0/0",
            "NatGatewayId": "{{ createNatGw.natGwId }}",
            "RouteTableId": "{{ getMainRtb.mainRtbId }}"
          },
          "nextStep": "createPubRoute"
        },
        {
          "name": "createPubRoute",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "CreateRoute",
            "DestinationCidrBlock": "0.0.0.0/0",
            "GatewayId": "{{ createIgw.igwId }}",
            "RouteTableId": "{{ createPubRtb.pubRtbId }}"
          },
          "nextStep": "setPubSubAssoc"
        },
        {
          "name": "setPubSubAssoc",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "AssociateRouteTable",
            "RouteTableId": "{{ createPubRtb.pubRtbId }}",
            "SubnetId": "{{ createPubSubnet.pubSubnetId }}"
          }
        },
        {
          "name": "createDhcpOptions",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "CreateDhcpOptions",
            "DhcpConfigurations": [
              {
                "Key": "domain-name-servers",
                "Values": ["10.0.100.50,10.0.101.50"]
              },
              {
                "Key": "domain-name",
                "Values": ["sample.com"]
              }
            ]
          },
          "outputs": [
            {
              "Name": "dhcpOptionsId",
              "Selector": "$.DhcpOptions.DhcpOptionsId",
              "Type": "String"
            }
          ],
          "nextStep": "createDCSubnet1"
        },
        {
          "name": "createDCSubnet1",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "CreateSubnet",
            "CidrBlock": "10.0.100.0/24",
            "AvailabilityZone": "us-west-2a",
            "VpcId": "{{ createVpc.vpcId }}"
          },
          "outputs": [
            {
              "Name": "firstSubnetId",
              "Selector": "$.Subnet.SubnetId",
              "Type": "String"
            }
          ],
          "nextStep": "createDCSubnet2"
        },
        {
          "name": "createDCSubnet2",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "CreateSubnet",
            "CidrBlock": "10.0.101.0/24",
            "AvailabilityZone": "us-west-2b",
            "VpcId": "{{ createVpc.vpcId }}"
          },
          "outputs": [
            {
              "Name": "secondSubnetId",
              "Selector": "$.Subnet.SubnetId",
              "Type": "String"
            }
          ],
          "nextStep": "createDCSecGroup"
        },
        {
          "name": "createDCSecGroup",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "CreateSecurityGroup",
            "GroupName": "SampleDCSecGroup",
            "Description": "Security Group for Example Domain Controllers",
            "VpcId": "{{ createVpc.vpcId }}"
          },
          "outputs": [
            {
              "Name": "dcSecGroupId",
              "Selector": "$.GroupId",
              "Type": "String"
            }
          ],
          "nextStep": "authIngressDCTraffic"
        },
        {
          "name": "authIngressDCTraffic",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "AuthorizeSecurityGroupIngress",
            "GroupId": "{{ createDCSecGroup.dcSecGroupId }}",
            "IpPermissions": [
              {
                "FromPort": -1,
                "IpProtocol": "-1",
                "IpRanges": [
                  {
                    "CidrIp": "0.0.0.0/0",
                    "Description": "Allow all traffic between Domain Controllers"
                  }
                ]
              }
            ]
          },
          "nextStep": "verifyInstanceProfile"
        },
        {
          "name": "verifyInstanceProfile",
          "action": "aws:waitForAwsResourceProperty",
          "maxAttempts": 5,
          "onFailure": "Abort",
          "inputs": {
            "Service": "iam",
            "Api": "ListInstanceProfilesForRole",
            "RoleName": "sampleSSMInstanceRole",
            "PropertySelector": "$.InstanceProfiles[0].Arn",
            "DesiredValues": [
              "{{ createSSMInstanceProfile.instanceProfileArn }}"
            ]
          },
          "nextStep": "iamEventualConsistency"
        },
        {
          "name": "iamEventualConsistency",
          "action": "aws:sleep",
          "inputs": {
            "Duration": "PT2M"
          },
          "nextStep": "launchDC1"
        },
        {
          "name": "launchDC1",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "RunInstances",
            "BlockDeviceMappings": [
              {
                "DeviceName": "/dev/sda1",
                "Ebs": {
                  "DeleteOnTermination": true,
                  "VolumeSize": 50,
                  "VolumeType": "gp2"
                }
              },
              {
                "DeviceName": "xvdf",
                "Ebs": {
                  "DeleteOnTermination": true,
                  "VolumeSize": 100,
                  "VolumeType": "gp2"
                }
              }
            ],
            "IamInstanceProfile": {
              "Arn": "{{ createSSMInstanceProfile.instanceProfileArn }}"
            },
            "ImageId": "{{ getLatestWindowsAmi.amiId }}",
            "InstanceType": "t2.micro",
            "MaxCount": 1,
            "MinCount": 1,
            "PrivateIpAddress": "10.0.100.50",
            "SecurityGroupIds": [
              "{{ createDCSecGroup.dcSecGroupId }}"
            ],
            "SubnetId": "{{ createDCSubnet1.firstSubnetId }}",
            "TagSpecifications": [
              {
                "ResourceType": "instance",
                "Tags": [
                  {
                    "Key": "Name",
                    "Value": "SampleDC1"
                  }
                ]
              }
            ]
          },
          "outputs": [
            {
              "Name": "pdcInstanceId",
              "Selector": "$.Instances[0].InstanceId",
              "Type": "String"
            }
          ],
          "nextStep": "launchDC2"
        },
        {
          "name": "launchDC2",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "RunInstances",
            "BlockDeviceMappings": [
              {
                "DeviceName": "/dev/sda1",
                "Ebs": {
                  "DeleteOnTermination": true,
                  "VolumeSize": 50,
                  "VolumeType": "gp2"
                }
              },
              {
                "DeviceName": "xvdf",
                "Ebs": {
                  "DeleteOnTermination": true,
                  "VolumeSize": 100,
                  "VolumeType": "gp2"
                }
              }
            ],
            "IamInstanceProfile": {
              "Arn": "{{ createSSMInstanceProfile.instanceProfileArn }}"
            },
            "ImageId": "{{ getLatestWindowsAmi.amiId }}",
            "InstanceType": "t2.micro",
            "MaxCount": 1,
            "MinCount": 1,
            "PrivateIpAddress": "10.0.101.50",
            "SecurityGroupIds": [
              "{{ createDCSecGroup.dcSecGroupId }}"
            ],
            "SubnetId": "{{ createDCSubnet2.secondSubnetId }}",
            "TagSpecifications": [
              {
                "ResourceType": "instance",
                "Tags": [
                  {
                    "Key": "Name",
                    "Value": "SampleDC2"
                  }
                ]
              }
            ]
          },
          "outputs": [
            {
              "Name": "adcInstanceId",
              "Selector": "$.Instances[0].InstanceId",
              "Type": "String"
            }
          ],
          "nextStep": "verifyDCInstanceState"
        },
        {
          "name": "verifyDCInstanceState",
          "action": "aws:waitForAwsResourceProperty",
          "inputs": {
            "Service": "ec2",
            "Api": "DescribeInstanceStatus",
            "IncludeAllInstances": true,
            "InstanceIds": [
              "{{ launchDC1.pdcInstanceId }}",
              "{{ launchDC2.adcInstanceId }}"
            ],
            "PropertySelector": "$.InstanceStatuses[0].InstanceState.Name",
            "DesiredValues": [
              "running"
            ]
          },
          "nextStep": "verifyInstancesOnlineSSM"
        },
        {
          "name": "verifyInstancesOnlineSSM",
          "action": "aws:waitForAwsResourceProperty",
          "timeoutSeconds": 600,
          "inputs": {
            "Service": "ssm",
            "Api": "DescribeInstanceInformation",
            "InstanceInformationFilterList": [
              {
                "key": "InstanceIds",
                "valueSet": [
                  "{{ launchDC1.pdcInstanceId }}",
                  "{{ launchDC2.adcInstanceId }}"
                ]
              }
            ],
            "PropertySelector": "$.InstanceInformationList[0].PingStatus",
            "DesiredValues": [
              "Online"
            ]
          },
          "nextStep": "installADRoles"
        },
        {
          "name": "installADRoles",
          "action": "aws:runCommand",
          "inputs": {
            "DocumentName": "AWS-RunPowerShellScript",
            "InstanceIds": [
              "{{ launchDC1.pdcInstanceId }}",
              "{{ launchDC2.adcInstanceId }}"
            ],
            "Parameters": {
              "commands": [
                "try {",
                "  Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools",
                "}",
                "catch {",
                "  Write-Error \"Failed to install ADDS Role.\"",
                "}"
              ]
            }
          },
          "nextStep": "setAdminPassword"
        },
        {
          "name": "setAdminPassword",
          "action": "aws:runCommand",
          "inputs": {
            "DocumentName": "AWS-RunPowerShellScript",
            "InstanceIds": [
              "{{ launchDC1.pdcInstanceId }}"
            ],
            "Parameters": {
              "commands": [
                "net user Administrator \"sampleAdminPass123!\""
              ]
            }
          },
          "nextStep": "createForest"
        },
        {
          "name": "createForest",
          "action": "aws:runCommand",
          "inputs": {
            "DocumentName": "AWS-RunPowerShellScript",
            "InstanceIds": [
              "{{ launchDC1.pdcInstanceId }}"
            ],
            "Parameters": {
              "commands": [
                "$dsrmPass = 'sample123!' | ConvertTo-SecureString -asPlainText -Force",
                "try {",
                "   Install-ADDSForest -DomainName \"sample.com\" -DomainMode 6 -ForestMode 6 -InstallDNS -DatabasePath \"D:\\NTDS\" -SysvolPath \"D:\\SYSVOL\" -SafeModeAdministratorPassword $dsrmPass -Force",
                "}",
                "catch {",
                "   Write-Error $_",
                "}",
                "try {",
                "   Add-DnsServerForwarder -IPAddress \"10.0.100.2\"",
                "}",
                "catch {",
                "   Write-Error $_",
                "}"
              ]
            }
          },
          "nextStep": "associateDhcpOptions"
        },
        {
          "name": "associateDhcpOptions",
          "action": "aws:executeAwsApi",
          "onFailure": "Abort",
          "inputs": {
            "Service": "ec2",
            "Api": "AssociateDhcpOptions",
            "DhcpOptionsId": "{{ createDhcpOptions.dhcpOptionsId }}",
            "VpcId": "{{ createVpc.vpcId }}"
          },
          "nextStep": "waitForADServices"
        },
        {
          "name": "waitForADServices",
          "action": "aws:sleep",
          "inputs": {
            "Duration": "PT1M"
          },
          "nextStep": "promoteADC"
        },
        {
          "name": "promoteADC",
          "action": "aws:runCommand",
          "inputs": {
            "DocumentName": "AWS-RunPowerShellScript",
            "InstanceIds": [
              "{{ launchDC2.adcInstanceId }}"
            ],
            "Parameters": {
              "commands": [
                "ipconfig /renew",
                "$dsrmPass = 'sample123!' | ConvertTo-SecureString -asPlainText -Force",
                "$domAdminUser = \"sample\\Administrator\"",
                "$domAdminPass = \"sampleAdminPass123!\" | ConvertTo-SecureString -asPlainText -Force",
                "$domAdminCred = New-Object System.Management.Automation.PSCredential($domAdminUser,$domAdminPass)",
                "try {",
                "   Install-ADDSDomainController -DomainName \"sample.com\" -InstallDNS -DatabasePath \"D:\\NTDS\" -SysvolPath \"D:\\SYSVOL\" -SafeModeAdministratorPassword $dsrmPass -Credential $domAdminCred -Force",
                "}",
                "catch {",
                "   Write-Error $_",
                "}"
              ]
            }
          }
        }
      ]
    }
```

------

# Restore a root volume from the latest snapshot


The operating system on a root volume can become corrupted for various reasons. For example, following a patching operation, instances might fail to boot successfully due to a corrupted kernel or registry. Automating common troubleshooting tasks, like restoring a root volume from the latest snapshot taken before the patching operation, can reduce downtime and expedite your troubleshooting efforts. AWS Systems Manager Automation actions can help you accomplish this. Automation is a tool in AWS Systems Manager.

The following example AWS Systems Manager runbook performs these actions: 
+ Uses the `aws:executeAwsApi` automation action to retrieve details from the root volume of the instance.
+ Uses the `aws:executeScript` automation action to retrieve the latest snapshot for the root volume.
+ Uses the `aws:branch` automation action to continue the automation if a snapshot is found for the root volume.

------
#### [ YAML ]

```
    ---
    description: Custom Automation Troubleshooting Example
    schemaVersion: '0.3'
    assumeRole: "{{ AutomationAssumeRole }}"
    parameters:
      AutomationAssumeRole:
        type: String
        description: "(Required) The ARN of the role that allows Automation to perform
          the actions on your behalf. If no role is specified, Systems Manager Automation
          uses your IAM permissions to run this runbook."
        default: ''
      InstanceId:
        type: String
        description: "(Required) The ID of the instance whose root EBS volume you want to restore from the latest snapshot."
        default: ''
    mainSteps:
    - name: getInstanceDetails
      action: aws:executeAwsApi
      onFailure: Abort
      inputs:
        Service: ec2
        Api: DescribeInstances
        InstanceIds:
        - "{{ InstanceId }}"
      outputs:
        - Name: availabilityZone
          Selector: "$.Reservations[0].Instances[0].Placement.AvailabilityZone"
          Type: String
        - Name: rootDeviceName
          Selector: "$.Reservations[0].Instances[0].RootDeviceName"
          Type: String
      nextStep: getRootVolumeId
    - name: getRootVolumeId
      action: aws:executeAwsApi
      onFailure: Abort
      inputs:
        Service: ec2
        Api: DescribeVolumes
        Filters:
        -  Name: attachment.device
           Values: ["{{ getInstanceDetails.rootDeviceName }}"]
        -  Name: attachment.instance-id
           Values: ["{{ InstanceId }}"]
      outputs:
        - Name: rootVolumeId
          Selector: "$.Volumes[0].VolumeId"
          Type: String
      nextStep: getSnapshotsByStartTime
    - name: getSnapshotsByStartTime
      action: aws:executeScript
      timeoutSeconds: 45
      onFailure: Abort
      inputs:
        Runtime: python3.11
        Handler: getSnapshotsByStartTime
        InputPayload:
          rootVolumeId : "{{ getRootVolumeId.rootVolumeId }}"
        Script: |-
          def getSnapshotsByStartTime(events,context):
            import boto3
    
            #Initialize client
            ec2 = boto3.client('ec2')
            rootVolumeId = events['rootVolumeId']
            snapshotsQuery = ec2.describe_snapshots(
              Filters=[
                {
                  "Name": "volume-id",
                  "Values": [rootVolumeId]
                }
              ]
            )
            if not snapshotsQuery['Snapshots']:
              noSnapshotFoundString = "NoSnapshotFound"
              return { 'noSnapshotFound' : noSnapshotFoundString }
            else:
              jsonSnapshots = snapshotsQuery['Snapshots']
              sortedSnapshots = sorted(jsonSnapshots, key=lambda k: k['StartTime'], reverse=True)
              latestSortedSnapshotId = sortedSnapshots[0]['SnapshotId']
              return { 'latestSnapshotId' : latestSortedSnapshotId }
      outputs:
      - Name: Payload
        Selector: $.Payload
        Type: StringMap
      - Name: latestSnapshotId
        Selector: $.Payload.latestSnapshotId
        Type: String
      - Name: noSnapshotFound
        Selector: $.Payload.noSnapshotFound
        Type: String 
      nextStep: branchFromResults
    - name: branchFromResults
      action: aws:branch
      onFailure: Abort
      inputs:
        Choices:
        - NextStep: createNewRootVolumeFromSnapshot
          Not:
            Variable: "{{ getSnapshotsByStartTime.noSnapshotFound }}"
            StringEquals: "NoSnapshotFound"
      isEnd: true
    - name: createNewRootVolumeFromSnapshot
      action: aws:executeAwsApi
      onFailure: Abort
      inputs:
        Service: ec2
        Api: CreateVolume
        AvailabilityZone: "{{ getInstanceDetails.availabilityZone }}"
        SnapshotId: "{{ getSnapshotsByStartTime.latestSnapshotId }}"
      outputs:
        - Name: newRootVolumeId
          Selector: "$.VolumeId"
          Type: String
      nextStep: stopInstance
    - name: stopInstance
      action: aws:executeAwsApi
      onFailure: Abort
      inputs:
        Service: ec2
        Api: StopInstances
        InstanceIds:
        - "{{ InstanceId }}"
      nextStep: verifyVolumeAvailability
    - name: verifyVolumeAvailability
      action: aws:waitForAwsResourceProperty
      timeoutSeconds: 120
      inputs:
        Service: ec2
        Api: DescribeVolumes
        VolumeIds:
        - "{{ createNewRootVolumeFromSnapshot.newRootVolumeId }}"
        PropertySelector: "$.Volumes[0].State"
        DesiredValues:
        - "available"
      nextStep: verifyInstanceStopped
    - name: verifyInstanceStopped
      action: aws:waitForAwsResourceProperty
      timeoutSeconds: 120
      inputs:
        Service: ec2
        Api: DescribeInstances
        InstanceIds:
        - "{{ InstanceId }}"
        PropertySelector: "$.Reservations[0].Instances[0].State.Name"
        DesiredValues:
        - "stopped"
      nextStep: detachRootVolume
    - name: detachRootVolume
      action: aws:executeAwsApi
      onFailure: Abort
      inputs:
        Service: ec2
        Api: DetachVolume
        VolumeId: "{{ getRootVolumeId.rootVolumeId }}"
      nextStep: verifyRootVolumeDetached
    - name: verifyRootVolumeDetached
      action: aws:waitForAwsResourceProperty
      timeoutSeconds: 30
      inputs:
        Service: ec2
        Api: DescribeVolumes
        VolumeIds:
        - "{{ getRootVolumeId.rootVolumeId }}"
        PropertySelector: "$.Volumes[0].State"
        DesiredValues:
        - "available"
      nextStep: attachNewRootVolume
    - name: attachNewRootVolume
      action: aws:executeAwsApi
      onFailure: Abort
      inputs:
        Service: ec2
        Api: AttachVolume
        Device: "{{ getInstanceDetails.rootDeviceName }}"
        InstanceId: "{{ InstanceId }}"
        VolumeId: "{{ createNewRootVolumeFromSnapshot.newRootVolumeId }}"
      nextStep: verifyNewRootVolumeAttached
    - name: verifyNewRootVolumeAttached
      action: aws:waitForAwsResourceProperty
      timeoutSeconds: 30
      inputs:
        Service: ec2
        Api: DescribeVolumes
        VolumeIds:
        - "{{ createNewRootVolumeFromSnapshot.newRootVolumeId }}"
        PropertySelector: "$.Volumes[0].Attachments[0].State"
        DesiredValues:
        - "attached"
      nextStep: startInstance
    - name: startInstance
      action: aws:executeAwsApi
      onFailure: Abort
      inputs:
        Service: ec2
        Api: StartInstances
        InstanceIds:
        - "{{ InstanceId }}"
```

------
#### [ JSON ]

```
    {
       "description": "Custom Automation Troubleshooting Example",
       "schemaVersion": "0.3",
       "assumeRole": "{{ AutomationAssumeRole }}",
       "parameters": {
          "AutomationAssumeRole": {
             "type": "String",
             "description": "(Required) The ARN of the role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to run this runbook.",
             "default": ""
          },
          "InstanceId": {
             "type": "String",
             "description": "(Required) The Instance Id whose root EBS volume you want to restore the latest Snapshot.",
             "default": ""
          }
       },
       "mainSteps": [
          {
             "name": "getInstanceDetails",
             "action": "aws:executeAwsApi",
             "onFailure": "Abort",
             "inputs": {
                "Service": "ec2",
                "Api": "DescribeInstances",
                "InstanceIds": [
                   "{{ InstanceId }}"
                ]
             },
             "outputs": [
                {
                   "Name": "availabilityZone",
                   "Selector": "$.Reservations[0].Instances[0].Placement.AvailabilityZone",
                   "Type": "String"
                },
                {
                   "Name": "rootDeviceName",
                   "Selector": "$.Reservations[0].Instances[0].RootDeviceName",
                   "Type": "String"
                }
             ],
             "nextStep": "getRootVolumeId"
          },
          {
             "name": "getRootVolumeId",
             "action": "aws:executeAwsApi",
             "onFailure": "Abort",
             "inputs": {
                "Service": "ec2",
                "Api": "DescribeVolumes",
                "Filters": [
                   {
                      "Name": "attachment.device",
                      "Values": [
                         "{{ getInstanceDetails.rootDeviceName }}"
                      ]
                   },
                   {
                      "Name": "attachment.instance-id",
                      "Values": [
                         "{{ InstanceId }}"
                      ]
                   }
                ]
             },
             "outputs": [
                {
                   "Name": "rootVolumeId",
                   "Selector": "$.Volumes[0].VolumeId",
                   "Type": "String"
                }
             ],
             "nextStep": "getSnapshotsByStartTime"
          },
          {
             "name": "getSnapshotsByStartTime",
             "action": "aws:executeScript",
             "timeoutSeconds": 45,
             "onFailure": "Continue",
             "inputs": {
                "Runtime": "python3.11",
                "Handler": "getSnapshotsByStartTime",
                "InputPayload": {
                   "rootVolumeId": "{{ getRootVolumeId.rootVolumeId }}"
                },
                "Attachment": "getSnapshotsByStartTime.py"
             },
             "outputs": [
                {
                   "Name": "Payload",
                   "Selector": "$.Payload",
                   "Type": "StringMap"
                },
                {
                   "Name": "latestSnapshotId",
                   "Selector": "$.Payload.latestSnapshotId",
                   "Type": "String"
                },
                {
                   "Name": "noSnapshotFound",
                   "Selector": "$.Payload.noSnapshotFound",
                   "Type": "String"
                }
             ],
             "nextStep": "branchFromResults"
          },
          {
             "name": "branchFromResults",
             "action": "aws:branch",
             "onFailure": "Abort",
             "inputs": {
                "Choices": [
                   {
                      "NextStep": "createNewRootVolumeFromSnapshot",
                      "Not": {
                         "Variable": "{{ getSnapshotsByStartTime.noSnapshotFound }}",
                         "StringEquals": "NoSnapshotFound"
                      }
                   }
                ]
             },
             "isEnd": true
          },
          {
             "name": "createNewRootVolumeFromSnapshot",
             "action": "aws:executeAwsApi",
             "onFailure": "Abort",
             "inputs": {
                "Service": "ec2",
                "Api": "CreateVolume",
                "AvailabilityZone": "{{ getInstanceDetails.availabilityZone }}",
                "SnapshotId": "{{ getSnapshotsByStartTime.latestSnapshotId }}"
             },
             "outputs": [
                {
                   "Name": "newRootVolumeId",
                   "Selector": "$.VolumeId",
                   "Type": "String"
                }
             ],
             "nextStep": "stopInstance"
          },
          {
             "name": "stopInstance",
             "action": "aws:executeAwsApi",
             "onFailure": "Abort",
             "inputs": {
                "Service": "ec2",
                "Api": "StopInstances",
                "InstanceIds": [
                   "{{ InstanceId }}"
                ]
             },
             "nextStep": "verifyVolumeAvailability"
          },
          {
             "name": "verifyVolumeAvailability",
             "action": "aws:waitForAwsResourceProperty",
             "timeoutSeconds": 120,
             "inputs": {
                "Service": "ec2",
                "Api": "DescribeVolumes",
                "VolumeIds": [
                   "{{ createNewRootVolumeFromSnapshot.newRootVolumeId }}"
                ],
                "PropertySelector": "$.Volumes[0].State",
                "DesiredValues": [
                   "available"
                ]
             },
             "nextStep": "verifyInstanceStopped"
          },
          {
             "name": "verifyInstanceStopped",
             "action": "aws:waitForAwsResourceProperty",
             "timeoutSeconds": 120,
             "inputs": {
                "Service": "ec2",
                "Api": "DescribeInstances",
                "InstanceIds": [
                   "{{ InstanceId }}"
                ],
                "PropertySelector": "$.Reservations[0].Instances[0].State.Name",
                "DesiredValues": [
                   "stopped"
                ]
             },
             "nextStep": "detachRootVolume"
          },
          {
             "name": "detachRootVolume",
             "action": "aws:executeAwsApi",
             "onFailure": "Abort",
             "inputs": {
                "Service": "ec2",
                "Api": "DetachVolume",
                "VolumeId": "{{ getRootVolumeId.rootVolumeId }}"
             },
             "nextStep": "verifyRootVolumeDetached"
          },
          {
             "name": "verifyRootVolumeDetached",
             "action": "aws:waitForAwsResourceProperty",
             "timeoutSeconds": 30,
             "inputs": {
                "Service": "ec2",
                "Api": "DescribeVolumes",
                "VolumeIds": [
                   "{{ getRootVolumeId.rootVolumeId }}"
                ],
                "PropertySelector": "$.Volumes[0].State",
                "DesiredValues": [
                   "available"
                ]
             },
             "nextStep": "attachNewRootVolume"
          },
          {
             "name": "attachNewRootVolume",
             "action": "aws:executeAwsApi",
             "onFailure": "Abort",
             "inputs": {
                "Service": "ec2",
                "Api": "AttachVolume",
                "Device": "{{ getInstanceDetails.rootDeviceName }}",
                "InstanceId": "{{ InstanceId }}",
                "VolumeId": "{{ createNewRootVolumeFromSnapshot.newRootVolumeId }}"
             },
             "nextStep": "verifyNewRootVolumeAttached"
          },
          {
             "name": "verifyNewRootVolumeAttached",
             "action": "aws:waitForAwsResourceProperty",
             "timeoutSeconds": 30,
             "inputs": {
                "Service": "ec2",
                "Api": "DescribeVolumes",
                "VolumeIds": [
                   "{{ createNewRootVolumeFromSnapshot.newRootVolumeId }}"
                ],
                "PropertySelector": "$.Volumes[0].Attachments[0].State",
                "DesiredValues": [
                   "attached"
                ]
             },
             "nextStep": "startInstance"
          },
          {
             "name": "startInstance",
             "action": "aws:executeAwsApi",
             "onFailure": "Abort",
             "inputs": {
                "Service": "ec2",
                "Api": "StartInstances",
                "InstanceIds": [
                   "{{ InstanceId }}"
                ]
             }
          }
       ],
       "files": {
            "getSnapshotsByStartTime.py": {
                "checksums": {
                    "sha256": "sampleETagValue"
                }
            }
        }
    }
```

------
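After you create a runbook like this in your account, you can also start it outside the console. The following sketch builds the request for the Systems Manager `StartAutomationExecution` API using boto3; the document name, instance ID, and role ARN are placeholders for illustration, not values from this guide.

```python
def build_restore_request(instance_id, assume_role_arn):
    """Build StartAutomationExecution parameters for the restore runbook.

    'RestoreRootVolumeFromLatestSnapshot' is a hypothetical document name;
    substitute the name you gave the runbook when you created it.
    Automation parameter values are passed as lists of strings.
    """
    return {
        "DocumentName": "RestoreRootVolumeFromLatestSnapshot",
        "Parameters": {
            "InstanceId": [instance_id],
            "AutomationAssumeRole": [assume_role_arn],
        },
    }

# To start the automation (requires boto3 and AWS credentials):
# import boto3
# ssm = boto3.client("ssm")
# execution = ssm.start_automation_execution(**build_restore_request(
#     "i-1234567890abcdef0",
#     "arn:aws:iam::123456789012:role/AutomationServiceRole"))
# print(execution["AutomationExecutionId"])
```

The returned `AutomationExecutionId` can then be used with `GetAutomationExecution` to poll the status of each step.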

# Create an AMI and cross-Region copy


Creating an Amazon Machine Image (AMI) of an instance is a common process used in backup and recovery. You might also choose to copy an AMI to another AWS Region as part of a disaster recovery architecture. Automating common maintenance tasks can reduce downtime if an issue requires failover. AWS Systems Manager Automation actions can help you accomplish this. Automation is a tool in AWS Systems Manager.

The following example AWS Systems Manager runbook performs these actions:
+ Uses the `aws:executeAwsApi` automation action to create an AMI.
+ Uses the `aws:waitForAwsResourceProperty` automation action to confirm the availability of the AMI.
+ Uses the `aws:executeScript` automation action to copy the AMI to the destination Region.

------
#### [ YAML ]

```
    ---
    description: Custom Automation Backup and Recovery Example
    schemaVersion: '0.3'
    assumeRole: "{{ AutomationAssumeRole }}"
    parameters:
      AutomationAssumeRole:
        type: String
        description: "(Required) The ARN of the role that allows Automation to perform
          the actions on your behalf. If no role is specified, Systems Manager Automation
          uses your IAM permissions to run this runbook."
        default: ''
      InstanceId:
        type: String
        description: "(Required) The ID of the EC2 instance."
        default: ''
    mainSteps:
    - name: createImage
      action: aws:executeAwsApi
      onFailure: Abort
      inputs:
        Service: ec2
        Api: CreateImage
        InstanceId: "{{ InstanceId }}"
        Name: "Automation Image for {{ InstanceId }}"
        NoReboot: false
      outputs:
        - Name: newImageId
          Selector: "$.ImageId"
          Type: String
      nextStep: verifyImageAvailability
    - name: verifyImageAvailability
      action: aws:waitForAwsResourceProperty
      timeoutSeconds: 600
      inputs:
        Service: ec2
        Api: DescribeImages
        ImageIds:
        - "{{ createImage.newImageId }}"
        PropertySelector: "$.Images[0].State"
        DesiredValues:
        - available
      nextStep: copyImage
    - name: copyImage
      action: aws:executeScript
      timeoutSeconds: 45
      onFailure: Abort
      inputs:
        Runtime: python3.11
        Handler: crossRegionImageCopy
        InputPayload:
          newImageId : "{{ createImage.newImageId }}"
        Script: |-
          def crossRegionImageCopy(events,context):
            import boto3
    
            #Initialize client
            ec2 = boto3.client('ec2', region_name='us-east-1')
            newImageId = events['newImageId']
    
            ec2.copy_image(
              Name='DR Copy for ' + newImageId,
              SourceImageId=newImageId,
              SourceRegion='us-west-2'
            )
```

------
#### [ JSON ]

```
    {
       "description": "Custom Automation Backup and Recovery Example",
       "schemaVersion": "0.3",
       "assumeRole": "{{ AutomationAssumeRole }}",
       "parameters": {
          "AutomationAssumeRole": {
             "type": "String",
             "description": "(Required) The ARN of the role that allows Automation to perform\nthe actions on your behalf. If no role is specified, Systems Manager Automation\nuses your IAM permissions to run this runbook.",
             "default": ""
          },
          "InstanceId": {
             "type": "String",
             "description": "(Required) The ID of the EC2 instance.",
             "default": ""
          }
       },
       "mainSteps": [
          {
             "name": "createImage",
             "action": "aws:executeAwsApi",
             "onFailure": "Abort",
             "inputs": {
                "Service": "ec2",
                "Api": "CreateImage",
                "InstanceId": "{{ InstanceId }}",
                "Name": "Automation Image for {{ InstanceId }}",
                "NoReboot": false
             },
             "outputs": [
                {
                   "Name": "newImageId",
                   "Selector": "$.ImageId",
                   "Type": "String"
                }
             ],
             "nextStep": "verifyImageAvailability"
          },
          {
             "name": "verifyImageAvailability",
             "action": "aws:waitForAwsResourceProperty",
             "timeoutSeconds": 600,
             "inputs": {
                "Service": "ec2",
                "Api": "DescribeImages",
                "ImageIds": [
                   "{{ createImage.newImageId }}"
                ],
                "PropertySelector": "$.Images[0].State",
                "DesiredValues": [
                   "available"
                ]
             },
             "nextStep": "copyImage"
          },
          {
             "name": "copyImage",
             "action": "aws:executeScript",
             "timeoutSeconds": 45,
             "onFailure": "Abort",
             "inputs": {
                "Runtime": "python3.11",
                "Handler": "crossRegionImageCopy",
                "InputPayload": {
                   "newImageId": "{{ createImage.newImageId }}"
                },
                "Attachment": "crossRegionImageCopy.py"
             }
          }
       ],
       "files": {
            "crossRegionImageCopy.py": {
                "checksums": {
                    "sha256": "sampleETagValue"
                }
            }
        }
    }
```

------
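The JSON variant attaches the script as a separate file and records its SHA-256 checksum (shown above as the placeholder `sampleETagValue`) in the `files` section. A checksum for an attachment such as `crossRegionImageCopy.py` can be computed locally before you create the document, for example:

```python
import hashlib

def attachment_sha256(path):
    """Compute the SHA-256 hex digest of a script attachment file.

    The digest goes in the 'files' section of the runbook content, under
    checksums.sha256, in place of the sampleETagValue placeholder.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large attachments don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

If the checksum you record doesn't match the uploaded attachment, Systems Manager rejects the document version, so computing it from the exact file you upload matters.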

# Creating input parameters that populate AWS resources


Automation, a tool in Systems Manager, populates AWS resources in the AWS Management Console that match the resource type you define for an input parameter. Resources in your AWS account that match the resource type are displayed in a dropdown list for you to choose. You can define input parameter types for Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Simple Storage Service (Amazon S3) buckets, and AWS Identity and Access Management (IAM) roles. The supported type definitions and the regular expressions used to locate matching resources are as follows:
+ `AWS::EC2::Instance::Id` - `^m?i-([a-z0-9]{8}|[a-z0-9]{17})$`
+ `List<AWS::EC2::Instance::Id>` - `^m?i-([a-z0-9]{8}|[a-z0-9]{17})$`
+ `AWS::S3::Bucket::Name` - `^[0-9a-z][a-z0-9\\-\\.]{3,63}$`
+ `List<AWS::S3::Bucket::Name>` - `^[0-9a-z][a-z0-9\\-\\.]{3,63}$`
+ `AWS::IAM::Role::Arn` - `^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):iam::[0-9]{12}:role/.*$`
+ `List<AWS::IAM::Role::Arn>` - `^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):iam::[0-9]{12}:role/.*$`
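To check locally whether a value would satisfy one of these parameter types, you can test it against the corresponding pattern. The following sketch uses the expressions listed above, with the JSON-style double backslashes reduced to single escapes for Python's `re` module:

```python
import re

# Patterns copied from the list above (escaping adjusted for Python raw strings).
PATTERNS = {
    "AWS::EC2::Instance::Id": r"^m?i-([a-z0-9]{8}|[a-z0-9]{17})$",
    "AWS::S3::Bucket::Name": r"^[0-9a-z][a-z0-9\-\.]{3,63}$",
    "AWS::IAM::Role::Arn": r"^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):iam::[0-9]{12}:role/.*$",
}

def matches_type(type_name, value):
    """Return True if value matches the pattern for the given type definition."""
    return re.fullmatch(PATTERNS[type_name], value) is not None
```

Note that the instance ID pattern also accepts managed node IDs (`mi-...`), and the bucket pattern enforces the 63-character name limit.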

The following is an example of input parameter types defined in runbook content.

------
#### [ YAML ]

```
description: Enables encryption on an Amazon S3 bucket
schemaVersion: '0.3'
assumeRole: '{{ AutomationAssumeRole }}'
parameters:
  BucketName:
    type: 'AWS::S3::Bucket::Name'
    description: (Required) The name of the Amazon S3 bucket you want to encrypt.
  SSEAlgorithm:
    type: String
    description: (Optional) The server-side encryption algorithm to use for the default encryption.
    default: AES256
  AutomationAssumeRole:
    type: 'AWS::IAM::Role::Arn'
    description: (Optional) The Amazon Resource Name (ARN) of the role that allows Automation to perform the actions on your behalf.
    default: ''
mainSteps:
  - name: enableBucketEncryption
    action: 'aws:executeAwsApi'
    inputs:
      Service: s3
      Api: PutBucketEncryption
      Bucket: '{{BucketName}}'
      ServerSideEncryptionConfiguration:
        Rules:
          - ApplyServerSideEncryptionByDefault:
              SSEAlgorithm: '{{SSEAlgorithm}}'
    isEnd: true
```

------
#### [ JSON ]

```
{
   "description": "Enables encryption on an Amazon S3 bucket",
   "schemaVersion": "0.3",
   "assumeRole": "{{ AutomationAssumeRole }}",
   "parameters": {
      "BucketName": {
         "type": "AWS::S3::Bucket::Name",
         "description": "(Required) The name of the Amazon S3 bucket you want to encrypt."
      },
      "SSEAlgorithm": {
         "type": "String",
         "description": "(Optional) The server-side encryption algorithm to use for the default encryption.",
         "default": "AES256"
      },
      "AutomationAssumeRole": {
         "type": "AWS::IAM::Role::Arn",
         "description": "(Optional) The Amazon Resource Name (ARN) of the role that allows Automation to perform the actions on your behalf.",
         "default": ""
      }
   },
   "mainSteps": [
      {
         "name": "enableBucketEncryption",
         "action": "aws:executeAwsApi",
         "inputs": {
            "Service": "s3",
            "Api": "PutBucketEncryption",
            "Bucket": "{{BucketName}}",
            "ServerSideEncryptionConfiguration": {
               "Rules": [
                  {
                     "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "{{SSEAlgorithm}}"
                     }
                  }
               ]
            }
         },
         "isEnd": true
      }
   ]
}
```

------
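The `enableBucketEncryption` step in this runbook wraps the Amazon S3 `PutBucketEncryption` API through `aws:executeAwsApi`. As a sketch of what that step submits, the following helper builds the equivalent boto3 request; the commented call shows how it might be issued directly, and the bucket name used there is a placeholder:

```python
def bucket_encryption_request(bucket_name, sse_algorithm="AES256"):
    """Build the PutBucketEncryption request body that the
    enableBucketEncryption step issues via aws:executeAwsApi."""
    return {
        "Bucket": bucket_name,
        "ServerSideEncryptionConfiguration": {
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": sse_algorithm}}
            ]
        },
    }

# Direct call (requires boto3 and AWS credentials; bucket name is a placeholder):
# import boto3
# boto3.client("s3").put_bucket_encryption(**bucket_encryption_request("amzn-s3-demo-bucket"))
```

Comparing this with the runbook content shows how `aws:executeAwsApi` inputs map one-to-one onto the parameters of the underlying API call.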

# Using Document Builder to create runbooks


If the AWS Systems Manager public runbooks don't support all the actions you want to perform on your AWS resources, you can create your own runbooks. To create a custom runbook, you can manually create a local YAML or JSON format file with the appropriate automation actions. Alternatively, you can use Document Builder in the Systems Manager Automation console to build a custom runbook.

Using Document Builder, you can add automation actions to your custom runbook and provide the required parameters without having to use JSON or YAML syntax. After you add steps and create the runbook, the system converts the actions you've added into the YAML format that Systems Manager can use to run automation.

Runbooks support the use of Markdown, a markup language, which allows you to add wiki-style descriptions to runbooks and individual steps within the runbook. For more information about using Markdown, see [Using Markdown in AWS](https://docs.aws.amazon.com/general/latest/gr/aws-markdown.html).

## Create a runbook using Document Builder


**Before you begin**  
We recommend that you read about the different actions that you can use within a runbook. For more information, see [Systems Manager Automation actions reference](automation-actions.md).

**To create a runbook using Document Builder**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Choose **Create automation**.

1. For **Name**, enter a descriptive name for the runbook.

1. For **Document description**, enter a Markdown-style description for the runbook. You can provide instructions for using the runbook, numbered steps, or any other type of information to describe the runbook. Refer to the default text for information about formatting your content.
**Tip**  
Toggle between **Hide preview** and **Show preview** to see what your description content looks like as you compose.

1. (Optional) For **Assume role**, enter the name or ARN of a service role to perform actions on your behalf. If you don't specify a role, Automation uses the access permissions of the user who runs the automation.
**Important**  
For runbooks not owned by Amazon that use the `aws:executeScript` action, a role must be specified. For information, see [Permissions for using runbooks](automation-document-script-considerations.md#script-permissions).

1. (Optional) For **Outputs**, enter any outputs of this runbook's automation to make available to other processes.

   For example, if your runbook creates a new AMI, you might specify ["CreateImage.ImageId"], and then use this output to create new instances in a subsequent automation.

1. (Optional) Expand the **Input parameters** section and do the following.

   1. For **Parameter name**, enter a descriptive name for the runbook parameter you're creating.

   1. For **Type**, choose a type for the parameter, such as `String` or `MapList`.

   1. For **Required**, do one of the following: 
      + Choose **Yes** if a value for this runbook parameter must be supplied at runtime.
      + Choose **No** if the parameter isn't required, and (optional) enter a default parameter value in **Default value**.

   1. For **Description**, enter a description for the runbook parameter.
**Note**  
To add more runbook parameters, choose **Add a parameter**. To remove a runbook parameter, choose the **X** (Remove) button.

1. (Optional) Expand the **Target type** section and choose a target type to define the kinds of resources the automation can run on. For example, to use a runbook on EC2 instances, choose `/AWS::EC2::Instance`.
**Note**  
If you specify a value of '`/`', the runbook can run on all types of resources. For a list of valid resource types, see [AWS Resource Types Reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) in the *AWS CloudFormation User Guide*.

1. (Optional) Expand the **Document tags** section and enter one or more tag key-value pairs to apply to the runbook. Tags make it easier to identify, organize, and search for resources.

1. In the **Step 1** section, provide the following information.
   + For **Step name**, enter a descriptive name for the first step of the automation.
   + For **Action type**, select the action type to use for this step.

     For a list and information about the available action types, see [Systems Manager Automation actions reference](automation-actions.md).
   + For **Description**, enter a description for the automation step. You can use Markdown to format your text.
   + Depending on the **Action type** selected, enter the required inputs for the action type in the **Step inputs** section. For example, if you selected the action `aws:approve`, you must specify a value for the `Approvers` property.

     For information about the step input fields, see the entry in [Systems Manager Automation actions reference](automation-actions.md) for the action type you selected. For example: [`aws:executeStateMachine` – Run an AWS Step Functions state machine](automation-action-executeStateMachine.md).
   + (Optional) For **Additional inputs**, provide any additional input values needed for your runbook. The available input types depend on the action type you selected for the step; some action types require input values.
**Note**  
To add more inputs, choose **Add optional input**. To remove an input, choose the **X** (Remove) button.
   + (Optional) For **Outputs**, enter any outputs for this step to make available to other processes.
**Note**  
**Outputs** isn't available for all action types.
   + (Optional) Expand the **Common properties** section and specify properties for the actions that are common to all automation actions. For example, for **Timeout seconds**, you can provide a value in seconds to specify how long the step can run before it's stopped.

     For more information, see [Properties shared by all actions](automation-actions.md#automation-common).
**Note**  
To add more steps, select **Add step** and repeat the procedure for creating a step. To remove a step, choose **Remove step**.

1. Choose **Create automation** to save the runbook.

## Create a runbook that runs scripts


The following procedure shows how to use Document Builder in the AWS Systems Manager Automation console to create a custom runbook that runs a script.

The first step of the runbook you create runs a script to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance. The second step runs another script to monitor for the instance status check to change to `ok`. Then, an overall status of `Success` is reported for the automation.

**Before you begin**  
Make sure you have completed the following steps:
+ Verify that you have administrator privileges, or that you have been granted the appropriate permissions to access Systems Manager in AWS Identity and Access Management (IAM). 

  For information, see [Verifying user access for runbooks](automation-setup.md#automation-setup-user-access).
+ Verify that you have an IAM service role for Automation (also known as an *assume role*) in your AWS account. The role is required because this walkthrough uses the `aws:executeScript` action. 

  For information about creating this role, see [Configuring a service role (assume role) access for automations](automation-setup.md#automation-setup-configure-role). 

  For information about the IAM service role requirement for running `aws:executeScript`, see [Permissions for using runbooks](automation-document-script-considerations.md#script-permissions).
+ Verify that you have permission to launch EC2 instances. 

  For information, see [IAM and Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingIAM.html#intro-to-iam) in the *Amazon EC2 User Guide*.

**To create a custom runbook that runs scripts using Document Builder**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Choose **Create automation**.

1. For **Name**, type this descriptive name for the runbook: **LaunchInstanceAndCheckStatus**.

1. (Optional) For **Document description**, replace the default text with a description for this runbook, using Markdown. The following is an example.

   ```
   ##Title: LaunchInstanceAndCheckState
   -----
   **Purpose**: This runbook first launches an EC2 instance using the AMI ID provided in the parameter ```imageId```. The second step of this runbook continuously checks the instance status check value for the launched instance until the status ```ok``` is returned.

   ##Parameters:
   -----
   Name | Type | Description | Default Value
   ------------- | ------------- | ------------- | -------------
   assumeRole | String | (Optional) The ARN of the role that allows Automation to perform the actions on your behalf. | -
   imageId  | String | (Optional) The AMI ID to use for launching the instance. The default value uses the latest Amazon Linux 2023 AMI ID available. | {{ ssm:/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-arm64 }}
   ```

1. For **Assume role**, enter the ARN of the IAM service role for Automation (Assume role) for the automation, in the format **arn:aws:iam::111122223333:role/AutomationServiceRole**. Substitute your AWS account ID for 111122223333.

   The role you specify is used to provide the permissions needed to start the automation.
**Important**  
For runbooks not owned by Amazon that use the `aws:executeScript` action, a role must be specified. For information, see [Permissions for using runbooks](automation-document-script-considerations.md#script-permissions).

1. Expand **Input parameters** and do the following.

   1. For **Parameter name**, enter **imageId**.

   1. For **Type**, choose **String**.

   1. For **Required**, choose `No`. 

   1. For **Default value**, enter the following.

      ```
      {{ ssm:/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-arm64 }}
      ```
**Note**  
This value launches an Amazon EC2 instance using the latest Amazon Linux 2023 Amazon Machine Image (AMI) ID. If you want to use a different AMI, replace the value with your AMI ID.

   1. For **Description**, enter the following.

      ```
      (Optional) The AMI ID to use for launching the instance. The default value uses the latest released Amazon Linux 2023 AMI ID.
      ```

1. Choose **Add a parameter** to create the second parameter, **tagValue**, and enter the following.

   1. For **Parameter name**, enter **tagValue**.

   1. For **Type**, choose **String**.

   1. For **Required**, choose `No`. 

   1. For **Default value**, enter **LaunchedBySsmAutomation**. This adds the tag key-pair value `Name:LaunchedBySsmAutomation` to the instance.

   1. For **Description**, enter the following.

      ```
      (Optional) The tag value to add to the instance. The default value is LaunchedBySsmAutomation.
      ```

1. Choose **Add a parameter** to create the third parameter, **instanceType**, and enter the following information.

   1. For **Parameter name**, enter **instanceType**.

   1. For **Type**, choose **String**.

   1. For **Required**, choose `No`. 

   1. For **Default value**, enter **t2.micro**.

   1. For **Parameter description**, enter the following.

      ```
      (Optional) The instance type to use for the instance. The default value is t2.micro.
      ```

1. Expand **Target type** and choose **"/"**.

1. (Optional) Expand **Document tags** to apply resource tags to your runbook. For **Tag key**, enter **Purpose**, and for **Tag value**, enter **LaunchInstanceAndCheckState**.

1. In the **Step 1** section, complete the following steps.

   1. For **Step name**, enter this descriptive step name for the first step of the automation: **LaunchEc2Instance**.

   1. For **Action type**, choose **Run a script** (**aws:executeScript**).

   1. For **Description**, enter a description for the automation step, such as the following.

      ```
      **About This Step**

      This step first launches an EC2 instance using the ```aws:executeScript``` action and the provided script.
      ```

   1. Expand **Inputs**.

   1. For **Runtime**, choose the runtime language to use to run the provided script.

   1. For **Handler**, enter **launch\_instance**. This is the function name declared in the following script.
**Note**  
This is not required for PowerShell.

   1. For **Script**, replace the default contents with the following. Be sure to match the script with the corresponding runtime value.

------
#### [ Python ]

      ```
      def launch_instance(events, context):
          import boto3
          ec2 = boto3.client('ec2')

          image_id = events['image_id']
          tag_value = events['tag_value']
          instance_type = events['instance_type']

          tag_config = {'ResourceType': 'instance', 'Tags': [{'Key': 'Name', 'Value': tag_value}]}

          res = ec2.run_instances(ImageId=image_id, InstanceType=instance_type, MaxCount=1, MinCount=1, TagSpecifications=[tag_config])

          instance_id = res['Instances'][0]['InstanceId']

          print('[INFO] 1 EC2 instance is successfully launched', instance_id)

          return {'InstanceId': instance_id}
      ```

------
#### [ PowerShell ]

      ```
      Install-Module AWS.Tools.EC2 -Force
      Import-Module AWS.Tools.EC2

      $payload = $env:InputPayload | ConvertFrom-Json

      $imageId = $payload.image_id
      $tagValue = $payload.tag_value
      $instanceType = $payload.instance_type

      $type = New-Object Amazon.EC2.InstanceType -ArgumentList $instanceType
      $resource = New-Object Amazon.EC2.ResourceType -ArgumentList 'instance'
      $tag = @{Key='Name';Value=$tagValue}

      $tagSpecs = New-Object Amazon.EC2.Model.TagSpecification
      $tagSpecs.ResourceType = $resource
      $tagSpecs.Tags.Add($tag)

      $res = New-EC2Instance -ImageId $imageId -MinCount 1 -MaxCount 1 -InstanceType $type -TagSpecification $tagSpecs

      return @{'InstanceId'=$res.Instances.InstanceId}
      ```

------

   1. Expand **Additional inputs**. 

   1. For **Input name**, choose **InputPayload**. For **Input value**, enter the following YAML data. 

      ```
      image_id: "{{ imageId }}"
      tag_value: "{{ tagValue }}"
      instance_type: "{{ instanceType }}"
      ```

1. Expand **Outputs** and do the following:
   + For **Name**, enter **payload**.
   + For **Selector**, enter **$.Payload**.
   + For **Type**, choose `StringMap`.

1. Choose **Add step** to add a second step to the runbook. The second step queries the status of the instance launched in Step 1 and waits until the status returned is `ok`.

1. In the **Step 2** section, do the following.

   1. For **Step name**, enter this descriptive name for the second step of the automation: **WaitForInstanceStatusOk**.

   1. For **Action type**, choose **Run a script** (**aws:executeScript**).

   1. For **Description**, enter a description for the automation step, such as the following.

      ```
      **About This Step**

      The script continuously polls the instance status check value for the instance launched in Step 1 until the ```ok``` status is returned.
      ```

   1. For **Runtime**, choose the runtime language to use to run the provided script.

   1. For **Handler**, enter **poll\_instance**. This is the function name declared in the following script.
**Note**  
This is not required for PowerShell.

   1. For **Script**, replace the default contents with the following. Be sure to match the script with the corresponding runtime value.

------
#### [ Python ]

      ```
      def poll_instance(events, context):
          import boto3
          import time

          ec2 = boto3.client('ec2')

          instance_id = events['InstanceId']

          print('[INFO] Waiting for instance status check to report ok', instance_id)

          instance_status = "null"

          while True:
              res = ec2.describe_instance_status(InstanceIds=[instance_id])

              if len(res['InstanceStatuses']) == 0:
                  print("Instance status information is not available yet")
                  time.sleep(5)
                  continue

              instance_status = res['InstanceStatuses'][0]['InstanceStatus']['Status']

              print('[INFO] Polling to get status of the instance', instance_status)

              if instance_status == 'ok':
                  break

              time.sleep(10)

          return {'Status': instance_status, 'InstanceId': instance_id}
      ```

------
#### [ PowerShell ]

      ```
      Install-Module AWS.Tools.EC2 -Force
      Import-Module AWS.Tools.EC2

      $inputPayload = $env:InputPayload | ConvertFrom-Json

      $instanceId = $inputPayload.payload.InstanceId

      $status = Get-EC2InstanceStatus -InstanceId $instanceId

      while ($status.Status.Status -ne 'ok'){
          Write-Host 'Polling to get status of the instance', $instanceId

          Start-Sleep -Seconds 5

          $status = Get-EC2InstanceStatus -InstanceId $instanceId
      }

      return @{Status = $status.Status.Status; InstanceId = $instanceId}
      ```

------

   1. Expand **Additional inputs**. 

   1. For **Input name**, choose **InputPayload**. For **Input value**, enter the following:

      ```
      {{ LaunchEc2Instance.payload }}
      ```

1. Choose **Create automation** to save the runbook.

# Using scripts in runbooks


Automation runbooks support running scripts as part of the automation. Automation is a tool in AWS Systems Manager. By using runbooks, you can run scripts directly in AWS without creating a separate compute environment to run your scripts. Because runbooks can run script steps along with other automation step types, such as approvals, you can manually intervene in critical or ambiguous situations. You can send the output from `aws:executeScript` actions in your runbooks to Amazon CloudWatch Logs. For more information, see [Logging Automation action output with CloudWatch Logs](automation-action-logging.md).

## Permissions for using runbooks


To use a runbook, Systems Manager must use the permissions of an AWS Identity and Access Management (IAM) role. The method that Automation uses to determine which role's permissions to use depends on a few factors, and whether a step uses the `aws:executeScript` action. 

For runbooks that don't use `aws:executeScript`, Automation uses one of two sources of permissions:
+ The permissions of an IAM service role, or Assume role, that is specified in the runbook or passed in as a parameter.
+ If no IAM service role is specified, the permissions of the user who started the automation. 

When a step in a runbook includes the `aws:executeScript` action, however, an IAM service role (Assume role) is always required if the Python or PowerShell script specified for the action is calling any AWS API operations. Automation checks for this role in the following order:
+ The permissions of an IAM service role, or Assume role, that is specified in the runbook or passed in as a parameter.
+ If no role is found, Automation attempts to run the Python or PowerShell script specified for `aws:executeScript` without any permissions. If the script is calling an AWS API operation (for example the Amazon EC2 `CreateImage` operation), or attempting to act on an AWS resource (such as an EC2 instance), the step containing the script fails, and Systems Manager returns an error message reporting the failure. 
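
The resolution order above can be sketched as a small decision function. This is an illustrative model only, not service code, and the names are hypothetical:

```python
def effective_principal(assume_role_arn, caller, uses_execute_script):
    """Sketch of whose permissions Automation uses for a runbook.

    An explicit service role (Assume role) always wins. Without one,
    a script step that calls AWS API operations effectively has no
    permissions, while other steps fall back to the user who started
    the automation.
    """
    if assume_role_arn:
        return assume_role_arn      # service role specified or passed in
    if uses_execute_script:
        return None                 # script runs without any AWS permissions
    return caller                   # permissions of the starting user

print(effective_principal(None, "automation-user", uses_execute_script=False))  # automation-user
```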

## Adding scripts to runbooks


You can add scripts to your runbooks by including the script inline as part of a step in the runbook. You can also attach scripts to the runbook by uploading the scripts from your local machine or by specifying an Amazon Simple Storage Service (Amazon S3) bucket where the scripts are located. After a step that runs a script is complete, the output of the script is available as a JSON object, which you can then use as input for subsequent steps in your runbook. For more information about the `aws:executeScript` action and how to use attachments for scripts, see [`aws:executeScript` – Run a script](automation-action-executeScript.md).
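
As a minimal sketch, an inline Python script for `aws:executeScript` is a handler function whose JSON-serializable return value becomes the step's output. The handler and input names here are hypothetical:

```python
# Hypothetical inline script for an aws:executeScript step.
# The function name must match the step's Handler input.
def script_handler(events, context):
    # 'events' carries the step's InputPayload as a dict.
    name = events.get("resource_name", "example-resource")

    # The returned dict is serialized to JSON and made available to
    # subsequent steps as the step's output (selectable with $.Payload).
    return {"ResourceName": name, "Checked": True}

# Local smoke test; in an automation, the service invokes the handler.
print(script_handler({"resource_name": "demo"}, None))
```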

## Script constraints for runbooks


Runbooks enforce a limit of five file attachments. Scripts can either be in the form of a Python script (.py), a PowerShell Core script (.ps1), or attached as contents within a .zip file.
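
As an illustration of these constraints, a pre-flight check for a set of attachments might look like the following sketch (a hypothetical helper, not a Systems Manager API):

```python
# Illustrative check mirroring the documented attachment constraints.
ALLOWED_SUFFIXES = (".py", ".ps1", ".zip")
MAX_ATTACHMENTS = 5

def check_attachments(filenames):
    """Raise ValueError if the runbook attachment limits would be exceeded."""
    if len(filenames) > MAX_ATTACHMENTS:
        raise ValueError(f"Runbooks allow at most {MAX_ATTACHMENTS} attachments")
    for name in filenames:
        if not name.endswith(ALLOWED_SUFFIXES):
            raise ValueError(f"Unsupported attachment type: {name}")

check_attachments(["launch.py", "poll.ps1", "helpers.zip"])  # passes silently
```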

# Using conditional statements in runbooks


By default, the steps that you define in the `mainSteps` section of a runbook run in sequential order. After one action is completed, the next action specified in the `mainSteps` section begins. Furthermore, if an action fails to run, the entire automation fails (by default). You can use the `aws:branch` automation action and the runbook options described in this section to create automations that perform *conditional branching*. This means that you can create automations that jump to a different step after evaluating different choices or that dynamically respond to changes when a step is complete. Here is a list of options that you can use to create dynamic automations:
+ **`aws:branch`**: This automation action allows you to create a dynamic automation that evaluates multiple choices in a single step and then jumps to a different step in the runbook based on the results of that evaluation.
+ **`nextStep`**: This option specifies which step in an automation to process next after successfully completing a step. 
+ **`isEnd`**: This option stops an automation at the end of a specific step. The default value for this option is `false`.
+ **`isCritical`**: This option designates a step as critical for the successful completion of the automation. If a step with this designation fails, then Automation reports the final status of the automation as `Failed`. The default value for this option is `true`.
+ **`onFailure`**: This option indicates whether the automation should stop, continue, or go to a different step on failure. The default value for this option is `Abort`.

The following section describes the `aws:branch` automation action. For examples that use the `nextStep`, `isEnd`, `isCritical`, and `onFailure` options, see [Example `aws:branch` runbooks](#branch-runbook-examples) and the examples of conditional options later in this topic.

## Working with the `aws:branch` action


The `aws:branch` action offers the most dynamic conditional branching options for automations. As noted earlier, this action allows your automation to evaluate multiple conditions in a single step and then jump to a new step based on the results of that evaluation. The `aws:branch` action functions like an `IF-ELIF-ELSE` statement in programming.

Here is a YAML example of an `aws:branch` step.

```
- name: ChooseOSforCommands
  action: aws:branch
  inputs:
    Choices:
    - NextStep: runPowerShellCommand
      Variable: "{{GetInstance.platform}}"
      StringEquals: Windows
    - NextStep: runShellCommand
      Variable: "{{GetInstance.platform}}"
      StringEquals: Linux
    Default:
      PostProcessing
```

When you specify the `aws:branch` action for a step, you specify `Choices` that the automation must evaluate. The automation can evaluate `Choices` based on the value of a parameter that you specified in the `Parameters` section of the runbook. The automation can also evaluate `Choices` based on output from a previous step.

The automation evaluates each choice by using a Boolean expression. If the evaluation determines that the first choice is `true`, then the automation jumps to the step designated for that choice. If the evaluation determines that the first choice is `false`, then the automation evaluates the next choice. If your step includes three or more `Choices`, then the automation evaluates each choice in sequential order until it evaluates a choice that is `true`. The automation then jumps to the designated step for the `true` choice.

If none of the `Choices` are `true`, the automation checks to see if the step contains a `Default` value. A `Default` value defines a step that the automation should jump to if none of the choices are `true`. If no `Default` value is specified for the step, then the automation processes the next step in the runbook.
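
The evaluation order described above behaves like an `IF-ELIF-ELSE` chain, which the following Python sketch makes concrete (the helper is illustrative; the real evaluation happens inside the Automation service):

```python
def next_step(choices, default=None):
    """Return the step to jump to, mimicking aws:branch evaluation order.

    'choices' is a list of (is_true, step_name) pairs already evaluated
    in document order. The first true choice wins; otherwise fall back
    to the Default step, if one is specified.
    """
    for is_true, step_name in choices:
        if is_true:          # IF / ELIF: first true choice wins
            return step_name
    return default           # ELSE: Default step, or None (automation stops)

platform = "Linux"
step = next_step(
    [(platform == "Windows", "runPowerShellCommand"),
     (platform == "Linux", "runShellCommand")],
    default="PostProcessing",
)
print(step)  # runShellCommand
```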

Here is an `aws:branch` step in YAML named **chooseOSfromParameter**. The step includes two `Choices`: (`NextStep: runWindowsCommand`) and (`NextStep: runLinuxCommand`). The automation evaluates these `Choices` to determine which command to run for the appropriate operating system. The `Variable` for each choice uses `{{OSName}}`, which is a parameter that the runbook author defined in the `Parameters` section of the runbook.

```
mainSteps:
- name: chooseOSfromParameter
  action: aws:branch
  inputs:
    Choices:
    - NextStep: runWindowsCommand
      Variable: "{{OSName}}"
      StringEquals: Windows
    - NextStep: runLinuxCommand
      Variable: "{{OSName}}"
      StringEquals: Linux
```

Here is an `aws:branch` step in YAML named **chooseOSfromOutput**. The step includes two `Choices`: (`NextStep: runPowerShellCommand`) and (`NextStep: runShellCommand`). The automation evaluates these `Choices` to determine which command to run for the appropriate operating system. The `Variable` for each choice uses `{{GetInstance.platform}}`, which is the output from an earlier step in the runbook. This example also includes an option called `Default`. If the automation evaluates both `Choices`, and neither choice is `true`, then the automation jumps to a step called `PostProcessing`.

```
mainSteps:
- name: chooseOSfromOutput
  action: aws:branch
  inputs:
    Choices:
    - NextStep: runPowerShellCommand
      Variable: "{{GetInstance.platform}}"
      StringEquals: Windows
    - NextStep: runShellCommand
      Variable: "{{GetInstance.platform}}"
      StringEquals: Linux
    Default:
      PostProcessing
```

### Creating an `aws:branch` step in a runbook


When you create an `aws:branch` step in a runbook, you define the `Choices` the automation should evaluate to determine which step the automation should jump to next. As noted earlier, `Choices` are evaluated by using a Boolean expression. Each choice must define the following options:
+ **NextStep**: The next step in the runbook to process if the designated choice is `true`.
+ **Variable**: Specify the name of a parameter defined in the `Parameters` section of the runbook, a variable defined in the `Variables` section, or an output object from a previous step.

  Specify variable values by using the following form.

  `Variable: "{{variable name}}"`

  Specify parameter values by using the following form.

  `Variable: "{{parameter name}}"`

  Specify output object variables by using the following form.

  `Variable: "{{previousStepName.outputName}}"`
**Note**  
Creating the output variable is described in more detail in the next section, [About creating the output variable](#branch-action-output).
+ **Operation**: The criteria used to evaluate the choice, such as `StringEquals: Linux`. The `aws:branch` action supports the following operations:

**String operations**
  + StringEquals
  + EqualsIgnoreCase
  + StartsWith
  + EndsWith
  + Contains

**Numeric operations**
  + NumericEquals
  + NumericGreater
  + NumericLesser
  + NumericGreaterOrEquals
  + NumericLesserOrEquals

**Boolean operation**
  + BooleanEquals
**Important**  
When you create a runbook, the system validates each operation in the runbook. If an operation isn't supported, the system returns an error when you try to create the runbook.
+ **Default**: Specify a fallback step that the automation should jump to if none of the `Choices` are `true`.
**Note**  
If you don't want to specify a `Default` value, then you can specify the `isEnd` option. If none of the `Choices` are `true` and no `Default` value is specified, then the automation stops at the end of the step.
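
To make the comparison semantics concrete, the following sketch models the operations as Python predicates (an illustrative mapping, not Systems Manager code; `BooleanEquals` assumes already-typed Boolean values):

```python
# Illustrative predicates for the aws:branch operations.
OPERATIONS = {
    "StringEquals":           lambda var, val: var == val,
    "EqualsIgnoreCase":       lambda var, val: var.lower() == val.lower(),
    "StartsWith":             lambda var, val: var.startswith(val),
    "EndsWith":               lambda var, val: var.endswith(val),
    "Contains":               lambda var, val: val in var,
    "NumericEquals":          lambda var, val: float(var) == float(val),
    "NumericGreater":         lambda var, val: float(var) > float(val),
    "NumericLesser":          lambda var, val: float(var) < float(val),
    "NumericGreaterOrEquals": lambda var, val: float(var) >= float(val),
    "NumericLesserOrEquals":  lambda var, val: float(var) <= float(val),
    "BooleanEquals":          lambda var, val: var == val,
}

print(OPERATIONS["EqualsIgnoreCase"]("WINDOWS", "Windows"))  # True
print(OPERATIONS["NumericGreater"]("10", "2"))               # True
```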

Use the following templates to help you construct the `aws:branch` step in your runbook. Replace each *example resource placeholder* with your own information.

------
#### [ YAML ]

```
mainSteps:
- name: step name
  action: aws:branch
  inputs:
    Choices:
    - NextStep: step to jump to if evaluation for this choice is true
      Variable: "{{parameter name or output from previous step}}"
      Operation type: Operation value
    - NextStep: step to jump to if evaluation for this choice is true
      Variable: "{{parameter name or output from previous step}}"
      Operation type: Operation value
    Default:
      step to jump to if all choices are false
```

------
#### [ JSON ]

```
{
   "mainSteps":[
      {
         "name":"a name for the step",
         "action":"aws:branch",
         "inputs":{
            "Choices":[
               {
                  "NextStep":"step to jump to if evaluation for this choice is true",
                  "Variable":"{{parameter name or output from previous step}}",
                  "Operation type":"Operation value"
               },
               {
                  "NextStep":"step to jump to if evaluation for this choice is true",
                  "Variable":"{{parameter name or output from previous step}}",
                  "Operation type":"Operation value"
               }
            ],
            "Default":"step to jump to if all choices are false"
         }
      }
   ]
}
```

------

#### About creating the output variable


To create an `aws:branch` choice that references the output from a previous step, you need to identify the name of the previous step and the name of the output field. You then combine the names of the step and the field by using the following format.

`Variable: "{{previousStepName.outputName}}"`

For example, the first step in the following example is named `GetInstance`, and its `outputs` section includes a field named `platform`. In the second step (`ChooseOSforCommands`), the author wants to reference the output from the `platform` field as a variable. To create the variable, combine the step name (`GetInstance`) and the output field name (`platform`) to form `Variable: "{{GetInstance.platform}}"`.

```
mainSteps:
- name: GetInstance
  action: aws:executeAwsApi
  inputs:
    Service: ssm
    Api: DescribeInstanceInformation
    Filters:
    - Key: InstanceIds
      Values: ["{{ InstanceId }}"]
  outputs:
  - Name: myInstance
    Selector: "$.InstanceInformationList[0].InstanceId"
    Type: String
  - Name: platform
    Selector: "$.InstanceInformationList[0].PlatformType"
    Type: String
- name: ChooseOSforCommands
  action: aws:branch
  inputs:
    Choices:
    - NextStep: runPowerShellCommand
      Variable: "{{GetInstance.platform}}"
      StringEquals: Windows
    - NextStep: runShellCommand
      Variable: "{{GetInstance.platform}}"
      StringEquals: Linux
    Default:
      Sleep
```
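
The `{{ stepName.outputName }}` reference pattern amounts to a template substitution, which the following sketch illustrates (a hypothetical helper; it assumes step outputs are recorded in a per-step dictionary):

```python
import re

def resolve(template, step_outputs):
    """Replace {{ step.output }} references with recorded output values."""
    def sub(match):
        step, output = match.group(1), match.group(2)
        return str(step_outputs[step][output])
    return re.sub(r"\{\{\s*(\w+)\.(\w+)\s*\}\}", sub, template)

outputs = {"GetInstance": {"platform": "Windows", "myInstance": "i-0abcd1234"}}
print(resolve("{{GetInstance.platform}}", outputs))      # Windows
print(resolve("{{ GetInstance.myInstance }}", outputs))  # i-0abcd1234
```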

Here is an example that shows how `Variable: "{{ describeInstance.Platform }}"` is created from the previous step name and the output field name.

```
- name: describeInstance
  action: aws:executeAwsApi
  onFailure: Abort
  inputs:
    Service: ec2
    Api: DescribeInstances
    InstanceIds:
    - "{{ InstanceId }}"
  outputs:
  - Name: Platform
    Selector: "$.Reservations[0].Instances[0].Platform"
    Type: String
  nextStep: branchOnInstancePlatform
- name: branchOnInstancePlatform
  action: aws:branch
  inputs:
    Choices:
    - NextStep: runEC2RescueForWindows
      Variable: "{{ describeInstance.Platform }}"
      StringEquals: windows
    Default: runEC2RescueForLinux
```

### Example `aws:branch` runbooks


Here are some example runbooks that use `aws:branch`.

**Example 1: Using `aws:branch` with an output variable to run commands based on the operating system type**

In the first step of this example (`GetInstance`), the runbook author uses the `aws:executeAwsApi` action to call the `ssm` `DescribeInstanceInformation` API operation. The author uses this action to determine the type of operating system being used by an instance. The `aws:executeAwsApi` action outputs the instance ID and the platform type.

In the second step (`ChooseOSforCommands`), the author uses the `aws:branch` action with two `Choices` (`NextStep: runPowerShellCommand`) and (`NextStep: runShellCommand`). The automation evaluates the operating system of the instance by using the output from the previous step (`Variable: "{{GetInstance.platform}}"`). The automation jumps to a step for the designated operating system.

```
---
schemaVersion: '0.3'
assumeRole: "{{AutomationAssumeRole}}"
parameters:
  AutomationAssumeRole:
    default: ""
    type: String
mainSteps:
- name: GetInstance
  action: aws:executeAwsApi
  inputs:
    Service: ssm
    Api: DescribeInstanceInformation
  outputs:
  - Name: myInstance
    Selector: "$.InstanceInformationList[0].InstanceId"
    Type: String
  - Name: platform
    Selector: "$.InstanceInformationList[0].PlatformType"
    Type: String
- name: ChooseOSforCommands
  action: aws:branch
  inputs:
    Choices:
    - NextStep: runPowerShellCommand
      Variable: "{{GetInstance.platform}}"
      StringEquals: Windows
    - NextStep: runShellCommand
      Variable: "{{GetInstance.platform}}"
      StringEquals: Linux
    Default:
      Sleep
- name: runShellCommand
  action: aws:runCommand
  inputs:
    DocumentName: AWS-RunShellScript
    InstanceIds:
    - "{{GetInstance.myInstance}}"
    Parameters:
      commands:
      - ls
  isEnd: true
- name: runPowerShellCommand
  action: aws:runCommand
  inputs:
    DocumentName: AWS-RunPowerShellScript
    InstanceIds:
    - "{{GetInstance.myInstance}}"
    Parameters:
      commands:
      - ls
  isEnd: true
- name: Sleep
  action: aws:sleep
  inputs:
    Duration: PT3S
```

**Example 2: Using `aws:branch` with a parameter variable to run commands based on the operating system type**

The runbook author defines several parameter options at the beginning of the runbook in the `parameters` section. One parameter is named `OperatingSystemName`. In the first step (`ChooseOS`), the author uses the `aws:branch` action with two `Choices` (`NextStep: runWindowsCommand`) and (`NextStep: runLinuxCommand`). The variable for these `Choices` references the parameter option specified in the parameters section (`Variable: "{{OperatingSystemName}}"`). When the user runs this runbook, they specify a value at runtime for `OperatingSystemName`. The automation uses the runtime parameter during the `Choices` evaluation. The automation jumps to a step for the designated operating system based on the runtime parameter specified for `OperatingSystemName`.

```
---
schemaVersion: '0.3'
assumeRole: "{{AutomationAssumeRole}}"
parameters:
  AutomationAssumeRole:
    default: ""
    type: String
  OperatingSystemName:
    type: String
  LinuxInstanceId:
    type: String
  WindowsInstanceId:
    type: String
mainSteps:
- name: ChooseOS
  action: aws:branch
  inputs:
    Choices:
    - NextStep: runWindowsCommand
      Variable: "{{OperatingSystemName}}"
      StringEquals: windows
    - NextStep: runLinuxCommand
      Variable: "{{OperatingSystemName}}"
      StringEquals: linux
    Default:
      Sleep
- name: runLinuxCommand
  action: aws:runCommand
  inputs:
    DocumentName: "AWS-RunShellScript"
    InstanceIds:
    - "{{LinuxInstanceId}}"
    Parameters:
      commands:
      - ls
  isEnd: true
- name: runWindowsCommand
  action: aws:runCommand
  inputs:
    DocumentName: "AWS-RunPowerShellScript"
    InstanceIds:
    - "{{WindowsInstanceId}}"
    Parameters:
      commands:
      - date
  isEnd: true
- name: Sleep
  action: aws:sleep
  inputs:
    Duration: PT3S
```

### Creating complex branching automations with operators


You can create complex branching automations by using the `And`, `Or`, and `Not` operators in your `aws:branch` steps.

**The 'And' operator**  
Use the `And` operator when you want multiple variables to be `true` for a choice. In the following example, the first choice evaluates if an instance is `running` and uses the `Windows` operating system. If the evaluation of *both* of these variables is true, then the automation jumps to the `runPowerShellCommand` step. If one or more of the variables is `false`, then the automation evaluates the variables for the second choice.

```
mainSteps:
- name: switch2
  action: aws:branch
  inputs:
    Choices:
    - And:
      - Variable: "{{GetInstance.pingStatus}}"
        StringEquals: running
      - Variable: "{{GetInstance.platform}}"
        StringEquals: Windows
      NextStep: runPowerShellCommand

    - And:
      - Variable: "{{GetInstance.pingStatus}}"
        StringEquals: running
      - Variable: "{{GetInstance.platform}}"
        StringEquals: Linux
      NextStep: runShellCommand
    Default:
      sleep3
```

**The 'Or' operator**  
Use the `Or` operator when you want *any* of multiple variables to be `true` for a choice. In the following example, the first choice evaluates whether a parameter string is `Windows` or whether a Boolean parameter is `true`. If *either* of these variables is `true`, then the automation jumps to the `RunPowershellCommand` step. If both variables are `false`, then the automation evaluates the variables for the second choice.

```
- Or:
  - Variable: "{{parameter1}}"
    StringEquals: Windows
  - Variable: "{{BooleanParam1}}"
    BooleanEquals: true
  NextStep: RunPowerShellCommand
- Or:
  - Variable: "{{parameter2}}"
    StringEquals: Linux
  - Variable: "{{BooleanParam2}}"
    BooleanEquals: true
  NextStep: RunShellScript
```

**The 'Not' operator**  
Use the `Not` operator when you want to jump to a defined step when a variable is *not* a specified value. In the following example, the first choice evaluates if a parameter string is *not* `Linux`. If the evaluation determines that the variable isn't Linux, then the automation jumps to the `sleep2` step. If the evaluation of the first choice determines that it *is* Linux, then the automation evaluates the next choice.

```
mainSteps:
- name: switch
  action: aws:branch
  inputs:
    Choices:
    - NextStep: sleep2
      Not:
        Variable: "{{testParam}}"
        StringEquals: Linux
    - NextStep: sleep1
      Variable: "{{testParam}}"
      StringEquals: Windows
    Default:
      sleep3
```

## Examples of how to use conditional options


This section includes different examples of how to use dynamic options in a runbook. Each example in this section extends the following runbook. This runbook has two actions. The first action is named `InstallMsiPackage`. It uses the `aws:runCommand` action to install an application on a Windows Server instance. The second action is named `TestInstall`. It uses the `aws:invokeLambdaFunction` action to perform a test of the installed application if the application installed successfully. Step one specifies `onFailure: Abort`. This means that if the application didn't install successfully, the automation stops before step two.

**Example 1: Runbook with two linear actions**

```
---
schemaVersion: '0.3'
description: Install MSI package and run validation.
assumeRole: "{{automationAssumeRole}}"
parameters:
  automationAssumeRole:
    type: String
    description: "(Required) Assume role."
  packageName:
    type: String
    description: "(Required) MSI package to be installed."
  instanceIds:
    type: String
    description: "(Required) Comma separated list of instances."
mainSteps:
- name: InstallMsiPackage
  action: aws:runCommand
  maxAttempts: 2
  onFailure: Abort
  inputs:
    InstanceIds:
    - "{{instanceIds}}"
    DocumentName: AWS-RunPowerShellScript
    Parameters:
      commands:
      - msiexec /i {{packageName}}
- name: TestInstall
  action: aws:invokeLambdaFunction
  maxAttempts: 1
  timeoutSeconds: 500
  inputs:
    FunctionName: TestLambdaFunction
...
```

**Creating a dynamic automation that jumps to different steps by using the `onFailure` option**

The following example uses the `onFailure: step:step name`, `nextStep`, and `isEnd` options to create a dynamic automation. With this example, if the `InstallMsiPackage` action fails, then the automation jumps to an action called *PostFailure* (`onFailure: step:PostFailure`) to run an AWS Lambda function to perform some action in the event the install failed. If the install succeeds, then the automation jumps to the TestInstall action (`nextStep: TestInstall`). Both the `TestInstall` and the `PostFailure` steps use the `isEnd` option (`isEnd: true`) so that the automation finishes when either of those steps is completed.

**Note**  
Using the `isEnd` option in the last step of the `mainSteps` section is optional. If the last step doesn't jump to other steps, then the automation stops after running the action in the last step.

**Example 2: A dynamic automation that jumps to different steps**

```
mainSteps:
- name: InstallMsiPackage
  action: aws:runCommand
  onFailure: step:PostFailure
  maxAttempts: 2
  inputs:
    InstanceIds:
    - "{{instanceIds}}"
    DocumentName: AWS-RunPowerShellScript
    Parameters:
      commands:
      - msiexec /i {{packageName}}
  nextStep: TestInstall
- name: TestInstall
  action: aws:invokeLambdaFunction
  maxAttempts: 1
  timeoutSeconds: 500
  inputs:
    FunctionName: TestLambdaFunction
  isEnd: true
- name: PostFailure
  action: aws:invokeLambdaFunction
  maxAttempts: 1
  timeoutSeconds: 500
  inputs:
    FunctionName: PostFailureRecoveryLambdaFunction
  isEnd: true
...
```

**Note**  
Before processing a runbook, the system verifies that the runbook doesn't create an infinite loop. If an infinite loop is detected, Automation returns an error and a circle trace showing which steps create the loop.

**Creating a dynamic automation that defines critical steps**

You can specify that a step is critical for the overall success of the automation. If a critical step fails, then Automation reports the status of the automation as `Failed`, even if one or more steps ran successfully. In the following example, the user directs the automation to the `VerifyDependencies` step if the `InstallMsiPackage` step fails (`onFailure: step:VerifyDependencies`). The user also specifies that the `InstallMsiPackage` step isn't critical (`isCritical: false`). In this example, if the application fails to install, Automation processes the `VerifyDependencies` step to determine whether one or more missing dependencies caused the application install to fail. 

**Example 3: Defining critical steps for the automation**

```
---
name: InstallMsiPackage
action: aws:runCommand
onFailure: step:VerifyDependencies
isCritical: false
maxAttempts: 2
inputs:
  InstanceIds:
  - "{{instanceIds}}"
  DocumentName: AWS-RunPowerShellScript
  Parameters:
    commands:
    - msiexec /i {{packageName}}
nextStep: TestPackage
...
```

# Using action outputs as inputs


Several automation actions return pre-defined outputs. You can pass these outputs as inputs to later steps in your runbook using the format `{{stepName.outputName}}`. You can also define custom outputs for automation actions in your runbooks. This allows you to run a script or invoke an API operation for another AWS service once, and then reuse the values as inputs in later actions. Parameter types in runbooks are static, meaning the parameter type can't be changed after it's defined. To define a step output, provide the following fields:
+ Name: (Required) The output name, used to reference the output value in later steps.
+ Selector: (Required) The JSONPath expression used to determine the output value.
+ Type: (Optional) The data type of the value returned by the selector. Valid type values are `String`, `Integer`, `Boolean`, `StringList`, `StringMap`, and `MapList`. The default value is `String`.

If the value of an output doesn't match the data type you've specified, Automation tries to convert the data type. For example, if the value returned is an `Integer`, but the `Type` specified is `String`, the final output value is a `String` value. The following type conversions are supported:
+ `String` values can be converted to `StringList`, `Integer` and `Boolean`.
+ `Integer` values can be converted to `String` and `StringList`.
+ `Boolean` values can be converted to `String` and `StringList`.
+ `StringList`, `IntegerList`, or `BooleanList` values containing one element can be converted to `String`, `Integer`, or `Boolean`.

When using parameters or outputs with automation actions, the data type can't be dynamically changed within an action's input.
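As a minimal sketch of the `Integer`-to-`String` conversion described above (the step name is hypothetical), the following output definition selects the `State.Code` value, which the API returns as an `Integer`. Because `Type: String` is specified, later steps receive the value as a `String`.

```
- name: getStateCode                  # hypothetical step name
  action: aws:executeAwsApi
  inputs:
    Service: ec2
    Api: DescribeInstances
  outputs:
  - Name: StateCode
    Selector: "$.Reservations[0].Instances[0].State.Code"
    Type: String                      # an Integer value such as 16 is converted to the String "16"
```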

Here is an example runbook that demonstrates how to define action outputs and reference the values as inputs for later actions. The runbook does the following:
+ Uses the `aws:executeAwsApi` action to call the Amazon EC2 DescribeImages API operation to get the name of a specific Windows Server 2022 AMI. It outputs the image ID as `ImageId`.
+ Uses the `aws:executeAwsApi` action to call the Amazon EC2 RunInstances API operation to launch one instance that uses the `ImageId` from the previous step. It outputs the instance ID as `InstanceId`.
+ Uses the `aws:waitForAwsResourceProperty` action to poll the Amazon EC2 DescribeInstanceStatus API operation to wait for the instance to reach the `running` state. The step times out if the instance state doesn't reach `running` after 60 seconds of polling.
+ Uses the `aws:assertAwsResourceProperty` action to call the Amazon EC2 `DescribeInstanceStatus` API operation to assert that the instance is in the `running` state. The step fails if the instance state isn't `running`.

```
---
description: Sample runbook using AWS API operations
schemaVersion: '0.3'
assumeRole: "{{ AutomationAssumeRole }}"
parameters:
  AutomationAssumeRole:
    type: String
    description: "(Optional) The ARN of the role that allows Automation to perform the actions on your behalf."
    default: ''
  ImageName:
    type: String
    description: "(Optional) Image Name to launch EC2 instance with."
    default: "Windows_Server-2022-English-Full-Base*"
mainSteps:
- name: getImageId
  action: aws:executeAwsApi
  inputs:
    Service: ec2
    Api: DescribeImages
    Filters:  
    - Name: "name"
      Values: 
      - "{{ ImageName }}"
  outputs:
  - Name: ImageId
    Selector: "$.Images[0].ImageId"
    Type: "String"
- name: launchOneInstance
  action: aws:executeAwsApi
  inputs:
    Service: ec2
    Api: RunInstances
    ImageId: "{{ getImageId.ImageId }}"
    MaxCount: 1
    MinCount: 1
  outputs:
  - Name: InstanceId
    Selector: "$.Instances[0].InstanceId"
    Type: "String"
- name: waitUntilInstanceStateRunning
  action: aws:waitForAwsResourceProperty
  timeoutSeconds: 60
  inputs:
    Service: ec2
    Api: DescribeInstanceStatus
    InstanceIds:
    - "{{ launchOneInstance.InstanceId }}"
    PropertySelector: "$.InstanceStatuses[0].InstanceState.Name"
    DesiredValues:
    - running
- name: assertInstanceStateRunning
  action: aws:assertAwsResourceProperty
  inputs:
    Service: ec2
    Api: DescribeInstanceStatus
    InstanceIds:
    - "{{ launchOneInstance.InstanceId }}"
    PropertySelector: "$.InstanceStatuses[0].InstanceState.Name"
    DesiredValues:
    - running
outputs:
- "launchOneInstance.InstanceId"
...
```

Each of the previously described automation actions allows you to call a specific API operation by specifying the service namespace, the API operation name, the input parameters, and the output parameters. Inputs are defined by the API operation that you choose. You can view the API operations (also called methods) by choosing a service in the left navigation on the following [Services Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html) page. Choose a method in the **Client** section for the service that you want to invoke. For example, all API operations (methods) for Amazon Relational Database Service (Amazon RDS) are listed on the following page: [Amazon RDS methods](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html).

You can view the schema for each automation action in the following locations:
+ [`aws:assertAwsResourceProperty` – Assert an AWS resource state or event state](automation-action-assertAwsResourceProperty.md)
+ [`aws:executeAwsApi` – Call and run AWS API operations](automation-action-executeAwsApi.md)
+ [`aws:waitForAwsResourceProperty` – Wait on an AWS resource property](automation-action-waitForAwsResourceProperty.md)

The schemas include descriptions of the required fields for using each action.

**Using the Selector/PropertySelector fields**  
Each Automation action requires that you specify either an output `Selector` (for `aws:executeAwsApi`) or a `PropertySelector` (for `aws:assertAwsResourceProperty` and `aws:waitForAwsResourceProperty`). These fields are used to process the JSON response from an AWS API operation. These fields use the JSONPath syntax.

Here is an example to help illustrate this concept for the `aws:executeAwsApi` action.

```
---
mainSteps:
- name: getImageId
  action: aws:executeAwsApi
  inputs:
    Service: ec2
    Api: DescribeImages
    Filters:  
      - Name: "name"
        Values: 
          - "{{ ImageName }}"
  outputs:
    - Name: ImageId
      Selector: "$.Images[0].ImageId"
      Type: "String"
...
```

In the `aws:executeAwsApi` step `getImageId`, the automation invokes the `DescribeImages` API operation and receives a response from `ec2`. The automation then applies `Selector - "$.Images[0].ImageId"` to the API response and assigns the selected value to the output `ImageId` variable. Other steps in the same automation can use the value of `ImageId` by specifying `"{{ getImageId.ImageId }}"`.
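For instance, a later step (the step name here is hypothetical) could consume that output as follows:

```
- name: launchFromImage                 # hypothetical step name
  action: aws:executeAwsApi
  inputs:
    Service: ec2
    Api: RunInstances
    ImageId: "{{ getImageId.ImageId }}" # value selected by the getImageId step
    MinCount: 1
    MaxCount: 1
```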

Here is an example to help illustrate this concept for the `aws:waitForAwsResourceProperty` action.

```
---
- name: waitUntilInstanceStateRunning
  action: aws:waitForAwsResourceProperty
  # timeout is strongly encouraged for action - aws:waitForAwsResourceProperty
  timeoutSeconds: 60
  inputs:
    Service: ec2
    Api: DescribeInstanceStatus
    InstanceIds:
    - "{{ launchOneInstance.InstanceId }}"
    PropertySelector: "$.InstanceStatuses[0].InstanceState.Name"
    DesiredValues:
    - running
...
```

In the `aws:waitForAwsResourceProperty` step `waitUntilInstanceStateRunning`, the automation invokes the `DescribeInstanceStatus` API operation and receives a response from `ec2`. The automation then applies `PropertySelector - "$.InstanceStatuses[0].InstanceState.Name"` to the response and checks if the specified returned value matches a value in the `DesiredValues` list (in this case `running`). The step repeats the process until the response returns an instance state of `running`. 

## Using JSONPath in runbooks


A JSONPath expression is a string beginning with `$.` that is used to select one or more components within a JSON element. The following list includes information about JSONPath operators that are supported by Systems Manager Automation:
+ **Dot-notated child (.)**: Use with a JSON object. This operator selects the value of a specific key.
+ **Deep-scan (..)**: Use with a JSON element. This operator scans the JSON element level by level and selects a list of values with the specific key. The return type of this operator is always a JSON array. In the context of an automation action output type, the operator can be either StringList or MapList.
+ **Array-Index ([ ])**: Use with a JSON array. This operator gets the value of a specific index.
+ **Filter ([?(*expression*)])**: Use with a JSON array. This operator filters JSON array values that match the criteria defined in the filter expression. Filter expressions can only use the following operators: `==`, `!=`, `>`, `<`, `>=`, or `<=`. Combining multiple filter expressions with AND (`&&`) or OR (`||`) is not supported. The return type of this operator is always a JSON array.

To better understand JSONPath operators, review the following JSON response from the ec2 `DescribeInstances` API operation. Following this response are several examples that show different results by applying different JSONPath expressions to the response from the `DescribeInstances` API operation.

```
{
    "NextToken": "abcdefg",
    "Reservations": [
        {
            "OwnerId": "123456789012",
            "ReservationId": "r-abcd12345678910",
            "Instances": [
                {
                    "ImageId": "ami-12345678",
                    "BlockDeviceMappings": [
                        {
                            "Ebs": {
                                "DeleteOnTermination": true,
                                "Status": "attached",
                                "VolumeId": "vol-000000000000"
                            },
                            "DeviceName": "/dev/xvda"
                        }
                    ],
                    "State": {
                        "Code": 16,
                        "Name": "running"
                    }
                }
            ],
            "Groups": []
        },
        {
            "OwnerId": "123456789012",
            "ReservationId": "r-12345678910abcd",
            "Instances": [
                {
                    "ImageId": "ami-12345678",
                    "BlockDeviceMappings": [
                        {
                            "Ebs": {
                                "DeleteOnTermination": true,
                                "Status": "attached",
                                "VolumeId": "vol-111111111111"
                            },
                            "DeviceName": "/dev/xvda"
                        }
                    ],
                    "State": {
                        "Code": 80,
                        "Name": "stopped"
                    }
                }
            ],
            "Groups": []
        }
    ]
}
```

**JSONPath Example 1: Get a specific String from a JSON response**

```
JSONPath: 
$.Reservations[0].Instances[0].ImageId 

Returns:
"ami-12345678"

Type: String
```

**JSONPath Example 2: Get a specific Boolean from a JSON response**

```
JSONPath:
$.Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.DeleteOnTermination
        
Returns:
true

Type: Boolean
```

**JSONPath Example 3: Get a specific Integer from a JSON response**

```
JSONPath:
$.Reservations[0].Instances[0].State.Code
        
Returns:
16

Type: Integer
```

**JSONPath Example 4: Deep scan a JSON response, then get all of the values for VolumeId as a StringList** 

```
JSONPath:
$.Reservations..BlockDeviceMappings..VolumeId
        
Returns:
[
   "vol-000000000000",
   "vol-111111111111"
]

Type: StringList
```

**JSONPath Example 5: Get a specific BlockDeviceMappings object as a StringMap**

```
JSONPath:
$.Reservations[0].Instances[0].BlockDeviceMappings[0]
        
Returns:
{
   "Ebs" : {
      "DeleteOnTermination" : true,
      "Status" : "attached",
      "VolumeId" : "vol-000000000000"
   },
   "DeviceName" : "/dev/xvda"
}

Type: StringMap
```

**JSONPath Example 6: Deep scan a JSON response, then get all of the State objects as a MapList**

```
JSONPath:
$.Reservations..Instances..State 
    
Returns:
[
   {
      "Code" : 16,
      "Name" : "running"
   },
   {
      "Code" : 80,
      "Name" : "stopped"
   }
]

Type: MapList
```

**JSONPath Example 7: Filter for instances in the `running` state**

```
JSONPath:
$.Reservations..Instances[?(@.State.Name == 'running')]

Returns:
[
  {
    "ImageId": "ami-12345678",
    "BlockDeviceMappings": [
      {
        "Ebs": {
          "DeleteOnTermination": true,
          "Status": "attached",
          "VolumeId": "vol-000000000000"
        },
        "DeviceName": "/dev/xvda"
      }
    ],
    "State": {
      "Code": 16,
      "Name": "running"
    }
  }
]

Type: MapList
```

**JSONPath Example 8: Return the `ImageId` of instances which aren't in the `running` state**

```
JSONPath:
$.Reservations..Instances[?(@.State.Name != 'running')].ImageId

Returns:
[
  "ami-12345678"
]

Type: StringList | String
```
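Tying these examples back to runbook outputs: a deep-scan expression like the one in Example 4 can be used directly as a `Selector` with `Type: StringList`. The following is a sketch, not part of the examples above; the step name is hypothetical.

```
- name: getVolumeIds                  # hypothetical step name
  action: aws:executeAwsApi
  inputs:
    Service: ec2
    Api: DescribeInstances
  outputs:
  - Name: VolumeIds
    Selector: "$.Reservations..BlockDeviceMappings..VolumeId"
    Type: StringList
```

Later steps could then reference the list of volume IDs as `{{ getVolumeIds.VolumeIds }}`.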

# Creating webhook integrations for Automation


To send messages using webhooks during an automation, create an integration. Integrations can be invoked during an automation by using the `aws:invokeWebhook` action in your runbook. If you haven't already created a webhook, see [Creating webhooks for integrations](#creating-webhooks). To learn more about the `aws:invokeWebhook` action, see [`aws:invokeWebhook` – Invoke an Automation webhook integration](invoke-webhook.md).

As shown in the following procedures, you can create an integration by using either the Systems Manager Automation console or your preferred command line tool. 

## Creating integrations (console)


**To create an integration for Automation (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**.

1. Choose the **Integrations** tab.

1. Select **Add integration**, and choose **Webhook**.

1. Enter the required values and any optional values you want to include for the integration.

1. Choose **Add** to create the integration.

## Creating integrations (command line)


To create an integration using command line tools, you must create the required `SecureString` parameter for an integration. Automation uses a reserved namespace in Parameter Store, a tool in Systems Manager, to store information about your integration. If you create an integration using the AWS Management Console, Automation handles this process for you. Following the namespace, you must specify the type of integration you want to create and then the name of your integration. Currently, Automation supports `webhook` type integrations.

The supported fields for `webhook` type integrations are as follows:
+ `description`
+ `headers`
+ `payload`
+ `url`

**Before you begin**  
If you haven't already, install and configure the AWS Command Line Interface (AWS CLI) or the AWS Tools for PowerShell. For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

**To create an integration for Automation (command line)**
+ Run the following commands to create the required `SecureString` parameter for an integration. Replace each *example resource placeholder* with your own information. The `/d9d01087-4a3f-49e0-b0b4-d568d7826553/ssm/integrations/webhook/` namespace is reserved in Parameter Store for integrations. The name of your parameter must use this namespace followed by the name of your integration. For example `/d9d01087-4a3f-49e0-b0b4-d568d7826553/ssm/integrations/webhook/myWebhookIntegration`.

------
#### [ Linux & macOS ]

  ```
  aws ssm put-parameter \
      --name "/d9d01087-4a3f-49e0-b0b4-d568d7826553/ssm/integrations/webhook/myWebhookIntegration" \
      --type "SecureString" \
      --data-type "aws:ssm:integration" \
      --value '{"description": "My first webhook integration for Automation.", "url": "myWebHookURL"}'
  ```

------
#### [ Windows ]

  ```
  aws ssm put-parameter ^
      --name "/d9d01087-4a3f-49e0-b0b4-d568d7826553/ssm/integrations/webhook/myWebhookIntegration" ^
      --type "SecureString" ^
      --data-type "aws:ssm:integration" ^
      --value  "{\"description\":\"My first webhook integration for Automation.\",\"url\":\"myWebHookURL\"}"
  ```

------
#### [ PowerShell ]

  ```
  Write-SSMParameter `
      -Name "/d9d01087-4a3f-49e0-b0b4-d568d7826553/ssm/integrations/webhook/myWebhookIntegration" `
      -Type "SecureString" `
      -DataType "aws:ssm:integration" `
      -Value '{"description": "My first webhook integration for Automation.", "url": "myWebHookURL"}'
  ```

------

## Creating webhooks for integrations


When creating webhooks with your provider, note the following:
+ Protocol must be HTTPS.
+ Custom request headers are supported.
+ A default request body can be specified.
+ The default request body can be overridden when an integration is invoked by using the `aws:invokeWebhook` action.
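Once the webhook and integration exist, a runbook step can invoke the integration with the `aws:invokeWebhook` action. The following is a sketch only: the step name is hypothetical, the integration name matches the `myWebhookIntegration` example above, and the input names shown should be confirmed against the `aws:invokeWebhook` action reference linked earlier in this section.

```
- name: invokeMyWebhook               # hypothetical step name
  action: aws:invokeWebhook
  inputs:
    IntegrationName: myWebhookIntegration
    Body: "Automation step completed."  # optional; overrides the webhook's default request body
```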

# Handling timeouts in runbooks


The `timeoutSeconds` property is shared by all automation actions. You can use this property to specify the execution timeout value for an action. Further, you can change how an action timing out affects the automation and overall execution status. You can accomplish this by also defining the `onFailure` and `isCritical` shared properties for an action.

For example, depending on your use case, you might want your automation to continue to a different action and not affect the overall status of the automation if an action times out. In this example, you specify the length of time to wait before the action times out using the `timeoutSeconds` property. Then you specify the action, or step, the automation should go to if there is a timeout. Specify a value using the format `step:step name` for the `onFailure` property rather than the default value of `Abort`. By default, if an action times out, the automation execution status will be `Timed Out`. To prevent a timeout from affecting the automation execution status, specify `false` for the `isCritical` property.

The following example shows how to define the shared properties for an action described in this scenario.

------
#### [ YAML ]

```
- name: verifyImageAvailability
  action: 'aws:waitForAwsResourceProperty'
  timeoutSeconds: 600
  isCritical: false
  onFailure: 'step:getCurrentImageState'
  inputs:
    Service: ec2
    Api: DescribeImages
    ImageIds:
      - '{{ createImage.newImageId }}'
    PropertySelector: '$.Images[0].State'
    DesiredValues:
      - available
  nextStep: copyImage
```

------
#### [ JSON ]

```
{
    "name": "verifyImageAvailability",
    "action": "aws:waitForAwsResourceProperty",
    "timeoutSeconds": 600,
    "isCritical": false,
    "onFailure": "step:getCurrentImageState",
    "inputs": {
        "Service": "ec2",
        "Api": "DescribeImages",
        "ImageIds": [
            "{{ createImage.newImageId }}"
        ],
        "PropertySelector": "$.Images[0].State",
        "DesiredValues": [
            "available"
        ]
    },
    "nextStep": "copyImage"
}
```

------

For more information about properties shared by all automation actions, see [Properties shared by all actions](automation-actions.md#automation-common).

# Systems Manager Automation Runbook Reference

To help you get started quickly, AWS Systems Manager provides predefined runbooks maintained by Amazon Web Services, AWS Support, and AWS Config. The Runbook Reference describes each of these predefined runbooks. For more information, see [Systems Manager Automation Runbook Reference](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide).

# Tutorials


The following tutorials help you to use AWS Systems Manager Automation to address common use cases. These tutorials demonstrate how to use your own runbooks, predefined runbooks provided by Automation, and other Systems Manager tools with other AWS services.

**Contents**
+ [Updating AMIs](automation-tutorial-update-ami.md)
  + [Update a Linux AMI](automation-tutorial-update-patch-linux-ami.md)
  + [Update a Linux AMI (AWS CLI)](automation-tutorial-update-ami.md#update-patch-linux-ami-cli)
  + [Update a Windows Server AMI](automation-tutorial-update-patch-windows-ami.md)
  + [Update a golden AMI using Automation, AWS Lambda, and Parameter Store](automation-tutorial-update-patch-golden-ami.md)
    + [Task 1: Create a parameter in Systems Manager Parameter Store](automation-tutorial-update-patch-golden-ami.md#create-parameter-ami)
    + [Task 2: Create an IAM role for AWS Lambda](automation-tutorial-update-patch-golden-ami.md#create-lambda-role)
    + [Task 3: Create an AWS Lambda function](automation-tutorial-update-patch-golden-ami.md#create-lambda-function)
    + [Task 4: Create a runbook and patch the AMI](automation-tutorial-update-patch-golden-ami.md#create-custom-ami-update-runbook)
  + [Updating AMIs using Automation and Jenkins](automation-tutorial-update-patch-ami-jenkins-integration.md)
  + [Updating AMIs for Auto Scaling groups](automation-tutorial-update-patch-windows-ami-autoscaling.md)
    + [Create the **PatchAMIAndUpdateASG** runbook](automation-tutorial-update-patch-windows-ami-autoscaling.md#create-autoscaling-update-runbook)
+ [Using AWS Support self-service runbooks](automation-tutorial-support-runbooks.md)
  + [Run the EC2Rescue tool on unreachable instances](automation-ec2rescue.md)
    + [How it works](automation-ec2rescue.md#automation-ec2rescue-how)
    + [Before you begin](automation-ec2rescue.md#automation-ec2rescue-begin)
      + [Granting `AWSSupport-EC2Rescue` permissions to perform actions on your instances](automation-ec2rescue.md#automation-ec2rescue-access)
        + [Granting permissions by using IAM policies](automation-ec2rescue.md#automation-ec2rescue-access-iam)
        + [Granting permissions by using a CloudFormation template](automation-ec2rescue.md#automation-ec2rescue-access-cfn)
    + [Running the Automation](automation-ec2rescue.md#automation-ec2rescue-executing)
  + [Reset passwords and SSH keys on EC2 instances](automation-ec2reset.md)
    + [How it works](automation-ec2reset.md#automation-ec2reset-how)
    + [Before you begin](automation-ec2reset.md#automation-ec2reset-begin)
      + [Granting AWSSupport-EC2Rescue permissions to perform actions on your instances](automation-ec2reset.md#automation-ec2reset-access)
        + [Granting permissions by using IAM policies](automation-ec2reset.md#automation-ec2reset-access-iam)
        + [Granting permissions by using a CloudFormation template](automation-ec2reset.md#automation-ec2reset-access-cfn)
    + [Running the Automation](automation-ec2reset.md#automation-ec2reset-executing)
+ [Passing data to Automation using input transformers](automation-tutorial-eventbridge-input-transformers.md)

# Updating AMIs


The following tutorials explain how to update Amazon Machine Images (AMIs) to include the latest patches.

**Topics**
+ [Update a Linux AMI](automation-tutorial-update-patch-linux-ami.md)
+ [Update a Linux AMI (AWS CLI)](#update-patch-linux-ami-cli)
+ [Update a Windows Server AMI](automation-tutorial-update-patch-windows-ami.md)
+ [Update a golden AMI using Automation, AWS Lambda, and Parameter Store](automation-tutorial-update-patch-golden-ami.md)
+ [Updating AMIs using Automation and Jenkins](automation-tutorial-update-patch-ami-jenkins-integration.md)
+ [Updating AMIs for Auto Scaling groups](automation-tutorial-update-patch-windows-ami-autoscaling.md)

# Update a Linux AMI


This walkthrough shows you how to use the console or the AWS CLI with the `AWS-UpdateLinuxAmi` runbook to update a Linux AMI with the latest patches for the packages that you specify. Automation is a tool in AWS Systems Manager. The `AWS-UpdateLinuxAmi` runbook also automates the installation of additional site-specific packages and configurations. You can update a variety of Linux distributions using this walkthrough, including Ubuntu Server, Red Hat Enterprise Linux (RHEL), and Amazon Linux AMIs. For a full list of supported Linux versions, see [Patch Manager prerequisites](patch-manager-prerequisites.md).

The `AWS-UpdateLinuxAmi` runbook allows you to automate image maintenance tasks without having to author the runbook in JSON or YAML. You can use the `AWS-UpdateLinuxAmi` runbook to perform the following types of tasks.
+ Upgrade all distribution packages and Amazon software on an Amazon Linux, Red Hat Enterprise Linux, or Ubuntu Server Amazon Machine Image (AMI). This is the default runbook behavior.
+ Install AWS Systems Manager SSM Agent on an existing image to enable Systems Manager tools, such as running remote commands using AWS Systems Manager Run Command or software inventory collection using Inventory.
+ Install additional software packages.

**Before you begin**  
Before you begin working with runbooks, configure roles and, optionally, EventBridge for Automation. For more information, see [Setting up Automation](automation-setup.md). This walkthrough also requires that you specify the name of an AWS Identity and Access Management (IAM) instance profile. For more information about creating an IAM instance profile, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).

The `AWS-UpdateLinuxAmi` runbook accepts the following input parameters.


****  

| Parameter | Type | Description | 
| --- | --- | --- | 
|  SourceAmiId  |  String  |  (Required) The source AMI ID.  | 
|  IamInstanceProfileName  |  String  |  (Required) The name of the IAM instance profile role you created in [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). The instance profile role gives Automation permission to perform actions on your instances, such as running commands or starting and stopping services. The runbook uses only the name of the instance profile role. If you specify the Amazon Resource Name (ARN), the automation fails.  | 
|  AutomationAssumeRole  |  String  |  (Required) The name of the IAM service role you created in [Setting up Automation](automation-setup.md). The service role (also called an assume role) gives Automation permission to assume your IAM role and perform actions on your behalf. For example, the service role allows Automation to create a new AMI when running the `aws:createImage` action in a runbook. For this parameter, the complete ARN must be specified.  | 
|  TargetAmiName  |  String  |  (Optional) The name of the new AMI after it is created. The default name is a system-generated string that includes the source AMI ID, and the creation time and date.  | 
|  InstanceType  |  String  |  (Optional) The type of instance to launch as the workspace host. Instance types vary by Region. The default type is t2.micro.  | 
|  PreUpdateScript  |  String  |  (Optional) URL of a script to run before updates are applied. Default ("none") is to not run a script.  | 
|  PostUpdateScript  |  String  |  (Optional) URL of a script to run after package updates are applied. Default ("none") is to not run a script.  | 
|  IncludePackages  |  String  |  (Optional) Only update these named packages. By default ("all"), all available updates are applied.  | 
|  ExcludePackages  |  String  |  (Optional) Names of packages to hold back from updates, under all conditions. By default ("none"), no package is excluded.  | 
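If you prefer to assemble these inputs in a script rather than the console, the following sketch builds the parameter map in the shape the `StartAutomationExecution` API expects (every value is a list of strings). All resource identifiers below are placeholders, and the boto3 call is shown commented out so the snippet can be read without AWS credentials.

```python
# Input parameters for the AWS-UpdateLinuxAmi runbook.
# Every value in a Systems Manager parameter map is a list of strings.
# The AMI ID, profile name, and role ARN below are placeholders.
params = {
    "SourceAmiId": ["ami-0123456789abcdef0"],
    "IamInstanceProfileName": ["MyInstanceProfileRole"],
    "AutomationAssumeRole": [
        "arn:aws:iam::111122223333:role/AutomationServiceRole"
    ],
}

# With credentials configured, the runbook could then be started with:
#   import boto3
#   ssm = boto3.client("ssm")
#   execution_id = ssm.start_automation_execution(
#       DocumentName="AWS-UpdateLinuxAmi", Parameters=params
#   )["AutomationExecutionId"]
```

Note that `AutomationAssumeRole` takes the full ARN while `IamInstanceProfileName` takes only the role name, matching the parameter table above.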

**Automation Steps**  
The `AWS-UpdateLinuxAmi` runbook includes the following automation actions, by default.

**Step 1: launchInstance (`aws:runInstances` action)**  
This step launches an instance using Amazon Elastic Compute Cloud (Amazon EC2) user data and an IAM instance profile role. User data installs the appropriate SSM Agent, based on the operating system. Installing SSM Agent enables you to use Systems Manager tools such as Run Command, State Manager, and Inventory.

**Step 2: updateOSSoftware (`aws:runCommand` action)**  
This step runs the following commands on the launched instance:  
+ Downloads an update script from Amazon S3.
+ Runs an optional pre-update script.
+ Updates distribution packages and Amazon software.
+ Runs an optional post-update script.
The execution log is stored in the /tmp folder for the user to view later.  
If you want to upgrade a specific set of packages, you can supply the list using the `IncludePackages` parameter. When provided, the system attempts to update only these packages and their dependencies. No other updates are performed. By default, when no *include* packages are specified, the program updates all available packages.  
If you want to exclude upgrading a specific set of packages, you can supply the list to the `ExcludePackages` parameter. If provided, these packages remain at their current version, independent of any other options specified. By default, when no *exclude* packages are specified, no packages are excluded.
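The precedence between the two parameters can be pictured with a simplified filter. This is only an illustration of the behavior described above, not the runbook's actual update script:

```python
def select_packages(available, include=None, exclude=None):
    """Simplified model of IncludePackages/ExcludePackages:
    include (if given) limits the candidate set, and exclude
    always wins, independent of any other options."""
    candidates = [p for p in available if not include or p in include]
    return [p for p in candidates if p not in (exclude or [])]

updates = select_packages(
    ["kernel", "openssl", "vim"],
    include=["kernel", "openssl"],
    exclude=["kernel"],
)
# With the lists above, only openssl remains in the update set.
```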

**Step 3: stopInstance (`aws:changeInstanceState` action)**  
This step stops the updated instance.

**Step 4: createImage (`aws:createImage` action)**  
This step creates a new AMI with a descriptive name that links it to the source ID and creation time. For example: “AMI Generated by EC2 Automation on {{global:DATE_TIME}} from {{SourceAmiId}}” where DATE_TIME and SourceAmiId represent Automation variables.

**Step 5: terminateInstance (`aws:changeInstanceState` action)**  
This step cleans up the automation by terminating the running instance.

**Output**  
The automation returns the new AMI ID as output.
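When you read the finished execution back through the `GetAutomationExecution` API, the new AMI ID appears in the execution's `Outputs` map. A small helper, assuming the output key `createImage.ImageId` (an assumption based on the createImage step name; verify the key against your own execution's response):

```python
def new_ami_id(response):
    """Pull the created AMI ID out of a GetAutomationExecution response.
    The 'createImage.ImageId' key is an assumption based on the runbook's
    createImage step; check it against your actual Outputs map."""
    outputs = response["AutomationExecution"]["Outputs"]
    return outputs["createImage.ImageId"][0]

# Abbreviated example of the response shape:
sample = {
    "AutomationExecution": {
        "Outputs": {"createImage.ImageId": ["ami-0fedcba9876543210"]}
    }
}
```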

**Note**  
By default, when Automation runs the `AWS-UpdateLinuxAmi` runbook, the system creates a temporary instance in the default VPC (172.30.0.0/16). If you deleted the default VPC, you will receive the following error:  
`VPC not defined 400`  
To solve this problem, you must make a copy of the `AWS-UpdateLinuxAmi` runbook and specify a subnet ID. For more information, see [VPC not defined 400](automation-troubleshooting.md#automation-trbl-common-vpc).

**To create a patched AMI using Automation (AWS Systems Manager)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**.

1. Choose **Execute automation**.

1. In the **Automation document** list, choose `AWS-UpdateLinuxAmi`.

1. In the **Document details** section, verify that **Document version** is set to **Default version at runtime**.

1. Choose **Next**.

1. In the **Execution mode** section, choose **Simple Execution**.

1. In the **Input parameters** section, enter the information you collected in the **Before you begin** section.

1. Choose **Execute**. The console displays the status of the Automation execution.

After the automation finishes, launch a test instance from the updated AMI to verify changes.

**Note**  
If any step in the automation fails, information about the failure is listed on the **Automation Executions** page. The automation is designed to terminate the temporary instance after successfully completing all tasks. If a step fails, the system might not terminate the instance. So if a step fails, manually terminate the temporary instance.

## Update a Linux AMI (AWS CLI)


This AWS Systems Manager Automation walkthrough shows you how to use the AWS Command Line Interface (AWS CLI) and the Systems Manager `AWS-UpdateLinuxAmi` runbook to automatically patch a Linux Amazon Machine Image (AMI) with the latest versions of packages that you specify. Automation is a tool in AWS Systems Manager. The `AWS-UpdateLinuxAmi` runbook also automates the installation of additional site-specific packages and configurations. You can update a variety of Linux distributions using this walkthrough, including Ubuntu Server, Red Hat Enterprise Linux (RHEL), or Amazon Linux AMIs. For a full list of supported Linux versions, see [Patch Manager prerequisites](patch-manager-prerequisites.md).

The `AWS-UpdateLinuxAmi` runbook enables you to automate image-maintenance tasks without having to author the runbook in JSON or YAML. You can use the `AWS-UpdateLinuxAmi` runbook to perform the following types of tasks.
+ Upgrade all distribution packages and Amazon software on an Amazon Linux, RHEL, or Ubuntu Server Amazon Machine Image (AMI). This is the default runbook behavior.
+ Install AWS Systems Manager SSM Agent on an existing image to enable Systems Manager capabilities, such as running remote commands using AWS Systems Manager Run Command or software inventory collection using Inventory.
+ Install additional software packages.

**Before you begin**  
Before you begin working with runbooks, configure roles and, optionally, EventBridge for Automation. For more information, see [Setting up Automation](automation-setup.md). This walkthrough also requires that you specify the name of an AWS Identity and Access Management (IAM) instance profile. For more information about creating an IAM instance profile, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).

The `AWS-UpdateLinuxAmi` runbook accepts the following input parameters.



| Parameter | Type | Description | 
| --- | --- | --- | 
|  SourceAmiId  |  String  |  (Required) The source AMI ID. You can automatically reference the latest ID of an Amazon EC2 AMI for Linux by using an AWS Systems Manager Parameter Store *public* parameter. For more information, see [Query for the latest Amazon Linux AMI IDs using AWS Systems Manager Parameter Store](https://aws.amazon.com/blogs/compute/query-for-the-latest-amazon-linux-ami-ids-using-aws-systems-manager-parameter-store/).  | 
|  IamInstanceProfileName  |  String  |  (Required) The name of the IAM instance profile role you created in [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). The instance profile role gives Automation permission to perform actions on your instances, such as running commands or starting and stopping services. The runbook uses only the name of the instance profile role.  | 
|  AutomationAssumeRole  |  String  |  (Required) The name of the IAM service role you created in [Setting up Automation](automation-setup.md). The service role (also called an assume role) gives Automation permission to assume your IAM role and perform actions on your behalf. For example, the service role allows Automation to create a new AMI when running the `aws:createImage` action in a runbook. For this parameter, the complete ARN must be specified.  | 
|  TargetAmiName  |  String  |  (Optional) The name of the new AMI after it is created. The default name is a system-generated string that includes the source AMI ID, and the creation time and date.  | 
|  InstanceType  |  String  |  (Optional) The type of instance to launch as the workspace host. Instance types vary by Region. The default type is t2.micro.  | 
|  PreUpdateScript  |  String  |  (Optional) URL of a script to run before updates are applied. Default ("none") is to not run a script.  | 
|  PostUpdateScript  |  String  |  (Optional) URL of a script to run after package updates are applied. Default ("none") is to not run a script.  | 
|  IncludePackages  |  String  |  (Optional) Only update these named packages. By default ("all"), all available updates are applied.  | 
|  ExcludePackages  |  String  |  (Optional) Names of packages to hold back from updates, under all conditions. By default ("none"), no package is excluded.  | 

**Automation Steps**  
The `AWS-UpdateLinuxAmi` runbook includes the following steps, by default.

**Step 1: launchInstance (`aws:runInstances` action)**  
This step launches an instance using Amazon Elastic Compute Cloud (Amazon EC2) user data and an IAM instance profile role. User data installs the appropriate SSM Agent, based on the operating system. Installing SSM Agent enables you to utilize Systems Manager tools such as Run Command, State Manager, and Inventory.

**Step 2: updateOSSoftware (`aws:runCommand` action)**  
This step runs the following commands on the launched instance:  
+ Downloads an update script from Amazon Simple Storage Service (Amazon S3).
+ Runs an optional pre-update script.
+ Updates distribution packages and Amazon software.
+ Runs an optional post-update script.
The execution log is stored in the /tmp folder for the user to view later.  
If you want to upgrade a specific set of packages, you can supply the list using the `IncludePackages` parameter. When provided, the system attempts to update only these packages and their dependencies. No other updates are performed. By default, when no *include* packages are specified, the program updates all available packages.  
If you want to exclude upgrading a specific set of packages, you can supply the list to the `ExcludePackages` parameter. If provided, these packages remain at their current version, independent of any other options specified. By default, when no *exclude* packages are specified, no packages are excluded.

**Step 3: stopInstance (`aws:changeInstanceState` action)**  
This step stops the updated instance.

**Step 4: createImage (`aws:createImage` action)**  
This step creates a new AMI with a descriptive name that links it to the source ID and creation time. For example: “AMI Generated by EC2 Automation on {{global:DATE_TIME}} from {{SourceAmiId}}” where DATE_TIME and SourceAmiId represent Automation variables.

**Step 5: terminateInstance (`aws:changeInstanceState` action)**  
This step cleans up the automation by terminating the running instance.

**Output**  
The automation returns the new AMI ID as output.

**Note**  
By default, when Automation runs the `AWS-UpdateLinuxAmi` runbook, the system creates a temporary instance in the default VPC (172.30.0.0/16). If you deleted the default VPC, you will receive the following error:  
`VPC not defined 400`  
To solve this problem, you must make a copy of the `AWS-UpdateLinuxAmi` runbook and specify a subnet ID. For more information, see [VPC not defined 400](automation-troubleshooting.md#automation-trbl-common-vpc).

**To create a patched AMI using Automation**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to run the `AWS-UpdateLinuxAmi` runbook. Replace each *example resource placeholder* with your own information.

   ```
   aws ssm start-automation-execution \
       --document-name "AWS-UpdateLinuxAmi" \
       --parameters \
           'SourceAmiId=AMI ID,IamInstanceProfileName=IAM instance profile,AutomationAssumeRole=arn:aws:iam::{{global:ACCOUNT_ID}}:role/AutomationServiceRole'
   ```

   The command returns an execution ID. Copy this ID to the clipboard. You will use this ID to view the status of the automation.

   ```
   {
       "AutomationExecutionId": "automation execution ID"
   }
   ```

1. To view the automation using the AWS CLI, run the following command:

   ```
   aws ssm describe-automation-executions
   ```

1. To view details about the automation progress, run the following command. Replace *automation execution ID* with your own information.

   ```
   aws ssm get-automation-execution --automation-execution-id automation execution ID
   ```

   The update process can take 30 minutes or more to complete.
**Note**  
You can also monitor the status of the automation in the console. In the list, choose the automation you just ran and then choose the **Steps** tab. This tab shows you the status of the automation actions.
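Rather than rerunning `get-automation-execution` by hand, a script can poll until the execution settles. The terminal status names below are a subset of the possible `AutomationExecutionStatus` values (verify against the statuses you see in your account); the boto3 client is injected so the helper itself has no AWS dependency.

```python
import time

# Statuses after which the execution will not change again
# (a subset of possible AutomationExecutionStatus values).
TERMINAL_STATUSES = {"Success", "Failed", "TimedOut", "Cancelled"}

def is_finished(status):
    """True once an Automation execution can no longer change state."""
    return status in TERMINAL_STATUSES

def wait_for_automation(ssm, execution_id, delay=30):
    """Poll get_automation_execution until a terminal status is reached.
    `ssm` is a boto3 SSM client, for example boto3.client("ssm")."""
    while True:
        status = ssm.get_automation_execution(
            AutomationExecutionId=execution_id
        )["AutomationExecution"]["AutomationExecutionStatus"]
        if is_finished(status):
            return status
        time.sleep(delay)  # the update can take 30 minutes or more
```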

After the automation finishes, launch a test instance from the updated AMI to verify changes.

**Note**  
If any step in the automation fails, information about the failure is listed on the **Automation Executions** page. The automation is designed to terminate the temporary instance after successfully completing all tasks. If a step fails, the system might not terminate the instance. So if a step fails, manually terminate the temporary instance.

# Update a Windows Server AMI


The `AWS-UpdateWindowsAmi` runbook enables you to automate image maintenance tasks on your Windows Amazon Machine Image (AMI) without having to author the runbook in JSON or YAML. This runbook is supported for Windows Server 2008 R2 or later. You can use the `AWS-UpdateWindowsAmi` runbook to perform the following types of tasks.
+ Install all Windows updates and upgrade Amazon software (default behavior).
+ Install specific Windows updates and upgrade Amazon software.
+ Customize an AMI using your scripts.

**Before you begin**  
Before you begin working with runbooks, [configure roles for Automation](automation-setup-iam.md) to add an `iam:PassRole` policy that references the ARN of the instance profile you want to grant access to. Optionally, configure Amazon EventBridge for Automation, a tool in AWS Systems Manager. For more information, see [Setting up Automation](automation-setup.md). This walkthrough also requires that you specify the name of an AWS Identity and Access Management (IAM) instance profile. For more information about creating an IAM instance profile, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).

**Note**  
Updates to AWS Systems Manager SSM Agent are typically rolled out to different AWS Regions at different times. When you customize or update an AMI, use only source AMIs published for the Region that you are working in. This ensures that you are working with the latest SSM Agent released for that Region and helps you avoid compatibility issues.

The `AWS-UpdateWindowsAmi` runbook accepts the following input parameters.



| Parameter | Type | Description | 
| --- | --- | --- | 
|  SourceAmiId  |  String  |  (Required) The source AMI ID. You can automatically reference the latest Windows Server AMI ID by using a Systems Manager Parameter Store *public* parameter. For more information, see [Query for the latest Windows AMI IDs using AWS Systems Manager Parameter Store](https://aws.amazon.com/blogs/mt/query-for-the-latest-windows-ami-using-systems-manager-parameter-store/).  | 
|  SubnetId  |  String  |  (Optional) The subnet you want to launch the temporary instance into. You must specify a value for this parameter if you've deleted your default VPC.  | 
|  IamInstanceProfileName  |  String  |  (Required) The name of the IAM instance profile role you created in [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). The instance profile role gives Automation permission to perform actions on your instances, such as running commands or starting and stopping services. The runbook uses only the name of the instance profile role.  | 
|  AutomationAssumeRole  |  String  |  (Required) The name of the IAM service role you created in [Setting up Automation](automation-setup.md). The service role (also called an assume role) gives Automation permission to assume your IAM role and perform actions on your behalf. For example, the service role allows Automation to create a new AMI when running the `aws:createImage` action in a runbook. For this parameter, the complete ARN must be specified.  | 
|  TargetAmiName  |  String  |  (Optional) The name of the new AMI after it is created. The default name is a system-generated string that includes the source AMI ID, and the creation time and date.  | 
|  InstanceType  |  String  |  (Optional) The type of instance to launch as the workspace host. Instance types vary by Region. The default type is t2.medium.  | 
|  PreUpdateScript  |  String  |  (Optional) A script to run before updating the AMI. Enter a script in the runbook or at runtime as a parameter.  | 
|  PostUpdateScript  |  String  |  (Optional) A script to run after updating the AMI. Enter a script in the runbook or at runtime as a parameter.  | 
|  IncludeKbs  |  String  |  (Optional) Specify one or more Microsoft Knowledge Base (KB) article IDs to include. You can install multiple IDs using comma-separated values. Valid formats: KB9876543 or 9876543.  | 
|  ExcludeKbs  |  String  |  (Optional) Specify one or more Microsoft Knowledge Base (KB) article IDs to exclude. You can exclude multiple IDs using comma-separated values. Valid formats: KB9876543 or 9876543.  | 
|  Categories  |  String  |  (Optional) Specify one or more update categories. You can filter categories using comma-separated values. Options: Critical Update, Security Update, Definition Update, Update Rollup, Service Pack, Tool, Update, or Driver. Valid formats include a single entry, for example: Critical Update. Or, you can specify a comma-separated list: Critical Update,Security Update,Definition Update.  | 
|  SeverityLevels  |  String  |  (Optional) Specify one or more MSRC severity levels associated with an update. You can filter severity levels using comma-separated values. Options: Critical, Important, Low, Moderate or Unspecified. Valid formats include a single entry, for example: Critical. Or, you can specify a comma separated list: Critical,Important,Low.  | 
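Because the KB, category, and severity filters are plain comma-separated strings, a small helper can assemble the parameter map in the form the runbook expects. All resource values below are placeholders; this is a sketch, not the runbook's own interface.

```python
def windows_ami_params(source_ami, profile_name, assume_role_arn,
                       include_kbs=(), categories=(), severities=()):
    """Build the Parameters map for the AWS-UpdateWindowsAmi runbook.
    Optional filters are joined into the comma-separated strings the
    runbook expects; values in the map are lists of strings."""
    params = {
        "SourceAmiId": [source_ami],
        "IamInstanceProfileName": [profile_name],
        "AutomationAssumeRole": [assume_role_arn],
    }
    if include_kbs:
        params["IncludeKbs"] = [",".join(include_kbs)]
    if categories:
        params["Categories"] = [",".join(categories)]
    if severities:
        params["SeverityLevels"] = [",".join(severities)]
    return params

p = windows_ami_params(
    "ami-0123456789abcdef0",            # placeholder source AMI
    "MyInstanceProfileRole",            # placeholder instance profile
    "arn:aws:iam::111122223333:role/AutomationServiceRole",
    categories=("Critical Update", "Security Update"),
)
```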

**Automation Steps**  
The `AWS-UpdateWindowsAmi` runbook includes the following steps, by default.

**Step 1: launchInstance (`aws:runInstances` action)**  
This step launches an instance with an IAM instance profile role from the specified `SourceAmiID`.

**Step 2: runPreUpdateScript (`aws:runCommand` action)**  
This step enables you to specify a script as a string that runs before updates are installed.

**Step 3: updateEC2Config (`aws:runCommand` action)**  
This step uses the `AWS-InstallPowerShellModule` runbook to download an AWS public PowerShell module. Systems Manager verifies the integrity of the module by using an SHA-256 hash. Systems Manager then checks the operating system to determine whether to update EC2Config or EC2Launch. EC2Config runs on Windows Server 2008 R2 through Windows Server 2012 R2. EC2Launch runs on Windows Server 2016.

**Step 4: updateSSMAgent (`aws:runCommand` action)**  
This step updates SSM Agent by using the `AWS-UpdateSSMAgent` runbook.

**Step 5: updateAWSPVDriver (`aws:runCommand` action)**  
This step updates AWS PV drivers by using the `AWS-ConfigureAWSPackage` runbook.

**Step 6: updateAwsEnaNetworkDriver (`aws:runCommand` action)**  
This step updates AWS ENA Network drivers by using the `AWS-ConfigureAWSPackage` runbook.

**Step 7: installWindowsUpdates (`aws:runCommand` action)**  
This step installs Windows updates by using the `AWS-InstallWindowsUpdates` runbook. By default, Systems Manager searches for and installs all missing updates. You can change the default behavior by specifying one of the following parameters: `IncludeKbs`, `ExcludeKbs`, `Categories`, or `SeverityLevels`. 

**Step 8: runPostUpdateScript (`aws:runCommand` action)**  
This step enables you to specify a script as a string that runs after the updates have been installed.

**Step 9: runSysprepGeneralize (`aws:runCommand` action)**  
This step uses the `AWS-InstallPowerShellModule` runbook to download an AWS public PowerShell module. Systems Manager verifies the integrity of the module by using an SHA-256 hash. Systems Manager then runs sysprep using AWS-supported methods for either EC2Launch (Windows Server 2016) or EC2Config (Windows Server 2008 R2 through 2012 R2).

**Step 10: stopInstance (`aws:changeInstanceState` action)**  
This step stops the updated instance. 

**Step 11: createImage (`aws:createImage` action)**  
This step creates a new AMI with a descriptive name that links it to the source ID and creation time. For example: “AMI Generated by EC2 Automation on {{global:DATE_TIME}} from {{SourceAmiId}}” where DATE_TIME and SourceAmiId represent Automation variables.

**Step 12: TerminateInstance (`aws:changeInstanceState` action)**  
This step cleans up the automation by terminating the running instance. 

**Output**  
This section enables you to designate the outputs of various steps or values of any parameter as the Automation output. By default, the output is the ID of the updated Windows AMI created by the automation.

**Note**  
By default, when Automation runs the `AWS-UpdateWindowsAmi` runbook and creates a temporary instance, the system uses the default VPC (172.30.0.0/16). If you deleted the default VPC, you will receive the following error:  
`VPC not defined 400`  
To solve this problem, you must make a copy of the `AWS-UpdateWindowsAmi` runbook and specify a subnet ID. For more information, see [VPC not defined 400](automation-troubleshooting.md#automation-trbl-common-vpc).

**To create a patched Windows AMI by using Automation**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to run the `AWS-UpdateWindowsAmi` runbook. Replace each *example resource placeholder* with your own information. The example command below uses a recent Amazon EC2 AMI to minimize the number of patches that need to be applied. If you run this command more than once, you must specify a unique value for `TargetAmiName`. AMI names must be unique.

   ```
   aws ssm start-automation-execution \
       --document-name="AWS-UpdateWindowsAmi" \
       --parameters SourceAmiId='AMI ID',IamInstanceProfileName='IAM instance profile',AutomationAssumeRole='arn:aws:iam::{{global:ACCOUNT_ID}}:role/AutomationServiceRole'
   ```

   The command returns an execution ID. Copy this ID to the clipboard. You will use this ID to view the status of the automation.

   ```
   {
       "AutomationExecutionId": "automation execution ID"
   }
   ```

1. To view the automation using the AWS CLI, run the following command:

   ```
   aws ssm describe-automation-executions
   ```

1. To view details about the automation progress, run the following command.

   ```
   aws ssm get-automation-execution \
       --automation-execution-id automation execution ID
   ```

**Note**  
Depending on the number of patches applied, the Windows patching process run in this sample automation can take 30 minutes or more to complete.

# Update a golden AMI using Automation, AWS Lambda, and Parameter Store


The following example uses the model where an organization maintains and periodically patches their own, proprietary AMIs rather than building from Amazon Elastic Compute Cloud (Amazon EC2) AMIs.

The following procedure shows how to automatically apply operating system (OS) patches to an AMI that is already considered to be the most up-to-date or *latest* AMI. In the example, the default value of the parameter `SourceAmiId` is defined by a AWS Systems Manager Parameter Store parameter called `latestAmi`. The value of `latestAmi` is updated by an AWS Lambda function invoked at the end of the automation. As a result of this Automation process, the time and effort spent patching AMIs is minimized because patching is always applied to the most up-to-date AMI. Parameter Store and Automation are tools of AWS Systems Manager.
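The whole pipeline hinges on one Parameter Store read (Automation resolves `SourceAmiId` from `latestAmi`) and one write (the Lambda function records the new AMI ID). A credential-free sketch of that loop, with the boto3 calls commented out:

```python
# Request for reading the current latestAmi value from Parameter Store.
READ_REQUEST = {"Name": "latestAmi"}

# With credentials configured, the read-patch-write loop looks like:
#   import boto3
#   ssm = boto3.client("ssm")
#   source_ami = ssm.get_parameter(**READ_REQUEST)["Parameter"]["Value"]
#   ...run the patching automation against source_ami...
#   # the Lambda function then writes the new AMI ID back to latestAmi
```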

**Before you begin**  
Configure Automation roles and, optionally, Amazon EventBridge for Automation. For more information, see [Setting up Automation](automation-setup.md).

**Topics**
+ [Task 1: Create a parameter in Systems Manager Parameter Store](#create-parameter-ami)
+ [Task 2: Create an IAM role for AWS Lambda](#create-lambda-role)
+ [Task 3: Create an AWS Lambda function](#create-lambda-function)
+ [Task 4: Create a runbook and patch the AMI](#create-custom-ami-update-runbook)

## Task 1: Create a parameter in Systems Manager Parameter Store


Create a string parameter in Parameter Store that uses the following information:
+ **Name**: `latestAmi`.
+ **Value**: An AMI ID. For example: `ami-188d6e0e`.

For information about how to create a Parameter Store string parameter, see [Creating Parameter Store parameters in Systems Manager](sysman-paramstore-su-create.md).
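The parameter can also be created from code. The sketch below only builds the `PutParameter` request (the value is the example AMI ID from above; the boto3 call is commented out, and the description text is an arbitrary example, not required):

```python
# Request body for creating the latestAmi string parameter.
put_parameter_args = {
    "Name": "latestAmi",
    "Value": "ami-188d6e0e",               # example AMI ID from above
    "Type": "String",
    "Description": "Latest patched AMI",   # optional; any text works
}

# import boto3
# boto3.client("ssm").put_parameter(**put_parameter_args)
```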

## Task 2: Create an IAM role for AWS Lambda


Use the following procedure to create an IAM service role for AWS Lambda. These policies give Lambda permission to update the value of the `latestAmi` parameter using a Lambda function and Systems Manager.

**To create an IAM service role for Lambda**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**, and then choose **Create policy**.

1. Choose the **JSON** tab.

1. Replace the default contents with the following policy. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]


   ```
   {
    "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "logs:CreateLogGroup",
               "Resource": "arn:aws:logs:us-east-1:111122223333:*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "logs:CreateLogStream",
                   "logs:PutLogEvents"
               ],
               "Resource": [
                   "arn:aws:logs:us-east-1:111122223333:log-group:/aws/lambda/function name:*"
               ]
           }
       ]
   }
   ```

------

1. Choose **Next: Tags**.

1. (Optional) Add one or more tag-key value pairs to organize, track, or control access for this policy. 

1. Choose **Next: Review**.

1. On the **Review policy** page, for **Name**, enter a name for the inline policy, such as **amiLambda**.

1. Choose **Create policy**.

1. Repeat steps 2 and 3.

1. Paste the following policy. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]


   ```
   {
    "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "ssm:PutParameter",
               "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/latestAmi"
           },
           {
               "Effect": "Allow",
               "Action": "ssm:DescribeParameters",
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Choose **Next: Tags**.

1. (Optional) Add one or more tag-key value pairs to organize, track, or control access for this policy. 

1. Choose **Next: Review**.

1. On the **Review policy** page, for **Name**, enter a name for the inline policy, such as **amiParameter**.

1. Choose **Create policy**.

1. In the navigation pane, choose **Roles**, and then choose **Create role**.

1. Immediately under **Use case**, choose **Lambda**, and then choose **Next**.

1. On the **Add permissions** page, use the **Search** field to locate the two policies you created earlier.

1. Select the check box next to the policies, and then choose **Next**.

1. For **Role name**, enter a name for your new role, such as **lambda-ssm-role** or another name that you prefer. 
**Note**  
Because various entities might reference the role, you cannot change the name of the role after it has been created.

1. (Optional) Add one or more tag key-value pairs to organize, track, or control access for this role, and then choose **Create role**.

## Task 3: Create an AWS Lambda function


Use the following procedure to create a Lambda function that automatically updates the value of the `latestAmi` parameter.

**To create a Lambda function**

1. Sign in to the AWS Management Console and open the AWS Lambda console at [https://console.aws.amazon.com/lambda/](https://console.aws.amazon.com/lambda/).

1. Choose **Create function**.

1. On the **Create function** page, choose **Author from scratch**.

1. For **Function name**, enter **Automation-UpdateSsmParam**.

1. For **Runtime**, choose **Python 3.11**.

1. For **Architecture**, select the type of computer processor for Lambda to use to run the function, **x86_64** or **arm64**.

1. In the **Permissions** section, expand **Change default execution role**.

1. Choose **Use an existing role**, and then choose the service role for Lambda that you created in Task 2.

1. Choose **Create function**.

1. In the **Code source** area, on the **lambda_function** tab, delete the pre-populated code in the field, and then paste the following code sample.

   ```
   from __future__ import print_function
   
   import json
   import boto3
   
   print('Loading function')
   
   
   #Updates an SSM parameter
   #Expects parameterName, parameterValue
   def lambda_handler(event, context):
       print("Received event: " + json.dumps(event, indent=2))
   
       # get SSM client
       client = boto3.client('ssm')
   
       #confirm  parameter exists before updating it
       response = client.describe_parameters(
          Filters=[
             {
              'Key': 'Name',
              'Values': [ event['parameterName'] ]
             },
           ]
       )
   
       if not response['Parameters']:
           print('No such parameter')
           return 'SSM parameter not found.'
   
       #if parameter has a Description field, update it PLUS the Value
       if 'Description' in response['Parameters'][0]:
           description = response['Parameters'][0]['Description']
           
           response = client.put_parameter(
             Name=event['parameterName'],
             Value=event['parameterValue'],
             Description=description,
             Type='String',
             Overwrite=True
           )
       
       #otherwise just update Value
       else:
           response = client.put_parameter(
             Name=event['parameterName'],
             Value=event['parameterValue'],
             Type='String',
             Overwrite=True
           )
           
       responseString = 'Updated parameter %s with value %s.' % (event['parameterName'], event['parameterValue'])
           
       return responseString
   ```

1. Choose **File, Save**.

1. To test the Lambda function, from the **Test** menu, choose **Configure test event**.

1. For **Event name**, enter a name for the test event, such as **MyTestEvent**.

1. Replace the existing text with the following JSON. Replace *AMI ID* with your own information to set your `latestAmi` parameter value.

   ```
   {
      "parameterName":"latestAmi",
      "parameterValue":"AMI ID"
   }
   ```

1. Choose **Save**.

1. Choose **Test** to test the function. On the **Execution result** tab, the status should be reported as **Succeeded**, along with other details about the update.
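   Before wiring the function into an automation, you can also exercise the handler's two branches locally against a stubbed SSM client. This is a test sketch only, not part of the tutorial; the stub class and the AMI ID are made up for illustration:

   ```python
   class StubSsmClient:
       """Minimal stand-in for boto3's SSM client, just enough to drive
       the handler's describe/put logic without calling AWS."""
       def __init__(self, parameters):
           self._parameters = parameters
           self.put_calls = []

       def describe_parameters(self, Filters):
           wanted = Filters[0]['Values'][0]
           return {'Parameters': [p for p in self._parameters if p['Name'] == wanted]}

       def put_parameter(self, **kwargs):
           self.put_calls.append(kwargs)
           return {'Version': 2}


   def update_param(client, event):
       # Mirrors the Lambda handler: confirm the parameter exists, then
       # overwrite it, preserving the Description field when present.
       response = client.describe_parameters(
           Filters=[{'Key': 'Name', 'Values': [event['parameterName']]}])
       if not response['Parameters']:
           return 'SSM parameter not found.'
       kwargs = dict(Name=event['parameterName'], Value=event['parameterValue'],
                     Type='String', Overwrite=True)
       if 'Description' in response['Parameters'][0]:
           kwargs['Description'] = response['Parameters'][0]['Description']
       client.put_parameter(**kwargs)
       return 'Updated parameter %s with value %s.' % (
           event['parameterName'], event['parameterValue'])


   stub = StubSsmClient([{'Name': 'latestAmi', 'Description': 'Golden AMI'}])
   result = update_param(stub, {'parameterName': 'latestAmi',
                                'parameterValue': 'ami-0abcd1234example'})
   print(result)  # Updated parameter latestAmi with value ami-0abcd1234example.
   ```

   Because the stub records each `put_parameter` call, you can confirm that an existing `Description` is carried over, which is the behavior the handler's first branch exists to preserve.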

## Task 4: Create a runbook and patch the AMI


Use the following procedure to create and run a runbook that patches the AMI you specified for the **latestAmi** parameter. After the automation completes, the value of **latestAmi** is updated with the ID of the newly-patched AMI. Subsequent automations use the AMI created by the previous execution.

**To create and run the runbook**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. For **Create document**, choose **Automation**.

1. For **Name**, enter **UpdateMyLatestWindowsAmi**.

1. Choose the **Editor** tab, and then choose **Edit**.

1. Choose **OK** when prompted.

1. In the **Document editor** field, replace the default content with the following YAML sample runbook content.

   ```
   ---
   description: Systems Manager Automation Demo - Patch AMI and Update ASG
   schemaVersion: '0.3'
   assumeRole: '{{ AutomationAssumeRole }}'
   parameters:
     AutomationAssumeRole:
       type: String
       description: '(Required) The ARN of the role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to execute this document.'
       default: ''
     SourceAMI:
       type: String
       description: The ID of the AMI you want to patch.
       default: '{{ ssm:latestAmi }}'
     SubnetId:
       type: String
       description: The ID of the subnet where the instance from the SourceAMI parameter is launched.
     SecurityGroupIds:
       type: StringList
       description: The IDs of the security groups to associate with the instance that's launched from the SourceAMI parameter.
     NewAMI:
       type: String
       description: The name of the newly patched AMI.
       default: 'patchedAMI-{{global:DATE_TIME}}'
     InstanceProfile:
       type: String
       description: The name of the IAM instance profile you want the source instance to use.
     SnapshotId:
       type: String
       description: (Optional) The snapshot ID to use to retrieve a patch baseline snapshot.
       default: ''
     RebootOption:
       type: String
       description: '(Optional) Reboot behavior after a patch Install operation. If you choose NoReboot and patches are installed, the instance is marked as non-compliant until a subsequent reboot and scan.'
       allowedValues:
         - NoReboot
         - RebootIfNeeded
       default: RebootIfNeeded
     Operation:
       type: String
       description: (Optional) The update or configuration to perform on the instance. The system checks if patches specified in the patch baseline are installed on the instance. The install operation installs patches missing from the baseline.
       allowedValues:
         - Install
         - Scan
       default: Install
   mainSteps:
     - name: startInstances
       action: 'aws:runInstances'
       timeoutSeconds: 1200
       maxAttempts: 1
       onFailure: Abort
       inputs:
         ImageId: '{{ SourceAMI }}'
         InstanceType: m5.large
         MinInstanceCount: 1
         MaxInstanceCount: 1
         IamInstanceProfileName: '{{ InstanceProfile }}'
         SubnetId: '{{ SubnetId }}'
         SecurityGroupIds: '{{ SecurityGroupIds }}'
     - name: verifyInstanceManaged
       action: 'aws:waitForAwsResourceProperty'
       timeoutSeconds: 600
       inputs:
         Service: ssm
         Api: DescribeInstanceInformation
         InstanceInformationFilterList:
           - key: InstanceIds
             valueSet:
               - '{{ startInstances.InstanceIds }}'
         PropertySelector: '$.InstanceInformationList[0].PingStatus'
         DesiredValues:
           - Online
       onFailure: 'step:terminateInstance'
     - name: installPatches
       action: 'aws:runCommand'
       timeoutSeconds: 7200
       onFailure: Abort
       inputs:
         DocumentName: AWS-RunPatchBaseline
         Parameters:
           SnapshotId: '{{SnapshotId}}'
           RebootOption: '{{RebootOption}}'
           Operation: '{{Operation}}'
         InstanceIds:
           - '{{ startInstances.InstanceIds }}'
     - name: stopInstance
       action: 'aws:changeInstanceState'
       maxAttempts: 1
       onFailure: Continue
       inputs:
         InstanceIds:
           - '{{ startInstances.InstanceIds }}'
         DesiredState: stopped
     - name: createImage
       action: 'aws:createImage'
       maxAttempts: 1
       onFailure: Continue
       inputs:
         InstanceId: '{{ startInstances.InstanceIds }}'
         ImageName: '{{ NewAMI }}'
         NoReboot: false
         ImageDescription: Patched AMI created by Automation
     - name: terminateInstance
       action: 'aws:changeInstanceState'
       maxAttempts: 1
       onFailure: Continue
       inputs:
         InstanceIds:
           - '{{ startInstances.InstanceIds }}'
         DesiredState: terminated
     - name: updateSsmParam
       action: aws:invokeLambdaFunction
       timeoutSeconds: 1200
       maxAttempts: 1
       onFailure: Abort
       inputs:
           FunctionName: Automation-UpdateSsmParam
           Payload: '{"parameterName":"latestAmi", "parameterValue":"{{createImage.ImageId}}"}'
   outputs:
   - createImage.ImageId
   ```

1. Choose **Create automation**.

1. In the navigation pane, choose **Automation**, and then choose **Execute automation**.

1. In the **Choose document** page, choose the **Owned by me** tab.

1. Search for the **UpdateMyLatestWindowsAmi** runbook, and select the button in the **UpdateMyLatestWindowsAmi** card.

1. Choose **Next**.

1. Choose **Simple execution**.

1. Specify values for the input parameters.

1. Choose **Execute**.

1. After the automation completes, choose **Parameter Store** in the navigation pane and confirm that the new value for `latestAmi` matches the value returned by the automation. You can also verify the new AMI ID matches the Automation output in the **AMIs** section of the Amazon EC2 console.
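You can also start this automation programmatically rather than from the console. The request shape for `StartAutomationExecution` looks like the following sketch; note that every Automation parameter value must be a list of strings. The subnet, security group, role, and instance profile values here are hypothetical placeholders:

```python
# Request for the UpdateMyLatestWindowsAmi runbook created above.
# All resource IDs and names below are placeholders; substitute your own.
request = {
    "DocumentName": "UpdateMyLatestWindowsAmi",
    "Parameters": {
        # Automation parameter values are always lists of strings,
        # even when a single value is expected.
        "AutomationAssumeRole": ["arn:aws:iam::111122223333:role/AutomationServiceRole"],
        "SubnetId": ["subnet-0abcd1234example"],
        "SecurityGroupIds": ["sg-0abcd1234example"],
        "InstanceProfile": ["ManagedInstanceProfile"],
    },
}
# With boto3, this request would be sent as:
#   boto3.client('ssm').start_automation_execution(**request)
```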

# Updating AMIs using Automation and Jenkins


If your organization uses Jenkins software in a CI/CD pipeline, you can add Automation as a post-build step to pre-install application releases into Amazon Machine Images (AMIs). Automation is a tool in AWS Systems Manager. You can also use the Jenkins scheduling feature to call Automation and create your own operating system (OS) patching cadence.

The example below shows how to invoke Automation from a Jenkins server that is running either on-premises or in Amazon Elastic Compute Cloud (Amazon EC2). For authentication, the Jenkins server uses AWS credentials based on an IAM policy that you create in the example and attach to your instance profile.

**Note**  
Be sure to follow Jenkins security best practices when configuring your instance.

**Before you begin**  
Complete the following tasks before you configure Automation with Jenkins:
+ Complete the [Update a golden AMI using Automation, AWS Lambda, and Parameter Store](automation-tutorial-update-patch-golden-ami.md) example. The following example uses the **UpdateMyLatestWindowsAmi** runbook created in that example.
+ Configure IAM roles for Automation. Systems Manager requires an instance profile role and a service role ARN to process automations. For more information, see [Setting up Automation](automation-setup.md).

**To create an IAM policy for the Jenkins server**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**, and then choose **Create policy**.

1. Choose the **JSON** tab.

1. Replace each *example resource placeholder* with your own information.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "ssm:StartAutomationExecution",
               "Resource": [
                   "arn:aws:ssm:us-east-1:111122223333:document/UpdateMyLatestWindowsAmi",
                   "arn:aws:ssm:us-east-1:111122223333:automation-execution/*"
               ]
           }
       ]
   }
   ```

------

1. Choose **Review policy**.

1. On the **Review policy** page, for **Name**, enter a name for the inline policy, such as **JenkinsPolicy**.

1. Choose **Create policy**.

1. In the navigation pane, choose **Roles**.

1. Choose the instance profile that's attached to your Jenkins server.

1. In the **Permissions** tab, select **Add permissions** and choose **Attach policies**.

1. In the **Other permissions policies** section, enter the name of the policy you created in the previous steps. For example, **JenkinsPolicy**.

1. Select the check box next to your policy, and choose **Attach policies**.
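This policy scopes `ssm:StartAutomationExecution` to the runbook document and to automation executions in the account. As a rough local illustration of how the `automation-execution/*` wildcard behaves (Python's `fnmatch` is a simplification of IAM's actual matching rules, so treat this only as an approximation):

```python
from fnmatch import fnmatch

# The two Resource ARNs from the policy above.
allowed = [
    "arn:aws:ssm:us-east-1:111122223333:document/UpdateMyLatestWindowsAmi",
    "arn:aws:ssm:us-east-1:111122223333:automation-execution/*",
]

def is_allowed(resource_arn):
    # fnmatch approximates the '*' wildcard; IAM's evaluation is richer.
    return any(fnmatch(resource_arn, pattern) for pattern in allowed)

print(is_allowed("arn:aws:ssm:us-east-1:111122223333:automation-execution/4105a4fc"))  # True
print(is_allowed("arn:aws:ssm:us-east-1:111122223333:document/SomeOtherRunbook"))      # False
```

Requests against any other document in the account fall outside both patterns, so the Jenkins server can start only this one runbook.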

Use the following procedure to set up your Jenkins server and configure it to use your instance profile credentials.

**To configure the Jenkins server for Automation**

1. Connect to your Jenkins server on port 8080 using your preferred browser to access the management interface.

1. Enter the password found in `/var/lib/jenkins/secrets/initialAdminPassword`. To display your password, run the following command.

   ```
   sudo cat /var/lib/jenkins/secrets/initialAdminPassword
   ```

1. The Jenkins installation script directs you to the **Customize Jenkins** page. Select **Install suggested plugins**.

1. Once the installation is complete, choose **Administrator Credentials**, select **Save Credentials**, and then select **Start Using Jenkins**.

1. In the left navigation pane, choose **Manage Jenkins**, and then choose **Manage Plugins**.

1. Choose the **Available** tab, and then enter **Amazon EC2 plugin**.

1. Select the check box for **Amazon EC2 plugin**, and then select **Install without restart**.

1. When the installation completes, select **Go back to the top page**.

1. Choose **Manage Jenkins**, and then choose **Manage nodes and clouds**.

1. In the **Configure Clouds** section, select **Add a new cloud**, and then choose **Amazon EC2**.

1. Enter your information in the remaining fields. Make sure you select the **Use EC2 instance profile to obtain credentials** option.

Use the following procedure to configure your Jenkins project to invoke Automation.

**To configure your Jenkins server to invoke Automation**

1. Open the Jenkins console in a web browser.

1. Choose the project that you want to configure with Automation, and then choose **Configure**.

1. On the **Build** tab, choose **Add Build Step**.

1. Choose **Execute shell** or **Execute Windows batch command** (depending on your operating system).

1. In the **Command** field, run an AWS CLI command like the following. Replace each *example resource placeholder* with your own information.

   ```
   aws ssm start-automation-execution \
           --document-name runbook name \
           --region AWS Region of your source AMI \
           --parameters runbook parameters
   ```

   The following example command uses the **UpdateMyLatestWindowsAmi** runbook and the Systems Manager Parameter `latestAmi` created in [Update a golden AMI using Automation, AWS Lambda, and Parameter Store](automation-tutorial-update-patch-golden-ami.md).

   ```
   aws ssm start-automation-execution \
           --document-name UpdateMyLatestWindowsAmi \
           --parameters \
               "SourceAMI='{{ssm:latestAmi}}'" \
           --region region
   ```

   In Jenkins, the command looks like the example in the following screenshot.  
![\[A sample command in Jenkins software.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/sysman-ami-jenkins2.png)

1. In the Jenkins project, choose **Build Now**. Jenkins returns output similar to the following example.  
![\[Sample command output in Jenkins software.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/sysman-ami-jenkins.png)
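The `start-automation-execution` call returns a JSON document containing the execution ID. A later build step could capture and parse that output to poll the automation's status; the ID below is fabricated for illustration:

```python
import json

# Example shape of the start-automation-execution response; the ID is made up.
raw = '{"AutomationExecutionId": "4105a4fc-f944-11e6-9d32-0a1b2c3d4e5f"}'
execution_id = json.loads(raw)["AutomationExecutionId"]
print(execution_id)

# A follow-up build step could then poll for status, for example:
#   aws ssm get-automation-execution --automation-execution-id <execution_id>
```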

# Updating AMIs for Auto Scaling groups


The following example updates an Auto Scaling group with a newly patched AMI. This approach ensures that new images are automatically made available to different computing environments that use Auto Scaling groups.

The final step of the automation in this example uses a Python function to create a new launch template that uses the newly patched AMI. Then the Auto Scaling group is updated to use the new launch template. In this type of Auto Scaling scenario, users could terminate existing instances in the Auto Scaling group to force a new instance to launch that uses the new image. Or, users could wait and allow scale-in or scale-out events to naturally launch newer instances.

**Before you begin**  
Complete the following tasks before you begin this example.
+ Configure IAM roles for Automation, a tool in AWS Systems Manager. Systems Manager requires an instance profile role and a service role ARN to process automations. For more information, see [Setting up Automation](automation-setup.md).

## Create the **PatchAMIAndUpdateASG** runbook


Use the following procedure to create the **PatchAMIAndUpdateASG** runbook that patches the AMI you specify for the **SourceAMI** parameter. The runbook also updates an Auto Scaling group to use the latest, patched AMI.

**To create and run the runbook**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. In the **Create document** dropdown, choose **Automation**.

1. In the **Name** field, enter **PatchAMIAndUpdateASG**.

1. Choose the **Editor** tab, and then choose **Edit**.

1. Choose **OK** when prompted, and delete the content in the **Document editor** field.

1. In the **Document editor** field, paste the following YAML sample runbook content.

   ```
   ---
   description: Systems Manager Automation Demo - Patch AMI and Update ASG
   schemaVersion: '0.3'
   assumeRole: '{{ AutomationAssumeRole }}'
   parameters:
     AutomationAssumeRole:
       type: String
       description: '(Required) The ARN of the role that allows Automation to perform the actions on your behalf. If no role is specified, Systems Manager Automation uses your IAM permissions to execute this document.'
       default: ''
     SourceAMI:
       type: String
       description: '(Required) The ID of the AMI you want to patch.'
     SubnetId:
       type: String
       description: '(Required) The ID of the subnet where the instance from the SourceAMI parameter is launched.'
     SecurityGroupIds:
       type: StringList
       description: '(Required) The IDs of the security groups to associate with the instance launched from the SourceAMI parameter.'
     NewAMI:
       type: String
       description: '(Optional) The name of the newly patched AMI.'
       default: 'patchedAMI-{{global:DATE_TIME}}'
     TargetASG:
       type: String
       description: '(Required) The name of the Auto Scaling group you want to update.'
     InstanceProfile:
       type: String
       description: '(Required) The name of the IAM instance profile you want the source instance to use.'
     SnapshotId:
       type: String
       description: (Optional) The snapshot ID to use to retrieve a patch baseline snapshot.
       default: ''
     RebootOption:
       type: String
       description: '(Optional) Reboot behavior after a patch Install operation. If you choose NoReboot and patches are installed, the instance is marked as non-compliant until a subsequent reboot and scan.'
       allowedValues:
         - NoReboot
         - RebootIfNeeded
       default: RebootIfNeeded
     Operation:
       type: String
       description: (Optional) The update or configuration to perform on the instance. The system checks if patches specified in the patch baseline are installed on the instance. The install operation installs patches missing from the baseline.
       allowedValues:
         - Install
         - Scan
       default: Install
   mainSteps:
     - name: startInstances
       action: 'aws:runInstances'
       timeoutSeconds: 1200
       maxAttempts: 1
       onFailure: Abort
       inputs:
         ImageId: '{{ SourceAMI }}'
         InstanceType: m5.large
         MinInstanceCount: 1
         MaxInstanceCount: 1
         IamInstanceProfileName: '{{ InstanceProfile }}'
         SubnetId: '{{ SubnetId }}'
         SecurityGroupIds: '{{ SecurityGroupIds }}'
     - name: verifyInstanceManaged
       action: 'aws:waitForAwsResourceProperty'
       timeoutSeconds: 600
       inputs:
         Service: ssm
         Api: DescribeInstanceInformation
         InstanceInformationFilterList:
           - key: InstanceIds
             valueSet:
               - '{{ startInstances.InstanceIds }}'
         PropertySelector: '$.InstanceInformationList[0].PingStatus'
         DesiredValues:
           - Online
       onFailure: 'step:terminateInstance'
     - name: installPatches
       action: 'aws:runCommand'
       timeoutSeconds: 7200
       onFailure: Abort
       inputs:
         DocumentName: AWS-RunPatchBaseline
         Parameters:
           SnapshotId: '{{SnapshotId}}'
           RebootOption: '{{RebootOption}}'
           Operation: '{{Operation}}'
         InstanceIds:
           - '{{ startInstances.InstanceIds }}'
     - name: stopInstance
       action: 'aws:changeInstanceState'
       maxAttempts: 1
       onFailure: Continue
       inputs:
         InstanceIds:
           - '{{ startInstances.InstanceIds }}'
         DesiredState: stopped
     - name: createImage
       action: 'aws:createImage'
       maxAttempts: 1
       onFailure: Continue
       inputs:
         InstanceId: '{{ startInstances.InstanceIds }}'
         ImageName: '{{ NewAMI }}'
         NoReboot: false
         ImageDescription: Patched AMI created by Automation
     - name: terminateInstance
       action: 'aws:changeInstanceState'
       maxAttempts: 1
       onFailure: Continue
       inputs:
         InstanceIds:
           - '{{ startInstances.InstanceIds }}'
         DesiredState: terminated
     - name: updateASG
       action: 'aws:executeScript'
       timeoutSeconds: 300
       maxAttempts: 1
       onFailure: Abort
       inputs:
         Runtime: python3.11
         Handler: update_asg
         InputPayload:
           TargetASG: '{{TargetASG}}'
           NewAMI: '{{createImage.ImageId}}'
         Script: |-
           from __future__ import print_function
           import datetime
           import json
           import time
           import boto3
   
           # create auto scaling and ec2 client
           asg = boto3.client('autoscaling')
           ec2 = boto3.client('ec2')
   
           def update_asg(event, context):
               print("Received event: " + json.dumps(event, indent=2))
   
               target_asg = event['TargetASG']
               new_ami = event['NewAMI']
   
               # get object for the ASG we're going to update, filter by name of target ASG
               asg_query = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[target_asg])
               if 'AutoScalingGroups' not in asg_query or not asg_query['AutoScalingGroups']:
                   return 'No ASG found matching the value you specified.'
   
               # gets details of an instance from the ASG that we'll use to model the new launch template after
               source_instance_id = asg_query.get('AutoScalingGroups')[0]['Instances'][0]['InstanceId']
               instance_properties = ec2.describe_instances(
                   InstanceIds=[source_instance_id]
               )
               source_instance = instance_properties['Reservations'][0]['Instances'][0]
   
               # create list of security group IDs
               security_groups = []
               for group in source_instance['SecurityGroups']:
                   security_groups.append(group['GroupId'])
   
               # create a list of dictionary objects for block device mappings
               mappings = []
               for block in source_instance['BlockDeviceMappings']:
                   volume_query = ec2.describe_volumes(
                       VolumeIds=[block['Ebs']['VolumeId']]
                   )
                   volume_details = volume_query['Volumes']
                   device_name = block['DeviceName']
                   volume_size = volume_details[0]['Size']
                   volume_type = volume_details[0]['VolumeType']
                   device = {'DeviceName': device_name, 'Ebs': {'VolumeSize': volume_size, 'VolumeType': volume_type}}
                   mappings.append(device)
   
               # create new launch template using details returned from instance in the ASG and specify the newly patched AMI
               time_stamp = time.time()
               time_stamp_string = datetime.datetime.fromtimestamp(time_stamp).strftime('%m-%d-%Y_%H-%M-%S')
               new_template_name = f'{new_ami}_{time_stamp_string}'
               try:
                   ec2.create_launch_template(
                       LaunchTemplateName=new_template_name,
                       LaunchTemplateData={
                           'BlockDeviceMappings': mappings,
                           'ImageId': new_ami,
                           'InstanceType': source_instance['InstanceType'],
                           'IamInstanceProfile': {
                               'Arn': source_instance['IamInstanceProfile']['Arn']
                           },
                           'KeyName': source_instance['KeyName'],
                           'SecurityGroupIds': security_groups
                       }
                   )
               except Exception as e:
                   return f'Exception caught: {str(e)}'
               else:
                   # update ASG to use new launch template
                   asg.update_auto_scaling_group(
                       AutoScalingGroupName=target_asg,
                       LaunchTemplate={
                           'LaunchTemplateName': new_template_name
                       }
                   )
                   return f'Updated ASG {target_asg} with new launch template {new_template_name} which uses AMI {new_ami}.'
   outputs:
   - createImage.ImageId
   ```

1. Choose **Create automation**.

1. In the navigation pane, choose **Automation**, and then choose **Execute automation**.

1. In the **Choose document** page, choose the **Owned by me** tab.

1. Search for the **PatchAMIAndUpdateASG** runbook, and select the button in the **PatchAMIAndUpdateASG** card.

1. Choose **Next**.

1. Choose **Simple execution**.

1. Specify values for the input parameters. Be sure the `SubnetId` and `SecurityGroupIds` you specify allow access to the public Systems Manager endpoints, or your interface endpoints for Systems Manager.

1. Choose **Execute**.

1. After automation completes, in the Amazon EC2 console, choose **Auto Scaling**, and then choose **Launch Templates**. Verify that you see the new launch template, and that it uses the new AMI.

1. Choose **Auto Scaling**, and then choose **Auto Scaling Groups**. Verify that the Auto Scaling group uses the new launch template.

1. Terminate one or more instances in your Auto Scaling group. Replacement instances will be launched using the new AMI.
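The most intricate part of the `updateASG` script is rebuilding the source instance's block device mappings from separate `describe_instances` and `describe_volumes` calls. That transformation can be checked locally with canned API output; the helper function and instance data below are made up for illustration:

```python
def build_mappings(source_instance, volume_lookup):
    """Re-create the script's block-device-mapping transformation.
    volume_lookup maps a volume ID to its (size, type), standing in for
    the ec2.describe_volumes calls made in the real script."""
    mappings = []
    for block in source_instance['BlockDeviceMappings']:
        size, vtype = volume_lookup[block['Ebs']['VolumeId']]
        mappings.append({'DeviceName': block['DeviceName'],
                         'Ebs': {'VolumeSize': size, 'VolumeType': vtype}})
    return mappings

# Canned data shaped like a describe_instances result (values are invented).
instance = {'BlockDeviceMappings': [
    {'DeviceName': '/dev/xvda', 'Ebs': {'VolumeId': 'vol-0abc'}},
    {'DeviceName': '/dev/sdf', 'Ebs': {'VolumeId': 'vol-0def'}},
]}
volumes = {'vol-0abc': (30, 'gp3'), 'vol-0def': (100, 'gp2')}
mappings = build_mappings(instance, volumes)
print(mappings)
```

Each entry in the resulting list has the shape `create_launch_template` expects for `BlockDeviceMappings`, which is why the script can pass it through unchanged.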

# Using AWS Support self-service runbooks


This section describes how to use some of the self-service automations created by the AWS Support team. These automations help you manage your AWS resources.

**Support Automation Workflows**  
Support Automation Workflows (SAW) are automation runbooks written and maintained by the AWS Support team. These runbooks help you troubleshoot common issues with your AWS resources, proactively monitor and identify network issues, collect and analyze logs, and more.

SAW runbooks use the **`AWSSupport`** prefix. For example, [AWSSupport-ActivateWindowsWithAmazonLicense](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-awssupport-activatewindowswithamazonlicense.html).

Additionally, customers with Business Support and higher AWS Support plans also have access to runbooks that use the **`AWSPremiumSupport`** prefix. For example, [AWSPremiumSupport-TroubleshootEC2DiskUsage](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-awspremiumsupport-troubleshootEC2diskusage.html).

To learn more about AWS Support, see [Getting started with AWS Support](https://docs.aws.amazon.com/awssupport/latest/user/getting-started.html).

**Topics**
+ [Run the EC2Rescue tool on unreachable instances](automation-ec2rescue.md)
+ [Reset passwords and SSH keys on EC2 instances](automation-ec2reset.md)

# Run the EC2Rescue tool on unreachable instances


EC2Rescue can help you diagnose and troubleshoot problems on Amazon Elastic Compute Cloud (Amazon EC2) instances for Linux and Windows Server. You can run the tool manually, as described in [Using EC2Rescue for Linux Server](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Linux-Server-EC2Rescue.html) and [Using EC2Rescue for Windows Server](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/Windows-Server-EC2Rescue.html). Or, you can run the tool automatically by using Systems Manager Automation and the **`AWSSupport-ExecuteEC2Rescue`** runbook. Automation is a tool in AWS Systems Manager. The **`AWSSupport-ExecuteEC2Rescue`** runbook is designed to perform a combination of Systems Manager actions, CloudFormation actions, and Lambda functions that automate the steps normally required to use EC2Rescue. 

You can use the **`AWSSupport-ExecuteEC2Rescue`** runbook to troubleshoot and potentially remediate different types of operating system (OS) issues. Instances with encrypted root volumes are not supported. See the following topics for a complete list:

**Windows**: See *Rescue Action* in [Using EC2Rescue for Windows Server with the Command Line](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2rw-cli.html#ec2rw-rescue).

**Linux** and **macOS**: Some EC2Rescue for Linux modules detect and attempt to remediate issues. For more information, see the [https://github.com/awslabs/aws-ec2rescue-linux/tree/master/docs](https://github.com/awslabs/aws-ec2rescue-linux/tree/master/docs) documentation for each module on GitHub.

## How it works


Troubleshooting an instance with Automation and the **`AWSSupport-ExecuteEC2Rescue`** runbook works as follows:
+ You specify the ID of the unreachable instance and start the runbook.
+ The system creates a temporary VPC, and then runs a series of Lambda functions to configure the VPC.
+ The system identifies a subnet for your temporary VPC in the same Availability Zone as your original instance.
+ The system launches a temporary, SSM-enabled helper instance.
+ The system stops your original instance, and creates a backup. It then attaches the original root volume to the helper instance.
+ The system uses Run Command to run EC2Rescue on the helper instance. EC2Rescue identifies and attempts to fix issues on the attached, original root volume. When finished, EC2Rescue reattaches the root volume back to the original instance.
+ The system restarts your original instance, and terminates the temporary instance. The system also terminates the temporary VPC and the Lambda functions created at the start of the automation.

## Before you begin


Before you run the following Automation, do the following:
+ Copy the instance ID of the unreachable instance. You will specify this ID in the procedure.
+ Optionally, collect the ID of a subnet in the same Availability Zone as your unreachable instance. The EC2Rescue instance will be created in this subnet. If you don’t specify a subnet, then Automation creates a new temporary VPC in your AWS account. Verify that your AWS account has at least one VPC available. By default, you can create five VPCs in a Region. If you already created five VPCs in the Region, the automation fails without making changes to your instance. For more information about Amazon VPC quotas, see [VPC and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html#vpc-limits-vpcs-subnets) in the *Amazon VPC User Guide*.
+ Optionally, you can create and specify an AWS Identity and Access Management (IAM) role for Automation. If you don't specify this role, then Automation runs in the context of the user who ran the automation.

### Granting `AWSSupport-EC2Rescue` permissions to perform actions on your instances


EC2Rescue needs permission to perform a series of actions on your instances during the automation. These actions invoke the AWS Lambda, IAM, and Amazon EC2 services to safely and securely attempt to remediate issues with your instances. If you have Administrator-level permissions in your AWS account and/or VPC, you might be able to run the automation without configuring permissions, as described in this section. If you don't have Administrator-level permissions, then you or an administrator must configure permissions by using one of the following options.
+ [Granting permissions by using IAM policies](#automation-ec2rescue-access-iam)
+ [Granting permissions by using a CloudFormation template](#automation-ec2rescue-access-cfn)

#### Granting permissions by using IAM policies


You can either attach the following IAM policy to your user, group, or role as an inline policy; or, you can create a new IAM managed policy and attach it to your user, group, or role. For more information about adding an inline policy to your user, group, or role see [Working With Inline Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_inline-using.html). For more information about creating a new managed policy, see [Working With Managed Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-using.html).

**Note**  
If you create a new IAM managed policy, you must also attach the **AmazonSSMAutomationRole** managed policy to it so that your instances can communicate with the Systems Manager API.

**IAM Policy for AWSSupport-EC2Rescue**

Replace *account ID* with your own information.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "lambda:InvokeFunction",
                "lambda:DeleteFunction",
                "lambda:GetFunction"
            ],
            "Resource": "arn:aws:lambda:*:111122223333:function:AWSSupport-EC2Rescue-*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::awssupport-ssm.*/*.template",
                "arn:aws:s3:::awssupport-ssm.*/*.zip"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "iam:CreateRole",
                "iam:CreateInstanceProfile",
                "iam:GetRole",
                "iam:GetInstanceProfile",
                "iam:PutRolePolicy",
                "iam:DetachRolePolicy",
                "iam:AttachRolePolicy",
                "iam:PassRole",
                "iam:AddRoleToInstanceProfile",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:DeleteInstanceProfile"
            ],
            "Resource": [
                "arn:aws:iam::111122223333:role/AWSSupport-EC2Rescue-*",
                "arn:aws:iam::111122223333:instance-profile/AWSSupport-EC2Rescue-*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "lambda:CreateFunction",
                "ec2:CreateVpc",
                "ec2:ModifyVpcAttribute",
                "ec2:DeleteVpc",
                "ec2:CreateInternetGateway",
                "ec2:AttachInternetGateway",
                "ec2:DetachInternetGateway",
                "ec2:DeleteInternetGateway",
                "ec2:CreateSubnet",
                "ec2:DeleteSubnet",
                "ec2:CreateRoute",
                "ec2:DeleteRoute",
                "ec2:CreateRouteTable",
                "ec2:AssociateRouteTable",
                "ec2:DisassociateRouteTable",
                "ec2:DeleteRouteTable",
                "ec2:CreateVpcEndpoint",
                "ec2:DeleteVpcEndpoints",
                "ec2:ModifyVpcEndpoint",
                "ec2:Describe*",
                "autoscaling:DescribeAutoScalingInstances"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
```

------
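If you script policy creation, you can substitute your account ID programmatically. The following Python sketch uses a hypothetical helper name, and the policy shown is abbreviated to a single statement; in practice, use the full JSON from this section.

```python
import json

def fill_account_id(policy_json: str, account_id: str) -> str:
    # The example policy uses the placeholder account ID 111122223333
    # in its Resource ARNs; swap in the real account ID before attaching.
    return policy_json.replace("111122223333", account_id)

# Abbreviated version of the policy above, for illustration only.
example = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Action": ["lambda:InvokeFunction"],
        "Resource": "arn:aws:lambda:*:111122223333:function:AWSSupport-EC2Rescue-*",
        "Effect": "Allow"
    }]
})

filled = fill_account_id(example, "123456789012")
print(json.loads(filled)["Statement"][0]["Resource"])
```

You could then save the result to a file and attach it as an inline or managed policy using the IAM console or AWS CLI.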

#### Granting permissions by using a CloudFormation template


CloudFormation automates the process of creating IAM roles and policies by using a preconfigured template. Use the following procedure to create the required IAM roles and policies for the EC2Rescue Automation by using CloudFormation.

**To create the required IAM roles and policies for EC2Rescue**

1. Download [https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWSSupport-EC2RescueRole.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWSSupport-EC2RescueRole.zip) and extract the `AWSSupport-EC2RescueRole.json` file to a directory on your local machine.

1. If your AWS account is in a special partition, edit the template to change the ARN values to those for your partition.

   For example, for the China Regions, change all cases of `arn:aws` to `arn:aws-cn`.

1. Sign in to the AWS Management Console and open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. Choose **Create stack**, **With new resources (standard)**.

1. On the **Create stack** page, for **Prerequisite - Prepare template**, choose **Template is ready**.

1. For **Specify template**, choose **Upload a template file**.

1. Choose **Choose file**, and then browse to and select the `AWSSupport-EC2RescueRole.json` file from the directory where you extracted it.

1. Choose **Next**.

1. On the **Specify stack details** page, in the **Stack name** field, enter a name to identify this stack, and then choose **Next**.

1. (Optional) In the **Tags** area, apply one or more tag key name/value pairs to the stack.

   Tags are optional metadata that you assign to a resource. Tags enable you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a stack to identify the type of tasks it runs, the types of targets or other resources involved, and the environment it runs in.

1. Choose **Next**.

1. On the **Review** page, review the stack details, and then scroll down and choose the **I acknowledge that CloudFormation might create IAM resources** option.

1. Choose **Create stack**.

   CloudFormation shows the **CREATE_IN_PROGRESS** status for a few minutes. The status changes to **CREATE_COMPLETE** after the stack has been created. You can also choose the refresh icon to check the status of the create process.

1. In the **Stacks** list, choose the option button next to the stack you just created, and then choose the **Outputs** tab.

1. Note the **Value**. This is the ARN of the AssumeRole. You specify this ARN when you run the automation in the next procedure, [Running the Automation](#automation-ec2rescue-executing).
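The partition edit in step 2 can also be scripted. The following is a minimal Python sketch with a hypothetical helper name; it assumes the template has been read into a string:

```python
def rewrite_partition(template_text: str, partition: str) -> str:
    """Rewrite standard-partition ARN prefixes for a special partition,
    for example 'aws-cn' for the China Regions."""
    if partition == "aws":
        return template_text  # standard partition needs no changes
    # Match the trailing colon so an already-rewritten prefix such as
    # 'arn:aws-cn' isn't touched a second time.
    return template_text.replace("arn:aws:", f"arn:{partition}:")

snippet = '"Resource": "arn:aws:iam::111122223333:role/AWSSupport-EC2Rescue-*"'
rewritten = rewrite_partition(snippet, "aws-cn")
print(rewritten)
```

You would apply this to the full contents of `AWSSupport-EC2RescueRole.json` before uploading it to CloudFormation.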

## Running the Automation


**Important**  
The following automation stops the unreachable instance. Stopping the instance can result in the loss of data on attached instance store volumes (if present). Stopping the instance can also cause the public IP address to change if no Elastic IP address is associated.

**To run the `AWSSupport-ExecuteEC2Rescue` Automation**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**.

1. Choose **Execute automation**.

1. In the **Automation document** section, choose **Owned by Amazon** from the list.

1. In the runbooks list, choose the button in the card for `AWSSupport-ExecuteEC2Rescue`, and then choose **Next**.

1. On the **Execute automation document** page, choose **Simple execution**.

1. In the **Document details** section, verify that **Document version** is set to the highest default version. For example, **$DEFAULT** or **3 (default)**.

1. In the **Input parameters** section, specify the following parameters: 

   1. For **UnreachableInstanceId**, specify the ID of the unreachable instance. 

   1. (Optional) For **EC2RescueInstanceType**, specify an instance type for the EC2Rescue instance. The default instance type is `t2.medium`.

   1. For **AutomationAssumeRole**, if you created roles for this Automation by using the CloudFormation procedure described earlier in this topic, then choose the ARN of the AssumeRole that you created in the CloudFormation console.

   1. (Optional) For **LogDestination**, specify an S3 bucket if you want to collect operating system-level logs while troubleshooting your instance. Logs are automatically uploaded to the specified bucket.

   1. For **SubnetId**, specify a subnet in an existing VPC in the same Availability Zone as the unreachable instance. By default, Systems Manager creates a new VPC, but you can specify a subnet in an existing VPC if you want.
**Note**  
If you don't see the option to specify a bucket or a subnet ID, verify that you are using the latest **Default** version of the runbook.

1. (Optional) In the **Tags** area, apply one or more tag key name/value pairs to help identify the automation, for example `Key=Purpose,Value=EC2Rescue`.

1. Choose **Execute**.

The runbook creates a backup AMI as part of the automation. All other resources created by the automation are automatically deleted, but this AMI remains in your account. The AMI is named using the following convention:

Backup AMI: AWSSupport-EC2Rescue:*UnreachableInstanceId*

You can locate this AMI in the Amazon EC2 console by searching on the Automation execution ID.

# Reset passwords and SSH keys on EC2 instances


You can use the `AWSSupport-ResetAccess` runbook to automatically re-enable local Administrator password generation on Amazon Elastic Compute Cloud (Amazon EC2) instances for Windows Server and to generate a new SSH key on EC2 instances for Linux. The `AWSSupport-ResetAccess` runbook is designed to perform a combination of AWS Systems Manager actions, AWS CloudFormation actions, and AWS Lambda functions that automate the steps normally required to reset the local administrator password.

You can use Automation, a tool in AWS Systems Manager, with the `AWSSupport-ResetAccess` runbook to solve the following problems:

**Windows**

*You lost the EC2 key pair*: To resolve this problem, you can use the **AWSSupport-ResetAccess** runbook to create a password-enabled AMI from your current instance, launch a new instance from the AMI, and select a key pair you own.

*You lost the local Administrator password*: To resolve this problem, you can use the `AWSSupport-ResetAccess` runbook to generate a new password that you can decrypt with the current EC2 key pair.

**Linux**

*You lost your EC2 key pair, or you configured SSH access to the instance with a key you lost*: To resolve this problem, you can use the `AWSSupport-ResetAccess` runbook to create a new SSH key for your current instance, which enables you to connect to the instance again.

**Note**  
If your EC2 instance for Windows Server is configured for Systems Manager, you can also reset your local Administrator password by using EC2Rescue and AWS Systems Manager Run Command. For more information, see [Using EC2Rescue for Windows Server with Systems Manager Run Command](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2rw-ssm.html) in the *Amazon EC2 User Guide*.

**Related information**  
[Connect to your Linux instance from Windows using PuTTY](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html) in the *Amazon EC2 User Guide*

## How it works


Troubleshooting an instance with Automation and the `AWSSupport-ResetAccess` runbook works as follows:
+ You specify the ID of the instance and run the runbook.
+ The system creates a temporary VPC, and then runs a series of Lambda functions to configure the VPC.
+ The system identifies a subnet for your temporary VPC in the same Availability Zone as your original instance.
+ The system launches a temporary, SSM-enabled helper instance.
+ The system stops your original instance, and creates a backup. It then attaches the original root volume to the helper instance.
+ The system uses Run Command to run EC2Rescue on the helper instance. On Windows, EC2Rescue enables password generation for the local Administrator by using EC2Config or EC2Launch on the attached, original root volume. On Linux, EC2Rescue generates and injects a new SSH key and saves the private key, encrypted, in Parameter Store. When finished, EC2Rescue reattaches the root volume back to the original instance.
+ The system creates a new Amazon Machine Image (AMI) of your instance, now that password generation is enabled. You can use this AMI to create a new EC2 instance, and associate a new key pair if needed.
+ The system restarts your original instance, and terminates the temporary instance. The system also terminates the temporary VPC and the Lambda functions created at the start of the automation.
+ **Windows**: Your instance generates a new password you can decode from the Amazon EC2 console using the current key pair assigned to the instance.

  **Linux**: You can SSH to the instance by using the SSH key stored in Systems Manager Parameter Store as **/ec2rl/openssh/*instance ID*/key**.

## Before you begin


Before you run the following Automation, do the following:
+ Copy the instance ID of the instance on which you want to reset the Administrator password. You will specify this ID in the procedure.
+ Optionally, collect the ID of a subnet in the same Availability Zone as your unreachable instance. The EC2Rescue instance will be created in this subnet. If you don’t specify a subnet, then Automation creates a new temporary VPC in your AWS account. Verify that your AWS account has at least one VPC available. By default, you can create five VPCs in a Region. If you already created five VPCs in the Region, the automation fails without making changes to your instance. For more information about Amazon VPC quotas, see [VPC and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html#vpc-limits-vpcs-subnets) in the *Amazon VPC User Guide*.
+ Optionally, you can create and specify an AWS Identity and Access Management (IAM) role for Automation. If you don't specify this role, then Automation runs in the context of the user who ran the automation.

### Granting `AWSSupport-EC2Rescue` permissions to perform actions on your instances


EC2Rescue needs permission to perform a series of actions on your instances during the automation. These actions invoke the AWS Lambda, IAM, and Amazon EC2 services to safely attempt to remediate issues with your instances. If you have Administrator-level permissions in your AWS account, you might be able to run the automation without configuring the permissions described in this section. If you don't have Administrator-level permissions, then you or an administrator must configure permissions by using one of the following options.
+ [Granting permissions by using IAM policies](#automation-ec2reset-access-iam)
+ [Granting permissions by using a CloudFormation template](#automation-ec2reset-access-cfn)

#### Granting permissions by using IAM policies


You can attach the following IAM policy to your user, group, or role as an inline policy, or you can create a new IAM managed policy and attach it to your user, group, or role. For more information about adding an inline policy to your user, group, or role, see [Working With Inline Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_inline-using.html). For more information about creating a new managed policy, see [Working With Managed Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-using.html).

**Note**  
If you create a new IAM managed policy, you must also attach the **AmazonSSMAutomationRole** managed policy to it so that your instances can communicate with the Systems Manager API.

**IAM Policy for `AWSSupport-ResetAccess`**

Replace the example account ID *111122223333* with your own AWS account ID.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Action": [
                "lambda:InvokeFunction",
                "lambda:DeleteFunction",
                "lambda:GetFunction"
            ],
            "Resource": "arn:aws:lambda:*:111122223333:function:AWSSupport-EC2Rescue-*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::awssupport-ssm.*/*.template",
                "arn:aws:s3:::awssupport-ssm.*/*.zip"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "iam:CreateRole",
                "iam:CreateInstanceProfile",
                "iam:GetRole",
                "iam:GetInstanceProfile",
                "iam:PutRolePolicy",
                "iam:DetachRolePolicy",
                "iam:AttachRolePolicy",
                "iam:PassRole",
                "iam:AddRoleToInstanceProfile",
                "iam:RemoveRoleFromInstanceProfile",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:DeleteInstanceProfile"
            ],
            "Resource": [
                "arn:aws:iam::111122223333:role/AWSSupport-EC2Rescue-*",
                "arn:aws:iam::111122223333:instance-profile/AWSSupport-EC2Rescue-*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "lambda:CreateFunction",
                "ec2:CreateVpc",
                "ec2:ModifyVpcAttribute",
                "ec2:DeleteVpc",
                "ec2:CreateInternetGateway",
                "ec2:AttachInternetGateway",
                "ec2:DetachInternetGateway",
                "ec2:DeleteInternetGateway",
                "ec2:CreateSubnet",
                "ec2:DeleteSubnet",
                "ec2:CreateRoute",
                "ec2:DeleteRoute",
                "ec2:CreateRouteTable",
                "ec2:AssociateRouteTable",
                "ec2:DisassociateRouteTable",
                "ec2:DeleteRouteTable",
                "ec2:CreateVpcEndpoint",
                "ec2:DeleteVpcEndpoints",
                "ec2:ModifyVpcEndpoint",
                "ec2:Describe*"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
```

------

#### Granting permissions by using a CloudFormation template


CloudFormation automates the process of creating IAM roles and policies by using a preconfigured template. Use the following procedure to create the required IAM roles and policies for the EC2Rescue Automation by using CloudFormation.

**To create the required IAM roles and policies for EC2Rescue**

1. Download [https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWSSupport-EC2RescueRole.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/AWSSupport-EC2RescueRole.zip) and extract the `AWSSupport-EC2RescueRole.json` file to a directory on your local machine.

1. If your AWS account is in a special partition, edit the template to change the ARN values to those for your partition.

   For example, for the China Regions, change all cases of `arn:aws` to `arn:aws-cn`.

1. Sign in to the AWS Management Console and open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. Choose **Create stack**, **With new resources (standard)**.

1. On the **Create stack** page, for **Prerequisite - Prepare template**, choose **Template is ready**.

1. For **Specify template**, choose **Upload a template file**.

1. Choose **Choose file**, and then browse to and select the `AWSSupport-EC2RescueRole.json` file from the directory where you extracted it.

1. Choose **Next**.

1. On the **Specify stack details** page, in the **Stack name** field, enter a name to identify this stack, and then choose **Next**.

1. (Optional) In the **Tags** area, apply one or more tag key name/value pairs to the stack.

   Tags are optional metadata that you assign to a resource. Tags enable you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a stack to identify the type of tasks it runs, the types of targets or other resources involved, and the environment it runs in.

1. Choose **Next**.

1. On the **Review** page, review the stack details, and then scroll down and choose the **I acknowledge that CloudFormation might create IAM resources** option.

1. Choose **Create stack**.

   CloudFormation shows the **CREATE_IN_PROGRESS** status for a few minutes. The status changes to **CREATE_COMPLETE** after the stack has been created. You can also choose the refresh icon to check the status of the create process.

1. In the stack list, choose the option next to the stack you just created, and then choose the **Outputs** tab.

1. Copy the **Value**. This is the ARN of the AssumeRole. You specify this ARN when you run the automation.

## Running the Automation


The following procedure describes how to run the `AWSSupport-ResetAccess` runbook by using the AWS Systems Manager console.

**Important**  
The following automation stops the instance. Stopping the instance can result in the loss of data on attached instance store volumes (if present). Stopping the instance can also cause the public IP address to change if no Elastic IP address is associated. To avoid these configuration changes, use Run Command to reset access. For more information, see [Using EC2Rescue for Windows Server with Systems Manager Run Command](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2rw-ssm.html) in the *Amazon EC2 User Guide*.

**To run the AWSSupport-ResetAccess Automation**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Automation**.

1. Choose **Execute automation**.

1. In the **Automation document** section, choose **Owned by Amazon** from the list.

1. In the runbooks list, choose the button in the card for **AWSSupport-ResetAccess**, and then choose **Next**.

1. On the **Execute automation document** page, choose **Simple execution**.

1. In the **Document details** section, verify that **Document version** is set to the highest default version. For example, **$DEFAULT** or **3 (default)**.

1. In the **Input parameters** section, specify the following parameters: 

   1. For **InstanceID**, specify the ID of the unreachable instance. 

   1. For **SubnetId**, specify a subnet in an existing VPC in the same Availability Zone as the instance you specified. By default, Systems Manager creates a new VPC, but you can specify a subnet in an existing VPC if you want.
**Note**  
If you don't see the option to specify a subnet ID, verify that you are using the latest **Default** version of the runbook.

   1. For **EC2RescueInstanceType**, specify an instance type for the EC2Rescue instance. The default instance type is `t2.medium`.

   1. For **AssumeRole**, if you created roles for this Automation by using the CloudFormation procedure described earlier in this topic, then specify the AssumeRole ARN that you noted in the CloudFormation console.

1. (Optional) In the **Tags** area, apply one or more tag key name/value pairs to help identify the automation, for example `Key=Purpose,Value=ResetAccess`.

1. Choose **Execute**.

1. To monitor the automation progress, choose the running automation, and then choose the **Steps** tab. When the automation is finished, choose the **Descriptions** tab, and then choose **View output** to view the results. To view the output of individual steps, choose the **Steps** tab, and then choose **View Outputs** next to a step.

The runbook creates a backup AMI and a password-enabled AMI as part of the automation. All other resources created by the automation are automatically deleted, but these AMIs remain in your account. The AMIs are named using the following conventions:
+ Backup AMI: `AWSSupport-EC2Rescue:InstanceID`
+ Password-enabled AMI: AWSSupport-EC2Rescue: Password-enabled AMI from *Instance ID*

You can locate these AMIs by searching on the Automation execution ID.

For Linux, the new SSH private key for your instance is saved, encrypted, in Parameter Store. The parameter name is **/ec2rl/openssh/*instance ID*/key**.
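Because the parameter name follows a fixed convention, you can construct it directly from the instance ID. A minimal Python sketch with a hypothetical helper name:

```python
def ssh_key_parameter_name(instance_id: str) -> str:
    # Parameter Store name under which the automation saves the new
    # (encrypted) SSH private key for a Linux instance.
    return f"/ec2rl/openssh/{instance_id}/key"

name = ssh_key_parameter_name("i-0123456789abcdef0")
print(name)
```

You could then retrieve and decrypt the key with the AWS CLI, for example `aws ssm get-parameter --name /ec2rl/openssh/i-0123456789abcdef0/key --with-decryption`.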

# Passing data to Automation using input transformers


This AWS Systems Manager Automation tutorial shows how to use the input transformer feature of Amazon EventBridge to extract the `instance-id` of an Amazon Elastic Compute Cloud (Amazon EC2) instance from an instance state change event. Automation is a tool in AWS Systems Manager. We use the input transformer to pass that data to the `AWS-CreateImage` runbook target as the `InstanceId` input parameter. The rule is triggered when any instance changes to the `stopped` state.

For more information about working with input transformers, see [Tutorial: Use Input Transformer to Customize What is Passed to the Event Target](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-input-transformer-tutorial.html) in the *Amazon EventBridge User Guide*.
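To see how the input path and template in this tutorial fit together, the following Python sketch simulates the mapping locally. EventBridge performs this transformation server-side; the sample event below is illustrative and trimmed to the fields used here.

```python
import json

def transform(event: dict) -> dict:
    # Input path: {"instance": "$.detail.instance-id"}
    instance = event["detail"]["instance-id"]
    # Template: {"InstanceId": [<instance>]} -- the runbook's InstanceId
    # parameter takes a list of values.
    return {"InstanceId": [instance]}

sample_event = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"instance-id": "i-0123456789abcdef0", "state": "stopped"},
}

print(json.dumps(transform(sample_event)))
```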

**Before you begin**  
Verify that you added the required permissions and trust policy for EventBridge to your Systems Manager Automation service role. For more information, see [Overview of Managing Access Permissions to Your EventBridge Resources](https://docs.aws.amazon.com/eventbridge/latest/userguide/iam-access-control-identity-based-eventbridge.html) in the *Amazon EventBridge User Guide*.

**To use input transformers with Automation**

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**.

1. Choose **Create rule**.

1. Enter a name and description for the rule.

   A rule can't have the same name as another rule in the same Region and on the same event bus.

1. For **Event bus**, choose the event bus that you want to associate with this rule. If you want this rule to respond to matching events that come from your own AWS account, select **default**. When an AWS service in your account emits an event, it always goes to your account’s default event bus.

1. For **Rule type**, choose **Rule with an event pattern**.

1. Choose **Next**.

1. For **Event source**, choose **AWS events or EventBridge partner events**.

1. In the **Event pattern** section, choose **Use pattern form**.

1. For **Event source**, choose **AWS services**.

1. For **AWS service**, choose **EC2**.

1. For **Event type**, choose **EC2 Instance State-change Notification**.

1. For **Event Type Specification 1**, select **Specific state(s)**, and then choose **stopped**.

1. For **Event Type Specification 2**, select **Any instance**, or select **Specific instance Id(s)** and enter the IDs of the instances to monitor.

1. Choose **Next**.

1. For **Target types**, choose **AWS service**.

1. For **Select a target**, choose **Systems Manager Automation**.

1. For **Document**, choose **AWS-CreateImage**.

1. In the **Configure automation parameter(s)** section, choose **Input Transformer**.

1. For **Input path**, enter **{"instance":"$.detail.instance-id"}**.

1. For **Template**, enter **{"InstanceId":[&lt;instance&gt;]}**.

1. For **Execution role**, choose **Use existing role** and choose your Automation service role.

1. Choose **Next**.

1. (Optional) Enter one or more tags for the rule. For more information, see [Tagging Your Amazon EventBridge Resources](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-tagging.html) in the *Amazon EventBridge User Guide*.

1. Choose **Next**.

1. Review the details of the rule and choose **Create rule**.

# Learn about statuses returned by Systems Manager Automation


AWS Systems Manager Automation reports detailed status information about the various statuses an automation action or step goes through when you run an automation and for the overall automation. Automation is a tool in AWS Systems Manager. You can monitor automation statuses using the following methods:
+ Monitor the **Execution status** in the Systems Manager Automation console.
+ Use your preferred command line tools. For the AWS Command Line Interface (AWS CLI), you can use [describe-automation-step-executions](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-automation-step-executions.html) or [get-automation-execution](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-automation-execution.html). For the AWS Tools for Windows PowerShell, you can use [Get-SSMAutomationStepExecution](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-SSMAutomationStepExecution.html) or [Get-SSMAutomationExecution](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-SSMAutomationExecution.html).
+ Configure Amazon EventBridge to respond to action or automation status changes.

For more information about handling timeouts in an automation, see [Handling timeouts in runbooks](automation-handling-timeouts.md).

## About automation statuses


Automation reports status details for individual automation actions in addition to the overall automation.

The overall automation status can be different from the status reported by an individual action or step, as noted in the following tables.


**Detailed status for actions**  

| Status | Details | 
| --- | --- | 
| Pending | The step hasn't started running. If your automation uses conditional actions, steps remain in this state after an automation has completed if the condition wasn't met to run the step. Steps also remain in this state if the automation is canceled before the step runs. | 
| InProgress | The step is running. | 
| Waiting | The step is waiting for input. | 
| Success | The step completed successfully. This is a terminal state. | 
| TimedOut | A step or approval wasn't completed before the specified timeout period. This is a terminal state. | 
| Cancelling | The step is in the process of stopping after being canceled by a requester. | 
| Cancelled | The step was stopped by a requester before it completed. This is a terminal state. | 
| Failed |  The step didn't complete successfully. This is a terminal state.  | 
| Exited |  Only returned by the `aws:loop` action. The loop didn't fully complete. A step inside the loop moved to an outside step using the `nextStep`, `onCancel`, or `onFailure` properties.  | 


**Detailed status for an automation**  

| Status | Details | 
| --- | --- | 
| Pending | The automation hasn't started running. | 
| InProgress | The automation is running. | 
| Waiting | The automation is waiting for input. | 
| Success | The automation completed successfully. This is a terminal state. | 
| TimedOut | A step or approval wasn't completed before the specified timeout period. This is a terminal state. | 
| Cancelling | The automation is in the process of stopping after being canceled by a requester. | 
| Cancelled | The automation was stopped by a requester before it completed. This is a terminal state. | 
| Failed |  The automation didn't complete successfully. This is a terminal state.  | 
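If you poll automation status from a script, the table above implies a fixed set of terminal states. The following minimal Python sketch, with a hypothetical helper name, is based on that table:

```python
# Terminal automation statuses from the table above; any other status
# means the automation is still pending, running, waiting, or stopping.
TERMINAL_STATUSES = {"Success", "TimedOut", "Cancelled", "Failed"}

def is_terminal(status: str) -> bool:
    return status in TERMINAL_STATUSES

for status in ("Pending", "InProgress", "Waiting", "Cancelling"):
    assert not is_terminal(status)
for status in TERMINAL_STATUSES:
    assert is_terminal(status)
print("status checks passed")
```

You might use such a check with the `get-automation-execution` AWS CLI command mentioned earlier to decide when to stop polling.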

# Troubleshooting Systems Manager Automation


Use the following information to help you troubleshoot problems with AWS Systems Manager Automation, a tool in AWS Systems Manager. This topic includes specific tasks to resolve issues based on Automation error messages.

**Topics**
+ [Common Automation errors](#automation-trbl-common)
+ [Automation execution failed to start](#automation-trbl-access)
+ [Execution started, but status is failed](#automation-trbl-exstrt)
+ [Execution started, but timed out](#automation-trbl-to)

## Common Automation errors


This section includes information about common Automation errors.

### VPC not defined 400


By default, when Automation runs either the `AWS-UpdateLinuxAmi` runbook or the `AWS-UpdateWindowsAmi` runbook, the system creates a temporary instance in the default VPC (172.30.0.0/16). If you deleted the default VPC, you will receive the following error:

`VPC not defined 400`

To solve this problem, you must specify a value for the `SubnetId` input parameter.

## Automation execution failed to start


An automation can fail with an access denied error or an invalid assume role error if you haven't properly configured AWS Identity and Access Management (IAM) roles, and policies for Automation.

### Access denied


The following examples describe situations when an automation failed to start with an access denied error.

**Access Denied to Systems Manager API**  
**Error message**: `User: user arn isn't authorized to perform: ssm:StartAutomationExecution on resource: document arn (Service: AWSSimpleSystemsManagement; Status Code: 400; Error Code: AccessDeniedException; Request ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)`
+ Possible cause 1: The user attempting to start the automation doesn't have permission to invoke the `StartAutomationExecution` API. To resolve this issue, attach the required IAM policy to the user that was used to start the automation. 
+ Possible cause 2: The user attempting to start the automation has permission to invoke the `StartAutomationExecution` API but doesn't have permission to invoke the API by using the specific runbook. To resolve this issue, attach the required IAM policy to the user that was used to start the automation. 

**Access denied due to missing PassRole permissions**  
**Error message**: `User: user arn isn't authorized to perform: iam:PassRole on resource: automation assume role arn (Service: AWSSimpleSystemsManagement; Status Code: 400; Error Code: AccessDeniedException; Request ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)`

The user attempting to start the automation doesn't have `iam:PassRole` permission for the assume role. To resolve this issue, attach a policy that allows the `iam:PassRole` action on the assume role to the user attempting to start the automation. For more information, see [Task 2: Attach the iam:PassRole policy to your Automation role](automation-setup-iam.md#attach-passrole-policy).
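As an illustration, the following AWS CLI command attaches an inline policy that allows `iam:PassRole` for a single assume role. The role names and account ID are placeholders; scope the `Resource` to your actual Automation service role.

```shell
# Allow the caller's role to pass the Automation assume role to
# Systems Manager. Role names and the account ID are placeholders.
aws iam put-role-policy \
    --role-name MyAutomationUserRole \
    --policy-name AllowPassAutomationRole \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "iam:PassRole",
        "Resource": "arn:aws:iam::123456789012:role/AutomationServiceRole"
      }]
    }'
```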

### Invalid assume role


When you run an Automation, an assume role is either provided in the runbook or passed as a parameter value for the runbook. Different types of errors can occur if the assume role isn't specified or configured properly.

**Malformed Assume Role**  
**Error message**: `The format of the supplied assume role ARN isn't valid.`

The assume role ARN is improperly formatted. To resolve this issue, verify that a valid assume role is specified in your runbook, or as a runtime parameter when you start the automation.

**Assume role can't be assumed**  
**Error message**: `The defined assume role is unable to be assumed. (Service: AWSSimpleSystemsManagement; Status Code: 400; Error Code: InvalidAutomationExecutionParametersException; Request ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)`
+ Possible cause 1: The assume role doesn't exist. To resolve this issue, create the role. For more information, see [Setting up Automation](automation-setup.md). Specific details for creating this role are described in the following topic, [Task 1: Create a service role for Automation](automation-setup-iam.md#create-service-role).
+ Possible cause 2: The assume role doesn't have a trust relationship with the Systems Manager service. To resolve this issue, create the trust relationship. For more information, see [I Can't Assume A Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_roles.html#troubleshoot_roles_cant-assume-role) in the *IAM User Guide*. 
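For reference, a trust policy that allows the Systems Manager service to assume the role looks like the following. This is the standard service-principal form; adjust it if you add conditions for your account.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ssm.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```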

## Execution started, but status is failed


### Action-specific failures


Runbooks contain steps and steps run in order. Each step invokes one or more AWS service APIs. The APIs determine the inputs, behavior, and outputs of the step. There are multiple places where an error can cause a step to fail. Failure messages indicate when and where an error occurred.

To see a failure message in the Amazon Elastic Compute Cloud (Amazon EC2) console, choose the **View Outputs** link of the failed step. To see a failure message from the AWS CLI, call `get-automation-execution` and look for the `FailureMessage` attribute in a failed `StepExecution`.
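For example, the following AWS CLI command uses a JMESPath query to return only the name and failure message of failed steps. The execution ID is a placeholder.

```shell
# List the names and failure messages of failed steps in an automation.
aws ssm get-automation-execution \
    --automation-execution-id "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
    --query 'AutomationExecution.StepExecutions[?StepStatus==`Failed`].[StepName,FailureMessage]' \
    --output text
```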

In the following examples, a step associated with the `aws:runInstances` action failed. Each example explores a different type of error.

**Missing Image**  
**Error message**: `Automation Step Execution fails when it's launching the instance(s). Get Exception from RunInstances API of ec2 Service. Exception Message from RunInstances API: [The image id '[ami id]' doesn't exist (Service: AmazonEC2; Status Code: 400; Error Code: InvalidAMIID.NotFound; Request ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)]. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.`

The `aws:runInstances` action received input for an `ImageId` that doesn't exist. To resolve this problem, update the runbook or parameter values with the correct AMI ID.

**Assume role policy lacks sufficient permissions**  
**Error message**: `Automation Step Execution fails when it's launching the instance(s). Get Exception from RunInstances API of ec2 Service. Exception Message from RunInstances API: [You aren't authorized to perform this operation. Encoded authorization failure message: xxxxxxx (Service: AmazonEC2; Status Code: 403; Error Code: UnauthorizedOperation; Request ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)]. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.`

The assume role doesn't have sufficient permission to invoke the `RunInstances` API on EC2 instances. To resolve this problem, attach an IAM policy to the assume role that grants permission to invoke the `RunInstances` API. For more information, see [Create the service roles for Automation using the console](automation-setup-iam.md).

**Unexpected State**  
**Error message**: `Step fails when it's verifying launched instance(s) are ready to be used. Instance i-xxxxxxxxx entered unexpected state: shutting-down. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.`
+ Possible cause 1: There is a problem with the instance or the Amazon EC2 service. To resolve this problem, log in to the instance or review the instance system log to understand why the instance started shutting down.
+ Possible cause 2: The user data script specified for the `aws:runInstances` action has a problem or incorrect syntax. Verify the syntax of the user data script. Also, verify that the user data script doesn't shut down the instance or invoke other scripts that shut down the instance.

**Action-Specific Failures Reference**  
When a step fails, the failure message might indicate which service was being invoked when the failure occurred. The following table lists the services invoked by each action. The table also provides links to information about each service.



| Action | AWS services invoked by this action | For information about this service | Troubleshooting content | 
| --- | --- | --- | --- | 
|  `aws:runInstances`  |  Amazon EC2  |  [Amazon EC2 User Guide](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/)  |  [Troubleshooting EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-troubleshoot.html)  | 
|  `aws:changeInstanceState`  |  Amazon EC2  |  [Amazon EC2 User Guide](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/)  |  [Troubleshooting EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-troubleshoot.html)  | 
|  `aws:runCommand`  |  Systems Manager  |  [AWS Systems Manager Run Command](run-command.md)  |  [Troubleshooting Systems Manager Run Command](troubleshooting-remote-commands.md)  | 
|  `aws:createImage`  |  Amazon EC2  |  [Amazon Machine Images](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)  |  | 
|  `aws:createStack`  |  CloudFormation  |  [AWS CloudFormation User Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)  |  [Troubleshooting CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html)  | 
|  `aws:deleteStack`  |  CloudFormation  |  [AWS CloudFormation User Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)  |  [Troubleshooting CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html)  | 
|  `aws:deleteImage`  |  Amazon EC2  |  [Amazon Machine Images](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)  |  | 
|  `aws:copyImage`  |  Amazon EC2  |  [Amazon Machine Images](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)  |  | 
|  `aws:createTag`  |  Amazon EC2, Systems Manager  |  [Amazon EC2 resources and tags](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_Resources.html)  |  | 
|  `aws:invokeLambdaFunction`  |  AWS Lambda  |  [AWS Lambda Developer Guide](https://docs.aws.amazon.com/lambda/latest/dg/)  |  [Troubleshooting Lambda](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions.html)  | 

### Automation service internal error


**Error message**: `Internal Server Error. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.`

A problem with the Automation service is preventing the specified runbook from running correctly. To resolve this issue, contact AWS Support. Provide the execution ID and customer ID, if available.

## Execution started, but timed out


**Error message**: `Step timed out while step is verifying launched instance(s) are ready to be used. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.`

A step in the `aws:runInstances` action timed out. This can happen if the step action takes longer to run than the value specified for `timeoutSeconds` in the step. To resolve this issue, specify a longer value for the `timeoutSeconds` parameter in the `aws:runInstances` action. If that doesn't solve the problem, investigate why the step takes longer to run than expected.
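The following runbook excerpt sketches where `timeoutSeconds` is set on a step. The step name and input values are placeholders.

```yaml
# Excerpt from a schemaVersion 0.3 runbook. Values are placeholders.
mainSteps:
  - name: launchInstance
    action: aws:runInstances
    timeoutSeconds: 3600    # allow up to one hour before the step times out
    inputs:
      ImageId: ami-0abcdef1234567890
      InstanceType: t3.medium
      MinInstanceCount: 1
      MaxInstanceCount: 1
```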

# AWS Systems Manager Change Calendar

Change Calendar, a tool in AWS Systems Manager, allows you to set up date and time ranges when actions you specify (for example, in [Systems Manager Automation](systems-manager-automation.md) runbooks) might or might not be performed in your AWS account. In Change Calendar, these ranges are called *events*. When you create a Change Calendar entry, you're creating a [Systems Manager document](documents.md) of the type `ChangeCalendar`. In Change Calendar, the document stores [iCalendar 2.0](https://icalendar.org/) data in plaintext format. Events that you add to the Change Calendar entry become part of the document. To get started with Change Calendar, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/change-calendar). In the navigation pane, choose **Change Calendar**.

You can create a calendar and its events in the Systems Manager console. You can also import an iCalendar (`.ics`) file that you have exported from a supported third-party calendar provider to add its events to your calendar. Supported providers include Google Calendar, Microsoft Outlook, and iCloud Calendar.

A Change Calendar entry can be one of two types:

**`DEFAULT_OPEN`**, or Open by default  
All actions can run by default, except during calendar events. During events, the state of a `DEFAULT_OPEN` calendar is `CLOSED`, and actions are blocked from running.

**`DEFAULT_CLOSED`**, or Closed by default  
All actions are blocked by default, except during calendar events. During events, the state of a `DEFAULT_CLOSED` calendar is `OPEN` and actions are permitted to run.

You can choose to have all scheduled Automation workflows, maintenance windows, and State Manager associations added automatically to a calendar. You can also remove any of those individual types from the calendar display. 

## Who should use Change Calendar?


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 
+ AWS customers who perform the following action types:
  + Create or run Automation runbooks.
  + Create change requests in Change Manager.
  + Run maintenance windows.
  + Create associations in State Manager.

  Automation, Change Manager, Maintenance Windows, and State Manager are all tools in AWS Systems Manager. By integrating these tools with Change Calendar, you can allow or block these action types depending on the current state of the change calendar you associate with each one.
+ Administrators who are responsible for keeping the configurations of Systems Manager managed nodes consistent, stable, and functional.

## Benefits of Change Calendar


The following are some benefits of Change Calendar.
+ **Review changes before they're applied**

  A Change Calendar entry can help ensure that potentially destructive changes to your environment are reviewed before they're applied.
+ **Apply changes only during appropriate times**

  Change Calendar entries help keep your environment stable during event times. For example, you can create a Change Calendar entry to block changes when you expect high demand on your resources, such as during a conference or public marketing promotion. A calendar entry can also block changes when you expect limited administrator support, such as during vacations or holidays. You can use a calendar entry to allow changes except for certain times of the day or week when there is limited administrator support to troubleshoot failed actions or deployments.
+ **Get the current or upcoming state of the calendar**

  You can run the Systems Manager `GetCalendarState` API operation to show you the current state of the calendar, the state at a specified time, or the next time that the calendar state is scheduled to change.
**Note**  
The `GetCalendarState` API has a quota of 10 requests per second. For more information about Systems Manager quotas, see [Systems Manager service quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#limits_ssm) in the *Amazon Web Services General Reference*.
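For example, you can check a calendar's state with the AWS CLI. The calendar name and timestamp are placeholders.

```shell
# Return the current state (OPEN or CLOSED) of one or more calendars.
aws ssm get-calendar-state \
    --calendar-names "MyChangeCalendar"

# Check the state at a specific time instead of now.
aws ssm get-calendar-state \
    --calendar-names "MyChangeCalendar" \
    --at-time "2025-12-24T00:00:00Z"
```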

**EventBridge support**  
This Systems Manager tool is supported as an *event* type in Amazon EventBridge rules. For information, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md).

**Topics**
+ [Who should use Change Calendar?](#systems-manager-change-calendar-who)
+ [Benefits of Change Calendar](#systems-manager-change-calendar-benefits)
+ [Setting up Change Calendar](systems-manager-change-calendar-prereqs.md)
+ [Working with Change Calendar](systems-manager-change-calendar-working.md)
+ [Adding Change Calendar dependencies to Automation runbooks](systems-manager-change-calendar-automations.md)
+ [Troubleshooting Change Calendar](change-calendar-troubleshooting.md)

# Setting up Change Calendar


Complete the following before using Change Calendar, a tool in AWS Systems Manager.

## Install latest command line tools


Install the latest command line tools to get state information about calendars.


| Requirement | Description | 
| --- | --- | 
|  AWS CLI  |  (Optional) To use the AWS Command Line Interface (AWS CLI) to get state information about calendars, install the newest release of the AWS CLI on your local computer. For more information about how to install or upgrade the CLI, see [Installing, updating, and uninstalling the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the *AWS Command Line Interface User Guide*.  | 
|  AWS Tools for PowerShell  |  (Optional) To use the Tools for PowerShell to get state information about calendars, install the newest release of Tools for PowerShell on your local computer. For more information about how to install or upgrade the Tools for PowerShell, see [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html) in the *AWS Tools for PowerShell User Guide*.  | 

## Set up permissions


If your user, group, or role is assigned administrator permissions, then you have full access to Change Calendar. If you don't have administrator permissions, then an administrator must give you permission by either assigning the `AmazonSSMFullAccess` managed policy, or assigning a policy that provides the necessary permissions to your user, group, or role.

The following permissions are required to work with Change Calendar.

**Change Calendar entries**  
To create, update, or delete a Change Calendar entry, including adding and removing events from the entry, a policy attached to your user, group, or role must allow the following actions:  
+ `ssm:CreateDocument`
+ `ssm:DeleteDocument`
+ `ssm:DescribeDocument`
+ `ssm:DescribeDocumentPermission`
+ `ssm:GetCalendar`
+ `ssm:ListDocuments`
+ `ssm:ModifyDocumentPermission`
+ `ssm:PutCalendar`
+ `ssm:UpdateDocument`
+ `ssm:UpdateDocumentDefaultVersion`
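As an illustration, the actions above can be granted with an identity-based policy like the following. The broad `"Resource": "*"` is for brevity only; in practice, scope the resource to specific document ARNs where possible.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:CreateDocument",
        "ssm:DeleteDocument",
        "ssm:DescribeDocument",
        "ssm:DescribeDocumentPermission",
        "ssm:GetCalendar",
        "ssm:ListDocuments",
        "ssm:ModifyDocumentPermission",
        "ssm:PutCalendar",
        "ssm:UpdateDocument",
        "ssm:UpdateDocumentDefaultVersion"
      ],
      "Resource": "*"
    }
  ]
}
```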

**Calendar state**  
To get information about the current or upcoming state of the calendar, a policy attached to your user, group, or role must allow the following action:  
+ `ssm:GetCalendarState`

**Operational events**  
To view operational events, such as maintenance windows, associations, and planned automations, the policy attached to your user, group, or role must allow the following actions:  
+ `ssm:DescribeMaintenanceWindows`
+ `ssm:DescribeMaintenanceWindowExecution`
+ `ssm:DescribeAutomationExecutions`
+ `ssm:ListAssociations`

**Note**  
Change Calendar entries that are owned by (that is, created by) accounts other than yours are read-only, even if they're shared with your account. Maintenance windows, State Manager associations, and automations aren't shared.

# Working with Change Calendar


You can use the AWS Systems Manager console to add, manage, or delete entries in Change Calendar, a tool in AWS Systems Manager. You can also import events from supported third-party calendar providers by importing an iCalendar (`.ics`) file that you exported from the source calendar. And, you can use the `GetCalendarState` API operation or the `get-calendar-state` AWS Command Line Interface (AWS CLI) command to get information about the state of Change Calendar at a specific time.

**Topics**
+ [Creating a change calendar](change-calendar-create.md)
+ [Creating and managing events in Change Calendar](change-calendar-events.md)
+ [Importing and managing events from third-party calendars](third-party-events.md)
+ [Updating a change calendar](change-calendar-update.md)
+ [Sharing a change calendar](change-calendar-share.md)
+ [Deleting a change calendar](change-calendar-delete.md)
+ [Getting the state of a change calendar](change-calendar-getstate.md)

# Creating a change calendar


When you create an entry in Change Calendar, a tool in AWS Systems Manager, you're creating a Systems Manager document (SSM document) that uses the `text` format.

**To create a change calendar**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Calendar**.

1. Choose **Create calendar**.

   -or-

   If the **Change Calendar** home page opens first, choose **Create change calendar**.

1. On the **Create calendar** page, in **Calendar details**, enter a name for your calendar entry. Calendar entry names can contain letters, numbers, periods, dashes, and underscores. The name should be specific enough to identify the purpose of the calendar entry at a glance. An example is **support-off-hours**. You can't update this name after you create the calendar entry.

1. (Optional) For **Description**, enter a description for your calendar entry.

1. (Optional) In the **Import calendar** area, choose **Choose file** to select an iCalendar (`.ics`) file that you have exported from a third-party calendar provider. Importing the file will add its events to your calendar.

   Supported providers include Google Calendar, Microsoft Outlook, and iCloud Calendar.

   For more information, see [Importing events from third-party calendar providers](change-calendar-import.md).

1. In **Calendar type**, choose one of the following.
   + **Open by default** - The calendar is open (Automation actions can run until an event starts), then closed for the duration of an associated event.
   + **Closed by default** - The calendar is closed (Automation actions can't run until an event starts) but open for the duration of an associated event.

1. (Optional) In **Change management events**, select **Add change management events to the calendar**. This selection displays all scheduled maintenance windows, State Manager associations, Automation workflows, and Change Manager change requests in your monthly calendar display.
**Tip**  
If you later want to permanently remove these event types from the calendar display, edit the calendar, clear this check box, and then choose **Save**.

1. Choose **Create calendar**.

   After the calendar entry is created, Systems Manager displays your calendar entry in the **Change Calendar** list. The columns show the calendar version and the calendar owner's AWS account number. Your calendar entry can't prevent or allow any actions until you have created or imported at least one event. For information about creating an event, see [Creating a Change Calendar event](change-calendar-create-event.md). For information about importing events, see [Importing events from third-party calendar providers](change-calendar-import.md).

# Creating and managing events in Change Calendar


After you create a calendar in AWS Systems Manager Change Calendar, you can create, update, and delete events that are included in your open or closed calendar. Change Calendar is a tool in AWS Systems Manager.

**Tip**  
As an alternative to creating events directly in the Systems Manager console, you can import an iCalendar (`.ics`) file from a supported third-party calendar application. For information, see [Importing and managing events from third-party calendars](third-party-events.md).

**Topics**
+ [Creating a Change Calendar event](change-calendar-create-event.md)
+ [Updating a Change Calendar event](change-calendar-update-event.md)
+ [Deleting a Change Calendar event](change-calendar-delete-event.md)

# Creating a Change Calendar event

When you add an event to an entry in Change Calendar, a tool in AWS Systems Manager, you're specifying a period of time during which the default action of the calendar entry is suspended. For example, if the calendar entry type is closed by default, the calendar is open to changes during events. (Alternatively, you can create an advisory event, which serves an informational role on the calendar only.)

Currently, you can only create a Change Calendar event by using the console. Events are added to the Change Calendar document that you create when you create a Change Calendar entry.

**To create a Change Calendar event**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Calendar**.

1. In the list of calendars, choose the name of the calendar entry to which you want to add an event.

1. On the calendar entry's details page, choose **Create event**.

1. On the **Create scheduled event** page, in **Event details**, enter a display name for your event. Event names can contain letters, numbers, periods, dashes, and underscores. The name should be specific enough to identify the purpose of the event. An example is **nighttime-hours**.

1. For **Description**, enter a description for your event. For example, **The support team isn't available during these hours**.

1. (Optional) If you want this event to serve as a visual notification or reminder only, select the **Advisory** check box. Advisory events play no functional role on your calendar. They serve informational purposes only for those who view your calendar.

1. For **Event start date**, enter or choose a day in the format `MM/DD/YYYY` to start the event, and enter a time on the specified day in the format `hh:mm:ss` (hours, minutes, and seconds) to start the event.

1. For **Event end date**, enter or choose a day in the format `MM/DD/YYYY` to end the event, and enter a time on the specified day in the format `hh:mm:ss` (hours, minutes, and seconds) to end the event.

1. For **Schedule time zone**, choose a time zone that applies to the start and end times of the event. You can enter part of a city name or time zone difference from Greenwich Mean Time (GMT) to find a time zone faster. The default is Coordinated Universal Time (UTC).

1. (Optional) To create an event that recurs daily, weekly, or monthly, turn on **Recurrence**, and then specify the frequency and optional end date for the recurrence.

1. Choose **Create scheduled event**. The new event is added to your calendar entry, and is displayed on the **Events** tab of the calendar entry's details page.

# Updating a Change Calendar event

Use the following procedure to update a Change Calendar event in the AWS Systems Manager console. Change Calendar is a tool in AWS Systems Manager.

**To update a Change Calendar event**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Calendar**.

1. In the list of calendars, choose the name of the calendar entry for which you want to edit an event.

1. On the calendar entry's details page, choose **Events**.

1. In the calendar page, choose the event that you want to edit.
**Tip**  
Use the buttons on the upper left to move back or forward one year, or back or forward one month. Change the time zone, if required, by choosing the correct time zone from the list on the upper right.

1. In **Event details**, choose **Edit**.

   To change the event name and description, add to or replace the current text values.

1. To change the **Event start date** value, choose the current start date, and then choose a new date from the calendar. To change the start time, choose the current start time, and then choose a new time from the list.

1. To change the **Event end date** value, choose the current date, and then choose a new end date from the calendar. To change the end time, choose the current end time, and then choose a new time from the list.

1. To change the **Schedule time zone** value, choose a time zone to apply to the start and end times of the event. You can enter part of a city name or time zone difference from Greenwich Mean Time (GMT) to find a time zone faster. The default is Coordinated Universal Time (UTC).

1. (Optional) If you want this event to serve as a visual notification or reminder only, select the **Advisory** check box. Advisory events play no functional role on your calendar. They serve informational purposes only for those who view your calendar.

1. Choose **Save**. Your changes are displayed on the **Events** tab of the calendar entry's details page. Choose the event that you updated to view your changes.

# Deleting a Change Calendar event

You can delete one event at a time in Change Calendar, a tool in AWS Systems Manager, by using the AWS Management Console. 

**Tip**  
If you selected **Add change management events to the calendar** when you created the calendar, you can do the following:  
To *temporarily* hide a change management event type from the calendar display, choose the **X** for the type at the top of the monthly preview.
To *permanently* remove these types from the calendar display, edit the calendar, clear the **Add change management events to the calendar** check box, and then choose **Save**. Removing the types from the calendar display doesn't delete them from your account.

**To delete a Change Calendar event**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Calendar**.

1. In the list of calendars, choose the name of the calendar entry from which you want to delete an event.

1. On the calendar entry's details page, choose **Events**.

1. In the calendar page, choose the event that you want to delete.
**Tip**  
Use the buttons on the upper left to move the calendar back or forward one year, or back or forward one month. Change the time zone, if required, by choosing the correct time zone from the list on the upper right.

1. On the **Event details** page, choose **Delete**. When you're prompted to confirm that you want to delete the event, choose **Confirm**.

# Importing and managing events from third-party calendars


As an alternative to creating events directly in the AWS Systems Manager console, you can import an iCalendar (`.ics`) file from a supported third-party calendar application. Your calendar can include both imported events and events that you create in Change Calendar, which is a tool in AWS Systems Manager.

**Before you begin**  
Before you attempt to import a calendar file, review the following requirements and constraints:

Calendar file format  
Only valid iCalendar files (`.ics`) are supported.

Supported calendar providers  
Only `.ics` files exported from the following third-party calendar providers are supported:  
+ Google Calendar ([Export instructions](https://support.google.com/calendar/answer/37111))
+ Microsoft Outlook ([Export instructions](https://support.microsoft.com/en-us/office/export-an-outlook-calendar-to-google-calendar-662fa3bb-0794-4b18-add8-9968b665f4e6))
+ iCloud Calendar ([Export instructions](https://support.apple.com/guide/calendar/import-or-export-calendars-icl1023/mac))

File size  
You can import any number of valid `.ics` files. However, the total size of all imported files for each calendar can't exceed 64 KB.  
To minimize the size of the `.ics` file, ensure that you export only basic details about your calendar entries. If necessary, reduce the length of the time period that you export.

Time zone  
In addition to a calendar name, a calendar provider, and at least one event, your exported `.ics` file should also indicate the time zone for the calendar. If it does not, or there is a problem identifying the time zone, you will be prompted to specify one after you import the file.

Recurring event limitation  
Your exported `.ics` file can include recurring events. However, if one or more occurrences of a recurring event were deleted in the source calendar, the import fails.

**Topics**
+ [Importing events from third-party calendar providers](change-calendar-import.md)
+ [Updating all events from a third-party calendar provider](change-calendar-import-add-remove.md)
+ [Deleting all events imported from a third-party calendar](change-calendar-delete-ics.md)

# Importing events from third-party calendar providers


Use the following procedure to import an iCalendar (`.ics`) file from a supported third-party calendar application. The events contained in the file are incorporated into the rules for your open or closed calendar. You can import a file into a new calendar you are creating with Change Calendar (a tool in AWS Systems Manager) or into an existing calendar.

After you import the `.ics` file, you can remove individual events from it using the Change Calendar interface. For information, see [Deleting a Change Calendar event](change-calendar-delete-event.md). You can also delete all events from the source calendar by deleting the `.ics` file. For information, see [Deleting all events imported from a third-party calendar](change-calendar-delete-ics.md).

**To import events from third-party calendar providers**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Calendar**.

1. To start with a new calendar, choose **Create calendar**. In the **Import calendar** area, choose **Choose file**. For information about other steps to create a new calendar, see [Creating a change calendar](change-calendar-create.md).

   -or-

   To import third-party events into an existing calendar, choose the name of an existing calendar to open it.

1. Choose **Actions, Edit**, and then in the **Import calendar** area, choose **Choose file**.

1. Navigate to and select the exported `.ics` file on your local computer.

1. If prompted, for **Select a time zone**, select which time zone applies to the calendar.

1. Choose **Save**.

# Updating all events from a third-party calendar provider


If events are added to or removed from your source calendar after you have imported its iCalendar `.ics` file, you can reflect those changes in Change Calendar. First, re-export the source calendar, and then import the new file into Change Calendar, which is a tool in AWS Systems Manager. Events in your change calendar are updated to reflect the contents of the newer file.

**To update all events from a third-party calendar provider**

1. In your third-party calendar, add or remove events as you want them to be reflected in Change Calendar, and then re-export the calendar to a new `.ics` file.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Calendar**.

1. In the list of calendars, choose the name of the calendar that you want to update.

1. Choose **Choose file**, and then navigate to and select the replacement `.ics` file.

1. In response to the notification about overwriting the existing file, choose **Confirm**.

# Deleting all events imported from a third-party calendar


If you no longer want any of the events that you imported from a third-party provider included in your calendar, you can delete the imported iCalendar `.ics` file.

**To delete all events imported from a third-party calendar**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Calendar**.

1. In the list of calendars, choose the name of the calendar that contains the imported events.

1. In the **Import calendar** area, under **My imported calendars**, locate the name of the imported calendar, and then choose the **X** in its card.

1. Choose **Save**.

# Updating a change calendar


You can update the description of a change calendar, but not its name. Although you can change the default state of a calendar, be aware that this reverses the behavior of change actions during events that are associated with the calendar. For example, if you change the state of a calendar from **Open by default** to **Closed by default**, unwanted changes might be made during event periods when the users who created the associated events aren't expecting changes.

When you update a change calendar, you're editing the Change Calendar document that you created when you created the entry. Change Calendar is a tool in AWS Systems Manager.

**To update a change calendar**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Calendar**.

1. In the list of calendars, choose the name of the calendar that you want to update.

1. On the calendar's details page, choose **Actions, Edit**.

1. In **Description**, you can change the description text. You can't edit the name of a change calendar.

1. To change the calendar state, in **Calendar type**, choose a different value. Be aware that this reverses the behavior of change actions during events that are associated with the calendar. Before you change the calendar type, you should verify with other Change Calendar users that changing the calendar type doesn't allow unwanted changes during events that they have created.
   + **Open by default** – The calendar is open (Automation actions can run until an event starts), and then closed for the duration of an associated event.
   + **Closed by default** – The calendar is closed (Automation actions can't run until an event starts), and then open for the duration of an associated event.

1. Choose **Save**.

   Your calendar can't prevent or allow any actions until you add at least one event. For information about how to add an event, see [Creating a Change Calendar event](change-calendar-create-event.md).
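
The **Open by default** and **Closed by default** behavior described in this procedure can be modeled as a small function. The following sketch is purely illustrative and isn't part of the service or any AWS SDK:

```python
def calendar_state(default_open: bool, event_active: bool) -> str:
    """Illustrative model: an event flips the calendar's default state for its duration."""
    if default_open:
        return "CLOSED" if event_active else "OPEN"
    return "OPEN" if event_active else "CLOSED"

# An Open by default calendar closes while an event is in progress.
print(calendar_state(default_open=True, event_active=True))   # → CLOSED
# A Closed by default calendar opens while an event is in progress.
print(calendar_state(default_open=False, event_active=True))  # → OPEN
```

This is why changing the calendar type reverses the behavior of change actions during all associated events.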

# Sharing a change calendar


You can share a calendar in Change Calendar, a tool in AWS Systems Manager, with other AWS accounts by using the AWS Systems Manager console. When you share a calendar, the calendar is read-only to users in the shared account. Maintenance windows, State Manager associations, and automations aren't shared.

**To share a change calendar**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Calendar**.

1. In the list of calendars, choose the name of the calendar that you want to share.

1. On the calendar's details page, choose the **Sharing** tab.

1. Choose **Actions, Share**.

1. In **Share calendar**, for **Account ID**, enter the ID number of a valid AWS account, and then choose **Share**.

   Users of the shared account can read the change calendar, but they can't make changes.

# Deleting a change calendar


You can delete a calendar in Change Calendar, a tool in AWS Systems Manager, by using either the Systems Manager console or the AWS Command Line Interface (AWS CLI). Deleting a change calendar deletes all associated events.

**To delete a change calendar**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Calendar**.

1. In the list of calendars, choose the name of the calendar that you want to delete.

1. On the calendar's details page, choose **Actions, Delete**. When you're prompted to confirm that you want to delete the calendar, choose **Delete**.

# Getting the state of a change calendar


You can get the overall state of a calendar or the state of a calendar at a specific time in Change Calendar, a tool in AWS Systems Manager. You can also show the next time that the calendar state changes from `OPEN` to `CLOSED`, or the reverse.

**Note**  
For information about integrating Change Calendar with Amazon EventBridge for automated monitoring of calendar state changes, see [Change Calendar integration with Amazon EventBridge](monitoring-systems-manager-event-examples.md#change-calendar-eventbridge-integration). EventBridge integration provides event-driven notifications when calendar states transition, complementing the polling-based approach of the `GetCalendarState` API action.

You can do this task only by using the `GetCalendarState` API operation. The procedure in this section uses the AWS Command Line Interface (AWS CLI).

**To get the state of a change calendar**
+ Run the following command to show the state of one or more calendars at a specific time. The `--calendar-names` parameter is required, but `--at-time` is optional. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

  ```
  aws ssm get-calendar-state \
      --calendar-names "Calendar_name_or_document_ARN_1" "Calendar_name_or_document_ARN_2" \
      --at-time "ISO_8601_time_format"
  ```

  The following is an example.

  ```
  aws ssm get-calendar-state \
      --calendar-names "arn:aws:ssm:us-east-2:123456789012:document/MyChangeCalendarDocument" "arn:aws:ssm:us-east-2:123456789012:document/SupportOffHours" \
      --at-time "2020-07-30T11:05:14-0700"
  ```

------
#### [ Windows ]

  ```
  aws ssm get-calendar-state ^
      --calendar-names "Calendar_name_or_document_ARN_1" "Calendar_name_or_document_ARN_2" ^
      --at-time "ISO_8601_time_format"
  ```

  The following is an example.

  ```
  aws ssm get-calendar-state ^
      --calendar-names "arn:aws:ssm:us-east-2:123456789012:document/MyChangeCalendarDocument" "arn:aws:ssm:us-east-2:123456789012:document/SupportOffHours" ^
      --at-time "2020-07-30T11:05:14-0700"
  ```

------

  The command returns information like the following.

  ```
  {
      "State": "OPEN",
      "AtTime": "2020-07-30T16:18:18Z",
      "NextTransitionTime": "2020-07-31T00:00:00Z"
  }
  ```

  The results show the state of the specified calendars (each of which can be of type `DEFAULT_OPEN` or `DEFAULT_CLOSED`) at the time specified as the value of `--at-time`, and the time of the next state transition. The calendars must be owned by or shared with your account. If you don't add the `--at-time` parameter, the current time is used.
**Note**  
If you specify more than one calendar in a request, the command returns the status of `OPEN` only if all calendars in the request are open. If one or more calendars in the request are closed, the status returned is `CLOSED`.
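
For example, a script that gates deployments on the calendar could parse the JSON that `get-calendar-state` returns and proceed only when the state is `OPEN`. The following minimal sketch is illustrative; the helper function isn't part of any AWS SDK:

```python
import json

def is_change_window_open(response_json: str) -> bool:
    """Return True only if GetCalendarState reports the requested calendars are open."""
    # GetCalendarState returns "OPEN" only when every calendar in the request is open.
    return json.loads(response_json).get("State") == "OPEN"

# A response like the example output shown earlier.
sample = '{"State": "OPEN", "AtTime": "2020-07-30T16:18:18Z", "NextTransitionTime": "2020-07-31T00:00:00Z"}'
print(is_change_window_open(sample))  # → True
```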

# Adding Change Calendar dependencies to Automation runbooks


To make Automation actions adhere to Change Calendar, a tool in AWS Systems Manager, add a step in an Automation runbook that uses the [`aws:assertAwsResourceProperty`](automation-action-assertAwsResourceProperty.md) action. Configure the action to run `GetCalendarState` to verify that a specified calendar entry is in the state that you want (`OPEN` or `CLOSED`). The Automation runbook is only allowed to continue to the next step if the calendar state is `OPEN`. The following is a YAML-based sample excerpt of an Automation runbook that can't advance to the next step, `LaunchInstance`, unless the calendar state matches `OPEN`, the state specified in `DesiredValues`.


```
mainSteps:
  - name: MyCheckCalendarStateStep
    action: 'aws:assertAwsResourceProperty'
    inputs:
      Service: ssm
      Api: GetCalendarState
      CalendarNames: ["arn:aws:ssm:us-east-2:123456789012:document/SaleDays"]
      PropertySelector: '$.State'
      DesiredValues:
      - OPEN
    description: "Use GetCalendarState to determine whether a calendar is open or closed."
    nextStep: LaunchInstance
  - name: LaunchInstance
    action: 'aws:executeScript'
    inputs:
      Runtime: python3.11 
...
```

# Troubleshooting Change Calendar


Use the following information to help you troubleshoot problems with Change Calendar, a tool in AWS Systems Manager.

**Topics**
+ ['Calendar import failed' error](#change-manager-troubleshooting-1)

## 'Calendar import failed' error


**Problem**: When importing an iCalendar (`.ics`) file, the system reports that the calendar import failed.
+ **Solution 1** – Ensure that you are importing a file that was exported from a supported third-party calendar provider, which include the following:
  + Google Calendar ([Export instructions](https://support.google.com/calendar/answer/37111))
  + Microsoft Outlook ([Export instructions](https://support.microsoft.com/en-us/office/export-an-outlook-calendar-to-google-calendar-662fa3bb-0794-4b18-add8-9968b665f4e6))
  + iCloud Calendar ([Export instructions](https://support.apple.com/guide/calendar/import-or-export-calendars-icl1023/mac))
+ **Solution 2** – If your source calendar contains any recurring events, ensure that no individual occurrences of the event have been canceled or deleted. Currently, Change Calendar doesn't support importing recurring events with individual cancellations. To resolve the issue, remove the recurring event from the source calendar, re-export the calendar and re-import it into Change Calendar, and then add the recurring event using the Change Calendar interface. For information, see [Creating a Change Calendar event](change-calendar-create-event.md).
+ **Solution 3** – Ensure that your source calendar contains at least one event. Uploads of `.ics` files that don't contain any events fail.
+ **Solution 4** – If the system reports that the import failed because the `.ics` file is too large, ensure that you are exporting only basic details about your calendar entries. If necessary, reduce the length of the time period that you export.
+ **Solution 5** – If Change Calendar is unable to determine the time zone of your exported calendar when you attempt to import it from the **Events** tab, you might receive this message: "Calendar import failed. Change Calendar couldn't locate a valid time zone. You can import the calendar from the Edit menu." In this case, choose **Actions, Edit**, and then try importing the file from the **Edit calendar** page.
+ **Solution 6** – Don't edit the `.ics` file before import. Attempting to modify the contents of the file can corrupt the calendar data. If you have modified the file before attempting the import, export the calendar from the source calendar again, and then re-attempt the upload.

# AWS Systems Manager Change Manager

**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

Change Manager, a tool in AWS Systems Manager, is an enterprise change management framework for requesting, approving, implementing, and reporting on operational changes to your application configuration and infrastructure. From a single *delegated administrator account*, if you use AWS Organizations, you can manage changes across multiple AWS accounts and across AWS Regions. Alternatively, using a *local account*, you can manage changes for a single AWS account. Use Change Manager for managing changes to both AWS resources and on-premises resources. To get started with Change Manager, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/change-manager). In the navigation pane, choose **Change Manager**.

With Change Manager, you can use pre-approved *change templates* to help automate change processes for your resources and help avoid unintentional results when making operational changes. Each change template specifies the following:
+ One or more Automation runbooks for a user to choose from when creating a change request. The changes that are made to your resources are defined in Automation runbooks. You can include custom runbooks or [AWS managed runbooks](automation-documents-reference.md) in the change templates you create. When a user creates a change request, they can choose which one of the available runbooks to include in the request. Additionally, you can create change templates that let the user making the request specify any runbook in the change request.
+ The users in the account who must review change requests that were made using that change template.
+ The Amazon Simple Notification Service (Amazon SNS) topic that is used to notify assigned approvers that a change request is ready for review.
+ The Amazon CloudWatch alarm that is used to monitor the runbook workflow.
+ The Amazon SNS topic that is used to send notifications about status changes for change requests that are created using the change template.
+ The tags to apply to the change template for use in categorizing and filtering your change templates.
+ Whether change requests created from the change template can be run without an approval step (auto-approved requests).

Through its integration with Change Calendar, which is another tool in Systems Manager, Change Manager also helps you safely implement changes while avoiding schedule conflicts with important business events. Change Manager integration with AWS Organizations and AWS IAM Identity Center helps you manage changes across your organization from a single account using your existing identity management system. You can monitor change progress from Change Manager and audit operational changes across your organization, providing improved visibility and accountability.

Change Manager complements the safety controls of your [continuous integration](https://aws.amazon.com/devops/continuous-integration) (CI) practices and [continuous delivery](https://aws.amazon.com/devops/continuous-delivery) (CD) methodology. Change Manager isn't intended for changes made as part of an automated release process, such as a CI/CD pipeline, unless there is an exception or approval required.

## How Change Manager works


When the need for a standard or emergency operational change is identified, someone in the organization creates a change request that is based on one of the change templates created for use in your organization or account.

If the requested change requires manual approvals, Change Manager notifies the designated approvers through an Amazon SNS notification that a change request is ready for their review. You can designate approvers for change requests in the change template, or let users designate approvers in the change request itself. You can assign different reviewers to different templates. For example, assign one user, user group, or AWS Identity and Access Management (IAM) role who must approve requests for changes to managed nodes, and another user, group, or IAM role for database changes. If the change template allows auto-approvals, and a requester's user policy doesn't prohibit it, the user can also choose to run the Automation runbook for their request without a review step (with the exception of change freeze events).

For each change template, you can add up to five levels of approvers. For example, you might require technical reviewers to approve a change request created from a change template first, and then require a second level of approvals from one or more managers.

Change Manager is integrated with [AWS Systems Manager Change Calendar](systems-manager-change-calendar.md). When a requested change is approved, the system first determines whether the request conflicts with other scheduled business activities. If a conflict is detected, Change Manager can block the change or require additional approvals before starting the runbook workflow. For example, you might allow changes only during business hours to ensure that teams are available to manage any unexpected problems. For any changes requested to run outside those hours, you can require higher-level management approval in the form of *change freeze approvers*. For emergency changes, Change Manager can skip the step of checking Change Calendar for conflicts or blocking events after a change request is approved.

When it's time to implement an approved change, Change Manager runs the Automation runbook that is specified in the associated change request. Only the operations defined in approved change requests are permitted when runbook workflows run. This approach helps you avoid unintentional results while changes are being implemented.

In addition to restricting the changes that can be made when a runbook workflow runs, Change Manager also helps you control concurrency and error thresholds. You choose how many resources a runbook workflow can run on at once, how many accounts the change can run in at once, and how many failures to allow before the process is stopped and (if the runbook includes a rollback script) rolled back. You can also monitor the progress of changes being made by using CloudWatch alarms.

After a runbook workflow has completed, you can review details about the changes made. These details include the reason for a change request, which change template was used, who requested and approved the changes, and how the changes were implemented.

**More info**  
[Introducing AWS Systems Manager Change Manager](https://aws.amazon.com/blogs/aws/introducing-systems-manager-change-manager/) on the *AWS News Blog*

## How can Change Manager benefit my operations?


Benefits of Change Manager include the following:
+ **Reduce risk of service disruption and downtime**

  Change Manager can make operational changes safer by ensuring that only approved changes are implemented when a runbook workflow runs. You can block unplanned and unreviewed changes. Change Manager helps you avoid the types of unintentional results caused by human error that require costly hours of research and backtracking.
+ **Get detailed auditing and reporting on change histories**

  Change Manager provides accountability with a consistent way to report and audit changes made across your organization, the intent of the changes, and details about who approved and implemented them.
+ **Avoid schedule conflicts or violations**

  Change Manager can detect schedule conflicts such as holiday events or new product launches, based on the active change calendar for your organization. You can allow runbook workflows to run only during business hours, or allow them only with additional approvals.
+ **Adapt change requirements to your changing business**

  During different business periods, you can implement different change management requirements. For example, during end-of-month reporting, tax season, or other critical business periods, you can block changes or require director-level approval for changes that could introduce unnecessary operational risks.
+ **Centrally manage changes across accounts**

  Through its integration with Organizations, Change Manager makes it possible for you to manage changes throughout all of your organizational units (OUs) from a single delegated administrator account. You can turn on Change Manager for use with your entire organization or with only some of your OUs.

## Who should use Change Manager?


Change Manager is appropriate for the following AWS customers and organizations:
+ Any AWS customer who wants to improve the safety and governance of operational changes made to their cloud or on-premises environments.
+ Organizations that want to increase collaboration and visibility across teams, improve application availability by avoiding downtime, and reduce the risk associated with manual and repetitive tasks.
+ Organizations that must comply with best practices for change management. 
+ Customers who need a fully auditable history of changes made to their application configuration and infrastructure.

## What are the main features of Change Manager?


Primary features of Change Manager include the following:
+ **Integrated support for change management best practices**

  With Change Manager, you can apply select change management best practices to your operations. You can choose to turn on the following options:
  + Check Change Calendar to see if events are currently restricted so changes are made only during open calendar periods.
  + Allow changes during restricted events with extra approvals from change freeze approvers.
  + Require CloudWatch alarms to be specified for all change templates.
  + Require all change templates created in your account to be reviewed and approved before they can be used to create change requests.
+ **Different approval paths for closed calendar periods and emergency change requests**

  You can turn on an option to check Change Calendar for restricted events and block approved change requests until the event is complete. However, you can also designate a second group of approvers, change freeze approvers, who can permit the change to be made even if the calendar is closed. You can also create emergency change templates. Change requests created from an emergency change template still require regular approvals but aren't subject to calendar restrictions and don't require change freeze approvals.
+ **Control how and when runbook workflows are started**

  Runbook workflows can be started according to a schedule, or as soon as approvals are complete (subject to calendar restriction rules).
+ **Built-in notification support**

  Specify who in your organization should review and approve change templates and change requests. Assign an Amazon SNS topic to a change template to send notifications to the topic's subscribers about status changes for change requests created with that change template.
+ **Integration with AWS Systems Manager Change Calendar**

  Change Manager allows administrators to restrict scheduling changes during specified time periods. For instance, you can create a policy that allows changes only during business hours to ensure that the team is available to handle any issues. You can also restrict changes during important business events. For example, retail businesses might restrict changes during large sales events. You can also require additional approvals during restricted periods. 
+ **Integration with AWS IAM Identity Center and Active Directory support**

  With IAM Identity Center integration, members of your organization can access AWS accounts and manage their resources using Systems Manager based on a common user identity. Using IAM Identity Center, you can assign your users access to accounts across AWS.

  Integration with Active Directory makes it possible to assign users in your Active Directory account as approvers for change templates created for your Change Manager operations.
+ **Integration with Amazon CloudWatch alarms**

  Change Manager is integrated with CloudWatch alarms. Change Manager listens for CloudWatch alarms during the runbook workflow and takes any actions, including sending notifications, that are defined for the alarm.
+ **Integration with AWS CloudTrail Lake**

  By creating an event data store in AWS CloudTrail Lake, you can view auditable information about the changes made by change requests that run in your account or organization. The event information stored includes such details as the following:
  + The API actions that were run
  + The request parameters included for those actions
  + The user that ran the action
  + The resources that were updated during the process
+ **Integration with AWS Organizations**

  Using the cross-account capabilities provided by Organizations, you can use a delegated administrator account for managing Change Manager operations in OUs in your organization. In your Organizations management account, you can specify which account is to be the delegated administrator account. You can also control which of your OUs Change Manager can be used in.

## Is there a charge to use Change Manager?


Yes. Change Manager is priced on a pay-per-use basis. You pay only for what you use. For more information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).

## What are the primary components of Change Manager?


Change Manager components that you use to manage the change process in your organization or account include the following:

### Delegated administrator account


If you use Change Manager across an organization, you use a delegated administrator account. This is the AWS account designated as the account for managing operations activities across Systems Manager, including Change Manager. The delegated administrator account manages change activities across your organization. When you set up your organization for use with Change Manager, you specify which of your accounts serves in this role. The delegated administrator account must be the only member of the organizational unit (OU) to which it's assigned. The delegated administrator account isn't required if you use Change Manager with a single AWS account only.

**Important**  
If you use Change Manager across an organization, we recommend always making changes from the delegated administrator account. Although you can make changes from other accounts in the organization, those changes won't be reported in or viewable from the delegated administrator account.

### Change template


A change template is a collection of configuration settings in Change Manager that define such things as required approvals, available runbooks, and notification options for change requests.

You can require that the change templates created by users in your organization or account go through an approval process before they can be used.

Change Manager supports two types of change templates. For an approved change request that is based on an *emergency change template*, the requested change can be made even if there are blocking events in Change Calendar. For an approved change request that is based on a *standard change template*, the requested change can't be made if there are blocking events in Change Calendar unless additional approvals are received from designated *change freeze event* approvers.

### Change request


A change request is a request in Change Manager to run an Automation runbook that updates one or more resources in your AWS or on-premises environments. A change request is created using a change template.

When you create a change request, one or more approvers in your organization or account must review and approve the request. Without the required approvals, the runbook workflow, which applies the changes you request, isn't permitted to run.

In the system, change requests are a type of OpsItem in AWS Systems Manager OpsCenter. However, OpsItems of the type `/aws/changerequest` aren't displayed in OpsCenter. As OpsItems, change requests are subject to the same enforced quotas as other types of OpsItems. 

Additionally, to create a change request programmatically, you don't call the `CreateOpsItem` API operation. Instead, you use the [StartChangeRequestExecution](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_StartChangeRequestExecution.html) API operation. However, rather than running immediately, the change request must first be approved, and there must not be any blocking events in Change Calendar that prevent the workflow from running. When approvals have been received and the calendar isn't blocked (or permission has been given to bypass blocking calendar events), the `StartChangeRequestExecution` action can complete.
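
As a sketch, a programmatic change request for `StartChangeRequestExecution` might be constructed like the following when using the AWS SDK for Python (Boto3). All names and IDs here are illustrative, and the change template must allow the specified runbook:

```python
# Illustrative request shape for StartChangeRequestExecution.
# With credentials configured, you would pass it to Boto3 as:
#   boto3.client("ssm").start_change_request_execution(**request)
request = {
    "ChangeRequestName": "RestartWebServers",           # illustrative request name
    "DocumentName": "MyStandardChangeTemplate",         # the change template document
    "Runbooks": [
        {
            "DocumentName": "AWS-RestartEC2Instance",   # runbook chosen from the template
            "Parameters": {"InstanceId": ["i-02573cafcfEXAMPLE"]},
        }
    ],
}
print(sorted(request))  # → ['ChangeRequestName', 'DocumentName', 'Runbooks']
```

The call doesn't run the runbook immediately; it creates a change request that is still subject to the approvals and calendar checks defined by the change template.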

### Runbook workflow


A runbook workflow is the process of requested changes being made to the targeted resources in your cloud or on-premises environment. Each change request designates a single Automation runbook to use to make the requested change. The runbook workflow occurs after all required approvals have been granted and there are no blocking events in Change Calendar. If the change has been scheduled for a specific date and time, the runbook workflow doesn't begin until scheduled, even if all approvals have been received and the calendar isn't blocked.

**Topics**
+ [How Change Manager works](#how-change-manager-works)
+ [How can Change Manager benefit my operations?](#change-manager-benefits)
+ [Who should use Change Manager?](#change-manager-who)
+ [What are the main features of Change Manager?](#change-manager-features)
+ [Is there a charge to use Change Manager?](#change-manager-cost)
+ [What are the primary components of Change Manager?](#change-manager-primary-components)
+ [Setting up Change Manager](change-manager-setting-up.md)
+ [Working with Change Manager](working-with-change-manager.md)
+ [Auditing and logging Change Manager activity](change-manager-auditing.md)
+ [Troubleshooting Change Manager](change-manager-troubleshooting.md)

# Setting up Change Manager


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

You can use Change Manager, a tool in AWS Systems Manager, to manage changes for an entire organization, as configured in AWS Organizations, or for a single AWS account.

If you're using Change Manager with an organization, begin with the topic [Setting up Change Manager for an organization (management account)](change-manager-organization-setup.md), and then proceed to [Configuring Change Manager options and best practices](change-manager-account-setup.md).

If you're using Change Manager with a single account, proceed directly to [Configuring Change Manager options and best practices](change-manager-account-setup.md).

**Note**  
If you begin using Change Manager with a single account, but that account is later added to an organizational unit for which Change Manager is allowed, your single account settings are disregarded.

**Topics**
+ [Setting up Change Manager for an organization (management account)](change-manager-organization-setup.md)
+ [Configuring Change Manager options and best practices](change-manager-account-setup.md)
+ [Configuring roles and permissions for Change Manager](change-manager-permissions.md)
+ [Controlling access to auto-approval runbook workflows](change-manager-auto-approval-access.md)

# Setting up Change Manager for an organization (management account)


The tasks in this topic apply if you're using Change Manager, a tool in AWS Systems Manager, with an organization that is set up in AWS Organizations. If you want to use Change Manager only with a single AWS account, skip to the topic [Configuring Change Manager options and best practices](change-manager-account-setup.md).

Perform the tasks in this section in an AWS account that is serving as the *management account* in Organizations. For information about the management account and other Organizations concepts, see [AWS Organizations terminology and concepts](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html).

If you need to turn on Organizations and specify your account as the management account before proceeding, see [Creating and managing an organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org.html) in the *AWS Organizations User Guide*. 

**Note**  
This setup process can't be performed in the following AWS Regions:  
+ Europe (Milan) (eu-south-1)
+ Middle East (Bahrain) (me-south-1)
+ Africa (Cape Town) (af-south-1)
+ Asia Pacific (Hong Kong) (ap-east-1)

Ensure that you're working in a different Region in your management account for this procedure.

During the setup procedure, you perform the following major tasks in Quick Setup, a tool in AWS Systems Manager.
+ **Task 1: Register the delegated administrator account for your organization**

  The change-related tasks that are performed using Change Manager are managed in one of your member accounts, which you specify to be the *delegated administrator account*. The delegated administrator account you register for Change Manager becomes the delegated administrator account for all your Systems Manager operations. (You might have delegated administrator accounts for other AWS services). Your delegated administrator account for Change Manager, which isn't the same as your management account, manages change activities across your organization, including change templates, change requests, and approvals for each. In the delegated administrator account, you also specify other configuration options for your Change Manager operations. 
**Important**  
The delegated administrator account must be the only member of the organizational unit (OU) to which it's assigned in Organizations.
+ **Task 2: Define and specify runbook access policies for change requester roles, or custom job functions, that you want to use for your Change Manager operations**

  In order to create change requests in Change Manager, users in your member accounts must be granted AWS Identity and Access Management (IAM) permissions that allow them to access only the Automation runbooks and change templates you choose to make available to them. 
**Note**  
When a user creates a change request, they first select a change template. This change template might make multiple runbooks available, but the user can select only one runbook for each change request. Change templates can also be configured to allow users to include any available runbook in their requests.

  To grant the needed permissions, Change Manager uses the concept of *job functions*, which is also used by IAM. However, unlike the [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html) in IAM, you specify both the names of your Change Manager job functions and the IAM permissions for those job functions. 

  When you configure a job function, we recommend creating a custom policy that provides only the permissions needed to perform change management tasks. For instance, you might specify permissions that limit users to a specific set of runbooks, based on the *job functions* that you define. 

  For example, you might create a job function with the name `DBAdmin`. For this job function, you might grant only permissions needed for runbooks related to Amazon DynamoDB databases, such as `AWS-CreateDynamoDbBackup` and `AWSConfigRemediation-DeleteDynamoDbTable`. 

  As another example, you might want to grant some users only the permissions needed to work with runbooks related to Amazon Simple Storage Service (Amazon S3) buckets, such as `AWS-ConfigureS3BucketLogging` and `AWSConfigRemediation-ConfigureS3BucketPublicAccessBlock`. 

  The configuration process in Quick Setup for Change Manager also makes a set of full Systems Manager administrative permissions available for you to apply to an administrative role you create. 

  Each Change Manager Quick Setup configuration you deploy creates a job function in your delegated administrator account with permissions to run Change Manager templates and Automation runbooks in the organizational units you have selected. You can create up to 15 Quick Setup configurations for Change Manager. 
+ **Task 3: Choose which member accounts in your organization to use with Change Manager**

  You can use Change Manager with all the member accounts in all your organizational units that are set up in Organizations, and in all the AWS Regions they operate in. If you prefer, you can instead use Change Manager with only some of your organizational units.

**Important**  
We strongly recommend, before you begin this procedure, that you read through its steps to understand the configuration choices you're making and the permissions you're granting. In particular, plan the custom job functions you will create and the permissions you assign to each job function. This ensures that when you later attach the job function policies you create to individual users, user groups, or IAM roles, those identities are granted only the permissions you intend for them to have.  
As a best practice, begin by setting up the delegated administrator account using the login for an AWS account administrator. Then configure job functions and their permissions after you have created change templates and identified the runbooks that each one uses.

To set up Change Manager for use with an organization, perform the following task in the Quick Setup area of the Systems Manager console.

You repeat this task for each job function you want to create for your organization. Each job function you create can have permissions for a different set of organizational units.

**To set up an organization for Change Manager in the Organizations management account**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. On the **Change Manager** card, choose **Create**.

1. For **Delegated administrator account**, enter the ID of the AWS account you want to use for managing change templates, change requests, and runbook workflows in Change Manager. 

   If you have previously specified a delegated administrator account for Systems Manager, its ID is already reported in this field. 
**Important**  
The delegated administrator account must be the only member of the organizational unit (OU) to which it's assigned in Organizations.  
If the delegated administrator account you register is later deregistered from that role, the system removes its permissions for managing Systems Manager operations at the same time. Keep in mind that it will then be necessary for you to return to Quick Setup, designate a different delegated administrator account, and specify all job functions and permissions again.  
If you use Change Manager across an organization, we recommend always making changes from the delegated administrator account. Although you can make changes from other accounts in the organization, those changes won't be reported in or viewable from the delegated administrator account.

1. In the **Permissions to request and make changes** section, do the following.
**Note**  
Each deployment configuration you create provides the permissions policy for just one job function. You can return to Quick Setup later to create more job functions when you have created change templates to use in your operations.

   **To create an administrative role** – For an administrator job function that has IAM permissions for all AWS actions, do the following.
**Important**  
Granting users full administrative permissions should be done sparingly, and only if their roles require full Systems Manager access. For important information about security considerations for Systems Manager access, see [Identity and access management for AWS Systems Manager](security-iam.md) and [Security best practices for Systems Manager](security-best-practices.md).

   1. For **Job function**, enter a name to identify this role and its permissions, such as **MyAWSAdmin**.

   1. For **Role and permissions option**, choose **Administrator permissions**.

   **To create other job functions** – To create a non-administrative role, do the following:

   1. For **Job function**, enter a name to identify this role and suggest its permissions. The name you choose should represent the scope of the runbooks for which you will provide permissions, such as `DBAdmin` or `S3Admin`. 

   1. For **Role and permissions option**, choose **Custom permissions**.

   1. In the **Permissions policy editor**, enter the IAM permissions, in JSON format, to grant to this job function.
**Tip**  
We recommend that you use the IAM policy editor to construct your policy and then paste the policy JSON into the **Permissions policy** field.

**Sample policy: DynamoDB database management**  
For example, you might begin with policy content that provides permissions for working with the Systems Manager documents (SSM documents) the job function needs access to. The following sample policy grants access to AWS managed Automation runbooks related to DynamoDB databases, as well as to two change templates that have been created in the sample AWS account `111122223333`, in the US East (N. Virginia) Region (`us-east-1`). 

   The policy also includes permission for the [StartChangeRequestExecution](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_StartChangeRequestExecution.html) operation, which is required for creating a change request in Change Manager. 
**Note**  
This example isn't comprehensive. Additional permissions might be needed for working with other AWS resources, such as databases and nodes.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssm:CreateDocument",
                   "ssm:DescribeDocument",
                   "ssm:DescribeDocumentParameters",
                   "ssm:DescribeDocumentPermission",
                   "ssm:GetDocument",
                   "ssm:ListDocumentVersions",
                   "ssm:ModifyDocumentPermission",
                   "ssm:UpdateDocument",
                   "ssm:UpdateDocumentDefaultVersion"
               ],
               "Resource": [
                   "arn:aws:ssm:us-east-1:*:document/AWS-CreateDynamoDbBackup",
                   "arn:aws:ssm:us-east-1:*:document/AWS-AWS-DeleteDynamoDbBackup",
                   "arn:aws:ssm:us-east-1:*:document/AWS-DeleteDynamoDbTableBackups",
                   "arn:aws:ssm:us-east-1:*:document/AWSConfigRemediation-DeleteDynamoDbTable",
                   "arn:aws:ssm:us-east-1:*:document/AWSConfigRemediation-EnableEncryptionOnDynamoDbTable",
                   "arn:aws:ssm:us-east-1:*:document/AWSConfigRemediation-EnablePITRForDynamoDbTable",
                   "arn:aws:ssm:us-east-1:111122223333:document/MyFirstDBChangeTemplate",
                   "arn:aws:ssm:us-east-1:111122223333:document/MySecondDBChangeTemplate"
               ]
           },
           {
               "Effect": "Allow",
               "Action": "ssm:ListDocuments",
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": "ssm:StartChangeRequestExecution",
               "Resource": [
                   "arn:aws:ssm:us-east-1:111122223333:document/*",
                   "arn:aws:ssm:us-east-1:111122223333:automation-execution/*"
               ]
           }
       ]
   }
   ```

------

   For more information about IAM policies, see [Access management for AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access.html) and [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide.*

1. In the **Targets** section, choose whether to grant permissions for the job function you're creating to your entire organization or only some of your organizational units.

   If you choose **Entire organization**, continue to step 8.

   If you choose **Custom**, continue to step 7.

1. In the **Target OUs** section, select the check boxes of the organizational units to use with Change Manager.

1. Choose **Create**.
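To illustrate the scoping recommendation in the custom-permissions step above, a per-job-function policy can also be generated programmatically. This is a hedged sketch: the job-function map, the chosen `ssm:` actions, and the Region are assumptions for illustration, not a prescribed configuration.

```python
import json

# Illustrative job functions and the runbooks each one may use.
JOB_FUNCTIONS = {
    "DBAdmin": [
        "AWS-CreateDynamoDbBackup",
        "AWSConfigRemediation-DeleteDynamoDbTable",
    ],
    "S3Admin": [
        "AWS-ConfigureS3BucketLogging",
        "AWSConfigRemediation-ConfigureS3BucketPublicAccessBlock",
    ],
}

def runbook_policy(job_function, region="us-east-1"):
    """Build a minimal Allow statement limited to the job function's runbooks."""
    arns = [f"arn:aws:ssm:{region}:*:document/{name}"
            for name in JOB_FUNCTIONS[job_function]]
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # Trim this action list to what the job function actually needs.
            "Action": ["ssm:GetDocument", "ssm:ListDocumentVersions"],
            "Resource": arns,
        }],
    }

print(json.dumps(runbook_policy("DBAdmin"), indent=4))
```

The generated JSON is what you would paste into the **Permissions policy editor** for that job function.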

After the system finishes setting up Change Manager for your organization, it displays a summary of your deployments. This summary information includes the name of the role that was created for the job function you configured. For example, `AWS-QuickSetup-SSMChangeMgr-DBAdminInvocationRole`.

**Note**  
Quick Setup uses AWS CloudFormation StackSets to deploy your configurations. You can also view information about a completed deployment configuration in the CloudFormation console. For information about StackSets, see [Working with AWS CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html) in the *AWS CloudFormation User Guide*.

Your next step is to configure additional Change Manager options. You can complete this task in either your delegated administrator account or any account in an organizational unit that you have allowed for use with Change Manager. You configure options such as choosing a user identity management option, specifying which users can review and approve or reject change templates and change requests, and choosing which best practice options to allow for your organization. For information, see [Configuring Change Manager options and best practices](change-manager-account-setup.md).

# Configuring Change Manager options and best practices


The tasks in this section must be performed whether you're using Change Manager, a tool in AWS Systems Manager, across an organization or in a single AWS account.

If you're using Change Manager for an organization, you can perform the following tasks in either your delegated administrator account or any account in an organizational unit that you have allowed for use with Change Manager.

**Topics**
+ [Task 1: Configuring Change Manager user identity management and template reviewers](#cm-configure-account-task-1)
+ [Task 2: Configuring Change Manager change freeze event approvers and best practices](#cm-configure-account-task-2)
+ [Configuring Amazon SNS topics for Change Manager notifications](change-manager-sns-setup.md)

## Task 1: Configuring Change Manager user identity management and template reviewers


Perform the task in this procedure the first time you access Change Manager. You can update these configuration settings later by returning to Change Manager and choosing **Edit** on the **Settings** tab.

**To configure Change Manager user identity management and template reviewers**

1. Sign in to the AWS Management Console.

   If you're using Change Manager for an organization, sign in using your credentials for your delegated administrator account. The user must have the necessary AWS Identity and Access Management (IAM) permissions for making updates to your Change Manager settings.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Manager**.

1. On the service home page, depending on the available options, do one of the following:
   + If you're using Change Manager with AWS Organizations, choose **Set up delegated account**.
   + If you're using Change Manager with a single AWS account, choose **Set up Change Manager**.

     -or-

     Choose **Create sample change request**, **Skip**, and then choose the **Settings** tab.

1. For **User identity management**, choose one of the following.
   + **AWS Identity and Access Management (IAM)** – Identify the users who make and approve requests and perform other actions in Change Manager by using your existing users, groups, and roles.
   + **AWS IAM Identity Center (IAM Identity Center)** – Allow [IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/) to create and manage identities, or connect to your existing identity source to identify the users who perform actions in Change Manager.

1. In the **Template reviewer notification** section, specify the Amazon Simple Notification Service (Amazon SNS) topics to use to notify template reviewers that a new change template or change template version is ready for review. Ensure that the Amazon SNS topic you choose is configured to send notifications to your template reviewers. 

   For information about creating and configuring Amazon SNS topics for change template reviewer notifications, see [Configuring Amazon SNS topics for Change Manager notifications](change-manager-sns-setup.md).

   1. To specify the Amazon SNS topic for template reviewer notification, choose one of the following:
      + **Enter an SNS Amazon Resource Name (ARN)** – For **Topic ARN**, enter the ARN of an existing Amazon SNS topic. This topic can be in any of your organization's accounts.
      + **Select an existing SNS topic** – For **Target notification topic**, select the ARN of an existing Amazon SNS topic in your current AWS account. (This option isn't available if you haven't yet created any Amazon SNS topics in your current AWS account and AWS Region.)
**Note**  
The Amazon SNS topic you select must be configured to specify the notifications it sends and the subscribers they're sent to. Its access policy must also grant permissions to Systems Manager so Change Manager can send notifications. For information, see [Configuring Amazon SNS topics for Change Manager notifications](change-manager-sns-setup.md). 

   1. Choose **Add notification**.

1. In the **Change template reviewers** section, select the users in your organization or account to review new change templates or change template versions before they can be used in your operations. 

   Change template reviewers are responsible for verifying the suitability and security of templates other users have submitted for use in Change Manager runbook workflows.

   Select change template reviewers by doing the following:

   1. Choose **Add**.

   1. Select the check box next to the name of each user, group, or IAM role you want to assign as a change template reviewer.

   1. Choose **Add approvers**.

1. Choose **Submit**.

After you complete this initial setup process, configure additional Change Manager settings and best practices by following the steps in [Task 2: Configuring Change Manager change freeze event approvers and best practices](#cm-configure-account-task-2).

## Task 2: Configuring Change Manager change freeze event approvers and best practices


After you complete the steps in [Task 1: Configuring Change Manager user identity management and template reviewers](#cm-configure-account-task-1), you can designate extra reviewers for change requests during *change freeze events* and specify which available best practices you want to allow for your Change Manager operations.

A change freeze event means that restrictions are in place in the current change calendar (the calendar state in AWS Systems Manager Change Calendar is `CLOSED`). In these cases, change freeze approvers must grant permission for a change request to run, in addition to any regular approvers, even if the change request was created using a template that allows auto-approval. If they don't, the change isn't processed until the calendar state is again `OPEN`.
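The approval rule described above can be summarized in a short sketch. The function and its labels are illustrative only, not a Systems Manager API; the calendar states mirror Change Calendar's `OPEN` and `CLOSED`:

```python
# Illustrative sketch of the change freeze approval rule; the function
# and the approval labels are hypothetical, not part of any AWS SDK.
def approvals_needed(calendar_state, template_auto_approves):
    approvals = []
    if not template_auto_approves:
        approvals.append("regular-approvers")        # normal change request review
    if calendar_state == "CLOSED":
        approvals.append("change-freeze-approvers")  # extra gate during a freeze
    return approvals

# During a freeze, even an auto-approving template still needs freeze approval.
print(approvals_needed("CLOSED", template_auto_approves=True))
print(approvals_needed("OPEN", template_auto_approves=True))   # []
```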

**To configure Change Manager change freeze event approvers and best practices**

1. In the navigation pane, choose **Change Manager**.

1. Choose the **Settings** tab, and then choose **Edit**.

1. In the **Approvers for change freeze events** section, select the users in your organization or account who can approve changes to run even when the calendar in use in Change Calendar is currently CLOSED.
**Note**  
To allow change freeze reviews, you must turn on the **Check Change Calendar for restricted change events** option in **Best practices**.

   Select approvers for change freeze events by doing the following:

   1. Choose **Add**.

   1. Select the check box next to the name of each user, group, or IAM role you want to assign as an approver for change freeze events.

   1. Choose **Add approvers**.

1. In the **Best practices** section near the bottom of the page, turn on the best practices you want to enforce for each of the following options.
   + Option: **Check Change Calendar for restricted change events**

     To specify that Change Manager checks a calendar in Change Calendar to make sure changes aren't blocked by scheduled events, first select the **Enabled** check box, and then select the calendar to check for restricted events from the **Change Calendar** list.

     For more information about Change Calendar, see [AWS Systems Manager Change Calendar](systems-manager-change-calendar.md).
   + Option: **SNS topic for approvers for closed events**

     1. Choose one of the following to specify the Amazon Simple Notification Service (Amazon SNS) topic in your account to use for sending notifications to approvers during change freeze events. (Note that you must also specify approvers in the **Approvers for change freeze events** section above **Best practices**.)
        + **Enter an SNS Amazon Resource Name (ARN)** – For **Topic ARN**, enter the ARN of an existing Amazon SNS topic. This topic can be in any of your organization's accounts.
        + **Select an existing SNS topic** – For **Target notification topic**, select the ARN of an existing Amazon SNS topic in your current AWS account. (This option isn't available if you haven't yet created any Amazon SNS topics in your current AWS account and AWS Region.)
**Note**  
The Amazon SNS topic you select must be configured to specify the notifications it sends and the subscribers they're sent to. Its access policy must also grant permissions to Systems Manager so Change Manager can send notifications. For information, see [Configuring Amazon SNS topics for Change Manager notifications](change-manager-sns-setup.md). 

     1. Choose **Add notification**.
   + Option: **Require monitors for all templates**

     If you want to ensure that all templates for your organization or account specify an Amazon CloudWatch alarm to monitor your change operation, select the **Enabled** check box.
   + Option: **Require template review and approval before use**

     To ensure that no change requests are created, and no runbook workflows run, without being based on a template that has been reviewed and approved, select the **Enabled** check box.

1. Choose **Save**.

# Configuring Amazon SNS topics for Change Manager notifications


You can configure Change Manager, a tool in AWS Systems Manager, to send notifications to an Amazon Simple Notification Service (Amazon SNS) topic for events related to change requests and change templates. Complete the following tasks to receive notifications for the Change Manager events you add a topic to.

**Topics**
+ [Task 1: Create and subscribe to an Amazon SNS topic](#change-manager-sns-setup-create-topic)
+ [Task 2: Update the Amazon SNS access policy](#change-manager-sns-setup-encryption-policy)
+ [Task 3: (Optional) Update the AWS Key Management Service access policy](#change-manager-sns-setup-KMS-policy)

## Task 1: Create and subscribe to an Amazon SNS topic


First, you must create and subscribe to an Amazon SNS topic. For more information, see [Creating an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-create-topic.html) and [Subscribing to an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-tutorial-create-subscribe-endpoint-to-topic.html) in the *Amazon Simple Notification Service Developer Guide*.

**Note**  
To receive notifications, you must specify the Amazon Resource Name (ARN) of an Amazon SNS topic that is in the same AWS Region and AWS account as the delegated administrator account. 

## Task 2: Update the Amazon SNS access policy


Use the following procedure to update the Amazon SNS access policy so that Systems Manager can publish Change Manager notifications to the Amazon SNS topic you created in Task 1. Without completing this task, Change Manager doesn't have permission to send notifications for the events you add the topic for.

1. Sign in to the AWS Management Console and open the Amazon SNS console at [https://console.aws.amazon.com/sns/v3/home](https://console.aws.amazon.com/sns/v3/home).

1. In the navigation pane, choose **Topics**.

1. Choose the topic you created in Task 1, and then choose **Edit**.

1. Expand **Access policy**.

1. Add the following `Sid` block to the existing policy, replacing each *user input placeholder* with your own information.

   ```
   {
       "Sid": "Allow Change Manager to publish to this topic",
       "Effect": "Allow",
       "Principal": {
           "Service": "ssm.amazonaws.com"
       },
       "Action": "sns:Publish",
       "Resource": "arn:aws:sns:region:account-id:topic-name",
       "Condition": {
           "StringEquals": {
               "aws:SourceAccount": [
                   "account-id"
               ]
           }
       }
   }
   ```

   Enter this block after the existing `Sid` block, and replace *region*, *account-id*, and *topic-name* with the appropriate values for the topic you created.

1. Choose **Save changes**.

The system now sends notifications to the Amazon SNS topic when the event type you added the topic for occurs.
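If you prefer to script the update from Task 2, the following sketch merges the same `Sid` block into an existing topic access policy using standard JSON tooling. The Region, account ID, topic name, and starting policy shown are hypothetical placeholders:

```python
import json

# Hypothetical values for illustration; use your own Region, account, and topic.
region, account_id, topic_name = "us-east-1", "111122223333", "MyChangeManagerTopic"

# A simplified stand-in for the topic's existing access policy.
existing_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "__default_statement_ID", "Effect": "Allow",
         "Principal": {"AWS": "*"}, "Action": "SNS:Subscribe",
         "Resource": f"arn:aws:sns:{region}:{account_id}:{topic_name}"},
    ],
}

# The Sid block from the procedure above, with placeholders filled in.
change_manager_sid = {
    "Sid": "Allow Change Manager to publish to this topic",
    "Effect": "Allow",
    "Principal": {"Service": "ssm.amazonaws.com"},
    "Action": "sns:Publish",
    "Resource": f"arn:aws:sns:{region}:{account_id}:{topic_name}",
    "Condition": {"StringEquals": {"aws:SourceAccount": [account_id]}},
}

existing_policy["Statement"].append(change_manager_sid)
print(json.dumps(existing_policy, indent=4))
# Paste the result into the Access policy editor, or apply it with the
# SetTopicAttributes API (attribute name "Policy").
```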

**Important**  
If you configured the Amazon SNS topic with an AWS Key Management Service (AWS KMS) server-side encryption key, then you must complete Task 3.

## Task 3: (Optional) Update the AWS Key Management Service access policy


If you turned on AWS Key Management Service (AWS KMS) server-side encryption for your Amazon SNS topic, then you must also update the access policy of the AWS KMS key you chose when you configured the topic. Use the following procedure to update the access policy so that Systems Manager can publish Change Manager approval notifications to the Amazon SNS topic you created in Task 1.

1. Open the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

1. In the navigation pane, choose **Customer managed keys**.

1. Choose the ID of the customer managed key you chose when you created the topic.

1. In the **Key policy** section, choose **Switch to policy view**.

1. Choose **Edit**.

1. Enter the following `Sid` block after one of the existing `Sid` blocks in the existing policy. Replace each *user input placeholder* with your own information.

   ```
   {
       "Sid": "Allow Change Manager to decrypt the key",
       "Effect": "Allow",
       "Principal": {
           "Service": "ssm.amazonaws.com"
       },
       "Action": [
           "kms:Decrypt",
           "kms:GenerateDataKey*"
       ],
       "Resource": "arn:aws:kms:region:account-id:key/key-id",
       "Condition": {
           "StringEquals": {
               "aws:SourceAccount": [
                   "account-id"
               ]
           }
       }
   }
   ```

1. Now enter the following `Sid` block after one of the existing `Sid` blocks in the resource policy to help prevent the [cross-service confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html). 

   This block uses the [`aws:SourceArn`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn) and [`aws:SourceAccount`](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount) global condition context keys to limit the permissions that Systems Manager gives another service to the resource.

   Replace each *user input placeholder* with your own information.

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "Configure confused deputy protection for AWS KMS keys used in Amazon SNS topic when called from Systems Manager",
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": [
                   "sns:Publish"
               ],
               "Resource": "arn:aws:sns:us-east-1:111122223333:topic-name",
               "Condition": {
                   "ArnLike": {
                       "aws:SourceArn": "arn:aws:ssm:us-east-1:111122223333:*"
                   },
                   "StringEquals": {
                       "aws:SourceAccount": "111122223333"
                   }
               }
           }
       ]
   }
   ```

------

1. Choose **Save changes**.
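If you manage key policies from the command line instead of the console, the same update can be made with the AWS CLI. The following is a sketch that assumes the default key policy name and an example key ID; the statements you add to the downloaded policy are the ones shown earlier in this task.

```shell
# Download the current key policy (the default policy name is "default").
aws kms get-key-policy \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --policy-name default \
    --query Policy --output text > key-policy.json

# Edit key-policy.json to add the Sid blocks shown earlier in this task,
# then upload the updated policy.
aws kms put-key-policy \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --policy-name default \
    --policy file://key-policy.json
```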

# Configuring roles and permissions for Change Manager


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

By default, Change Manager doesn't have permission to perform actions on your resources. You must grant access by using an AWS Identity and Access Management (IAM) service role, or *assume role*. This role enables Change Manager to securely run the runbook workflows specified in an approved change request on your behalf. The role grants AWS Security Token Service (AWS STS) [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) trust to Change Manager.

Because these permissions are provided to a role that acts on behalf of users in an organization, the users themselves don't need to be granted that array of permissions. The actions allowed by the permissions are limited to approved operations only.

When users in your account or organization create a change request, they can select this assume role to perform the change operations.

You can create a new assume role for Change Manager or update an existing role with the needed permissions.

If you need to create a service role for Change Manager, complete the following tasks. 

**Topics**
+ [Task 1: Creating an assume role policy for Change Manager](#change-manager-role-policy)
+ [Task 2: Creating an assume role for Change Manager](#change-manager-role)
+ [Task 3: Attaching the `iam:PassRole` policy to other roles](#change-manager-passpolicy)
+ [Task 4: Adding inline policies to an assume role to invoke other AWS services](#change-manager-role-add-inline-policy)
+ [Task 5: Configuring user access to Change Manager](#change-manager-passrole)

## Task 1: Creating an assume role policy for Change Manager


Use the following procedure to create the policy that you will attach to your Change Manager assume role.

**To create an assume role policy for Change Manager**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**, and then choose **Create Policy**.

1. On the **Create policy** page, choose the **JSON** tab and replace the default content with the following, which you will modify for your own Change Manager operations in following steps.
**Note**  
If you're creating a policy to use with a single AWS account, and not an organization with multiple accounts and AWS Regions, you can omit the first statement block. The `iam:PassRole` permission isn't required in the case of a single account using Change Manager.

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "iam:PassRole",
               "Resource": "arn:aws:iam::111122223333:role/AWS-SystemsManager-job-functionAdministrationRole",
               "Condition": {
                   "StringEquals": {
                       "iam:PassedToService": "ssm.amazonaws.com"
                   }
               }
           },
           {
               "Effect": "Allow",
               "Action": [
                   "ssm:DescribeDocument",
                   "ssm:GetDocument",
                   "ssm:StartChangeRequestExecution"
               ],
               "Resource": [
                   "arn:aws:ssm:us-east-1::document/template-name",
                   "arn:aws:ssm:us-east-1:111122223333:automation-execution/*"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "ssm:ListOpsItemEvents",
                   "ssm:GetOpsItem",
                   "ssm:ListDocuments",
                   "ssm:DescribeOpsItems"
               ],
               "Resource": "*"
           }
       ]
   }
   ```

------

1. For the `iam:PassRole` action, update the `Resource` value to include the ARNs of all job functions defined for your organization that you want to grant permissions to initiate runbook workflows.

1. Replace the example Region (*us-east-1*), account ID (*111122223333*), template name (*template-name*), and job function (*job-function*) values with values for your own Change Manager operations. If you use a delegated administrator account, substitute its account ID where appropriate.

1. For the second `Resource` statement, modify the list to include all change templates that you want to grant permissions for. Alternatively, specify `"Resource": "*"` to grant permissions for all change templates in your organization.

1. Choose **Next: Tags**.

1. (Optional) Add one or more tag-key value pairs to organize, track, or control access for this policy. 

1. Choose **Next: Review**.

1. On the **Review policy** page, enter a name in the **Name** box, such as **MyChangeManagerAssumeRole**, and then enter an optional description.

1. Choose **Create policy**, and continue to [Task 2: Creating an assume role for Change Manager](#change-manager-role).
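If you prefer the AWS CLI to the console, the policy from this procedure can also be created from the command line. This sketch assumes you saved the edited JSON policy document to a local file and uses the example policy name from this procedure.

```shell
# Save the JSON policy document from step 3 (with your placeholder
# values replaced) to a local file, then create the customer managed
# policy from it.
aws iam create-policy \
    --policy-name MyChangeManagerAssumeRole \
    --policy-document file://change-manager-policy.json
```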

## Task 2: Creating an assume role for Change Manager


Use the following procedure to create a Change Manager assume role, a type of service role, for Change Manager.

**To create an assume role for Change Manager**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**, and then choose **Create role**.

1. For **Select trusted entity**, make the following choices:

   1. For **Trusted entity type**, choose **AWS service**

   1. For **Use cases for other AWS services**, choose **Systems Manager**

   1. Choose **Systems Manager**, as shown in the following image.  
![\[Screenshot illustrating the Systems Manager option selected as a use case.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/iam_use_cases_for_MWs.png)

1. Choose **Next**.

1. On the **Attached permissions policy** page, search for the assume role policy you created in [Task 1: Creating an assume role policy for Change Manager](#change-manager-role-policy), such as **MyChangeManagerAssumeRole**. 

1. Select the check box next to the assume role policy name, and then choose **Next: Tags**.

1. For **Role name**, enter a name for your new role, such as **MyChangeManagerAssumeRole**.

1. (Optional) For **Description**, update the description for this role.

1. (Optional) Add one or more tag-key value pairs to organize, track, or control access for this role.

1. Choose **Next: Review**, and then choose **Create role**. The system returns you to the **Roles** page.

1. On the **Roles** page, choose the role you just created to open the **Summary** page. 
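The same role can be created with the AWS CLI. The following sketch uses the example role and policy names from this procedure and an example account ID; the trust policy grants AWS STS `AssumeRole` trust to Systems Manager.

```shell
# Trust policy that allows Systems Manager to assume the role.
cat > ssm-trust-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ssm.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF

# Create the assume role with the trust policy.
aws iam create-role \
    --role-name MyChangeManagerAssumeRole \
    --assume-role-policy-document file://ssm-trust-policy.json

# Attach the permissions policy you created in Task 1 (example ARN).
aws iam attach-role-policy \
    --role-name MyChangeManagerAssumeRole \
    --policy-arn arn:aws:iam::111122223333:policy/MyChangeManagerAssumeRole
```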

## Task 3: Attaching the `iam:PassRole` policy to other roles


Use the following procedure to attach the `iam:PassRole` policy to an IAM instance profile or IAM service role. (The Systems Manager service uses IAM instance profiles to communicate with EC2 instances. For non-EC2 managed nodes in a [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment, an IAM service role is used instead.)

Attaching the `iam:PassRole` policy allows the Change Manager service to pass assume role permissions to other services or Systems Manager tools when running runbook workflows.

**To attach the `iam:PassRole` policy to an IAM instance profile or service role**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. Search for the Change Manager assume role you created, such as **MyChangeManagerAssumeRole**, and choose its name.

1. In the **Summary** page for the assume role, choose the **Permissions** tab.

1. Choose **Add permissions, Create inline policy**.

1. On the **Create policy** page, choose the **Visual editor** tab.

1. Choose **Service**, and then choose **IAM**.

1. In the **Filter actions** text box, enter **PassRole**, and then choose the **PassRole** option.

1. Expand **Resources**. Verify that **Specific** is selected, and then choose **Add ARN**.

1. In the **Specify ARN for role** field, enter the ARN of the IAM instance profile role or IAM service role to which you want to pass assume role permissions. The system populates the **Account** and **Role name with path** fields. 

1. Choose **Add**.

1. Choose **Review policy**.

1. For **Name**, enter a name to identify this policy, and then choose **Create policy**.
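The same inline policy can be added from the command line. This sketch assumes example role, policy, and resource names; the `Resource` ARN is the instance profile role or service role to which you want to pass assume role permissions.

```shell
# Inline policy that allows the assume role to pass the target
# instance profile role (example ARN) to other services.
cat > pass-role-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/MyInstanceProfileRole"
        }
    ]
}
EOF

# Embed the policy in the Change Manager assume role.
aws iam put-role-policy \
    --role-name MyChangeManagerAssumeRole \
    --policy-name MyPassRolePolicy \
    --policy-document file://pass-role-policy.json
```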

**More info**  
+ [Configure instance permissions required for Systems Manager](setup-instance-permissions.md)
+ [Create the IAM service role required for Systems Manager in hybrid and multicloud environments](hybrid-multicloud-service-role.md)

## Task 4: Adding inline policies to an assume role to invoke other AWS services


When a change request invokes other AWS services by using the Change Manager assume role, the assume role must be configured with permission to invoke those services. This requirement applies to all AWS Automation runbooks (runbooks whose names begin with `AWS-`) that might be used in a change request, such as the `AWS-ConfigureS3BucketLogging`, `AWS-CreateDynamoDBBackup`, and `AWS-RestartEC2Instance` runbooks. It also applies to any custom runbooks you create that invoke other AWS services by using actions that call other services. For example, if you use the `aws:executeAwsApi`, `aws:createStack`, or `aws:copyImage` actions, then you must configure the service role with permission to invoke those services. You can grant permissions to other AWS services by adding an IAM inline policy to the role. 

**To add an inline policy to an assume role to invoke other AWS services (IAM console)**

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**.

1. In the list, choose the name of the assume role that you want to update, such as `MyChangeManagerAssumeRole`.

1. Choose the **Permissions** tab.

1. Choose **Add permissions, Create inline policy**.

1. Choose the **JSON** tab.

1. Enter a JSON policy document for the AWS services you want to invoke. Here are two example JSON policy documents.

   **Amazon S3 `PutObject` and `GetObject` example**

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject",
                   "s3:GetObject"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
           }
       ]
   }
   ```

------

   **Amazon EC2 `CreateSnapshot` and `DescribeSnapshots` example**

------
#### [ JSON ]


   ```
   {
      "Version":"2012-10-17",		 	 	 
      "Statement":[
         {
            "Effect":"Allow",
            "Action":"ec2:CreateSnapshot",
            "Resource":"*"
         },
         {
            "Effect":"Allow",
            "Action":"ec2:DescribeSnapshots",
            "Resource":"*"
         }
      ]
   }
   ```

------

    For details about the IAM policy language, see [IAM JSON policy reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies.html) in the *IAM User Guide*.

1. When you're finished, choose **Review policy**. The [Policy Validator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_policy-validator.html) reports any syntax errors.

1. For **Name**, enter a name to identify the policy that you're creating. Review the policy **Summary** to see the permissions that are granted by your policy. Then choose **Create policy** to save your work.

1. After you create an inline policy, it's automatically embedded in your role.

## Task 5: Configuring user access to Change Manager


If your user, group, or role is assigned administrator permissions, then you have access to Change Manager. If you don't have administrator permissions, then an administrator must assign the `AmazonSSMFullAccess` managed policy, or a policy that provides comparable permissions, to your user, group, or role.

Use the following procedure to configure a user to use Change Manager. The user you choose will have permission to configure and run Change Manager. 

Depending on how identities are managed in your organization, you can choose any of the three options that follow to configure user access. When configuring user access, assign or add the following: 

1. Assign the `AmazonSSMFullAccess` policy or a comparable policy that gives permission to access Systems Manager.

1. Assign a policy that grants the `iam:PassRole` permission for the assume role.

1. Add the ARN for the Change Manager assume role you copied at the end of [Task 2: Creating an assume role for Change Manager](#change-manager-role).

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

You have finished configuring the required roles for Change Manager. You can now use the Change Manager assume role ARN in your Change Manager operations.

# Controlling access to auto-approval runbook workflows


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

In each change template created for your organization or account, you can specify whether change requests created from that template can run as auto-approved change requests, meaning that they run automatically without a review step (with the exception of change freeze events).

However, you might want to prevent certain users, groups, or AWS Identity and Access Management (IAM) roles from running auto-approved change requests even if a change template allows it. You can do this through the use of the `ssm:AutoApprove` condition key for the `StartChangeRequestExecution` operation in an IAM policy assigned to the user, group, or IAM role. 

You can add the following policy as an inline policy, where the condition is specified as `false`, to prevent users from running auto-approvable change requests.

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
            {
            "Effect": "Allow",
            "Action": "ssm:StartChangeRequestExecution",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {
                    "ssm:AutoApprove": "false"
                }
            }
        }
    ]
}
```

------

For information about specifying inline policies, see [Inline policies](https://docs.aws.amazon.com//IAM/latest/UserGuide/access_policies_managed-vs-inline.html#inline-policies) and [Adding and removing IAM identity permissions](https://docs.aws.amazon.com//IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

For more information about condition keys for Systems Manager policies, see [Condition keys for Systems Manager](security_iam_service-with-iam.md#policy-conditions).
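As a sketch, the inline policy shown above could also be attached from the command line, for example to an IAM group whose members should not run auto-approved change requests. The group and policy names here are examples.

```shell
# Save the JSON policy shown above to a local file, then embed it as an
# inline policy in an example IAM group.
aws iam put-group-policy \
    --group-name ChangeRequesters \
    --policy-name RestrictAutoApprovedChangeRequests \
    --policy-document file://no-auto-approve-policy.json
```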

# Working with Change Manager


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

With Change Manager, a tool in AWS Systems Manager, users across your organization or in a single AWS account can perform change-related tasks for which they have been granted the necessary permissions. Change Manager tasks include the following:
+ Create, review, and approve or reject change templates. 

  A change template is a collection of configuration settings in Change Manager that define such things as required approvals, available runbooks, and notification options for change requests.
+ Create, review, and approve or reject change requests.

  A change request is a request in Change Manager to run an Automation runbook that updates one or more resources in your AWS or on-premises environments. A change request is created using a change template.
+ Specify which users in your organization or account can be made reviewers for change templates and change requests.
+ Edit configuration settings, such as how user identities are managed in Change Manager and which of the available *best practice* options are enforced in your Change Manager operations. For information about configuring these settings, see [Configuring Change Manager options and best practices](change-manager-account-setup.md).

**Topics**
+ [Working with change templates](change-templates.md)
+ [Working with change requests](change-requests.md)
+ [Reviewing change request details, tasks, and timelines (console)](reviewing-changes.md)
+ [Viewing aggregated counts of change requests (command line)](change-requests-review-aggregate-command-line.md)

# Working with change templates


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

A change template is a collection of configuration settings in Change Manager that define such things as required approvals, available runbooks, and notification options for change requests.

**Note**  
AWS provides a sample [Hello World](change-templates-aws-managed.md) change template you can use to try out Change Manager, a tool in AWS Systems Manager. However, you create your own change templates to define the changes you want to allow to the resources in your organization or account. 

The changes that are made when a runbook workflow runs are based on the contents of an Automation runbook. In each change template you create, you can include one or more Automation runbooks that the user making a change request can choose from to run during the update. You can also create change templates that allow requesters to choose any available Automation runbook for the change request.

To create a change template, you can use the **Builder** option on the **Create template** page in the console. Alternatively, using the **Editor** option, you can manually author JSON or YAML content with the configuration you want for your runbook workflow. You can also use a command line tool to create a change template, with the JSON content for the change template stored in an external file.

**Topics**
+ [Try out the AWS managed `Hello World` change template](change-templates-aws-managed.md)
+ [Creating change templates](change-templates-create.md)
+ [Reviewing and approving or rejecting change templates](change-templates-review.md)
+ [Deleting change templates](change-templates-delete.md)

# Try out the AWS managed `Hello World` change template


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

You can use the sample change template `AWS-HelloWorldChangeTemplate`, which uses the sample Automation runbook `AWS-HelloWorld`, to test the review and approval process after you have finished setting up Change Manager, a tool in AWS Systems Manager. This template is designed for testing or verifying your configured permissions, approver assignments, and approval process. Approval to use this change template in your organization or account has already been provided by AWS. Any change request based on this change template, however, must still be approved by reviewers in your organization or account.

Rather than making changes to a resource, the runbook workflow associated with this template simply prints a message in the output of an Automation step.

**Before you begin**  
Before you begin, ensure you have completed the following tasks:
+ If you're using AWS Organizations to manage change across an organization, complete the organization setup tasks described in [Setting up Change Manager for an organization (management account)](change-manager-organization-setup.md).
+ Configure Change Manager for your delegated administrator account or single account, as described in [Configuring Change Manager options and best practices](change-manager-account-setup.md). 
**Note**  
If you turned on the best practice option **Require monitors for all templates** in your Change Manager settings, turn it off temporarily while you test the Hello World change template.

**To try out the AWS managed Hello World change template**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Manager**.

1. Choose **Create request**.

1. Choose the change template named `AWS-HelloWorldChangeTemplate`, and then choose **Next**.

1. For **Name**, enter a name for the change request that makes its purpose easy to identify, such as **MyChangeRequestTest**.

1. For the remainder of the steps to create your change request, see [Creating change requests (console)](change-requests-create.md).

**Next steps**  
For information about approving change requests, see [Reviewing and approving or rejecting change requests](change-requests-review.md).

To view the status and results of your change request, choose the name of your change request on the **Requests** tab in Change Manager. 
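After you have verified your permissions and approval flow in the console, a change request based on this template can also be created with the AWS CLI. The following is a minimal sketch; depending on your template configuration, additional options (such as scheduling and approver-related parameters) might be required.

```shell
# Create a change request from the Hello World change template,
# specifying the sample runbook it allows.
aws ssm start-change-request-execution \
    --change-request-name MyChangeRequestTest \
    --document-name AWS-HelloWorldChangeTemplate \
    --runbooks '[{"DocumentName": "AWS-HelloWorld"}]'
```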

# Creating change templates


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

A change template is a collection of configuration settings in Change Manager that define such things as required approvals, available runbooks, and notification options for change requests.

You can create change templates for your operations in Change Manager, a tool in AWS Systems Manager, using the console, which includes Builder and Editor options, or command line tools.

**Topics**
+ [About approvals in your change templates](cm-approvals-templates.md)
+ [Creating change templates using Builder](change-templates-custom-builder.md)
+ [Creating change templates using Editor](change-templates-custom-editor.md)
+ [Creating change templates using command line tools](change-templates-tools.md)

# About approvals in your change templates


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

For each change template that you create, you can specify up to five approval *levels* for change requests created from it. For each of those levels, you can designate up to five potential *approvers*. An approver isn't limited to a single user. You can also specify an IAM group or IAM role as an individual approver. For IAM groups and IAM roles, one or more users belonging to the group or role can provide approvals toward receiving the total number of approvals required for a change request. You can also specify more approvers than your change template requires.

Change Manager supports two main approaches to approvals: *per-level approvals* and *per-line approvals*. A combination of the two types is also possible in some situations. We recommend using only per-level approvals in your Change Manager operations.

------
#### [ Per-level approvals ]

*Recommended*. As of January 23, 2023, Change Manager supports per-level approvals. In this model, for each approval level in your change template, you first specify how many approvals are required for that level. Then you specify at least that many approvers for the level and can specify more approvers. However, only the number of per-level approvers that you specify need to approve the change request. For example, you could specify five approvers but require three approvals.

For console-view and JSON samples of this approval type, see [Sample per-level approval configuration](approval-type-samples.md#per-level-approvals).

------
#### [ Per-line approvals ]

*Supported for backward compatibility*. The original release of Change Manager supported only per-line approvals. In this model, every approver specified for an approval level is represented as an approval line, and each approver must approve the change request for it to be approved at that level. Prior to January 23, 2023, this was the only supported model for approvals. Change templates created before this date continue to support per-line approvals, but we recommend using per-level approvals instead.

For console-view and JSON samples of this approval type, see [Sample per-line approval configuration](approval-type-samples.md#per-line-approvals).

------
#### [ Combined per-line and per-level approvals ]

*Not recommended*. In the console, the **Builder** tab no longer supports adding per-line approvals. However, in some cases you might end up with both per-line and per-level approvals in a change template. This can occur if you update a change template that was created before January 23, 2023, or if you create or update a change template by editing its YAML content manually.

For console-view and JSON samples of this approval type, see [Sample combined per-level and per-line approval configuration](approval-type-samples.md#combined-approval-levels).

------

**Important**  
Although it's possible to create a change template that combines per-line and per-level approvals, this configuration isn't recommended or necessary. Whichever approval type requires more approvals takes precedence. For example:  
+ If a change template specifies three per-level approvals but five per-line approvals, then five approvals are required.
+ If a change template specifies four per-level approvals but two per-line approvals, then four approvals are required.

You can create a level that includes both per-line and per-level approvals by editing the YAML or JSON content manually. Then, the **Builder** tab displays controls for specifying the required number of approvals for both the level and for individual lines. However, new levels that you add using the console still support only per-level approval configurations.

## Change request notifications and rejections


Amazon SNS notifications  
When a change request is created using your change template, notifications are sent to subscribers of the Amazon Simple Notification Service (Amazon SNS) topic that has been designated for approval notifications at that level. You can specify the notification topic in the change template or allow the user creating the change request to specify one.  
After the minimum number of required approvals is received at one level, notifications are sent to approvers subscribed to the Amazon SNS topic for the next level, and so on.  
Ensure that the IAM roles, groups, and users you designate together provide enough approvers to meet the required number of approvals you specify. For example, if you designate as the only approver at a level a single IAM group that contains three users, you can't require five approvals at that level; you can require at most three.
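One way to sanity-check this constraint is to compare the required approval count against the number of people your approver entries represent (1 for a user, the member count for a group or role). A hypothetical sketch:

```python
def is_approval_count_valid(required_approvals, approver_sizes):
    """Check that the designated approvers (sized by how many people
    each entry represents) can collectively supply enough approvals."""
    return required_approvals <= sum(approver_sizes)

# A single IAM group of three users can't satisfy five required approvals:
print(is_approval_count_valid(5, [3]))  # False
print(is_approval_count_valid(3, [3]))  # True
```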

Change request rejections  
No matter how many approval levels and approvers you specify, only one rejection of a change request is required to prevent the runbook workflow for that request from running.
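The rejection rule can be summarized with a simple status function. This is an illustrative sketch, not a Change Manager API:

```python
def change_request_status(approvals_received, approvals_required, rejections):
    """A single rejection blocks the runbook workflow, no matter how
    many approvals the change request has already received."""
    if rejections > 0:
        return "Rejected"
    if approvals_received >= approvals_required:
        return "Approved"
    return "PendingApproval"

# Four of five approvals received, but one reviewer rejected:
print(change_request_status(4, 5, rejections=1))  # Rejected
```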

# Change Manager approval type examples


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

The following samples demonstrate the console view and YAML content for the three approval types in Change Manager.

**Topics**
+ [Sample per-level approval configuration](#per-level-approvals)
+ [Sample per-line approval configuration](#per-line-approvals)
+ [Sample combined per-level and per-line approval configuration](#combined-approval-levels)

## Sample per-level approval configuration


In the per-level approval setup shown in the following image, three approvals are required. Those approvals can come from any combination of the IAM users, groups, and roles specified as approvers. The specified approvers include two IAM users (John Stiles and Ana Carolina Silva), a user group that contains three members (`GroupOfThree`), and a user role that represents ten users (`RoleOfTen`). 

If all three users in the `GroupOfThree` group approve the change request, it is approved for that level. It's not necessary to receive an approval from each user, group, or role. The minimum number of approvals can come from any combination of specified approvers. We recommend per-level approvals for your Change Manager operations.

![\[Approval level showing three approvals are required and four specified approvers.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/Add-approval-2.png)


The following sample illustrates part of the YAML code for this configuration. 

**Note**  
This version of the YAML code includes an additional input, `MinRequiredApprovals` (with an initial capital `M`). The value for this input indicates how many approvals are required from among all available reviewers. Note also that the `minRequiredApprovals` (lowercase initial `m`) value for each approver in the `Approvers` list is `0` (zero). This indicates that the approver can contribute to the overall approvals but isn't required to do so.

```
schemaVersion: "0.3"
emergencyChange: false
autoApprovable: false
mainSteps:
  - name: ApproveAction1
    action: aws:approve
    timeoutSeconds: 604800
    inputs:
      Message: Please approve this change request
      MinRequiredApprovals: 3
      EnhancedApprovals:
        Approvers:
          - approver: John Stiles
            type: IamUser
            minRequiredApprovals: 0
          - approver: Ana Carolina Silva
            type: IamUser
            minRequiredApprovals: 0
          - approver: GroupOfThree
            type: IamGroup
            minRequiredApprovals: 0
          - approver: RoleOfTen
            type: IamRole
            minRequiredApprovals: 0
templateInformation: >
  #### What is the purpose of this change?
    //truncated
```

## Sample per-line approval configuration


In the approval level setup shown in the following image, four approvers are specified. These include two IAM users (John Stiles and Ana Carolina Silva), a user group that contains three members (`GroupOfThree`), and a user role that represents ten users (`RoleOfTen`). Per-line approvals are supported for backwards compatibility but not recommended.

![\[Approval level showing four required per-line approvers.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/Add-approval-1.png)


For the change request to be approved in this per-line approval configuration, it must be approved by all approver lines: John Stiles, Ana Carolina Silva, one member of the `GroupOfThree` group, and one member of the `RoleOfTen` role.
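The all-lines requirement can be expressed as a small check. The approver names below match the example; the function itself is a hypothetical illustration:

```python
def is_per_line_approved(required_per_line, approvals_received):
    """In a per-line configuration, every approver line must receive
    its own required number of approvals."""
    return all(approvals_received.get(line, 0) >= needed
               for line, needed in required_per_line.items())

required = {"John Stiles": 1, "Ana Carolina Silva": 1,
            "GroupOfThree": 1, "RoleOfTen": 1}

# Only three of the four lines have approved, so the request is still pending:
received = {"John Stiles": 1, "Ana Carolina Silva": 1, "GroupOfThree": 1}
print(is_per_line_approved(required, received))  # False
```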

The following sample illustrates part of the YAML code for this configuration.

**Note**  
Observe that the `minRequiredApprovals` value for each approver is `1`. This indicates that one approval is required from each approver line.

```
schemaVersion: "0.3"
emergencyChange: false
autoApprovable: false
mainSteps:
  - name: ApproveAction1
    action: aws:approve
    timeoutSeconds: 10000
    inputs:
      Message: Please approve this change request
      EnhancedApprovals:
        Approvers:
          - approver: John Stiles
            type: IamUser
            minRequiredApprovals: 1
          - approver: Ana Carolina Silva
            type: IamUser
            minRequiredApprovals: 1
          - approver: GroupOfThree
            type: IamGroup
            minRequiredApprovals: 1
          - approver: RoleOfTen
            type: IamRole
            minRequiredApprovals: 1
executableRunBooks:
  - name: AWS-HelloWorld
    version: $DEFAULT
templateInformation: >
  #### What is the purpose of this change?
    //truncated
```

## Sample combined per-level and per-line approval configuration


In the combined per-level and per-line approval setup shown in the following image, three approvals are specified for the level, but four line-item approvals are specified. Whichever approval type requires more approvals takes precedence over the other, so this configuration requires four approvals. Combined per-level and per-line approvals are not recommended.

![\[Approval level showing three approvals required for the level but four required at the line level.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/Add-approval-3.png)


```
schemaVersion: "0.3"
emergencyChange: false
autoApprovable: false
mainSteps:
  - name: ApproveAction1
    action: aws:approve
    timeoutSeconds: 604800
    inputs:
      Message: Please approve this change request
      MinRequiredApprovals: 3
      EnhancedApprovals:
        Approvers:
          - approver: John Stiles
            type: IamUser
            minRequiredApprovals: 1
          - approver: Ana Carolina Silva
            type: IamUser
            minRequiredApprovals: 1
          - approver: GroupOfThree
            type: IamGroup
            minRequiredApprovals: 1
          - approver: RoleOfTen
            type: IamRole
            minRequiredApprovals: 1
templateInformation: >
  #### What is the purpose of this change?
    //truncated
```
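The three samples differ only in how the level-wide `MinRequiredApprovals` input and each approver's `minRequiredApprovals` value are set. A hypothetical classifier for the parsed inputs of an `aws:approve` step makes the distinction explicit:

```python
def classify_approval_type(inputs):
    """Classify an aws:approve step's inputs dict (mirroring the YAML
    samples above) as per-level, per-line, or combined approvals."""
    level_count = inputs.get("MinRequiredApprovals", 0)
    approvers = inputs.get("EnhancedApprovals", {}).get("Approvers", [])
    line_count = sum(a.get("minRequiredApprovals", 0) for a in approvers)
    if level_count and line_count:
        return "combined"
    if level_count:
        return "per-level"
    return "per-line"

combined_inputs = {
    "MinRequiredApprovals": 3,
    "EnhancedApprovals": {
        "Approvers": [
            {"approver": "John Stiles", "type": "IamUser", "minRequiredApprovals": 1},
            {"approver": "RoleOfTen", "type": "IamRole", "minRequiredApprovals": 1},
        ]
    },
}
print(classify_approval_type(combined_inputs))  # combined
```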

**Topics**
+ [About approvals in your change templates](cm-approvals-templates.md)
+ [Creating change templates using Builder](change-templates-custom-builder.md)
+ [Creating change templates using Editor](change-templates-custom-editor.md)
+ [Creating change templates using command line tools](change-templates-tools.md)

# Creating change templates using Builder


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

Using the Builder for change templates in Change Manager, a tool in AWS Systems Manager, you can configure the runbook workflow defined in your change template without having to use JSON or YAML syntax. After you specify your options, the system converts your input into the YAML format that Systems Manager can use to run runbook workflows.

**To create a change template using Builder**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Manager**.

1. Choose **Create template**.

1. For **Name**, enter a name for the template that makes its purpose easy to identify, such as **UpdateEC2LinuxAMI**.

1. In the **Change template details** section, do the following:
   + For **Description**, provide a brief explanation of how and when the change template you're creating is to be used. 

     This description helps users who create change requests determine whether they're using the correct change template. It helps those who review change requests understand whether the request should be approved.
   + For **Change template type**, specify whether you're creating a standard change template or an emergency change template.

     An emergency change template is used for situations when a change must be made even if changes are otherwise blocked by an event in the calendar in use by AWS Systems Manager Change Calendar. Change requests created from an emergency change template must still be approved by its designated approvers, but the requested changes can still run even when the calendar is blocked.
   + For **Runbook options**, specify the runbooks that users can choose from when creating a change request. You can add a single runbook or multiple runbooks. Alternatively, you can allow requesters to specify which runbook to use. In any of these cases, only one runbook can be included in the change request.
   + For **Runbook**, select the names of the runbooks and the versions of those runbooks that users can choose from for their change requests. No matter how many runbooks you add to the change template, only one can be selected per change request.

     You don't specify a runbook if you chose **Any runbook can be used** earlier.
**Tip**  
Select a runbook and runbook version, and then choose **View** to examine the contents of the runbook in the Systems Manager Documents interface.

1. In the **Template information** section, use Markdown to enter information for users who create change requests from this change template. We have provided a set of questions that you can include for users who create change requests, or you can add other information and questions instead. 
**Note**  
Markdown is a markup language that allows you to add wiki-style descriptions to documents and individual steps within the document. For more information about using Markdown, see [Using Markdown in AWS](https://docs.aws.amazon.com/general/latest/gr/aws-markdown.html).

   We recommend providing questions for users to answer about their change requests to help approvers decide whether to grant each request, such as asking them to list any manual steps that must run as part of the change and to describe a rollback plan. 
**Tip**  
Toggle between **Hide preview** and **Show preview** to see what your content looks like as you compose.

1. In the **Change request approvals** section, do the following:
   + (Optional) If you want to allow change requests that are created from this change template to run automatically, without review by any approvers (with the exception of change freeze events), select **Enable auto-approval**.
**Note**  
Enabling auto-approvals in a change template provides users with the *option* of bypassing reviewers. They can still choose to specify reviewers when creating a change request. Therefore, you must still specify reviewer options in the change template.
**Important**  
If you enable auto-approval for a change template, users can submit change requests using that template that do not require review by reviewers before they run (with the exception of change freeze event approvers). If you want to restrict a particular user, group, or IAM role from submitting auto-approval requests, you can use a condition in an IAM policy for this purpose. For more information, see [Controlling access to auto-approval runbook workflows](change-manager-auto-approval-access.md).
   + For **Number of approvals required at this level**, choose the number of approvals that change requests created from this change template must receive for this level.
   + To add mandatory first-level approvers, choose **Add approver**, and then choose from the following:
     + **Template specified approvers** – Choose one or more users, groups, or AWS Identity and Access Management (IAM) roles from your account to approve change requests created from this change template. Any change requests that are created using this template must be reviewed and approved by each approver you specify.
     + **Request specified approvers** – The user who makes the change request specifies reviewers at the time they make the request and can choose from a list of users in your account. 

       The number you enter in the **Required** column determines how many reviewers must be specified by a change request that uses this change template. 
**Important**  
Prior to January 23, 2023, the **Builder** tab supported specifying per-line approvals only. New change templates and new levels you add to existing change templates using the **Builder** tab support per-level approvals only. We recommend using only per-level approvals in your Change Manager operations.  
For more information, see [About approvals in your change templates](cm-approvals-templates.md).
   + For **SNS topic to notify approvers**, do the following:

     1. Choose one of the following to specify the Amazon Simple Notification Service (Amazon SNS) topic in your account to use for sending notifications to approvers that a change request is ready for their review:
        + **Enter an SNS Amazon Resource Name (ARN)** – For **Topic ARN**, enter the ARN of an existing Amazon SNS topic. This topic can be in any of your organization's accounts.
        + **Select an existing SNS topic** – For **Target notification topic**, select the ARN of an existing Amazon SNS topic in your current AWS account. (This option isn't available if you haven't yet created any Amazon SNS topics in your current AWS account and AWS Region.)
        + **Specify SNS topic when the change request is created** – The user who creates a change request can specify the Amazon SNS topic to use for notifications.
**Note**  
The Amazon SNS topic you select must be configured to specify the notifications it sends and the subscribers they're sent to. Its access policy must also grant permissions to Systems Manager so Change Manager can send notifications. For information, see [Configuring Amazon SNS topics for Change Manager notifications](change-manager-sns-setup.md). 

     1. Choose **Add notification**.

1. (Optional) To add an additional level of approvers, choose **Add approval level** and choose between template-specified approvers and request-specified approvers for this level. Then choose an SNS topic to notify this level of approvers.

   After all approvals have been received by first-level approvers, second-level approvers are notified, and so on.

   You can add a maximum of five levels of approvers in each template. You might, for example, require approvals from users in technical roles for the first level, then managerial approval for the second level.

1. In the **Monitoring** section, for **CloudWatch alarm to monitor**, enter the name of an Amazon CloudWatch alarm in the current account to monitor the progress of runbook workflows that are based on this template. 
**Tip**  
To create a new alarm, or to review the settings of an alarm you want to specify, choose **Open the Amazon CloudWatch console**. For information about working with CloudWatch alarms, see [Using CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) in the *Amazon CloudWatch User Guide*.

1. In the **Notifications** section, do the following:

   1. Choose one of the following to specify the Amazon SNS topic in your account to use for sending notifications about change requests that are created using this change template: 
      + **Enter an SNS Amazon Resource Name (ARN)** – For **Topic ARN**, enter the ARN of an existing Amazon SNS topic. This topic can be in any of your organization's accounts.
      + **Select an existing SNS topic** – For **Target notification topic**, select the ARN of an existing Amazon SNS topic in your current AWS account. (This option isn't available if you haven't yet created any Amazon SNS topics in your current AWS account and AWS Region.)
**Note**  
The Amazon SNS topic you select must be configured to specify the notifications it sends and the subscribers they're sent to. Its access policy must also grant permissions to Systems Manager so Change Manager can send notifications. For information, see [Configuring Amazon SNS topics for Change Manager notifications](change-manager-sns-setup.md). 

   1. Choose **Add notification**.

1. (Optional) In the **Tags** section, apply one or more tag key name/value pairs to the change template.

   Tags are optional metadata that you assign to a resource. By using tags, you can categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a change template to identify the type of change it makes and the environment it runs in. In this case, you could specify the following key name/value pairs:
   + `Key=TaskType,Value=InstanceRepair`
   + `Key=Environment,Value=Production`

1. Choose **Save and preview**.

1. Review the details of the change template you're creating.

   If you want to make changes to the change template before submitting it for review, choose **Actions, Edit**.

   If you're satisfied with the contents of the change template, choose **Submit for review**. The users in your organization or account who have been specified as template reviewers on the **Settings** tab in Change Manager are notified that a new change template is pending their review. 

   If an Amazon SNS topic has been specified for change templates, notifications are sent when the change template is rejected or approved. If you don't receive notifications related to this change template, you can return to Change Manager later to check on its status.

# Creating change templates using Editor


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

Use the steps in this topic to configure a change template in Change Manager, a tool in AWS Systems Manager, by entering JSON or YAML instead of using the console controls.

**To create a change template using Editor**

1. In the navigation pane, choose **Change Manager**.

1. Choose **Create template**.

1. For **Name**, enter a name for the template that makes its purpose easy to identify, such as **RestartEC2LinuxInstance**.

1. Above **Change template details**, choose **Editor**.

1. In the **Document editor** section, choose **Edit**, and then enter the JSON or YAML content for your change template. 

   The following is an example.
**Note**  
The parameter `minRequiredApprovals` is used to specify how many reviewers at a specified level must approve a change request that is created using this template.  
This example demonstrates two levels of approvals. You can specify up to five levels of approvals, but only one level is required.   
In the first level, the specific user "John-Doe" must approve each change request. After that, any three members of the IAM role `Admin` must approve the change request.  
For more information about approvals for change templates, see [About approvals in your change templates](cm-approvals-templates.md).

------
#### [ YAML ]

   ```
   description: >-
     This change template demonstrates the feature set available for creating
     change templates for Change Manager. This template starts a Runbook workflow
     for the Automation runbook called AWS-HelloWorld.
   templateInformation: >
     ### Document Name: HelloWorldChangeTemplate
   
     ## What does this document do?
   
     This change template demonstrates the feature set available for creating
     change templates for Change Manager. This template starts a Runbook workflow
     for the Automation runbook called AWS-HelloWorld.
   
     ## Input Parameters
   
     * ApproverSnsTopicArn: (Required) Amazon Simple Notification Service ARN for
     approvers.
   
     * Approver: (Required) The name of the approver to send this request to.
   
     * ApproverType: (Required) The type of reviewer.
       * Allowed Values: IamUser, IamGroup, IamRole, SSOGroup, SSOUser
   
     ## Output Parameters
   
     This document has no outputs
   schemaVersion: '0.3'
   parameters:
     ApproverSnsTopicArn:
       type: String
       description: Amazon Simple Notification Service ARN for approvers.
     Approver:
       type: String
       description: IAM approver
     ApproverType:
       type: String
       description: >-
         Approver types for the request. Allowed values include IamUser, IamGroup,
         IamRole, SSOGroup, and SSOUser.
   executableRunBooks:
     - name: AWS-HelloWorld
       version: '1'
   emergencyChange: false
   autoApprovable: false
   mainSteps:
     - name: ApproveAction1
       action: 'aws:approve'
       timeoutSeconds: 3600
       inputs:
         Message: >-
           A sample change request has been submitted for your review in Change
           Manager. You can approve or reject this request.
         EnhancedApprovals:
           NotificationArn: '{{ ApproverSnsTopicArn }}'
           Approvers:
             - approver: John-Doe
               type: IamUser
               minRequiredApprovals: 1
     - name: ApproveAction2
       action: 'aws:approve'
       timeoutSeconds: 3600
       inputs:
         Message: >-
           A sample change request has been submitted for your review in Change
           Manager. You can approve or reject this request.
         EnhancedApprovals:
           NotificationArn: '{{ ApproverSnsTopicArn }}'
           Approvers:
             - approver: Admin
               type: IamRole
               minRequiredApprovals: 3
   ```

------
#### [ JSON ]

   ```
   {
      "description": "This change template demonstrates the feature set available for creating change templates for Change Manager. This template starts a Runbook workflow for the Automation runbook called AWS-HelloWorld",
      "templateInformation": "### Document Name: HelloWorldChangeTemplate\n\n## What does this document do?\nThis change template demonstrates the feature set available for creating change templates for Change Manager. This template starts a Runbook workflow for the Automation runbook called AWS-HelloWorld.\n\n## Input Parameters\n* ApproverSnsTopicArn: (Required) Amazon Simple Notification Service ARN for approvers.\n* Approver: (Required) The name of the approver to send this request to.\n* ApproverType: (Required) The type of reviewer.\n  * Allowed Values: IamUser, IamGroup, IamRole, SSOGroup, SSOUser\n\n## Output Parameters\nThis document has no outputs\n",
      "schemaVersion": "0.3",
      "parameters": {
         "ApproverSnsTopicArn": {
            "type": "String",
            "description": "Amazon Simple Notification Service ARN for approvers."
         },
         "Approver": {
            "type": "String",
            "description": "IAM approver"
         },
         "ApproverType": {
            "type": "String",
            "description": "Approver types for the request. Allowed values include IamUser, IamGroup, IamRole, SSOGroup, and SSOUser."
         }
      },
      "executableRunBooks": [
         {
            "name": "AWS-HelloWorld",
            "version": "1"
         }
      ],
      "emergencyChange": false,
      "autoApprovable": false,
      "mainSteps": [
         {
            "name": "ApproveAction1",
            "action": "aws:approve",
            "timeoutSeconds": 3600,
            "inputs": {
               "Message": "A sample change request has been submitted for your review in Change Manager. You can approve or reject this request.",
               "EnhancedApprovals": {
                  "NotificationArn": "{{ ApproverSnsTopicArn }}",
                  "Approvers": [
                     {
                        "approver": "John-Doe",
                        "type": "IamUser",
                        "minRequiredApprovals": 1
                     }
                  ]
               }
            }
         },
           {
            "name": "ApproveAction2",
            "action": "aws:approve",
            "timeoutSeconds": 3600,
            "inputs": {
               "Message": "A sample change request has been submitted for your review in Change Manager. You can approve or reject this request.",
               "EnhancedApprovals": {
                  "NotificationArn": "{{ ApproverSnsTopicArn }}",
                  "Approvers": [
                     {
                        "approver": "Admin",
                        "type": "IamRole",
                        "minRequiredApprovals": 3                  
                     }
                  ]
               }
            }
         }
      ]
   }
   ```

------

1. Choose **Save and preview**.

1. Review the details of the change template you're creating.

   If you want to make changes to the change template before submitting it for review, choose **Actions, Edit**.

   If you're satisfied with the contents of the change template, choose **Submit for review**. The users in your organization or account who have been specified as template reviewers on the **Settings** tab in Change Manager are notified that a new change template is pending their review. 

   If an Amazon Simple Notification Service (Amazon SNS) topic has been specified for change templates, notifications are sent when the change template is rejected or approved. If you don't receive notifications related to this change template, you can return to Change Manager later to check on its status.

# Creating change templates using command line tools


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

The following procedures describe how to use the AWS Command Line Interface (AWS CLI) (on Linux, macOS, or Windows Server) or AWS Tools for Windows PowerShell to create a change template in Change Manager, a tool in AWS Systems Manager. 

**To create a change template**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Create a JSON file on your local machine with a name such as `MyChangeTemplate.json`, and then paste the content for your change template into it.
**Note**  
Change templates use a version of schema 0.3 that doesn't support all of the features available for Automation runbooks.

   The following is an example.
**Note**  
The parameter `minRequiredApprovals` is used to specify how many reviewers at a specified level must approve a change request that is created using this template.  
This example demonstrates two levels of approvals. You can specify up to five levels of approvals, but only one level is required.   
In the first level, the specific user "John-Doe" must approve each change request. After that, any three members of the IAM role `Admin` must approve the change request.  
For more information about approvals for change templates, see [About approvals in your change templates](cm-approvals-templates.md).

   ```
   {
      "description": "This change template demonstrates the feature set available for creating change templates for Change Manager. This template starts a Runbook workflow for the Automation runbook called AWS-HelloWorld",
      "templateInformation": "### Document Name: HelloWorldChangeTemplate\n\n## What does this document do?\nThis change template demonstrates the feature set available for creating change templates for Change Manager. This template starts a Runbook workflow for the Automation runbook called AWS-HelloWorld.\n\n## Input Parameters\n* ApproverSnsTopicArn: (Required) Amazon Simple Notification Service ARN for approvers.\n* Approver: (Required) The name of the approver to send this request to.\n* ApproverType: (Required) The type of reviewer.\n  * Allowed Values: IamUser, IamGroup, IamRole, SSOGroup, SSOUser\n\n## Output Parameters\nThis document has no outputs\n",
      "schemaVersion": "0.3",
      "parameters": {
         "ApproverSnsTopicArn": {
            "type": "String",
            "description": "Amazon Simple Notification Service ARN for approvers."
         },
         "Approver": {
            "type": "String",
            "description": "IAM approver"
         },
         "ApproverType": {
            "type": "String",
            "description": "Approver types for the request. Allowed values include IamUser, IamGroup, IamRole, SSOGroup, and SSOUser."
         }
      },
      "executableRunBooks": [
         {
            "name": "AWS-HelloWorld",
            "version": "1"
         }
      ],
      "emergencyChange": false,
      "autoApprovable": false,
      "mainSteps": [
         {
            "name": "ApproveAction1",
            "action": "aws:approve",
            "timeoutSeconds": 3600,
            "inputs": {
               "Message": "A sample change request has been submitted for your review in Change Manager. You can approve or reject this request.",
               "EnhancedApprovals": {
                  "NotificationArn": "{{ ApproverSnsTopicArn }}",
                  "Approvers": [
                     {
                        "approver": "John-Doe",
                        "type": "IamUser",
                        "minRequiredApprovals": 1
                     }
                  ]
               }
            }
         },
           {
            "name": "ApproveAction2",
            "action": "aws:approve",
            "timeoutSeconds": 3600,
            "inputs": {
               "Message": "A sample change request has been submitted for your review in Change Manager. You can approve or reject this request.",
               "EnhancedApprovals": {
                  "NotificationArn": "{{ ApproverSnsTopicArn }}",
                  "Approvers": [
                     {
                        "approver": "Admin",
                        "type": "IamRole",
                        "minRequiredApprovals": 3                  
                     }
                  ]
               }
            }
         }
      ]
   }
   ```

1. Run the following command to create the change template. 

------
#### [ Linux & macOS ]

   ```
   aws ssm create-document \
       --name MyChangeTemplate \
       --document-format JSON \
       --document-type Automation.ChangeTemplate \
       --content file://MyChangeTemplate.json \
       --tags Key=tag-key,Value=tag-value
   ```

------
#### [ Windows ]

   ```
   aws ssm create-document ^
       --name MyChangeTemplate ^
       --document-format JSON ^
       --document-type Automation.ChangeTemplate ^
       --content file://MyChangeTemplate.json ^
       --tags Key=tag-key,Value=tag-value
   ```

------
#### [ PowerShell ]

   ```
   $json = Get-Content -Path "C:\path\to\file\MyChangeTemplate.json" | Out-String
   New-SSMDocument `
       -Content $json `
       -Name "MyChangeTemplate" `
       -DocumentType "Automation.ChangeTemplate" `
       -Tags "Key=tag-key,Value=tag-value"
   ```

------

   For information about other options you can specify, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/create-document.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-document.html).

   The system returns information like the following.

   ```
   {
      "DocumentDescription":{
         "CreatedDate":1.585061751738E9,
         "DefaultVersion":"1",
         "Description":"Use this template to update an EC2 Linux AMI. Requires one
         approver specified in the template and an approver specified in the request.",
         "DocumentFormat":"JSON",
         "DocumentType":"Automation",
         "DocumentVersion":"1",
         "Hash":"0d3d879b3ca072e03c12638d0255ebd004d2c65bd318f8354fcde820dEXAMPLE",
         "HashType":"Sha256",
         "LatestVersion":"1",
         "Name":"MyChangeTemplate",
         "Owner":"123456789012",
         "Parameters":[
            {
               "DefaultValue":"",
               "Description":"Level one approvers",
               "Name":"LevelOneApprovers",
               "Type":"String"
            },
            {
               "DefaultValue":"",
               "Description":"Level one approver type",
               "Name":"LevelOneApproverType",
               "Type":"String"
            },
      "cloudWatchMonitors": {
         "monitors": [
            "my-cloudwatch-alarm"
         ]
      }
         ],
         "PlatformTypes":[
            "Windows",
            "Linux"
         ],
         "SchemaVersion":"0.3",
         "Status":"Creating",
         "Tags":[
   
         ]
      }
   }
   ```
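The `CreatedDate` value in the response is a Unix epoch timestamp rendered in scientific notation. If you want a human-readable date, you can convert it with a few lines of Python (a minimal sketch, not part of the AWS tooling):

```python
from datetime import datetime, timezone

# CreatedDate from the sample response: epoch seconds in scientific notation
created = 1.585061751738E9
dt = datetime.fromtimestamp(created, tz=timezone.utc)
print(dt.isoformat())  # → 2020-03-24T14:55:51.738000+00:00
```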

The users in your organization or account who have been specified as template reviewers on the **Settings** tab in Change Manager are notified that a new change template is pending their review. 

If an Amazon Simple Notification Service (Amazon SNS) topic has been specified for change templates, notifications are sent when the change template is rejected or approved. If you don't receive notifications related to this change template, you can return to Change Manager later to check on its status.
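Before submitting a template with `create-document`, you can catch malformed JSON locally. The following Python sketch (a hypothetical helper, not part of Systems Manager) parses a template and confirms that the top-level fields used in the example above are present:

```python
import json

# Top-level fields used in the change template example above
REQUIRED_KEYS = {"schemaVersion", "parameters", "executableRunBooks", "mainSteps"}

def check_template(text):
    """Parse template JSON and verify the expected top-level keys exist."""
    doc = json.loads(text)  # raises ValueError on malformed JSON
    missing = REQUIRED_KEYS - doc.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return doc

sample = '{"schemaVersion": "0.3", "parameters": {}, "executableRunBooks": [], "mainSteps": []}'
print(check_template(sample)["schemaVersion"])  # → 0.3
```

Note that this only checks local structure; Systems Manager performs its own validation when the document is created.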

# Reviewing and approving or rejecting change templates


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

If you're specified as a reviewer for change templates in Change Manager, a tool in AWS Systems Manager, you're notified when a new change template, or new version of a change template, is awaiting your review. An Amazon Simple Notification Service (Amazon SNS) topic sends the notifications.

**Note**  
This functionality depends on whether your account has been configured to use an Amazon SNS topic to send change template review notifications. For information about specifying a template reviewer notification topic, see [Task 1: Configuring Change Manager user identity management and template reviewers](change-manager-account-setup.md#cm-configure-account-task-1).

To review the change template, follow the link in your notification, sign in to the AWS Management Console, and follow the steps in this procedure.

**To review and approve or reject a change template**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Manager**.

1. In the **Change templates** section at the bottom of the **Overview** tab, choose the number in **Pending review**.

1. In the **Change templates** list, locate and choose the name of the change template to review.

1. In the summary page, review the proposed content of the change template and do one of the following:
   + To approve the change template, which allows it to be used in change requests, choose **Approve**.
   + To reject the change template, which prevents it from being used in change requests, choose **Reject**.

# Deleting change templates


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

This topic describes how to delete templates that you have created in Change Manager, a tool in Systems Manager. If you are using Change Manager for an organization, this procedure is performed in your delegated administrator account.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Manager**.

1. Choose the **Templates** tab.

1. Choose the name of the template to delete.

1. Choose **Actions, Delete template**.

1. In the confirmation dialog, enter the word **DELETE**, and then choose **Delete**.

# Working with change requests


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

A change request is a request in Change Manager to run an Automation runbook that updates one or more resources in your AWS or on-premises environments. A change request is created using a change template.

When you create a change request in Change Manager, a tool in AWS Systems Manager, one or more approvers in your organization or account must review and approve the request. Without the required approvals, the runbook workflow, which makes the changes you request, isn't permitted to run.

**Topics**
+ [Creating change requests](change-requests-create.md)
+ [Reviewing and approving or rejecting change requests](change-requests-review.md)

# Creating change requests


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

When you create a change request in Change Manager, a tool in AWS Systems Manager, the change template you select typically does the following:
+ Designates approvers for the change request or specifies how many approvals are required
+ Specifies the Amazon Simple Notification Service (Amazon SNS) topic to use to notify approvers about your change request
+ Specifies an Amazon CloudWatch alarm to monitor the runbook workflow for the change request
+ Identifies which Automation runbooks you can choose from to make the requested change

In some cases, a change template might be configured so you specify your own Automation runbook to use, and to specify who should review and approve the request.

**Important**  
If you use Change Manager across an organization, we recommend always making changes from the delegated administrator account. Although you can make changes from other accounts in the organization, those changes won't be reported in or viewable from the delegated administrator account.

**Topics**
+ [About change request approvals](#cm-approvals-requests)
+ [Creating change requests (console)](#change-requests-create-console)
+ [Creating change requests (AWS CLI)](#change-requests-create-cli)

## About change request approvals


Depending on the requirements specified in a change template, change requests that you create from it can require approvals from up to five *levels* before the runbook workflow for the request can occur. For each of those levels, the template creator could specify up to five potential *approvers*. An approver isn't limited to a single user. An approver in this sense can also be an IAM group or IAM role. For IAM groups and IAM roles, one or more users belonging to the group or role can provide approvals toward receiving the total number of approvals required for a change request. Template creators can also specify more approvers than the change template requires.

**Original approval workflows and updated "and/or" approvals**  
For change templates created before January 23, 2023, an approval must be received from each specified approver for the change request to be approved at that level. For example, in the approval level setup shown in the following image, four approvers are specified: two users (John Stiles and Ana Carolina Silva), a user group that contains three members (`GroupOfThree`), and a user role that represents ten users (`RoleOfTen`).

![\[Approval level showing four required per-line approvers.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/Add-approval-1.png)


For the change request to be approved at this level, it must be approved by John Stiles, Ana Carolina Silva, one member of the `GroupOfThree` group, and one member of the `RoleOfTen` role.

For change templates created on or after January 23, 2023, template creators can specify an overall total number of required approvals for each approval level. Those approvals can come from any combination of users, groups, and roles that have been specified as approvers. For example, a change template could require only one approval for a level but specify two individual users, two groups, and one role as potential approvers.

For example, in the approval level area shown in the following image, three approvals are required. The template-specified approvers include two users (John Stiles and Ana Carolina Silva), a user group that contains three members (`GroupOfThree`), and a user role that represents ten users (`RoleOfTen`).

![\[Approval level showing three approvals are required and four specified approvers.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/Add-approval-2.png)


If all three users in the `GroupOfThree` group approve your change request, it is approved for that level. It's not necessary to receive an approval from each user, group, or role. The minimum number of approvals can come from any combination of potential approvers.

When your change request is created, notifications are sent to subscribers of the Amazon SNS topic that has been specified for approval notifications at that level. The change template creator might have specified the notification topic that must be used or allowed you to specify one.

After the minimum number of required approvals is received at one level, notifications are sent to approvers that are subscribed to the Amazon SNS topic for the next level, and so on.

No matter how many approval levels and approvers are specified, a single rejection is enough to prevent the runbook workflow for a change request from running.
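The difference between the two approval models can be sketched in a few lines of Python (illustrative names only; this is not Systems Manager code):

```python
def approved_original(approvals, required_approvers):
    """Pre-January 23, 2023 model: every listed approver must approve."""
    return all(approvals.get(a, 0) >= 1 for a in required_approvers)

def approved_updated(approvals, min_required):
    """Updated model: any combination of approvers can supply the minimum."""
    return sum(approvals.values()) >= min_required

# Three members of GroupOfThree approve; no one else responds.
approvals = {"GroupOfThree": 3}
print(approved_updated(approvals, 3))   # → True: three approvals from any source suffice
print(approved_original(approvals,
    ["John Stiles", "Ana Carolina Silva", "GroupOfThree", "RoleOfTen"]))  # → False
```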

## Creating change requests (console)


The following procedure describes how to create a change request by using the Systems Manager console.

**To create a change request (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Manager**.

1. Choose **Create request**.

1. Search for and select a change template that you want to use for this change request.

1. Choose **Next**.

1. For **Name**, enter a name for the change request that makes its purpose easy to identify, such as **UpdateEC2LinuxAMI-us-east-2**.

1. For **Runbook**, select the runbook you want to use to make your requested change.
**Note**  
If the option to select a runbook isn't available, the change template author has specified which runbook must be used.

1. For **Change request information**, use Markdown to provide additional information about the change request to help reviewers decide whether to approve or reject the change request. The author of the template you're using might have provided instructions or questions for you to answer.
**Note**  
Markdown is a markup language that allows you to add wiki-style descriptions to documents and individual steps within the document. For more information about using Markdown, see [Using Markdown in AWS](https://docs.aws.amazon.com/general/latest/gr/aws-markdown.html).

1. In the **Workflow start time** section, choose one of the following:
   + **Run the operation at a scheduled time** – For **Requested start time**, enter the date and time you propose for running the runbook workflow for this request. For **Estimated end time**, enter the date and time that you expect the runbook workflow to complete. (This time is an estimate only that you're providing for reviewers.)
**Tip**  
Choose **View Change Calendar** to check for any blocking events for the time you specify.
   + **Run the operation as soon as possible after approval** – If the change request is approved, the runbook workflow runs as soon as there is a non-restricted period when changes can be made.

1. In the **Change request approvals** section, do the following:

   1. If **Approval type** options are presented, choose one of the following:
      + **Automatic approval** – The change template you selected is configured to allow change requests to run automatically without review by any approvers. Continue to Step 11.
**Note**  
The permissions specified in the IAM policies that govern your use of Systems Manager must not restrict you from submitting auto-approval change requests in order for them to run automatically.
      + **Specify approvers** – You must add one or more users, groups, or IAM roles to review and approve this change request.
**Note**  
You can choose to specify reviewers even if the permissions specified in the IAM policies that govern your use of Systems Manager allow you to run auto-approval change requests.

   1. Choose **Add approver**, and then select one or more users, groups, or AWS Identity and Access Management (IAM) roles from the lists of available reviewers.
**Note**  
One or more approvers might already be specified. This means that mandatory approvers are already specified in the change template you have selected. These approvers can't be removed from the request. If the **Add approver** button isn't available, the template you have chosen doesn't allow additional reviewers to be added to requests.

      For more information about approvals for change requests, see [About change request approvals](#cm-approvals-requests).

   1. Under **SNS topic to notify approvers**, choose one of the following to specify the Amazon SNS topic in your account to use for sending notifications to the approvers you are adding to this change request.
**Note**  
If the option to specify an Amazon SNS topic isn't available, the change template you selected already specifies the Amazon SNS topic to use.
      + **Enter an SNS Amazon Resource Name (ARN)** – For **Topic ARN**, enter the ARN of an existing Amazon SNS topic. This topic can be in any of your organization's accounts.
      + **Select an existing SNS topic** – For **Target notification topic**, select the ARN of an existing Amazon SNS topic in your current account. (This option isn't available if you haven't yet created any Amazon SNS topics in your current AWS account and AWS Region.)
**Note**  
The Amazon SNS topic you select must be configured to specify the notifications it sends and the subscribers they're sent to. Its access policy must also grant permissions to Systems Manager so Change Manager can send notifications. For information, see [Configuring Amazon SNS topics for Change Manager notifications](change-manager-sns-setup.md). 

   1. Choose **Add notification**.

1. Choose **Next**.

1. For **IAM role**, select an IAM role *in your current account* that has the permissions needed to run the runbooks that are specified for this change request.

   This role is also referred to as the service role, or assume role, for Automation. For more information about this role, see [Setting up Automation](automation-setup.md).

1. In the **Deployment location** section, choose one of the following:
**Note**  
If you're using Change Manager with a single AWS account only and not with an organization set up in AWS Organizations, you don't need to specify a deployment location.
   + **Apply change to this account** – The runbook workflow runs in the current account only. For an organization, this means the delegated administrator account.
   + **Apply change to multiple organizational units (OUs)** – Do the following: 

     1. For **Accounts and organizational units (OUs)**, enter the ID of a member account in your organization, in the format **123456789012**, or the ID of an organizational unit, in the format **ou-o96EXAMPLE**.

     1. (Optional) For **Execution role name**, enter the name of the IAM role *in the target account* or OU that has the permissions needed to run the runbooks that are specified for this change request. All accounts in any OU you specify should use the same name for this role.

     1. (Optional) Choose **Add another target location** for each additional account or OU you want to specify and repeat steps a and b. 

     1. For **Target AWS Region**, select the Region to make the change in, such as `Ohio (us-east-2)` for the US East (Ohio) Region.

     1. Expand **Rate control**. 

        For **Concurrency**, enter a number, then from the list select whether this represents the number or percentage of accounts the runbook workflow can run in at the same time. 

        For **Error threshold**, enter a number, then from the list select whether this represents the number or percentage of accounts where the runbook workflow can fail before the operation is stopped. 

1. In the **Deployment targets** section, do the following:

   1. Choose one of the following:
      + **Single resource** – The change is to be made for just one resource. For example, a single node or a single Amazon Machine Image (AMI), depending on the operation defined in the runbooks for this change request.
      + **Multiple resources** – For **Parameter**, select from the available parameters from the runbooks for this change request. This selection reflects the type of resource being updated.

        For example, if the runbook for this change request is `AWS-RestartEC2Instance`, you might choose `InstanceId`, and then define which instances are updated by selecting from the following:
        + **Specify tags** – Enter a key-value pair that all resources to be updated are tagged with.
        + **Choose a resource group** – Choose the name of the resource group that all resources to be updated belong to.
        + **Specify parameter values** – Identify the resources to update in the **Runbook parameters** section.
        + **Target all instances** – Make the change on all managed nodes in the target locations.

   1. If you chose **Multiple resources**, expand **Rate control**. 

      For **Concurrency**, enter a number, then from the list select whether this represents the number or percentage of targets the runbook workflow can update at the same time. 

      For **Error threshold**, enter a number, then from the list select whether this represents the number or percentage of targets where the update can fail before the operation is stopped. 

1. If you chose **Specify parameter values** to update multiple resources in the previous step: In the **Runbook parameters** section, specify values for the required input parameters. The parameter values you must supply are based on the contents of the Automation runbooks associated with the change template you chose. 

   For example, if the change template uses the `AWS-RestartEC2Instance` runbook, then you must enter one or more instance IDs for the **InstanceId** parameter. Alternatively, choose **Show interactive instance picker** and select available instances one by one. 

1. Choose **Next**.

1. In the **Review and submit** page, double-check the resources and options you have specified for this change request.

   Choose the **Edit** button for any section you want to make changes to.

   When you're satisfied with the change request details, choose **Submit for approval**.

If an Amazon SNS topic has been specified in the change template you chose for the request, notifications are sent when the request is rejected or approved. If you don't receive notifications for the request, you can return to Change Manager to check the status of your request. 

## Creating change requests (AWS CLI)


You can create a change request using the AWS Command Line Interface (AWS CLI) by specifying options and parameters for the change request in a JSON file and using the `--cli-input-json` option to include it in your command.

**To create a change request (AWS CLI)**

1. Install and configure the AWS CLI or the AWS Tools for PowerShell, if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Create a JSON file on your local machine with a name such as `MyChangeRequest.json` and paste the following content into it.

   Replace *placeholders* with values for your change request.
**Note**  
This sample JSON creates a change request using the `AWS-HelloWorldChangeTemplate` change template and the `AWS-HelloWorld` runbook. To help you adapt this sample for your own change requests, see [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_StartChangeRequestExecution.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_StartChangeRequestExecution.html) in the *AWS Systems Manager API Reference* for information about all available parameters.  
For more information about approvals for change requests, see [About change request approvals](#cm-approvals-requests).

   ```
   {
       "ChangeRequestName": "MyChangeRequest",
       "DocumentName": "AWS-HelloWorldChangeTemplate",
       "DocumentVersion": "$DEFAULT",
       "ScheduledTime": "2021-12-30T03:00:00",
       "ScheduledEndTime": "2021-12-30T03:05:00",
       "Tags": [
           {
               "Key": "Purpose",
               "Value": "Testing"
           }
       ],
       "Parameters": {
           "Approver": [
               "JohnDoe"
           ],
           "ApproverType": [
               "IamUser"
           ],
           "ApproverSnsTopicArn": [
               "arn:aws:sns:us-east-2:123456789012:MyNotificationTopic"
           ]
       },
       "Runbooks": [
           {
               "DocumentName": "AWS-HelloWorld",
               "DocumentVersion": "1",
               "MaxConcurrency": "1",
               "MaxErrors": "1",
               "Parameters": {
                   "AutomationAssumeRole": [
                       "arn:aws:iam::123456789012:role/MyChangeManagerAssumeRole"
                   ]
               }
           }
       ],
       "ChangeDetails": "### Document Name: HelloWorldChangeTemplate\n\n## What does this document do?\nThis change template demonstrates the feature set available for creating change templates for Change Manager. This template starts a Runbook workflow for the Automation document called AWS-HelloWorld.\n\n## Input Parameters\n* ApproverSnsTopicArn: (Required) Amazon Simple Notification Service ARN for approvers.\n* Approver: (Required) The name of the approver to send this request to.\n* ApproverType: (Required) The type of reviewer.\n  * Allowed Values: IamUser, IamGroup, IamRole, SSOGroup, SSOUser\n\n## Output Parameters\nThis document has no outputs \n"
   }
   ```

1. In the directory where you created the JSON file, run the following command.

   ```
   aws ssm start-change-request-execution --cli-input-json file://MyChangeRequest.json
   ```

   The system returns information like the following.

   ```
   {
       "AutomationExecutionId": "b3c1357a-5756-4839-8617-2d2a4EXAMPLE"
   }
   ```
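The `ScheduledTime` and `ScheduledEndTime` values use the ISO 8601 format shown in the sample. A quick local sanity check (a sketch, using the timestamps from the sample JSON above) can confirm the window is ordered correctly before you submit:

```python
from datetime import datetime

# Timestamps mirror the sample MyChangeRequest.json above
start = datetime.fromisoformat("2021-12-30T03:00:00")
end = datetime.fromisoformat("2021-12-30T03:05:00")
assert end > start, "ScheduledEndTime must come after ScheduledTime"
print((end - start).total_seconds() / 60)  # → 5.0 (minutes)
```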

# Reviewing and approving or rejecting change requests


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

If you're specified as a reviewer for a change request in Change Manager, a tool in AWS Systems Manager, you're notified through an Amazon Simple Notification Service (Amazon SNS) topic when a new change request is awaiting your review. 

**Note**  
This functionality depends on whether an Amazon SNS topic was specified in the change template for sending review notifications. For information, see [Configuring Amazon SNS topics for Change Manager notifications](change-manager-sns-setup.md). 

To review the change request, you can follow the link in your notification, or sign in to the AWS Management Console directly and follow the steps in this procedure.

**Note**  
If an Amazon SNS topic is assigned for reviewers in a change template, notifications are sent to the topic's subscribers when the change request changes status.  
For more information about approvals for change requests, see [About change request approvals](change-requests-create.md#cm-approvals-requests).

## Reviewing and approving or rejecting change requests (console)


The following procedures describe how to use the Systems Manager console to review and approve or reject change requests.

**To review and approve or reject a single change request**

1. Open the link in the email notification you received and sign in to the AWS Management Console, which directs you to the change request for your review.

1. In the summary page, review the proposed content of the change request.

   To approve the change request, choose **Approve**. In the dialog box, provide any comments you want to add for this approval, and then choose **Approve**. The runbook workflow represented by this request starts to run either when scheduled, or as soon as changes aren't blocked by any restrictions.

   -or-

   To reject the change request, choose **Reject**. In the dialog box, provide any comments you want to add for this rejection, and then choose **Reject**.

**To review and approve or reject change requests in bulk**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Change Manager**.

1. Choose the **Approvals** tab.

1. (Optional) Review the details of requests pending your approval by choosing the name of each request, and then return to the **Approvals** tab.

1. Select the check box of each change request that you want to approve.

   -or-

   Select the check box of each change request that you want to reject.

1. In the dialog box, provide any comments you want to add for this approval or rejection.

1. Depending on whether you're approving or rejecting the selected change requests, choose **Approve** or **Reject**.

## Reviewing and approving or rejecting a change request (command line)


The following procedure describes how to use the AWS Command Line Interface (AWS CLI) on Linux, macOS, or Windows Server to review and approve or reject a change request.

**To review and approve or reject a change request**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Create a JSON file on your local machine that specifies the parameters for your AWS CLI call. 

   ```
   {
     "OpsItemFilters": 
     [
       {
         "Key": "OpsItemType",
         "Values": ["/aws/changerequest"],
         "Operator": "Equal"
       }
     ],
     "MaxResults": number
   }
   ```

   You can filter the results for a specific approver by specifying the approver's Amazon Resource Name (ARN) in the JSON file. Here is an example.

   ```
   {
     "OpsItemFilters": 
     [
       {
         "Key": "OpsItemType",
         "Values": ["/aws/changerequest"],
         "Operator": "Equal"
       },
       {
         "Key": "ChangeRequestByApproverArn",
         "Values": ["arn:aws:iam::account-id:user/user-name"],
         "Operator": "Equal"
       }
     ],
     "MaxResults": number
   }
   ```
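The `number` placeholder for `MaxResults` must be a JSON integer, not a quoted string. If you prefer, you can generate the input file programmatically; the following Python sketch (the approver ARN is illustrative) writes an equivalent filter file:

```python
import json

filters = {
    "OpsItemFilters": [
        {"Key": "OpsItemType", "Values": ["/aws/changerequest"], "Operator": "Equal"},
        {
            "Key": "ChangeRequestByApproverArn",
            "Values": ["arn:aws:iam::123456789012:user/JohnDoe"],  # illustrative ARN
            "Operator": "Equal",
        },
    ],
    "MaxResults": 10,  # must be a JSON integer, not "10"
}

# Write the file that --cli-input-json reads
with open("filename.json", "w") as f:
    json.dump(filters, f, indent=2)
```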

1. Run the following command to view the maximum number of change requests you specified in the JSON file.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-ops-items \
       --cli-input-json file://filename.json
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-ops-items ^
       --cli-input-json file://filename.json
   ```

------

1. Run the following command to approve or reject a change request.

------
#### [ Linux & macOS ]

   ```
   aws ssm send-automation-signal \
       --automation-execution-id ID \
       --signal-type Approve_or_Reject \
       --payload Comment="message"
   ```

------
#### [ Windows ]

   ```
   aws ssm send-automation-signal ^
       --automation-execution-id ID ^
       --signal-type Approve_or_Reject ^
       --payload Comment="message"
   ```

------

   For `--signal-type`, specify either `Approve` or `Reject`. If an Amazon SNS topic has been specified in the change template you chose for the request, notifications are sent when the request is rejected or approved. If you don't receive notifications for the request, you can return to Change Manager to check the status of your request. For information about other options when using this command, see [send-automation-signal](https://docs.aws.amazon.com/cli/latest/reference/ssm/send-automation-signal.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.

# Reviewing change request details, tasks, and timelines (console)


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

You can view information about a change request, including requests for which changes have already been processed, in the dashboard of Change Manager, a tool in AWS Systems Manager. These details include a link to the Automation operation that runs the runbooks that make the change. An Automation execution ID is generated when the request is created, but the process doesn't run until all approvals have been given and no restrictions are in place to block the change.

**To review change request details, tasks, and timelines**

1. In the navigation pane, choose **Change Manager**.

1. Choose the **Requests** tab.

1. In the **Change requests** section, search for the change request you want to review. 

   You can use the **Create date range** options to limit results to a specific time period.

   You can filter requests by the following properties:
   + `Status`
   + `Request ID`
   + `Approver`
   + `Requester`

   For example, to view details about all change requests that have completed successfully in the past 24 hours, do the following:

   1. For **Create date range**, choose **1d**.

   1. In the search box, select **Status, CompletedWithSuccess**. 

   1. In the results, choose the name of the successfully completed change request whose results you want to review.

1. View information about the change request on the following tabs:
   + **Request details** – View basic details about the change request, including the requester, the change template, and the Automation runbooks selected for the change. You can also follow a link to the Automation operation details and view information about any runbook parameters specified in the request, Amazon CloudWatch alarms assigned to the change request, and approvals and comments provided for the request.
   + **Task** – View information about the task in the change, including task status for completed change requests, the targeted resources, the steps in the associated Automation runbooks, and concurrency and error threshold details.
   + **Timeline** – View a summary of all events associated with the change request, listed by date and time. The summary indicates when the change request was created, actions by assigned approvers, a note of when approved change requests are scheduled to run, runbook workflow details, and status changes for the overall change process and each step in the runbook.
   + **Associated events** – View auditable details about change requests that are recorded in [AWS CloudTrail Lake](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html). Details include which API actions were run, the request parameters included for those actions, the user account that ran the action, the resources updated during the process, and more.

     When you enable CloudTrail Lake event tracking, CloudTrail Lake creates an event data store for events related to your change requests. The event details are available for the account or organization where the change request ran. You can turn on CloudTrail Lake event tracking from any change request in your account or organization. For information about enabling CloudTrail Lake integration and creating an event data store, see [Monitoring your change request events](monitoring-change-request-events.md).
**Note**  
There is a charge to use **CloudTrail Lake**. For details, see [AWS CloudTrail pricing](https://aws.amazon.com/cloudtrail/pricing/).

# Viewing aggregated counts of change requests (command line)


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

You can view aggregated counts of change requests in Change Manager, a tool in AWS Systems Manager, by using the [GetOpsSummary](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetOpsSummary.html) API operation. This API operation can return counts for a single AWS account in a single AWS Region or for multiple accounts and multiple Regions.

**Note**  
If you want to view aggregated counts of change requests for multiple AWS accounts and multiple AWS Regions, you must set up and configure a resource data sync. For more information, see [Creating a resource data sync for Inventory](inventory-create-resource-data-sync.md).

The following procedure describes how to use the AWS Command Line Interface (AWS CLI) (on Linux, macOS, or Windows Server) to view aggregated counts of change requests. 

**To view aggregated counts of change requests**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run one of the following commands. 

   **Single account and Region**

   This command returns a count of all change requests for the AWS account and AWS Region for which your AWS CLI session is configured.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-ops-summary \
   --filters Key=AWS:OpsItem.OpsItemType,Values="/aws/changerequests",Type=Equal \
   --aggregators AggregatorType=count,AttributeName=Status,TypeName=AWS:OpsItem
   ```

------
#### [ Windows ]

   ```
   aws ssm get-ops-summary ^
   --filters Key=AWS:OpsItem.OpsItemType,Values="/aws/changerequests",Type=Equal ^
   --aggregators AggregatorType=count,AttributeName=Status,TypeName=AWS:OpsItem
   ```

------

   The call returns information like the following.

   ```
   {
       "Entities": [
           {
               "Data": {
                   "AWS:OpsItem": {
                       "Content": [
                           {
                               "Count": "38",
                               "Status": "Open"
                           }
                       ]
                   }
               }
           }
       ]
   }
   ```
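   Because `Count` values are returned as strings, totaling them requires a numeric conversion. The following Python sketch, assuming a response shaped like the sample above, sums the counts by status:

   ```python
   # Sum change request counts by status from a GetOpsSummary response.
   # Count values arrive as strings and must be converted to integers.
   # The sample response mirrors the output shown above.
   from collections import Counter

   response = {
       "Entities": [
           {"Data": {"AWS:OpsItem": {"Content": [
               {"Count": "38", "Status": "Open"}
           ]}}}
       ]
   }

   totals = Counter()
   for entity in response["Entities"]:
       for row in entity["Data"]["AWS:OpsItem"]["Content"]:
           totals[row["Status"]] += int(row["Count"])

   print(dict(totals))  # {'Open': 38}
   ```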

   **Multiple accounts and/or Regions**

   This command returns a count of all change requests for the AWS accounts and AWS Regions specified in the resource data sync.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-ops-summary \
       --sync-name resource_data_sync_name \
       --filters Key=AWS:OpsItem.OpsItemType,Values="/aws/changerequests",Type=Equal \
       --aggregators AggregatorType=count,AttributeName=Status,TypeName=AWS:OpsItem
   ```

------
#### [ Windows ]

   ```
   aws ssm get-ops-summary ^
       --sync-name resource_data_sync_name ^
       --filters Key=AWS:OpsItem.OpsItemType,Values="/aws/changerequests",Type=Equal ^
       --aggregators AggregatorType=count,AttributeName=Status,TypeName=AWS:OpsItem
   ```

------

   The call returns information like the following.

   ```
   {
       "Entities": [
           {
               "Data": {
                   "AWS:OpsItem": {
                       "Content": [
                           {
                               "Count": "43",
                               "Status": "Open"
                           },
                           {
                               "Count": "2",
                               "Status": "Resolved"
                           }
                       ]
                   }
               }
           }
       ]
   }
   ```

   **Multiple accounts and a specific Region**

   This command returns a count of all change requests for the AWS accounts specified in the resource data sync. However, it only returns data from the Region specified in the command.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-ops-summary \
       --sync-name resource_data_sync_name \
       --filters Key=AWS:OpsItem.SourceRegion,Values='Region',Type=Equal Key=AWS:OpsItem.OpsItemType,Values="/aws/changerequests",Type=Equal \
       --aggregators AggregatorType=count,AttributeName=Status,TypeName=AWS:OpsItem
   ```

------
#### [ Windows ]

   ```
   aws ssm get-ops-summary ^
       --sync-name resource_data_sync_name ^
       --filters Key=AWS:OpsItem.SourceRegion,Values='Region',Type=Equal Key=AWS:OpsItem.OpsItemType,Values="/aws/changerequests",Type=Equal ^
       --aggregators AggregatorType=count,AttributeName=Status,TypeName=AWS:OpsItem
   ```

------

   **Multiple accounts and Regions with output grouped by Region**

   This command returns a count of all change requests for the AWS accounts and AWS Regions specified in the resource data sync. The output displays count information per Region.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-ops-summary \
       --sync-name resource_data_sync_name \
       --filters Key=AWS:OpsItem.OpsItemType,Values="/aws/changerequests",Type=Equal \
       --aggregators '[{"AggregatorType":"count","TypeName":"AWS:OpsItem","AttributeName":"Status","Aggregators":[{"AggregatorType":"count","TypeName":"AWS:OpsItem","AttributeName":"SourceRegion"}]}]'
   ```

------
#### [ Windows ]

   ```
   aws ssm get-ops-summary ^
       --sync-name resource_data_sync_name ^
       --filters Key=AWS:OpsItem.OpsItemType,Values="/aws/changerequests",Type=Equal ^
       --aggregators '[{"AggregatorType":"count","TypeName":"AWS:OpsItem","AttributeName":"Status","Aggregators":[{"AggregatorType":"count","TypeName":"AWS:OpsItem","AttributeName":"SourceRegion"}]}]'
   ```

------

   The call returns information like the following.

   ```
   {
       "Entities": [
           {
               "Data": {
                   "AWS:OpsItem": {
                       "Content": [
                           {
                               "Count": "38",
                               "SourceRegion": "us-east-1",
                               "Status": "Open"
                           },
                           {
                               "Count": "4",
                               "SourceRegion": "us-east-2",
                               "Status": "Open"
                           },
                           {
                               "Count": "1",
                               "SourceRegion": "us-west-1",
                               "Status": "Open"
                           },
                           {
                               "Count": "2",
                               "SourceRegion": "us-east-2",
                               "Status": "Resolved"
                           }
                       ]
                   }
               }
           }
       ]
   }
   ```

   **Multiple accounts and Regions with output grouped by accounts and Regions**

   This command returns a count of all change requests for the AWS accounts and AWS Regions specified in the resource data sync. The output groups the count information by accounts and Regions.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-ops-summary \
       --sync-name resource_data_sync_name \
       --filters Key=AWS:OpsItem.OpsItemType,Values="/aws/changerequests",Type=Equal \
       --aggregators '[{"AggregatorType":"count","TypeName":"AWS:OpsItem","AttributeName":"Status","Aggregators":[{"AggregatorType":"count","TypeName":"AWS:OpsItem","AttributeName":"SourceAccountId","Aggregators":[{"AggregatorType":"count","TypeName":"AWS:OpsItem","AttributeName":"SourceRegion"}]}]}]'
   ```

------
#### [ Windows ]

   ```
   aws ssm get-ops-summary ^
       --sync-name resource_data_sync_name ^
       --filters Key=AWS:OpsItem.OpsItemType,Values="/aws/changerequests",Type=Equal ^
       --aggregators '[{"AggregatorType":"count","TypeName":"AWS:OpsItem","AttributeName":"Status","Aggregators":[{"AggregatorType":"count","TypeName":"AWS:OpsItem","AttributeName":"SourceAccountId","Aggregators":[{"AggregatorType":"count","TypeName":"AWS:OpsItem","AttributeName":"SourceRegion"}]}]}]'
   ```

------

   The call returns information like the following.

   ```
   {
       "Entities": [
           {
               "Data": {
                   "AWS:OpsItem": {
                       "Content": [
                           {
                               "Count": "38",
                               "SourceAccountId": "123456789012",
                               "SourceRegion": "us-east-1",
                               "Status": "Open"
                           },
                           {
                               "Count": "4",
                               "SourceAccountId": "111122223333",
                               "SourceRegion": "us-east-2",
                               "Status": "Open"
                           },
                           {
                               "Count": "1",
                               "SourceAccountId": "111122223333",
                               "SourceRegion": "us-west-1",
                               "Status": "Open"
                           },
                           {
                               "Count": "2",
                               "SourceAccountId": "444455556666",
                               "SourceRegion": "us-east-2",
                               "Status": "Resolved"
                           },
                           {
                               "Count": "1",
                               "SourceAccountId": "222222222222",
                               "SourceRegion": "us-east-1",
                               "Status": "Open"
                           }
                       ]
                   }
               }
           }
       ]
   }
   ```
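   To see totals per account rather than per status, you can post-process the response. The following Python sketch, based on the sample output above, groups the counts by `SourceAccountId`:

   ```python
   # Group change request counts by account from a GetOpsSummary
   # response aggregated by status, account, and Region. The rows
   # mirror the Content array in the sample output above.
   from collections import defaultdict

   content = [
       {"Count": "38", "SourceAccountId": "123456789012", "SourceRegion": "us-east-1", "Status": "Open"},
       {"Count": "4", "SourceAccountId": "111122223333", "SourceRegion": "us-east-2", "Status": "Open"},
       {"Count": "1", "SourceAccountId": "111122223333", "SourceRegion": "us-west-1", "Status": "Open"},
       {"Count": "2", "SourceAccountId": "444455556666", "SourceRegion": "us-east-2", "Status": "Resolved"},
       {"Count": "1", "SourceAccountId": "222222222222", "SourceRegion": "us-east-1", "Status": "Open"},
   ]

   per_account = defaultdict(int)
   for row in content:
       per_account[row["SourceAccountId"]] += int(row["Count"])

   for account, count in sorted(per_account.items()):
       print(account, count)
   ```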

# Auditing and logging Change Manager activity


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

You can audit activity in Change Manager, a tool in AWS Systems Manager, by using Amazon CloudWatch and AWS CloudTrail alarms.

For more information about auditing and logging options for Systems Manager, see [Logging and monitoring in AWS Systems Manager](monitoring.md).

## Audit Change Manager activity using CloudWatch alarms


You can configure and assign a CloudWatch alarm to a change template. If any conditions defined in the alarm are met, the actions specified for the alarm are taken. In the alarm configuration, you can specify an Amazon Simple Notification Service (Amazon SNS) topic to notify when an alarm condition is met. 

For information about creating a Change Manager template, see [Working with change templates](change-templates.md).

For information about creating CloudWatch alarms, see [Using CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) in the *Amazon CloudWatch User Guide*.

## Audit Change Manager activity using CloudTrail


CloudTrail captures API calls made in the Systems Manager console, the AWS Command Line Interface (AWS CLI), and the Systems Manager SDK. You can view the information in the CloudTrail console or in an Amazon Simple Storage Service (Amazon S3) bucket, where it's stored. One bucket is used for all CloudTrail logs for your account.

Logs of Change Manager actions show change template document creation, change template and change request approvals and rejections, activity generated by Automation runbooks, and more. For more information about viewing and using CloudTrail logs of Systems Manager activity, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md).

# Troubleshooting Change Manager


**Change Manager availability change**  
AWS Systems Manager Change Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Change Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Change Manager availability change](https://docs.aws.amazon.com/systems-manager/latest/userguide/change-manager-availability-change.html). 

Use the following information to help you troubleshoot problems with Change Manager, a tool in AWS Systems Manager.

**Topics**
+ ["Group *GUID* not found" error during change request approvals when using Active Directory groups](#change-manager-troubleshooting-sso)

## "Group *GUID* not found" error during change request approvals when using Active Directory groups


**Problem**: When AWS IAM Identity Center (IAM Identity Center) is used for user identity management, a member of an Active Directory group who is granted approval permissions in Change Manager receives a "not authorized" or "group not found" error.

**Solution**: When you select Active Directory groups in IAM Identity Center for access to the AWS Management Console, the system schedules a periodic synchronization that copies information from those Active Directory groups into IAM Identity Center. This process must complete before users authorized through Active Directory group membership can successfully approve a request. For more information, see [Connect to your Microsoft AD directory](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-identity-source-ad.html) in the *AWS IAM Identity Center User Guide*.

# AWS Systems Manager Documents

An AWS Systems Manager document (SSM document) defines the actions that Systems Manager performs on your managed instances. Systems Manager includes more than 100 pre-configured documents that you can use by specifying parameters at runtime. You can find pre-configured documents in the Systems Manager Documents console by choosing the **Owned by Amazon** tab, or by specifying Amazon for the `Owner` filter when calling the `ListDocuments` API operation. Documents use JavaScript Object Notation (JSON) or YAML, and they include steps and parameters that you specify. 

For enhanced security, as of July 14, 2025, SSM documents support environment variable interpolation when processing parameters. This feature, available in schema version 2.2 with SSM Agent version 3.3.2746.0 or later, helps prevent command injection attacks.

To get started with SSM documents, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/documents). In the navigation pane, choose **Documents**.

**Important**  
In Systems Manager, an *Amazon-owned* SSM document is a document created and managed by Amazon Web Services itself. *Amazon-owned* documents include a prefix like `AWS-*` in the document name. The owner of the document is considered to be Amazon, not a specific user account within AWS. These documents are publicly available for all to use.

## How can the Documents tool benefit my organization?


Documents, a tool in AWS Systems Manager, offers these benefits:
+ **Document categories**

  To help you find the documents you need, choose a category depending on the type of document you're searching for. To broaden your search, you can choose multiple categories of the same document type. Choosing categories of different document types is not supported. Categories are only supported for documents owned by Amazon.
+  **Document versions** 

  You can create and save different versions of documents. You can then specify a default version for each document. The default version of a document can be updated to a newer version or reverted to an older version of the document. When you change the content of a document, Systems Manager automatically increments the version of the document. You can retrieve or use any version of a document by specifying the document version in the console, AWS Command Line Interface (AWS CLI) commands, or API calls.
+  **Customize documents for your needs** 

  If you want to customize the steps and actions in a document, you can create your own. The system stores the document with your AWS account in the AWS Region you create it in. For more information about how to create an SSM document, see [Creating SSM document content](documents-creating-content.md).
+  **Tag documents** 

  You can tag your documents to help you quickly identify one or more documents based on the tags you've assigned to them. For example, you can tag documents for specific environments, departments, users, groups, or periods. You can also restrict access to documents by creating an AWS Identity and Access Management (IAM) policy that specifies the tags that a user or group can access.
+  **Share documents** 

  You can make your documents public or share them with specific AWS accounts in the same AWS Region. Sharing documents between accounts can be useful if, for example, you want all of the Amazon Elastic Compute Cloud (Amazon EC2) instances that you supply to customers or employees to have the same configuration. In addition to keeping applications or patches on the instances up to date, you might want to restrict customer instances from certain activities. Or you might want to ensure that the instances used by employee accounts throughout your organization are granted access to specific internal resources. For more information, see [Sharing SSM documents](documents-ssm-sharing.md).

## Who should use Documents?

+ Any AWS customer who wants to use Systems Manager tools to improve their operational efficiency at scale, reduce errors associated with manual intervention, and reduce time to resolution of common issues.
+ Infrastructure experts who want to automate deployment and configuration tasks.
+ Administrators who want to reliably resolve common issues, improve troubleshooting efficiency, and reduce repetitive operations.
+ Users who want to automate a task they normally perform manually.

## What are the types of SSM documents?


The following table describes the different types of SSM documents and their uses.


****  

| Type | Use with | Details | 
| --- | --- | --- | 
|  ApplicationConfiguration ApplicationConfigurationSchema  |   [AWS AppConfig](https://docs.aws.amazon.com/appconfig/latest/userguide/what-is-appconfig.html)   |  AWS AppConfig, a tool in AWS Systems Manager, enables you to create, manage, and quickly deploy application configurations. You can store configuration data in an SSM document by creating a document that uses the `ApplicationConfiguration` document type. For more information, see [Freeform configurations](https://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-creating-configuration-and-profile.html#free-form-configurations) in the *AWS AppConfig User Guide*. If you create a configuration in an SSM document, then you must specify a corresponding JSON Schema. The schema uses the `ApplicationConfigurationSchema` document type and, like a set of rules, defines the allowable properties for each application configuration setting. For more information, see [About validators](https://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-creating-configuration-and-profile-validators.html) in the *AWS AppConfig User Guide*.  | 
|  Automation runbook  |   [Automation](systems-manager-automation.md)   [State Manager](systems-manager-state.md)   [Maintenance Windows](maintenance-windows.md)   |  Use Automation runbooks when performing common maintenance and deployment tasks such as creating or updating an Amazon Machine Image (AMI). State Manager uses Automation runbooks to apply a configuration. These actions can be run on one or more targets at any point during the lifecycle of an instance. Maintenance Windows uses Automation runbooks to perform common maintenance and deployment tasks based on the specified schedule. All Automation runbooks that are supported for Linux-based operating systems are also supported on EC2 instances for macOS.  | 
|  Change Calendar document  |   [Change Calendar](systems-manager-change-calendar.md)   |  Change Calendar, a tool in AWS Systems Manager, uses the `ChangeCalendar` document type. A Change Calendar document stores a calendar entry and associated events that can allow or prevent Automation actions from changing your environment. In Change Calendar, a document stores [iCalendar 2.0](https://icalendar.org/) data in plaintext format. Change Calendar isn't supported on EC2 instances for macOS.  | 
|  AWS CloudFormation template  |   [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)   |  AWS CloudFormation templates describe the resources that you want to provision in your CloudFormation stacks. By storing CloudFormation templates as Systems Manager documents, you can benefit from Systems Manager document features. These include creating and comparing multiple versions of your template, and sharing your template with other accounts in the same AWS Region. You can create and edit CloudFormation templates and stacks by using Application Manager, a tool in Systems Manager. For more information, see [Working with CloudFormation templates and stacks in Application Manager](application-manager-working-stacks.md).  | 
|  Command document  |   [Run Command](run-command.md)   [State Manager](systems-manager-state.md)   [Maintenance Windows](maintenance-windows.md)   |  Run Command, a tool in AWS Systems Manager, uses Command documents to run commands. State Manager, a tool in AWS Systems Manager, uses command documents to apply a configuration. These actions can be run on one or more targets at any point during the lifecycle of an instance. Maintenance Windows, a tool in AWS Systems Manager, uses Command documents to apply a configuration based on the specified schedule. Most Command documents are supported on all Linux and Windows Server operating systems supported by Systems Manager. The following Command documents are supported on EC2 instances for macOS: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/documents.html)  | 
|  AWS Config conformance pack template  |   [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html)   |  AWS Config conformance pack templates are YAML-formatted documents used to create conformance packs that contain a list of AWS Config managed or custom rules and remediation actions. For more information, see [Conformance Packs](https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html).  | 
|  Package document  |   [Distributor](distributor.md)   |  In Distributor, a tool in AWS Systems Manager, a package is represented by an SSM document. A package document includes attached ZIP archive files that contain software or assets to install on managed instances. Creating a package in Distributor creates the package document. Distributor isn't supported on Oracle Linux and macOS managed instances.  | 
|  Policy document  |   [State Manager](systems-manager-state.md)   |  Inventory, a tool in AWS Systems Manager, uses the `AWS-GatherSoftwareInventory` Policy document with a State Manager association to collect inventory data from managed instances. When creating your own SSM documents, Automation runbooks and Command documents are the preferred method for enforcing a policy on a managed instance. Systems Manager Inventory and the `AWS-GatherSoftwareInventory` Policy document are supported on all operating systems supported by Systems Manager.  | 
|  Post-incident analysis template  |   [Incident Manager post-incident analysis](https://docs.aws.amazon.com/incident-manager/latest/userguide/analysis.html)   |  Incident Manager uses the post-incident analysis template to create an analysis based on AWS operations management best practices. Use the template to create an analysis that your team can use to identify improvements to your incident response.   | 
|  Session document  |   [Session Manager](session-manager.md)   |  Session Manager, a tool in AWS Systems Manager, uses Session documents to determine which type of session to start, such as a port forwarding session, a session to run an interactive command, or a session to create an SSH tunnel. Session documents are supported on all Linux and Windows Server operating systems supported by Systems Manager. The following Session documents are supported on EC2 instances for macOS: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/documents.html)  | 

**SSM document quotas**  
For information about SSM document quotas, see [Systems Manager service quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#limits_ssm) in the *Amazon Web Services General Reference*.

**Topics**
+ [How can the Documents tool benefit my organization?](#ssm-docs-benefits)
+ [Who should use Documents?](#documents-who)
+ [What are the types of SSM documents?](#what-are-document-types)
+ [Document components](documents-components.md)
+ [Creating SSM document content](documents-creating-content.md)
+ [Working with documents](documents-using.md)
+ [Troubleshooting parameter handling issues](parameter-troubleshooting.md)

# Document components


This section includes information about the components that make up SSM documents.

**Topics**
+ [Schemas, features, and examples](documents-schemas-features.md)
+ [Data elements and parameters](documents-syntax-data-elements-parameters.md)
+ [Command document plugin reference](documents-command-ssm-plugin-reference.md)

# Schemas, features, and examples


AWS Systems Manager (SSM) documents use the following schema versions.
+ Documents of type `Command` can use schema versions 1.2, 2.0, and 2.2. If you use schema version 1.2 documents, we recommend that you create documents that use schema version 2.2 instead.
+ Documents of type `Policy` must use schema version 2.0 or later.
+ Documents of type `Automation` must use schema version 0.3.
+ Documents of type `Session` must use schema version 1.0.
+ You can create documents in JSON or YAML.

For more information about `Session` document schema, see [Session document schema](session-manager-schema.md).
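The version rules above can be sketched as a simple lookup. The following is an illustrative model only, not an AWS API; the mapping is copied from the list above:

```python
# Illustrative mapping of SSM document types to the schema versions they
# support, per the rules listed above. Not an official AWS API.
SUPPORTED_SCHEMAS = {
    "Command": {"1.2", "2.0", "2.2"},
    "Policy": {"2.0", "2.2"},  # schema version 2.0 or later
    "Automation": {"0.3"},
    "Session": {"1.0"},
}

def schema_supported(doc_type: str, schema_version: str) -> bool:
    """Return True if the document type supports the given schema version."""
    return schema_version in SUPPORTED_SCHEMAS.get(doc_type, set())
```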

By using the latest schema version for `Command` and `Policy` documents, you can take advantage of the following features.


**Schema version 2.2 document features**  

| Feature | Details | 
| --- | --- | 
|  Document editing  |  Documents can now be updated. With version 1.2, any update to a document required that you save it with a different name.  | 
|  Automatic versioning  |  Any update to a document creates a new version. This isn't a schema version, but a version of the document.  | 
|  Default version  |  If you have multiple versions of a document, you can specify which version is the default document.  | 
|  Sequencing  |  Plugins or *steps* in a document run in the order that you specified.  | 
|  Cross-platform support  |  Cross-platform support allows you to specify different operating systems for different plugins within the same SSM document. Cross-platform support uses the `precondition` parameter within a step.   | 
| Parameter interpolation | Interpolation means to insert or substitute a variable value into a string. Think of it as filling in a blank space with actual values before the string is used. In the context of SSM documents, parameter interpolation allows string parameters to be interpolated into environment variables before command execution, providing better security against command injections. When set to `ENV_VAR`, the agent creates an environment variable named `SSM_parameter-name` that contains the parameter's value. | 

**Note**  
You must keep AWS Systems Manager Agent (SSM Agent) on your instances updated with the latest version to use new Systems Manager features and SSM document features. For more information, see [Updating the SSM Agent using Run Command](run-command-tutorial-update-software.md#rc-console-agentexample).

The following table lists the differences between major schema versions.


****  

| Version 1.2 | Version 2.2 (latest version) | Details | 
| --- | --- | --- | 
|  runtimeConfig  |  mainSteps  |  In version 2.2, the `mainSteps` section replaces `runtimeConfig`. The `mainSteps` section allows Systems Manager to run steps in sequence.  | 
|  properties  |  inputs  |  In version 2.2, the `inputs` section replaces the `properties` section. The `inputs` section accepts parameters for steps.  | 
|  commands  |  runCommand  |  In version 2.2, the `inputs` section takes the `runCommand` parameter instead of the `commands` parameter.  | 
|  id  |  action  |  In version 2.2, `action` replaces `id`. This is just a name change.  | 
|  not applicable  |  name  |  In version 2.2, `name` is any user-defined name for a step.  | 
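
The renames in the table above can be sketched as a transformation. The following function is an assumption-laden illustration, not an official migration tool (the step names it assigns are user-supplied or generated placeholders):

```python
# Illustrative sketch of the documented 1.2 -> 2.2 renames:
# runtimeConfig -> mainSteps, properties -> inputs, commands -> runCommand,
# id -> action, plus a user-defined "name" per step. Not an official tool.

def convert_1_2_to_2_2(doc_1_2, step_names=None):
    """Map a version 1.2 document structure to its 2.2 equivalent."""
    main_steps = []
    for i, (plugin, config) in enumerate(doc_1_2.get("runtimeConfig", {}).items()):
        for props in config.get("properties", []):
            # "inputs" replaces "properties"; "action" replaces "id"
            inputs = {k: v for k, v in props.items() if k != "id"}
            if "commands" in inputs:
                # "runCommand" replaces "commands" inside the step inputs
                inputs["runCommand"] = inputs.pop("commands")
            main_steps.append({
                "action": plugin,
                "name": (step_names or {}).get(plugin, f"step{i}"),
                "inputs": inputs,
            })
    return {
        "schemaVersion": "2.2",
        "description": doc_1_2.get("description", ""),
        "parameters": doc_1_2.get("parameters", {}),
        "mainSteps": main_steps,  # replaces "runtimeConfig"
    }
```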

**Using the precondition parameter**  
With schema version 2.2 or later, you can use the `precondition` parameter to specify the target operating system for each plugin or to validate input parameters you've defined in your SSM document. The `precondition` parameter supports referencing your SSM document's input parameters and the `platformType` variable, which uses the values `Linux`, `MacOS`, and `Windows`. Only the `StringEquals` operator is supported.

For documents that use schema version 2.2 or later, if `precondition` isn't specified, each plugin is either run or skipped based on the plugin’s compatibility with the operating system. Plugin compatibility with the operating system is evaluated before the `precondition`. For documents that use schema 2.0 or earlier, incompatible plugins throw an error.

For example, in a schema version 2.2 document, if `precondition` isn't specified and the `aws:runShellScript` plugin is listed, the step runs on Linux instances, but the system skips it on Windows Server instances because `aws:runShellScript` isn't compatible with Windows Server. However, for a schema version 2.0 document, if you specify the `aws:runShellScript` plugin and then run the document on a Windows Server instance, the execution fails. You can see an example of the `precondition` parameter in an SSM document later in this section.
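To make the evaluation order concrete, here is a simplified model of the schema 2.2 step-selection logic described above. This is an illustrative sketch with an assumed plugin compatibility table, not the SSM Agent's actual implementation:

```python
# Simplified model (illustration only, not agent source code) of how a
# schema 2.2 step is selected: plugin/OS compatibility is evaluated first,
# then the step's StringEquals precondition, if one is present.

PLUGIN_PLATFORMS = {  # assumed subset of plugin compatibility
    "aws:runShellScript": {"Linux", "MacOS"},
    "aws:runPowerShellScript": {"Windows"},
}

def should_run_step(step: dict, platform_type: str) -> bool:
    # 1. Compatibility with the operating system is checked before the
    #    precondition; incompatible plugins are skipped, not failed.
    if platform_type not in PLUGIN_PLATFORMS.get(step["action"], set()):
        return False
    # 2. Only the StringEquals operator is supported in preconditions.
    precondition = step.get("precondition")
    if precondition is None:
        return True
    left, right = precondition["StringEquals"]

    def resolve(value: str) -> str:
        # "platformType" resolves to the target's operating system family.
        return platform_type if value == "platformType" else value

    return resolve(left) == resolve(right)
```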

## Schema version 2.2


**Top-level elements**  
The following example shows the top-level elements of an SSM document using schema version 2.2.

------
#### [ YAML ]

```
---
schemaVersion: "2.2"
description: A description of the document.
parameters:
  parameter 1:
    property 1: "value"
    property 2: "value"
  parameter 2:
    property 1: "value"
    property 2: "value"
mainSteps:
  - action: Plugin name
    name: A name for the step.
    inputs:
      input 1: "value"
      input 2: "value"
      input 3: "{{ parameter 1 }}"
```

------
#### [ JSON ]

```
{
   "schemaVersion": "2.2",
   "description": "A description of the document.",
   "parameters": {
       "parameter 1": {
           "property 1": "value",
           "property 2": "value"
        },
        "parameter 2":{
           "property 1": "value",
           "property 2": "value"
        } 
    },
   "mainSteps": [
      {
         "action": "Plugin name",
         "name": "A name for the step.",
         "inputs": {
            "input 1": "value",
            "input 2": "value",
            "input 3": "{{ parameter 1 }}"
         }
      }
   ]
}
```

------

**Schema version 2.2 example**  
The following example uses the `aws:runPowerShellScript` plugin to run a PowerShell command on the target instances.

------
#### [ YAML ]

```
---
schemaVersion: "2.2"
description: "Example document"
parameters:
  Message:
    type: "String"
    description: "Example parameter"
    default: "Hello World"
    allowedValues: 
    - "Hello World"
mainSteps:
  - action: "aws:runPowerShellScript"
    name: "example"
    inputs:
      timeoutSeconds: '60'
      runCommand:
      - "Write-Output {{Message}}"
```

------
#### [ JSON ]

```
{
   "schemaVersion": "2.2",
   "description": "Example document",
   "parameters": {
      "Message": {
         "type": "String",
         "description": "Example parameter",
         "default": "Hello World",
         "allowedValues": ["Hello World"]
      }
   },
   "mainSteps": [
      {
         "action": "aws:runPowerShellScript",
         "name": "example",
         "inputs": {
            "timeoutSeconds": "60",
            "runCommand": [
               "Write-Output {{Message}}"
            ]
         }
      }
   ]
}
```

------

**Schema version 2.2 precondition parameter examples**  
Schema version 2.2 provides cross-platform support. This means that within a single SSM document you can specify different operating systems for different plugins. Cross-platform support uses the `precondition` parameter within a step, as shown in the following example. You can also use the `precondition` parameter to validate input parameters you've defined in your SSM document. You can see this in the second of the following examples.

------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: cross-platform sample
mainSteps:
- action: aws:runPowerShellScript
  name: PatchWindows
  precondition:
    StringEquals:
    - platformType
    - Windows
  inputs:
    runCommand:
    - cmds
- action: aws:runShellScript
  name: PatchLinux
  precondition:
    StringEquals:
    - platformType
    - Linux
  inputs:
    runCommand:
    - cmds
```

------
#### [ JSON ]

```
{
   "schemaVersion": "2.2",
   "description": "cross-platform sample",
   "mainSteps": [
      {
         "action": "aws:runPowerShellScript",
         "name": "PatchWindows",
         "precondition": {
            "StringEquals": [
               "platformType",
               "Windows"
            ]
         },
         "inputs": {
            "runCommand": [
               "cmds"
            ]
         }
      },
      {
         "action": "aws:runShellScript",
         "name": "PatchLinux",
         "precondition": {
            "StringEquals": [
               "platformType",
               "Linux"
            ]
         },
         "inputs": {
            "runCommand": [
               "cmds"
            ]
         }
      }
   ]
}
```

------

------
#### [ YAML ]

```
---
schemaVersion: '2.2'
parameters:
  action:
    type: String
    allowedValues:
    - Install
    - Uninstall
  confirmed:
    type: String
    allowedValues:
    - 'True'
    - 'False'
mainSteps:
- action: aws:runShellScript
  name: InstallAwsCLI
  precondition:
    StringEquals:
    - "{{ action }}"
    - "Install"
  inputs:
    runCommand:
    - sudo apt install aws-cli
- action: aws:runShellScript
  name: UninstallAwsCLI
  precondition:
    StringEquals:
    - "{{ action }} {{ confirmed }}"
    - "Uninstall True"
  inputs:
    runCommand:
    - sudo apt remove aws-cli
```

------
#### [ JSON ]

```
{
   "schemaVersion": "2.2",
   "parameters": {
      "action": {
         "type": "String",
         "allowedValues": [
            "Install",
            "Uninstall"
         ]
      },
      "confirmed": {
         "type": "String",
         "allowedValues": [
            "True",
            "False"
         ]
      }
   },
   "mainSteps": [
      {
         "action": "aws:runShellScript",
         "name": "InstallAwsCLI",
         "precondition": {
            "StringEquals": [
               "{{ action }}",
               "Install"
            ]
         },
         "inputs": {
            "runCommand": [
               "sudo apt install aws-cli"
            ]
         }
      },
      {
         "action": "aws:runShellScript",
         "name": "UninstallAwsCLI",
         "precondition": {
            "StringEquals": [
               "{{ action }} {{ confirmed }}",
               "Uninstall True"
            ]
         },
         "inputs": {
            "runCommand": [
               "sudo apt remove aws-cli"
            ]
         }
      }
   ]
}
```

------

**Schema version 2.2 interpolation example with SSM Agent versions before 3.3.2746.0**  
On SSM Agent versions before 3.3.2746.0, the agent ignores the `interpolationType` parameter and performs a raw string substitution instead. To reference the `SSM_parameter-name` environment variable on these agent versions, you must set the variable explicitly in your script. In the following example for Linux, the `SSM_Message` environment variable is set and then referenced explicitly.

```
{
    "schemaVersion": "2.2",
    "description": "An example document",
    "parameters": {
        "Message": {
            "type": "String",
            "description": "Message to be printed",
            "default": "Hello",
            "interpolationType": "ENV_VAR",
            "allowedPattern": "^[^\"]*$"
        }
    },
    "mainSteps": [{
        "action": "aws:runShellScript",
        "name": "printMessage",
        "inputs": {
            "runCommand": [
                "if [ -z \"${SSM_Message+x}\" ]; then",
                "    export SSM_Message=\"{{Message}}\"",
                "fi",
                "",
                "echo $SSM_Message"
            ]
        }
    }]
}
```

**Note**  
`allowedPattern` isn’t technically required if an SSM document doesn’t use double braces: `{{ }}`

**Schema version 2.2 State Manager example**  
You can use the following SSM document with State Manager, a tool in Systems Manager, to download and install the ClamAV antivirus software. State Manager enforces a specific configuration, which means that each time the State Manager association is run, the system checks to see if the ClamAV software is installed. If not, State Manager reruns this document.

------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: State Manager Bootstrap Example
parameters: {}
mainSteps:
- action: aws:runShellScript
  name: configureServer
  inputs:
    runCommand:
    - sudo yum install -y httpd24
    - sudo yum --enablerepo=epel install -y clamav
```

------
#### [ JSON ]

```
{
   "schemaVersion": "2.2",
   "description": "State Manager Bootstrap Example",
   "parameters": {},
   "mainSteps": [
      {
         "action": "aws:runShellScript",
         "name": "configureServer",
         "inputs": {
            "runCommand": [
               "sudo yum install -y httpd24",
               "sudo yum --enablerepo=epel install -y clamav"
            ]
         }
      }
   ]
}
```

------

**Schema version 2.2 Inventory example**  
You can use the following SSM document with State Manager to collect inventory metadata about your instances.

------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: Software Inventory Policy Document.
parameters:
  applications:
    type: String
    default: Enabled
    description: "(Optional) Collect data for installed applications."
    allowedValues:
    - Enabled
    - Disabled
  awsComponents:
    type: String
    default: Enabled
    description: "(Optional) Collect data for AWS Components like amazon-ssm-agent."
    allowedValues:
    - Enabled
    - Disabled
  networkConfig:
    type: String
    default: Enabled
    description: "(Optional) Collect data for Network configurations."
    allowedValues:
    - Enabled
    - Disabled
  windowsUpdates:
    type: String
    default: Enabled
    description: "(Optional) Collect data for all Windows Updates."
    allowedValues:
    - Enabled
    - Disabled
  instanceDetailedInformation:
    type: String
    default: Enabled
    description: "(Optional) Collect additional information about the instance, including
      the CPU model, speed, and the number of cores, to name a few."
    allowedValues:
    - Enabled
    - Disabled
  customInventory:
    type: String
    default: Enabled
    description: "(Optional) Collect data for custom inventory."
    allowedValues:
    - Enabled
    - Disabled
mainSteps:
- action: aws:softwareInventory
  name: collectSoftwareInventoryItems
  inputs:
    applications: "{{ applications }}"
    awsComponents: "{{ awsComponents }}"
    networkConfig: "{{ networkConfig }}"
    windowsUpdates: "{{ windowsUpdates }}"
    instanceDetailedInformation: "{{ instanceDetailedInformation }}"
    customInventory: "{{ customInventory }}"
```

------
#### [ JSON ]

```
{
   "schemaVersion": "2.2",
   "description": "Software Inventory Policy Document.",
   "parameters": {
      "applications": {
         "type": "String",
         "default": "Enabled",
         "description": "(Optional) Collect data for installed applications.",
         "allowedValues": [
            "Enabled",
            "Disabled"
         ]
      },
      "awsComponents": {
         "type": "String",
         "default": "Enabled",
         "description": "(Optional) Collect data for AWS Components like amazon-ssm-agent.",
         "allowedValues": [
            "Enabled",
            "Disabled"
         ]
      },
      "networkConfig": {
         "type": "String",
         "default": "Enabled",
         "description": "(Optional) Collect data for Network configurations.",
         "allowedValues": [
            "Enabled",
            "Disabled"
         ]
      },
      "windowsUpdates": {
         "type": "String",
         "default": "Enabled",
         "description": "(Optional) Collect data for all Windows Updates.",
         "allowedValues": [
            "Enabled",
            "Disabled"
         ]
      },
      "instanceDetailedInformation": {
         "type": "String",
         "default": "Enabled",
         "description": "(Optional) Collect additional information about the instance, including\nthe CPU model, speed, and the number of cores, to name a few.",
         "allowedValues": [
            "Enabled",
            "Disabled"
         ]
      },
      "customInventory": {
         "type": "String",
         "default": "Enabled",
         "description": "(Optional) Collect data for custom inventory.",
         "allowedValues": [
            "Enabled",
            "Disabled"
         ]
      }
   },
   "mainSteps": [
      {
         "action": "aws:softwareInventory",
         "name": "collectSoftwareInventoryItems",
         "inputs": {
            "applications": "{{ applications }}",
            "awsComponents": "{{ awsComponents }}",
            "networkConfig": "{{ networkConfig }}",
            "windowsUpdates": "{{ windowsUpdates }}",
            "instanceDetailedInformation": "{{ instanceDetailedInformation }}",
            "customInventory": "{{ customInventory }}"
         }
      }
   ]
}
```

------

**Schema version 2.2 `AWS-ConfigureAWSPackage` example**  
The following example shows the `AWS-ConfigureAWSPackage` document. The `mainSteps` section includes the `aws:configurePackage` plugin in the `action` step.

**Note**  
On Linux operating systems, only the `AmazonCloudWatchAgent` and `AWSSupport-EC2Rescue` packages are supported.

------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: 'Install or uninstall the latest version or specified version of an AWS
  package. Available packages include the following: AWSPVDriver, AwsEnaNetworkDriver,
  AwsVssComponents, AmazonCloudWatchAgent, and AWSSupport-EC2Rescue.'
parameters:
  action:
    description: "(Required) Specify whether or not to install or uninstall the package."
    type: String
    allowedValues:
    - Install
    - Uninstall
  name:
    description: "(Required) The package to install/uninstall."
    type: String
    allowedPattern: "^arn:[a-z0-9][-.a-z0-9]{0,62}:[a-z0-9][-.a-z0-9]{0,62}:([a-z0-9][-.a-z0-9]{0,62})?:([a-z0-9][-.a-z0-9]{0,62})?:package\\/[a-zA-Z][a-zA-Z0-9\\-_]{0,39}$|^[a-zA-Z][a-zA-Z0-9\\-_]{0,39}$"
  version:
    type: String
    description: "(Optional) A specific version of the package to install or uninstall."
mainSteps:
- action: aws:configurePackage
  name: configurePackage
  inputs:
    name: "{{ name }}"
    action: "{{ action }}"
    version: "{{ version }}"
```

------
#### [ JSON ]

```
{
   "schemaVersion": "2.2",
   "description": "Install or uninstall the latest version or specified version of an AWS package. Available packages include the following: AWSPVDriver, AwsEnaNetworkDriver, AwsVssComponents, and AmazonCloudWatchAgent, and AWSSupport-EC2Rescue.",
   "parameters": {
      "action": {
         "description":"(Required) Specify whether or not to install or uninstall the package.",
         "type":"String",
         "allowedValues":[
            "Install",
            "Uninstall"
         ]
      },
      "name": {
         "description": "(Required) The package to install/uninstall.",
         "type": "String",
         "allowedPattern": "^arn:[a-z0-9][-.a-z0-9]{0,62}:[a-z0-9][-.a-z0-9]{0,62}:([a-z0-9][-.a-z0-9]{0,62})?:([a-z0-9][-.a-z0-9]{0,62})?:package\\/[a-zA-Z][a-zA-Z0-9\\-_]{0,39}$|^[a-zA-Z][a-zA-Z0-9\\-_]{0,39}$"
      },
      "version": {
         "type": "String",
         "description": "(Optional) A specific version of the package to install or uninstall."
      }
   },
   "mainSteps":[
      {
         "action": "aws:configurePackage",
         "name": "configurePackage",
         "inputs": {
            "name": "{{ name }}",
            "action": "{{ action }}",
            "version": "{{ version }}"
         }
      }
   ]
}
```

------

## Schema version 1.2


The following example shows the top-level elements of a schema version 1.2 document.

```
{
   "schemaVersion":"1.2",
   "description":"A description of the SSM document.",
   "parameters":{
      "parameter 1":{
         "one or more parameter properties"
      },
      "parameter 2":{
         "one or more parameter properties"
      },
      "parameter 3":{
         "one or more parameter properties"
      }
   },
   "runtimeConfig":{
      "plugin 1":{
         "properties":[
            {
               "one or more plugin properties"
            }
         ]
      }
   }
}
```

**Schema version 1.2 `aws:runShellScript` example**  
The following example shows the `AWS-RunShellScript` SSM document. The **runtimeConfig** section includes the `aws:runShellScript` plugin.

```
{
    "schemaVersion":"1.2",
    "description":"Run a shell script or specify the commands to run.",
    "parameters":{
        "commands":{
            "type":"StringList",
            "description":"(Required) Specify a shell script or a command to run.",
            "minItems":1,
            "displayType":"textarea"
        },
        "workingDirectory":{
            "type":"String",
            "default":"",
            "description":"(Optional) The path to the working directory on your instance.",
            "maxChars":4096
        },
        "executionTimeout":{
            "type":"String",
            "default":"3600",
            "description":"(Optional) The time in seconds for a command to complete before it is considered to have failed. Default is 3600 (1 hour). Maximum is 172800 (48 hours).",
            "allowedPattern":"([1-9][0-9]{0,3})|(1[0-9]{1,4})|(2[0-7][0-9]{1,3})|(28[0-7][0-9]{1,2})|(28800)"
        }
    },
    "runtimeConfig":{
        "aws:runShellScript":{
            "properties":[
                {
                    "id":"0.aws:runShellScript",
                    "runCommand":"{{ commands }}",
                    "workingDirectory":"{{ workingDirectory }}",
                    "timeoutSeconds":"{{ executionTimeout }}"
                }
            ]
        }
    }
}
```

## Schema version 0.3


**Top-level elements**  
The following example shows the top-level elements of a schema version 0.3 Automation runbook in JSON format.

```
{
    "description": "document-description",
    "schemaVersion": "0.3",
    "assumeRole": "{{assumeRole}}",
    "parameters": {
        "parameter1": {
            "type": "String",
            "description": "parameter-1-description",
            "default": ""
        },
        "parameter2": {
            "type": "String",
            "description": "parameter-2-description",
            "default": ""
        }
    },
    "variables": {
        "variable1": {
            "type": "StringMap",
            "description": "variable-1-description",
            "default": {}
        },
        "variable2": {
            "type": "String",
            "description": "variable-2-description",
            "default": "default-value"
        }
    },
    "mainSteps": [
        {
            "name": "myStepName",
            "action": "action-name",
            "maxAttempts": 1,
            "inputs": {
                "Handler": "python-only-handler-name",
                "Runtime": "runtime-name",
                "Attachment": "script-or-zip-name"
            },
            "outputs": {
                "Name": "output-name",
                "Selector": "selector.value",
                "Type": "data-type"
            }
        }
    ],
    "files": {
        "script-or-zip-name": {
            "checksums": {
                "sha256": "checksum"
            },
            "size": 1234
        }
    }
}
```

**YAML Automation runbook example**  
The following sample shows the contents of an Automation runbook, in YAML format. This working example of version 0.3 of the document schema also demonstrates the use of Markdown to format document descriptions.

```
description: >-
  ##Title: LaunchInstanceAndCheckState

  -----

  **Purpose**: This Automation runbook first launches an EC2 instance
  using the AMI ID provided in the parameter ```imageId```. The second step of
  this document continuously checks the instance status check value for the
  launched instance until the status ```ok``` is returned.


  ##Parameters:

  -----

  Name | Type | Description | Default Value

  ------------- | ------------- | ------------- | -------------

  assumeRole | String | (Optional) The ARN of the role that allows Automation to
  perform the actions on your behalf. | -

  imageId  | String | (Optional) The AMI ID to use for launching the instance.
  The default value uses the latest Amazon Linux AMI ID available. | {{
  ssm:/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-x86_64 }}
schemaVersion: '0.3'
assumeRole: 'arn:aws:iam::111122223333:role/AutomationServiceRole'
parameters:
  imageId:
    type: String
    default: '{{ ssm:/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-x86_64 }}'
    description: >-
      (Optional) The AMI ID to use for launching the instance. The default value
      uses the latest released Amazon Linux AMI ID.
  tagValue:
    type: String
    default: 'LaunchedBySsmAutomation'
    description: >-
      (Optional) The tag value to add to the instance. The default value is
      LaunchedBySsmAutomation.
  instanceType:
    type: String
    default: t2.micro
    description: >-
      (Optional) The instance type to use for the instance. The default value is
      t2.micro.
mainSteps:
  - name: LaunchEc2Instance
    action: 'aws:executeScript'
    outputs:
      - Name: payload
        Selector: $.Payload
        Type: StringMap
    inputs:
      Runtime: python3.11
      Handler: launch_instance
      Script: ''
      InputPayload:
        image_id: '{{ imageId }}'
        tag_value: '{{ tagValue }}'
        instance_type: '{{ instanceType }}'
      Attachment: launch.py
    description: >-
      **About This Step**


      This step first launches an EC2 instance using the ```aws:executeScript```
      action and the provided python script.
  - name: WaitForInstanceStatusOk
    action: 'aws:executeScript'
    inputs:
      Runtime: python3.11
      Handler: poll_instance
      Script: |-
        def poll_instance(events, context):
          import boto3
          import time

          ec2 = boto3.client('ec2')

          instance_id = events['InstanceId']

          print('[INFO] Waiting for instance status check to report ok', instance_id)

          instance_status = "null"

          while True:
            res = ec2.describe_instance_status(InstanceIds=[instance_id])

            if len(res['InstanceStatuses']) == 0:
              print("Instance status information is not available yet")
              time.sleep(5)
              continue

            instance_status = res['InstanceStatuses'][0]['InstanceStatus']['Status']

            print('[INFO] Polling to get status of the instance', instance_status)

            if instance_status == 'ok':
              break

            time.sleep(10)

          return {'Status': instance_status, 'InstanceId': instance_id}
      InputPayload: '{{ LaunchEc2Instance.payload }}'
    description: >-
      **About This Step**


      The python script continuously polls the instance status check value for
      the instance launched in Step 1 until the ```ok``` status is returned.
files:
  launch.py:
    checksums:
      sha256: 18871b1311b295c43d0f...[truncated]...772da97b67e99d84d342ef4aEXAMPLE
```

## Secure parameter handling examples


The following examples demonstrate secure parameter handling using environment variable `interpolationType`.

### Basic secure command execution


This example shows how to securely handle a command parameter:

**Note**  
`allowedPattern` isn’t technically required in SSM documents that don’t use double braces: `{{ }}` 

------
#### [ YAML ]

```
---

schemaVersion: '2.2'
description: An example document.
parameters:
  Message:
    type: String
    description: "Message to be printed"
    default: Hello
    interpolationType: ENV_VAR
    allowedPattern: "^[^"]*$"
mainSteps:
  - action: aws:runShellScript
    name: printMessage
    precondition:
      StringEquals:
        - platformType
        - Linux
    inputs:
      runCommand:
        - echo {{Message}}
```

------
#### [ JSON ]

```
{
    "schemaVersion": "2.2",
    "description": "An example document.",
    "parameters": {
        "Message": {
            "type": "String",
            "description": "Message to be printed",
            "default": "Hello",
            "interpolationType": "ENV_VAR",
            "allowedPattern": "^[^"]*$"
        }
    },
    "mainSteps": [{
        "action": "aws:runShellScript",
        "name": "printMessage",
        "precondition": {
           "StringEquals": ["platformType", "Linux"]
        },
        "inputs": {
            "runCommand": [
              "echo {{Message}}"
            ]
        }
    }]
}
```

------

### Using parameters in interpreted languages


This example demonstrates secure parameter handling in Python:

------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: 'Secure Python script execution'
parameters:
  inputData:
    type: String
    description: 'Input data for processing'
    interpolationType: 'ENV_VAR'
mainSteps:
  - action: aws:runShellScript
    name: runPython
    inputs:
      runCommand:
        - |
          python3 -c '
          import os
          import json
          
          # Safely access parameter through environment variable
          input_data = os.environ.get("SSM_inputData", "")
          
          # Process the data
          try:
              processed_data = json.loads(input_data)
              print(f"Successfully processed: {processed_data}")
          except json.JSONDecodeError:
              print("Invalid JSON input")
          '
```

------

### Backwards compatibility example


This example shows how to handle parameters securely while maintaining backwards compatibility:

------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: 'Backwards compatible secure parameter handling'
parameters:
  userInput:
    type: String
    description: 'User input to process'
    interpolationType: 'ENV_VAR'
    allowedPattern: '^[^"]*$'

mainSteps:
  - action: aws:runShellScript
    name: processInput
    inputs:
      runCommand:
        - |
          # Handle both modern and legacy agent versions
          if [ -z "${SSM_userInput+x}" ]; then
              # Legacy agent - fall back to direct parameter reference
              export SSM_userInput="{{userInput}}"
          fi
          
          # Process the input securely
          echo "Processing input: $SSM_userInput"
```

------

**Note**  
`allowedPattern` isn’t technically required in SSM documents that don’t use double braces: `{{ }}` 

## Parameter security best practices


Follow these best practices when handling parameters in SSM documents:
+ **Use environment variable interpolation** - Always use `interpolationType: "ENV_VAR"` for string parameters that will be used in command execution.
+ **Implement input validation** - Use `allowedPattern` to restrict parameter values to safe patterns.
+ **Handle legacy systems** - Include fallback logic for older SSM Agent versions that don't support environment variable interpolation.
+ **Escape special characters** - When using parameter values in commands, properly escape special characters to prevent interpretation by the shell.
+ **Limit parameter scope** - Use the most restrictive parameter patterns possible for your use case.
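
As an illustration of the first two practices, the following sketch applies the `^[^"]*$` pattern used in the examples above as a client-side check. This is a hypothetical helper for pre-validating values; the Systems Manager service performs its own `allowedPattern` validation:

```python
import re

def matches_allowed_pattern(value: str, allowed_pattern: str = '^[^"]*$') -> bool:
    """Return True if the entire value matches the allowedPattern regex.

    The default pattern rejects any value containing a double quote, which
    helps block quote-breaking command injection when the value is later
    interpolated into a shell command.
    """
    return re.fullmatch(allowed_pattern, value) is not None
```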

# Data elements and parameters


This topic describes the data elements used in SSM documents. The schema version used to create a document defines the syntax and data elements that the document accepts. We recommend that you use schema version 2.2 or later for Command documents. Automation runbooks use schema version 0.3. Additionally, Automation runbooks support the use of Markdown, a markup language, which allows you to add wiki-style descriptions to documents and individual steps within the document. For more information about using Markdown, see [Using Markdown in the Console](https://docs.aws.amazon.com/general/latest/gr/aws-markdown.html) in the *AWS Management Console Getting Started Guide*.

The following section describes the data elements that you can include in an SSM document.

## Top-level data elements


**schemaVersion**  
The schema version to use.  
Type: Version  
Required: Yes

**description**  
Information you provide to describe the purpose of the document. You can also use this field to specify whether a parameter requires a value for a document to run, or if providing a value for the parameter is optional. Required and optional parameters can be seen in the examples throughout this topic.  
Type: String  
Required: No

**parameters**  
A structure that defines the parameters the document accepts.   
For enhanced security when handling string parameters, you can use environment variable interpolation by specifying the `interpolationType` property. When set to `ENV_VAR`, the system creates an environment variable named `SSM_parameter-name` containing the parameter value.  
The following example shows a parameter that uses the `ENV_VAR` `interpolationType`:  

```
{
    "schemaVersion": "2.2",
    "description": "An example document.",
    "parameters": {
        "Message": {
            "type": "String",
            "description": "Message to be printed",
            "default": "Hello",
            "interpolationType": "ENV_VAR",
            "allowedPattern": "^[^\"]*$"
        }
    },
    "mainSteps": [{
        "action": "aws:runShellScript",
        "name": "printMessage",
        "precondition": {
            "StringEquals": ["platformType", "Linux"]
        },
        "inputs": {
            "runCommand": [
              "echo {{Message}}"
            ]
        }
    }]
}
```
`allowedPattern` isn't strictly required in SSM documents that don't reference parameters with double braces: `{{ }}` 
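The security benefit of environment variable interpolation is easiest to see with a small simulation. The following sketch is illustrative only; it mimics the agent's behavior locally rather than calling Systems Manager. Direct `{{ }}` substitution splices the parameter value into the shell command, while the environment-variable approach keeps the value a literal string:

```python
import subprocess

malicious = "hello; echo INJECTED"

# Direct {{Message}} substitution: the value becomes part of the command
# line, so the shell runs the injected second command.
direct = subprocess.run(f"echo {malicious}", shell=True,
                        capture_output=True, text=True)

# ENV_VAR interpolation: the agent exports SSM_Message and the script
# reads $SSM_Message, so the value is treated as a literal string.
env_based = subprocess.run('echo "$SSM_Message"', shell=True,
                           capture_output=True, text=True,
                           env={"SSM_Message": malicious})

print(direct.stdout)     # two lines: "hello" then "INJECTED"
print(env_based.stdout)  # one line: "hello; echo INJECTED"
```

To take advantage of this protection, reference the `SSM_`*parameter-name* environment variable in your script instead of the `{{ }}` placeholder.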
For parameters that you use often, we recommend that you store those parameters in Parameter Store, a tool in AWS Systems Manager. Then, you can define parameters in your document that reference Parameter Store parameters as their default value. To reference a Parameter Store parameter, use the following syntax.   

```
{{ssm:parameter-name}}
```
You can use a parameter that references a Parameter Store parameter the same way as any other document parameters. In the following example, the default value for the `commands` parameter is the Parameter Store parameter `myShellCommands`. By specifying the `commands` parameter as a `runCommand` string, the document runs the commands stored in the `myShellCommands` parameter.  

```
---
schemaVersion: '2.2'
description: runShellScript with command strings stored as Parameter Store parameter
parameters:
  commands:
    type: StringList
    description: "(Required) The commands to run on the instance."
    default: ["{{ ssm:myShellCommands }}"]
    interpolationType: ENV_VAR
    allowedPattern: '^[^"]*$'
mainSteps:
- action: aws:runShellScript
  name: runShellScriptDefaultParams
  inputs:
    runCommand:
    - "{{ commands }}"
```

```
{
    "schemaVersion": "2.2",
    "description": "runShellScript with command strings stored as Parameter Store parameter",
    "parameters": {
      "commands": {
        "type": "StringList",
        "description": "(Required) The commands to run on the instance.",
        "default": ["{{ ssm:myShellCommands }}"],
        "interpolationType" : "ENV_VAR"
      }
    },
    "mainSteps": [
      {
        "action": "aws:runShellScript",
        "name": "runShellScriptDefaultParams",
        "inputs": {
            "runCommand": [
              "{{ commands }}"
          ]
        }
      }
    ]
  }
```
You can reference `String` and `StringList` Parameter Store parameters in the `parameters` section of your document. You can't reference `SecureString` Parameter Store parameters.
For more information about Parameter Store, see [AWS Systems Manager Parameter Store](systems-manager-parameter-store.md).  
Type: Structure  
The `parameters` structure accepts the following fields and values:  
+ `type`: (Required) Allowed values include the following: `String`, `StringList`, `Integer`, `Boolean`, `MapList`, and `StringMap`. To view examples of each type, see [SSM document parameter `type` examples](#top-level-properties-type) in the next section.
**Note**  
Command type documents only support the `String` and `StringList` parameter types.
+ `description`: (Optional) A description of the parameter.
+ `default`: (Optional) The default value of the parameter or a reference to a parameter in Parameter Store.
+ `allowedValues`: (Optional) An array of values allowed for the parameter. Defining allowed values for the parameter validates the user input. If a user inputs a value that isn't allowed, the execution fails to start.

------
#### [ YAML ]

  ```
  DirectoryType:
    type: String
    description: "(Required) The directory type to launch."
    default: AwsMad
    allowedValues:
    - AdConnector
    - AwsMad
    - SimpleAd
  ```

------
#### [ JSON ]

  ```
  "DirectoryType": {
    "type": "String",
    "description": "(Required) The directory type to launch.",
    "default": "AwsMad",
    "allowedValues": [
      "AdConnector",
      "AwsMad",
      "SimpleAd"
    ]
  }
  ```

------
+ `allowedPattern`: (Optional) A regular expression that validates whether the user input matches the defined pattern for the parameter. If the user input doesn't match the allowed pattern, the execution fails to start.
**Note**  
Systems Manager performs two validations for `allowedPattern`. The first validation is performed using the [Java regex library](https://docs.oracle.com/javase/8/docs/api/java/util/regex/package-summary.html) at the API level when you use a document. The second validation is performed on SSM Agent by using the [GO regexp library](https://pkg.go.dev/regexp) before processing the document. 

------
#### [ YAML ]

  ```
  InstanceId:
    type: String
    description: "(Required) The instance ID to target."
    allowedPattern: "^i-(?:[a-f0-9]{8}|[a-f0-9]{17})$"
    default: ''
  ```

------
#### [ JSON ]

  ```
  "InstanceId": {
    "type": "String",
    "description": "(Required) The instance ID to target.",
    "allowedPattern": "^i-(?:[a-f0-9]{8}|[a-f0-9]{17})$",
    "default": ""
  }
  ```

------
+ `displayType`: (Optional) Used to display either a `textfield` or a `textarea` in the AWS Management Console. `textfield` is a single-line text box. `textarea` is a multi-line text area.
+ `minItems`: (Optional) The minimum number of items allowed.
+ `maxItems`: (Optional) The maximum number of items allowed.
+ `minChars`: (Optional) The minimum number of parameter characters allowed.
+ `maxChars`: (Optional) The maximum number of parameter characters allowed.
+ `interpolationType`: (Optional) Defines how parameter values are processed before command execution. When set to `ENV_VAR`, the parameter value is made available as an environment variable named `SSM_parameter-name`. This feature helps prevent command injection by treating parameter values as literal strings.

  Type: String

  Valid values: `ENV_VAR`
Required: No
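Because an `allowedPattern` must pass both the Java regex validation at the API level and the Go regexp validation on SSM Agent, it's worth testing the pattern against sample inputs before you publish a document. The following sketch uses Python's `re` module as a local stand-in (the actual validations use the Java and Go engines):

```python
import re

# The instance ID pattern from the allowedPattern example above.
allowed_pattern = re.compile(r"^i-(?:[a-f0-9]{8}|[a-f0-9]{17})$")

def is_allowed(value: str) -> bool:
    """Return True when the value matches the allowed pattern."""
    return allowed_pattern.match(value) is not None

print(is_allowed("i-0123456789abcdef0"))   # 17 hex characters: True
print(is_allowed("i-12345678"))            # 8 hex characters: True
print(is_allowed("i-12345678; echo x"))    # injection attempt: False
```

Keep patterns simple; features that differ between regex engines (such as lookbehind) can validate at the API level but fail on the agent.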

**variables**  
(Schema version 0.3 only) Values you can reference or update throughout the steps in an Automation runbook. Variables are similar to parameters, but differ in one important way: parameter values are static in the context of a runbook, while the values of variables can change during the runbook's execution. When updating the value of a variable, the data type must match the defined data type. For information about updating variable values in an automation, see [`aws:updateVariable` – Updates a value for a runbook variable](automation-action-update-variable.md).  
Type: Boolean | Integer | MapList | String | StringList | StringMap  
Required: No  

```
variables:
    payload:
        type: StringMap
        default: "{}"
```

```
{
    "variables": {
        "payload": {
            "type": "StringMap",
            "default": "{}"
        }
    }
}
```

**runtimeConfig**  
(Schema version 1.2 only) The configuration for the instance as applied by one or more Systems Manager plugins. Plugins aren't guaranteed to run in sequence.   
Type: Dictionary<string,PluginConfiguration>  
Required: No

**mainSteps**  
(Schema version 0.3, 2.0, and 2.2 only) An object that can include multiple steps (plugins). Plugins are defined within steps. Steps run in sequential order as listed in the document.   
Type: Dictionary<string,PluginConfiguration>  
Required: Yes

**outputs**  
(Schema version 0.3 only) Data generated by the execution of this document that can be used in other processes. For example, if your document creates a new AMI, you might specify "CreateImage.ImageId" as the output value, and then use this output to create new instances in a subsequent automation execution. For more information about outputs, see [Using action outputs as inputs](automation-action-outputs-inputs.md).  
Type: Dictionary<string,OutputConfiguration>  
Required: No
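For example, a runbook that creates an AMI can declare the image ID as a top-level output. The following is a minimal sketch; the instance ID and image name are placeholder values:

```
---
description: Create an AMI and expose its ID as an output.
schemaVersion: '0.3'
outputs:
- CreateImage.ImageId
mainSteps:
- name: CreateImage
  action: aws:createImage
  inputs:
    InstanceId: i-1234567890abcdef0
    ImageName: example-image
```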

**files**  
(Schema version 0.3 only) The script files (and their checksums) attached to the document and run during an automation execution. Applies only to documents that include the `aws:executeScript` action and for which attachments have been specified in one or more steps.   
To learn about the runtimes supported by Automation runbooks, see [`aws:executeScript` – Run a script](automation-action-executeScript.md). For more information about including scripts in Automation runbooks, see [Using scripts in runbooks](automation-document-script-considerations.md) and [Visual design experience for Automation runbooks](automation-visual-designer.md).  
When creating an Automation runbook with attachments, you must also specify attachment files using the `--attachments` option (for AWS CLI) or `Attachments` (for API and SDK). You can specify the file location for SSM documents and files stored in Amazon Simple Storage Service (Amazon S3) buckets. For more information, see [Attachments](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateDocument.html#systemsmanager-CreateDocument-request-Attachments) in the AWS Systems Manager API Reference.  

```
---
files:
  launch.py:
    checksums:
      sha256: 18871b1311b295c43d0f...[truncated]...772da97b67e99d84d342ef4aEXAMPLE
```

```
"files": {
    "launch.py": {
        "checksums": {
            "sha256": "18871b1311b295c43d0f...[truncated]...772da97b67e99d84d342ef4aEXAMPLE"
        }
    }
}
```
Type: Dictionary<string,FilesConfiguration>  
Required: No

## SSM document parameter `type` examples


Parameter types in SSM documents are static. This means the parameter type can't be changed after it's defined. When using parameters with SSM document plugins, the type of a parameter can't be dynamically changed within a plugin's input. For example, you can't reference an `Integer` parameter within the `runCommand` input of the `aws:runShellScript` plugin because this input accepts a string or list of strings. To use a parameter for a plugin input, the parameter type must match the accepted type. For example, you must specify a `Boolean` type parameter for the `allowDowngrade` input of the `aws:updateSsmAgent` plugin. If your parameter type doesn't match the input type for a plugin, the SSM document fails to validate and the system doesn't create the document. This is also true when using parameters downstream within inputs for other plugins or AWS Systems Manager Automation actions. For example, you can't reference a `StringList` parameter within the `documentParameters` input of the `aws:runDocument` plugin. The `documentParameters` input accepts a map of strings even if the downstream SSM document parameter type is a `StringList` parameter and matches the parameter you're referencing.
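As a concrete example of matching parameter and input types, the `allowDowngrade` input of the `aws:updateSsmAgent` plugin must be wired to a `Boolean` parameter. The following is a minimal sketch of a custom Command document illustrating this:

```
---
schemaVersion: '2.2'
description: Update SSM Agent using a typed Boolean parameter.
parameters:
  allowDowngrade:
    type: Boolean
    description: "(Optional) Allow the agent to be downgraded."
    default: false
mainSteps:
- action: aws:updateSsmAgent
  name: updateAgent
  inputs:
    agentName: amazon-ssm-agent
    allowDowngrade: "{{ allowDowngrade }}"
```

Declaring the parameter as a `String` instead would cause document validation to fail.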

When using parameters with Automation actions, parameter types aren't validated when you create the SSM document in most cases. Only when you use the `aws:runCommand` action are parameter types validated when you create the SSM document. In all other cases, the parameter validation occurs during the automation execution when an action's input is verified before running the action. For example, if your input parameter is a `String` and you reference it as the value for the `MaxInstanceCount` input of the `aws:runInstances` action, the SSM document is created. However, when running the document, the automation fails while validating the `aws:runInstances` action because the `MaxInstanceCount` input requires an `Integer`.

The following are examples of each parameter `type`.

String  
A sequence of zero or more Unicode characters wrapped in quotation marks. For example, "i-1234567890abcdef0". Use a backslash (\) to escape quotation marks and other special characters within the string.  
String parameters can include an optional `interpolationType` field with the value `ENV_VAR` to enable environment variable interpolation for improved security.  

```
---
InstanceId:
  type: String
  description: "(Optional) The target EC2 instance ID."
  interpolationType: ENV_VAR
```

```
"InstanceId":{
  "type":"String",
  "description":"(Optional) The target EC2 instance ID.",
  "interpolationType": "ENV_VAR"
}
```

StringList  
A list of String items separated by commas. For example, ["cd ~", "pwd"].  

```
---
commands:
  type: StringList
  description: "(Required) Specify a shell script or a command to run."
  default: ""
  minItems: 1
  displayType: textarea
```

```
"commands":{
  "type":"StringList",
  "description":"(Required) Specify a shell script or a command to run.",
  "minItems":1,
  "displayType":"textarea"
}
```

Boolean  
Accepts only `true` or `false`. Doesn't accept "true" or 0.  

```
---
canRun:
  type: Boolean
  description: ''
  default: true
```

```
"canRun": {
  "type": "Boolean",
  "description": "",
  "default": true
}
```

Integer  
Integral numbers. Doesn't accept decimal numbers, for example 3.14159, or numbers wrapped in quotation marks, for example "3".  

```
---
timeout:
  type: Integer
  description: The type of action to perform.
  default: 100
```

```
"timeout": {
  "type": "Integer",
  "description": "The type of action to perform.",
  "default": 100    
}
```

StringMap  
A mapping of keys to values. Keys and values must be strings. For example, {"Env": "Prod"}.  

```
---
notificationConfig:
  type: StringMap
  description: The configuration for events to be notified about
  default:
    NotificationType: 'Command'
    NotificationEvents:
    - 'Failed'
    NotificationArn: "$dependency.topicArn"
  maxChars: 150
```

```
"notificationConfig" : {
  "type" : "StringMap",
  "description" : "The configuration for events to be notified about",
  "default" : {
    "NotificationType" : "Command",
    "NotificationEvents" : ["Failed"],
    "NotificationArn" : "$dependency.topicArn"
  },
  "maxChars" : 150
}
```

MapList  
A list of StringMap objects.  

```
blockDeviceMappings:
  type: MapList
  description: The mappings for the create image inputs
  default:
  - DeviceName: "/dev/sda1"
    Ebs:
      VolumeSize: "50"
  - DeviceName: "/dev/sdm"
    Ebs:
      VolumeSize: "100"
  maxItems: 2
```

```
"blockDeviceMappings":{
  "type":"MapList",
  "description":"The mappings for the create image inputs",
  "default":[
    {
      "DeviceName":"/dev/sda1",
      "Ebs":{
        "VolumeSize":"50"
      }
    },
    {
      "DeviceName":"/dev/sdm",
      "Ebs":{
        "VolumeSize":"100"
      }
    }
  ],
  "maxItems":2
}
```

## Viewing SSM Command document content


To preview the required and optional parameters for an AWS Systems Manager (SSM) Command document, in addition to the actions the document runs, you can view the content of the document in the Systems Manager console.

**To view SSM Command document content**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. In the search box, select **Document type**, and then select **Command**.

1. Choose the name of a document, and then choose the **Content** tab. 

1. In the content field, review the available parameters and action steps for the document.

   For example, the following image shows that (1) `version` and (2) `allowDowngrade` are optional parameters for the `AWS-UpdateSSMAgent` document, and that the first action run by the document is (3) `aws:updateSsmAgent`.  
![\[View SSM document content in the Systems Manager console\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/view-document-content.png)
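Alternatively, you can retrieve the same document content from the command line with the AWS CLI. For example:

```
aws ssm get-document \
    --name "AWS-UpdateSSMAgent" \
    --query "Content" \
    --output text
```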

# Command document plugin reference


This reference describes the plugins that you can specify in an AWS Systems Manager (SSM) Command type document. These plugins can't be used in SSM Automation runbooks, which use Automation actions. For information about AWS Systems Manager Automation actions, see [Systems Manager Automation actions reference](automation-actions.md).

Systems Manager determines the actions to perform on a managed instance by reading the contents of an SSM document. Each document includes a code-execution section. Depending on the schema version of your document, this code-execution section can include one or more plugins or steps. For the purpose of this Help topic, plugins and steps are called *plugins*. This section includes information about each of the Systems Manager plugins. For more information about documents, including information about creating documents and the differences between schema versions, see [AWS Systems Manager Documents](documents.md).

For plugins that accept String parameters, such as `aws:runShellScript` and `aws:runPowerShellScript`, the `interpolationType` parameter can be used to enhance security by treating parameter inputs as string literals rather than potentially executable commands. For example:

```
{
    "schemaVersion": "2.2",
    "description": "runShellScript with command strings stored as Parameter Store parameter",
    "parameters": {
      "commands": {
        "type": "StringList",
        "description": "(Required) The commands to run on the instance.",
        "default": ["{{ ssm:myShellCommands }}"],
        "interpolationType" : "ENV_VAR"
      }
    },
    //truncated
 }
```

**Note**  
Some of the plugins described here run only on either Windows Server instances or Linux instances. Platform dependencies are noted for each plugin.   
The following document plugins are supported on Amazon Elastic Compute Cloud (Amazon EC2) instances for macOS:  
`aws:refreshAssociation`
`aws:runShellScript`
`aws:runPowerShellScript`
`aws:softwareInventory`
`aws:updateSsmAgent`

**Topics**
+ [Shared inputs](#shared-inputs)
+ [`aws:applications`](#aws-applications)
+ [`aws:cloudWatch`](#aws-cloudWatch)
+ [`aws:configureDocker`](#aws-configuredocker)
+ [`aws:configurePackage`](#aws-configurepackage)
+ [`aws:domainJoin`](#aws-domainJoin)
+ [`aws:downloadContent`](#aws-downloadContent)
+ [`aws:psModule`](#aws-psModule)
+ [`aws:refreshAssociation`](#aws-refreshassociation)
+ [`aws:runDockerAction`](#aws-rundockeraction)
+ [`aws:runDocument`](#aws-rundocument)
+ [`aws:runPowerShellScript`](#aws-runPowerShellScript)
+ [`aws:runShellScript`](#aws-runShellScript)
+ [`aws:softwareInventory`](#aws-softwareinventory)
+ [`aws:updateAgent`](#aws-updateagent)
+ [`aws:updateSsmAgent`](#aws-updatessmagent)

## Shared inputs


With SSM Agent version 3.0.502 and later only, all plugins can use the following inputs:

**finallyStep**  
The last step you want the document to run. If this input is defined for a step, it takes precedence over an `exit` value specified in the `onFailure` or `onSuccess` inputs. In order for a step with this input to run as expected, the step must be the last one defined in the `mainSteps` of your document.  
Type: Boolean  
Valid values: `true` | `false`  
Required: No

**onFailure**  
If you specify this input for a plugin with the `exit` value and the step fails, the step status reflects the failure and the document doesn't run any remaining steps unless a `finallyStep` has been defined. If you specify this input for a plugin with the `successAndExit` value and the step fails, the step status shows successful and the document doesn't run any remaining steps unless a `finallyStep` has been defined.  
Type: String  
Valid values: `exit` | `successAndExit`  
Required: No

**onSuccess**  
If you specify this input for a plugin and the step runs successfully, the document doesn't run any remaining steps unless a `finallyStep` has been defined.  
Type: String  
Valid values: `exit`  
Required: No

------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: Shared inputs example
parameters:
  customDocumentParameter:
    type: String
    description: Example parameter for a custom Command-type document.
mainSteps:
- action: aws:runDocument
  name: runCustomConfiguration
  inputs:
    documentType: SSMDocument
    documentPath: "yourCustomDocument"
    documentParameters: '"documentParameter":{{customDocumentParameter}}'
    onSuccess: exit
- action: aws:runDocument
  name: ifConfigurationFailure
  inputs:
    documentType: SSMDocument
    documentPath: "yourCustomRepairDocument"
    onFailure: exit
- action: aws:runDocument
  name: finalConfiguration
  inputs:
    documentType: SSMDocument
    documentPath: "yourCustomFinalDocument"
    finallyStep: true
```

------
#### [ JSON ]

```
{
   "schemaVersion": "2.2",
   "description": "Shared inputs example",
   "parameters": {
      "customDocumentParameter": {
         "type": "String",
         "description": "Example parameter for a custom Command-type document."
      }
   },
   "mainSteps":[
      {
         "action": "aws:runDocument",
         "name": "runCustomConfiguration",
         "inputs": {
            "documentType": "SSMDocument",
            "documentPath": "yourCustomDocument",
            "documentParameters": "\"documentParameter\":{{customDocumentParameter}}",
            "onSuccess": "exit"
         }
      },
      {
         "action": "aws:runDocument",
         "name": "ifConfigurationFailure",
         "inputs": {
            "documentType": "SSMDocument",
            "documentPath": "yourCustomRepairDocument",
            "onFailure": "exit"
         }
      },
      {
         "action": "aws:runDocument",
         "name":"finalConfiguration",
         "inputs": {
            "documentType": "SSMDocument",
            "documentPath": "yourCustomFinalDocument",
            "finallyStep": true
         }
      }
   ]
}
```

------

## `aws:applications`


Install, repair, or uninstall applications on an EC2 instance. This plugin only runs on Windows Server operating systems.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:applications plugin
parameters:
  source:
    description: "(Required) Source of msi."
    type: String
mainSteps:
- action: aws:applications
  name: example
  inputs:
    action: Install
    source: "{{ source }}"
```

------
#### [ JSON ]

```
{
  "schemaVersion":"2.2",
  "description":"aws:applications",
  "parameters":{
    "source":{
    "description":"(Required) Source of msi.",
    "type":"String"
    }
  },
  "mainSteps":[
    {
      "action":"aws:applications",
      "name":"example",
      "inputs":{
        "action":"Install",
        "source":"{{ source }}"
      }
    }
  ]
}
```

------

#### Schema 1.2


------
#### [ YAML ]

```
---
runtimeConfig:
  aws:applications:
    properties:
    - id: 0.aws:applications
      action: "{{ action }}"
      parameters: "{{ parameters }}"
      source: "{{ source }}"
      sourceHash: "{{ sourceHash }}"
```

------
#### [ JSON ]

```
{
   "runtimeConfig":{
      "aws:applications":{
         "properties":[
            {
               "id":"0.aws:applications",
               "action":"{{ action }}",
               "parameters":"{{ parameters }}",
               "source":"{{ source }}",
               "sourceHash":"{{ sourceHash }}"
            }
         ]
      }
   }
}
```

------

### Properties


**action**  
The action to take.  
Type: Enum  
Valid values: `Install` | `Repair` | `Uninstall`  
Required: Yes

**parameters**  
The parameters for the installer.  
Type: String  
Required: No

**source**  
The URL of the `.msi` file for the application.  
Type: String  
Required: Yes

**sourceHash**  
The SHA256 hash of the `.msi` file.  
Type: String  
Required: No

## `aws:cloudWatch`


Export data from Windows Server to Amazon CloudWatch or Amazon CloudWatch Logs and monitor the data using CloudWatch metrics. This plugin only runs on Windows Server operating systems. For more information about configuring CloudWatch integration with Amazon Elastic Compute Cloud (Amazon EC2), see [Collecting metrics, logs, and traces with the CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html) in the *Amazon CloudWatch User Guide*.

**Important**  
The unified CloudWatch agent has replaced SSM Agent as the tool for sending log data to Amazon CloudWatch Logs. The SSM Agent `aws:cloudWatch` plugin is not supported. We recommend using only the unified CloudWatch agent for your log collection processes. For more information, see the following topics:  
[Sending node logs to unified CloudWatch Logs (CloudWatch agent)](monitoring-cloudwatch-agent.md)
[Migrate Windows Server node log collection to the CloudWatch agent](monitoring-cloudwatch-agent.md#monitoring-cloudwatch-agent-migrate)
[Collecting metrics, logs, and traces with the CloudWatch agent](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html) in the *Amazon CloudWatch User Guide*.

You can export and monitor the following data types:

**ApplicationEventLog**  
Sends application event log data to CloudWatch Logs.

**CustomLogs**  
Sends any text-based log file to Amazon CloudWatch Logs. The CloudWatch plugin creates a fingerprint for log files. The system then associates a data offset with each fingerprint. The plugin uploads files when there are changes, records the offset, and associates the offset with a fingerprint. This method is used to avoid a situation where a user turns on the plugin, associates the service with a directory that contains a large number of files, and the system uploads all of the files.  
Be aware that if your application truncates or attempts to clean logs during polling, any logs specified for `LogDirectoryPath` can lose entries. If, for example, you want to limit log file size, create a new log file when that limit is reached, and then continue writing data to the new file.

**ETW**  
Sends Event Tracing for Windows (ETW) data to CloudWatch Logs.

**IIS**  
Sends IIS log data to CloudWatch Logs.

**PerformanceCounter**  
Sends Windows performance counters to CloudWatch. You can select different categories to upload to CloudWatch as metrics. For each performance counter that you want to upload, create a **PerformanceCounter** section with a unique ID (for example, "PerformanceCounter2", "PerformanceCounter3", and so on) and configure its properties.  
If the AWS Systems Manager SSM Agent or the CloudWatch plugin is stopped, performance counter data isn't logged in CloudWatch. This behavior is different than custom logs or Windows Event logs. Custom logs and Windows Event logs preserve performance counter data and upload it to CloudWatch after SSM Agent or the CloudWatch plugin is available.

**SecurityEventLog**  
Sends security event log data to CloudWatch Logs.

**SystemEventLog**  
Sends system event log data to CloudWatch Logs.

You can define the following destinations for the data:

**CloudWatch**  
The destination where your performance counter metric data is sent. You can add more sections with unique IDs (for example, "CloudWatch2", "CloudWatch3", and so on), and specify a different Region for each new ID to send the same data to different locations.

**CloudWatchLogs**  
The destination where your log data is sent. You can add more sections with unique IDs (for example, "CloudWatchLogs2", "CloudWatchLogs3", and so on), and specify a different Region for each new ID to send the same data to different locations.

### Syntax


```
"runtimeConfig":{
        "aws:cloudWatch":{
            "settings":{
                "startType":"{{ status }}"
            },
            "properties":"{{ properties }}"
        }
    }
```

### Settings and properties


**AccessKey**  
Your access key ID. This property is required unless you launched your instance using an IAM role. This property can't be used with SSM.  
Type: String  
Required: No

**CategoryName**  
The performance counter category from Performance Monitor.  
Type: String  
Required: Yes

**CounterName**  
The name of the performance counter from Performance Monitor.  
Type: String  
Required: Yes

**CultureName**  
The locale where the timestamp is logged. If **CultureName** is blank, it defaults to the same locale used by your Windows Server instance.  
Type: String  
Valid values: For a list of supported values, see [National Language Support (NLS)](https://msdn.microsoft.com/en-us/library/cc233982.aspx) on the Microsoft website. The **div**, **div-MV**, **hu**, and **hu-HU** values aren't supported.  
Required: No

**DimensionName**  
A dimension for your Amazon CloudWatch metric. If you specify `DimensionName`, you must specify `DimensionValue`. These parameters provide another view when listing metrics. You can use the same dimension for multiple metrics so that you can view all metrics belonging to a specific dimension.  
Type: String  
Required: No

**DimensionValue**  
A dimension value for your Amazon CloudWatch metric.  
Type: String  
Required: No

**Encoding**  
The file encoding to use (for example, UTF-8). Use the encoding name, not the display name.  
Type: String  
Valid values: For a list of supported values, see [Encoding Class](https://learn.microsoft.com/en-us/dotnet/api/system.text.encoding?view=net-7.0) in the Microsoft Learn Library.  
Required: Yes

**Filter**  
The prefix of log names. Leave this parameter blank to monitor all files.  
Type: String  
Valid values: For a list of supported values, see the [FileSystemWatcherFilter Property](http://msdn.microsoft.com/en-us/library/system.io.filesystemwatcher.filter.aspx) in the MSDN Library.  
Required: No

**Flows**  
Each data type to upload, along with the destination for the data (CloudWatch or CloudWatch Logs). For example, to send a performance counter defined under `"Id": "PerformanceCounter"` to the CloudWatch destination defined under `"Id": "CloudWatch"`, enter **"PerformanceCounter,CloudWatch"**. Similarly, to send the custom log, ETW log, and system log to the CloudWatch Logs destination defined under `"Id": "CloudWatchLogs"`, enter **"(CustomLogs, ETW, SystemEventLog),CloudWatchLogs"**. In addition, you can send the same performance counter or log file to more than one destination. For example, to send the application log to two different destinations that you defined under `"Id": "CloudWatchLogs"` and `"Id": "CloudWatchLogs2"`, enter **"ApplicationEventLog,(CloudWatchLogs, CloudWatchLogs2)"**.  
Type: String  
Valid values (source): `ApplicationEventLog` | `CustomLogs` | `ETW` | `PerformanceCounter` | `SystemEventLog` | `SecurityEventLog`   
Valid values (destination): `CloudWatch` | `CloudWatchLogs` | `CloudWatch`*n* | `CloudWatchLogs`*n*   
Required: Yes
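Put together, the **Flows** section of the configuration might look like the following sketch, which reuses the source and destination IDs from the examples above; the exact structure of the surrounding configuration file may vary:

```
"Flows": {
    "Flows": [
        "PerformanceCounter,CloudWatch",
        "(CustomLogs, ETW, SystemEventLog),CloudWatchLogs",
        "ApplicationEventLog,(CloudWatchLogs, CloudWatchLogs2)"
    ]
}
```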

**FullName**  
The full name of the component.  
Type: String  
Required: Yes

**Id**  
Identifies the data source or destination. This identifier must be unique within the configuration file.  
Type: String  
Required: Yes

**InstanceName**  
The name of the performance counter instance. Don't use an asterisk (\*) to indicate all instances because each performance counter component only supports one metric. You can, however, use **\_Total**.  
Type: String  
Required: Yes

**Levels**  
The types of messages to send to Amazon CloudWatch.  
Type: String  
Valid values:   
+ **1** - Only error messages uploaded.
+ **2** - Only warning messages uploaded.
+ **4** - Only information messages uploaded.
You can add values together to include more than one type of message. For example, **3** means that error messages (**1**) and warning messages (**2**) are included. A value of **7** means that error messages (**1**), warning messages (**2**), and informational messages (**4**) are included.  
Required: Yes  
For Windows security logs, set **Levels** to **7**.
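The level values combine as a bit mask, so each combination of message types has a unique sum. A quick illustrative check:

```python
# Bit-mask values from the Levels property above.
ERROR, WARNING, INFORMATION = 1, 2, 4

# Errors plus warnings.
print(ERROR | WARNING)                 # 3

# All three message types (recommended for Windows security logs).
print(ERROR | WARNING | INFORMATION)   # 7
```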

**LineCount**  
The number of lines in the header to identify the log file. For example, IIS log files have virtually identical headers. You could enter **3**, which would read the first three lines of the log file's header to identify it. In IIS log files, the third line is the date and time stamp, which is different between log files.  
Type: Integer  
Required: No

**LogDirectoryPath**  
For CustomLogs, the path where logs are stored on your EC2 instance. For IIS logs, the folder where IIS logs are stored for an individual site (for example, `C:\inetpub\logs\LogFiles\W3SVC`*n*). For IIS logs, only W3C log format is supported. IIS, NCSA, and Custom formats aren't supported.   
Type: String  
Required: Yes

**LogGroup**  
The name for your log group. This name is displayed on the **Log Groups** screen in the CloudWatch console.  
Type: String  
Required: Yes

**LogName**  
The name of the log file.  

1. To find the name of the log, in Event Viewer, in the navigation pane, select **Applications and Services Logs**.

1. In the list of logs, right-click the log you want to upload (for example, `Microsoft` > `Windows` > `Backup` > `Operational`), and then select **Create Custom View**.

1. In the **Create Custom View** dialog box, select the **XML** tab. The **LogName** is in the `<Select Path=>` tag (for example, `Microsoft-Windows-Backup`). Copy this text into the **LogName** parameter.
Type: String  
Valid values: `Application` | `Security` | `System` | `Microsoft-Windows-WinINet/Analytic`  
Required: Yes

**LogStream**  
The destination log stream. If you use `{instance_id}`, the default, the instance ID of this instance is used as the log stream name.  
Type: String  
Valid values: `{instance_id}` | `{hostname}` | `{ip_address}` | *<log_stream_name>*  
If you enter a log stream name that doesn't already exist, CloudWatch Logs automatically creates it for you. You can use a literal string, the predefined variables (`{instance_id}`, `{hostname}`, `{ip_address}`), or a combination of all three to define a log stream name.  
The log stream name specified in this parameter is displayed on the **Log Groups > Streams for *<YourLogStream>*** screen in the CloudWatch console.  
Required: Yes
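
As a sketch, the following destination definition combines two predefined variables to produce a per-instance log stream name. The `Id`, log group name, and Region are illustrative, and the `FullName` shown is the standard CloudWatch Logs output component name; the `AccessKey` and `SecretKey` parameters are omitted on the assumption that the instance was launched with an IAM role.

```
{
    "Id": "CloudWatchLogs",
    "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
    "Parameters": {
        "Region": "us-east-2",
        "LogGroup": "My-Log-Group",
        "LogStream": "{instance_id}-{hostname}"
    }
}
```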

**MetricName**  
The CloudWatch metric that you want performance data to be included under.  
Don't use special characters in the name. If you do, the metric and associated alarms might not work.
Type: String  
Required: Yes

**NameSpace**  
The metric namespace where you want performance counter data to be written.  
Type: String  
Required: Yes

**PollInterval**  
How many seconds must elapse before new performance counter and log data is uploaded.  
Type: Integer  
Valid values: Set this to 5 or more seconds. Fifteen seconds (00:00:15) is recommended.  
Required: Yes

**Region**  
The AWS Region where you want to send log data. Although you can send performance counters to a different Region from where you send your log data, we recommend that you set this parameter to the same Region where your instance is running.  
Type: String  
Valid values: Region IDs of the AWS Regions supported by both Systems Manager and CloudWatch Logs, such as `us-east-2`, `eu-west-1`, and `ap-southeast-1`. For lists of AWS Regions supported by each service, see [Amazon CloudWatch Logs Service Endpoints](https://docs.aws.amazon.com/general/latest/gr/cwl_region.html#cwl_region) and [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.   
Required: Yes

**SecretKey**  
Your secret access key. This property is required unless you launched your instance using an IAM role.  
Type: String  
Required: No

**startType**  
Turn on or turn off CloudWatch on the instance.  
Type: String  
Valid values: `Enabled` | `Disabled`  
Required: Yes

**TimestampFormat**  
The timestamp format you want to use. For a list of supported values, see [Custom Date and Time Format Strings](http://msdn.microsoft.com/en-us/library/8kb3ddd4.aspx) in the MSDN Library.  
Type: String  
Required: Yes

**TimeZoneKind**  
Provides time zone information when no time zone information is included in your log’s timestamp. If this parameter is left blank and if your timestamp doesn’t include time zone information, CloudWatch Logs defaults to the local time zone. This parameter is ignored if your timestamp already contains time zone information.  
Type: String  
Valid values: `Local` | `UTC`  
Required: No

**Unit**  
The appropriate unit of measure for the metric.  
Type: String  
Valid values: Seconds | Microseconds | Milliseconds | Bytes | Kilobytes | Megabytes | Gigabytes | Terabytes | Bits | Kilobits | Megabits | Gigabits | Terabits | Percent | Count | Bytes/Second | Kilobytes/Second | Megabytes/Second | Gigabytes/Second | Terabytes/Second | Bits/Second | Kilobits/Second | Megabits/Second | Gigabits/Second | Terabits/Second | Count/Second | None  
Required: Yes

## `aws:configureDocker`


(Schema version 2.0 or later) Configure an instance to work with containers and Docker. This plugin is supported on most Linux variants and Windows Server operating systems.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:configureDocker
parameters:
  action:
    description: "(Required) The type of action to perform."
    type: String
    default: Install
    allowedValues:
    - Install
    - Uninstall
mainSteps:
- action: aws:configureDocker
  name: configureDocker
  inputs:
    action: "{{ action }}"
```

------
#### [ JSON ]

```
{
  "schemaVersion": "2.2",
  "description": "aws:configureDocker plugin",
  "parameters": {
    "action": {
      "description": "(Required) The type of action to perform.",
      "type": "String",
      "default": "Install",
      "allowedValues": [
        "Install",
        "Uninstall"
      ]
    }
  },
  "mainSteps": [
    {
      "action": "aws:configureDocker",
      "name": "configureDocker",
      "inputs": {
        "action": "{{ action }}"
      }
    }
  ]
}
```

------

### Inputs


**action**  
The type of action to perform.  
Type: Enum  
Valid values: `Install` | `Uninstall`  
Required: Yes

## `aws:configurePackage`


(Schema version 2.0 or later) Install or uninstall an AWS Systems Manager Distributor package. You can install the latest version, default version, or a version of the package you specify. Packages provided by AWS are also supported. This plugin runs on Windows Server and Linux operating systems, but not all the available packages are supported on Linux operating systems.

Available AWS packages for Windows Server include the following: `AWSPVDriver`, `AWSNVMe`, `AwsEnaNetworkDriver`, `AwsVssComponents`, `AmazonCloudWatchAgent`, `CodeDeployAgent`, and `AWSSupport-EC2Rescue`.

Available AWS packages for Linux operating systems include the following: `AmazonCloudWatchAgent`, `CodeDeployAgent`, and `AWSSupport-EC2Rescue`.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:configurePackage
parameters:
  name:
    description: "(Required) The name of the AWS package to install or uninstall."
    type: String
  action:
    description: "(Required) The type of action to perform."
    type: String
    default: Install
    allowedValues:
    - Install
    - Uninstall
  ssmParameter:
    description: "(Required) Argument stored in Parameter Store."
    type: String
    default: "{{ ssm:parameter_store_arg }}"
mainSteps:
- action: aws:configurePackage
  name: configurePackage
  inputs:
    name: "{{ name }}"
    action: "{{ action }}"
    additionalArguments: 
      "{\"SSM_parameter_store_arg\": \"{{ ssmParameter }}\", \"SSM_custom_arg\": \"myValue\"}"
```

------
#### [ JSON ]

```
{
   "schemaVersion": "2.2",
   "description": "aws:configurePackage",
   "parameters": {
      "name": {
         "description": "(Required) The name of the AWS package to install or uninstall.",
         "type": "String"
      },
      "action": {
         "description": "(Required) The type of action to perform.",
         "type": "String",
         "default": "Install",
         "allowedValues": [
            "Install",
            "Uninstall"
         ]
      },
      "ssmParameter": {
         "description": "(Required) Argument stored in Parameter Store.",
         "type": "String",
         "default": "{{ ssm:parameter_store_arg }}"
      }
   },
   "mainSteps": [
      {
         "action": "aws:configurePackage",
         "name": "configurePackage",
         "inputs": {
            "name": "{{ name }}",
            "action": "{{ action }}",
            "additionalArguments": "{\"SSM_parameter_store_arg\": \"{{ ssmParameter }}\", \"SSM_custom_arg\": \"myValue\"}"
         }
      }
   ]
}
```

------

### Inputs


**name**  
The name of the AWS package to install or uninstall. Available packages include the following: `AWSPVDriver`, `AwsEnaNetworkDriver`, `AwsVssComponents`, and `AmazonCloudWatchAgent`.  
Type: String  
Required: Yes

**action**  
Install or uninstall a package.  
Type: Enum  
Valid values: `Install` | `Uninstall`  
Required: Yes

**installationType**  
The type of installation to perform. If you specify `Uninstall and reinstall`, the package is completely uninstalled, and then reinstalled. The application is unavailable until the reinstallation is complete. If you specify `In-place update`, only new or changed files are added to the existing installation according to instructions you provide in an update script. The application remains available throughout the update process. The `In-place update` option isn't supported for AWS-published packages. `Uninstall and reinstall` is the default value.  
Type: Enum  
Valid values: `Uninstall and reinstall` | `In-place update`  
Required: No

**additionalArguments**  
A JSON string of the additional parameters to provide to your install, uninstall, or update scripts. Each parameter must be prefixed with `SSM_`. You can reference a Parameter Store parameter in your additional arguments by using the convention `{{ssm:parameter-name}}`. To use the additional parameter in your install, uninstall, or update script, you must reference the parameter as an environment variable using the syntax appropriate for the operating system. For example, in PowerShell, you reference the `SSM_arg` argument as `$Env:SSM_arg`. There is no limit to the number of arguments you define, but the additional argument input has a 4096 character limit. This limit includes all of the keys and values you define.  
Type: StringMap  
Required: No
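
A minimal sketch of an `aws:configurePackage` input block that passes additional arguments; the package name, argument names, and the Parameter Store parameter name are hypothetical:

```
"inputs": {
  "name": "MyCustomPackage",
  "action": "Install",
  "additionalArguments": "{\"SSM_environment\": \"Production\", \"SSM_install_path\": \"{{ssm:install-path-parameter}}\"}"
}
```

The package's install script could then read these values as environment variables, for example `$Env:SSM_environment` in PowerShell.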

**version**  
A specific version of the package to install or uninstall. If installing, the system installs the latest published version, by default. If uninstalling, the system uninstalls the currently installed version, by default. If no installed version is found, the latest published version is downloaded, and the uninstall action is run.  
Type: String  
Required: No

## `aws:domainJoin`


Join an EC2 instance to a domain. This plugin runs on Linux and Windows Server operating systems. This plugin changes the hostname for Linux instances to the format EC2AMAZ-*XXXXXXX*. For more information about joining EC2 instances, see [Join an EC2 Instance to Your AWS Managed Microsoft AD Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_join_instance.html) in the *AWS Directory Service Administration Guide*.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:domainJoin
parameters:
  directoryId:
    description: "(Required) The ID of the directory."
    type: String
  directoryName:
    description: "(Required) The name of the domain."
    type: String
  directoryOU:
    description: "(Optional) The organizational unit to assign the computer object to."
    type: String
  dnsIpAddresses:
    description: "(Required) The IP addresses of the DNS servers for your directory."
    type: StringList
  hostname:
    description: "(Optional) The hostname you want to assign to the node."
    type: String
mainSteps:
- action: aws:domainJoin
  name: domainJoin
  inputs:
    directoryId: "{{ directoryId }}"
    directoryName: "{{ directoryName }}"
    directoryOU: "{{ directoryOU }}"
    dnsIpAddresses: "{{ dnsIpAddresses }}"
    hostname: "{{ hostname }}"
```

------
#### [ JSON ]

```
{
  "schemaVersion": "2.2",
  "description": "aws:domainJoin",
  "parameters": {
    "directoryId": {
      "description": "(Required) The ID of the directory.",
      "type": "String"
    },
    "directoryName": {
      "description": "(Required) The name of the domain.",
      "type": "String"
    },
    "directoryOU": {
        "description": "(Optional) The organizational unit to assign the computer object to.",
        "type": "String"
      },
    "dnsIpAddresses": {
      "description": "(Required) The IP addresses of the DNS servers for your directory.",
      "type": "StringList"
    },
    "hostname": {
        "description": "(Optional) The hostname you want to assign to the node.",
        "type": "String"
      }
  },
  "mainSteps": [
    {
      "action": "aws:domainJoin",
      "name": "domainJoin",
      "inputs": {
        "directoryId": "{{ directoryId }}",
        "directoryName": "{{ directoryName }}",
        "directoryOU":"{{ directoryOU }}",
        "dnsIpAddresses":"{{ dnsIpAddresses }}",
        "hostname":"{{ hostname }}"
      }
    }
  ]
}
```

------

#### Schema 1.2


------
#### [ YAML ]

```
---
runtimeConfig:
  aws:domainJoin:
    properties:
      directoryId: "{{ directoryId }}"
      directoryName: "{{ directoryName }}"
      directoryOU: "{{ directoryOU }}"
      dnsIpAddresses: "{{ dnsIpAddresses }}"
```

------
#### [ JSON ]

```
{
   "runtimeConfig":{
      "aws:domainJoin":{
         "properties":{
            "directoryId":"{{ directoryId }}",
            "directoryName":"{{ directoryName }}",
            "directoryOU":"{{ directoryOU }}",
            "dnsIpAddresses":"{{ dnsIpAddresses }}"
         }
      }
   }
}
```

------

### Properties


**directoryId**  
The ID of the directory.  
Type: String  
Required: Yes  
Example: "directoryId": "d-1234567890"

**directoryName**  
The name of the domain.  
Type: String  
Required: Yes  
Example: "directoryName": "example.com"

**directoryOU**  
The organizational unit (OU).  
Type: String  
Required: No  
Example: "directoryOU": "OU=test,DC=example,DC=com"

**dnsIpAddresses**  
The IP addresses of the DNS servers.  
Type: StringList  
Required: Yes  
Example: "dnsIpAddresses": ["198.51.100.1","198.51.100.2"]

**hostname**  
The hostname you want to assign to the node. If you don't provide a value, the hostname of Windows Server instances isn't changed, and Linux instances use the default naming pattern. If you do provide a value, Windows Server instances use the exact value provided, while for Linux instances it serves as a prefix (unless `keepHostName` is set to "true").  
Type: String  
Required: No

**keepHostName**  
Determines whether the hostname is changed for Linux instances when joined to the domain. This is a Linux-only parameter. By default (without inputs to `hostname`, `hostnameNumAppendDigits`, and with `keepHostName` as "false"), Linux hosts are renamed to the pattern EC2AMAZ-*XXXXXXX*. When set to "true", it keeps the original hostname and ignores inputs to `hostname` and `hostnameNumAppendDigits`.  
Type: Boolean  
Required: No

**hostnameNumAppendDigits**  
Defines the number of random numeric digits to append after the hostname value. This is a Linux-only parameter and is used together with the `hostname` parameter. It is ignored if `hostname` is not provided.  
Type: String  
Allowed values: 1 to 5  
Required: No
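
Pulling these parameters together, the following is a sketch of `aws:domainJoin` inputs that uses the hostname options; the prefix `web` is hypothetical:

```
"inputs": {
  "directoryId": "{{ directoryId }}",
  "directoryName": "{{ directoryName }}",
  "dnsIpAddresses": "{{ dnsIpAddresses }}",
  "hostname": "web",
  "hostnameNumAppendDigits": "5"
}
```

On a Linux instance, this joins the domain with a hostname of `web` followed by five random digits; on Windows Server, the node is named exactly `web`, and the Linux-only `hostnameNumAppendDigits` parameter has no effect.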

### Examples


For examples, see [Join an Amazon EC2 Instance to your AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ec2-join-aws-domain.html) in the *AWS Directory Service Administration Guide*.

## `aws:downloadContent`


(Schema version 2.0 or later) Download SSM documents and scripts from remote locations. GitHub Enterprise repositories are not supported. This plugin is supported on Linux and Windows Server operating systems.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:downloadContent
parameters:
  sourceType:
    description: "(Required) The download source."
    type: String
  sourceInfo:
    description: "(Required) The information required to retrieve the content from
      the required source."
    type: StringMap
mainSteps:
- action: aws:downloadContent
  name: downloadContent
  inputs:
    sourceType: "{{ sourceType }}"
    sourceInfo: "{{ sourceInfo }}"
```

------
#### [ JSON ]

```
{
  "schemaVersion": "2.2",
  "description": "aws:downloadContent",
  "parameters": {
    "sourceType": {
    "description": "(Required) The download source.",
    "type": "String"
  },
  "sourceInfo": {
    "description": "(Required) The information required to retrieve the content from the required source.",
    "type": "StringMap"
    }
  },
  "mainSteps": [
    {
      "action": "aws:downloadContent",
      "name": "downloadContent",
      "inputs": {
        "sourceType":"{{ sourceType }}",
        "sourceInfo":"{{ sourceInfo }}"
      }
    }
  ]
}
```

------

### Inputs


**sourceType**  
The download source. Systems Manager supports the following source types for downloading scripts and SSM documents: `GitHub`, `Git`, `HTTP`, `S3`, and `SSMDocument`.  
Type: String  
Required: Yes

**sourceInfo**  
The information required to retrieve the content from the required source.  
Type: StringMap  
Required: Yes  
 **For sourceType `GitHub`, specify the following:**   
+ owner: The repository owner.
+ repository: The name of the repository.
+ path: The path to the file or directory you want to download.
+ getOptions: Extra options to retrieve content from a branch other than master or from a specific commit in the repository. getOptions can be omitted if you're using the latest commit in the master branch. If your repository was created after October 1, 2020, the default branch might be named main instead of master. In this case, you need to specify values for the getOptions parameter.

  This parameter uses the following format:
  + branch:refs/heads/*branch_name*

    The default is `master`.

    To specify a non-default branch use the following format:

    branch:refs/heads/*branch_name*
  + commitID:*commitID*

    The default is `head`.

    To use the version of your SSM document in a commit other than the latest, specify the full commit ID. For example:

    ```
    "getOptions": "commitID:bbc1ddb94...b76d3bEXAMPLE",
    ```
+ tokenInfo: The Systems Manager parameter (a SecureString parameter) where you store your GitHub access token information, in the format `{{ssm-secure:secure-string-token-name}}`.
**Note**  
This `tokenInfo` field is the only SSM document plugin field that supports a SecureString parameter. SecureString parameters aren't supported for any other fields, nor for any other SSM document plugins.

```
{
    "owner":"TestUser",
    "repository":"GitHubTest",
    "path":"scripts/python/test-script",
    "getOptions":"branch:master",
    "tokenInfo":"{{ssm-secure:secure-string-token}}"
}
```
 **For sourceType `Git`, you must specify the following:**   
+ repository

  The Git repository URL to the file or directory you want to download.

  Type: String
Additionally, you can specify the following optional parameters:  
+ getOptions

  Extra options to retrieve content from a branch other than master or from a specific commit in the repository. getOptions can be omitted if you're using the latest commit in the master branch.

  Type: String

  This parameter uses the following format:
  + branch:refs/heads/*branch_name*

    The default is `master`.

    `"branch"` is required only if your SSM document is stored in a branch other than `master`. For example:

    ```
    "getOptions": "branch:refs/heads/main"
    ```
  + commitID:*commitID*

    The default is `head`.

    To use the version of your SSM document in a commit other than the latest, specify the full commit ID. For example:

    ```
    "getOptions": "commitID:bbc1ddb94...b76d3bEXAMPLE",
    ```
+ privateSSHKey

  The SSH key to use when connecting to the `repository` you specify. You can use the following format to reference a `SecureString` parameter for the value of your SSH key: `{{ssm-secure:your-secure-string-parameter}}`.

  Type: String
+ skipHostKeyChecking

  Determines the value of the StrictHostKeyChecking option when connecting to the `repository` you specify. The default value is `false`.

  Type: Boolean
+ username

  The username to use when connecting to the `repository` you specify using HTTP. You can use the following format to reference a `SecureString` parameter for the value of your username: `{{ssm-secure:your-secure-string-parameter}}`.

  Type: String
+ password

  The password to use when connecting to the `repository` you specify using HTTP. You can use the following format to reference a `SecureString` parameter for the value of your password: `{{ssm-secure:your-secure-string-parameter}}`.

  Type: String
 **For sourceType `HTTP`, you must specify the following:**   
+ url

  The URL to the file or directory you want to download.

  Type: String
Additionally, you can specify the following optional parameters:  
+ allowInsecureDownload

  Determines whether a download can be performed over a connection that isn't encrypted with Secure Socket Layer (SSL) or Transport Layer Security (TLS). The default value is `false`. We don't recommend performing downloads without encryption. If you choose to do so, you assume all associated risks. Security is a shared responsibility between AWS and you. This is described as the shared responsibility model. To learn more, see the [shared responsibility model](https://aws.amazon.com/compliance/shared-responsibility-model/).

  Type: Boolean
+ authMethod

  Determines whether a username and password are used for authentication when connecting to the `url` you specify. If you specify `Basic` or `Digest`, you must provide values for the `username` and `password` parameters. To use the `Digest` method, SSM Agent version 3.0.1181.0 or later must be installed on your instance. The `Digest` method supports MD5 and SHA256 encryption.

  Type: String

  Valid values: `None` | `Basic` | `Digest`
+ username

  The username to use when connecting to the `url` you specify using `Basic` authentication. You can use the following format to reference a `SecureString` parameter for the value of your username: `{{ssm-secure:your-secure-string-parameter}}`.

  Type: String
+ password

  The password to use when connecting to the `url` you specify using `Basic` authentication. You can use the following format to reference a `SecureString` parameter for the value of your password: `{{ssm-secure:your-secure-string-parameter}}`.

  Type: String
 **For sourceType `S3`, specify the following:**   
+ path: The URL to the file or directory you want to download from Amazon S3.
When downloading a file from an S3 bucket, the .etag files are generated in the download directory.

```
{
    "path": "https://s3.amazonaws.com/amzn-s3-demo-bucket/powershell/helloPowershell.ps1" 
}
```
 **For sourceType `SSMDocument`, specify *one* of the following:**   
+ name: The name and version of the document in the following format: `name:version`. Version is optional. 

  ```
  {
      "name": "Example-RunPowerShellScript:3" 
  }
  ```
+ name: The ARN for the document in the following format: `arn:aws:ssm:region:account_id:document/document_name`

  ```
  {
     "name":"arn:aws:ssm:us-east-2:3344556677:document/MySharedDoc"
  }
  ```

**destinationPath**  
An optional local path on the instance where you want to download the file. If you don't specify a path, the content is downloaded to a path relative to your command ID.  
Type: String  
Required: No
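
Putting the inputs together, the following is a sketch of a complete `aws:downloadContent` step that downloads a script from Amazon S3 to a specific path. The bucket path reuses the example above, and the destination directory is illustrative; note that in JSON documents `sourceInfo` is provided as an escaped JSON string.

```
{
  "action": "aws:downloadContent",
  "name": "downloadScript",
  "inputs": {
    "sourceType": "S3",
    "sourceInfo": "{\"path\": \"https://s3.amazonaws.com/amzn-s3-demo-bucket/powershell/helloPowershell.ps1\"}",
    "destinationPath": "C:\\Temp\\scripts"
  }
}
```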

## `aws:psModule`


Install PowerShell modules on an Amazon EC2 instance. This plugin only runs on Windows Server operating systems.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:psModule
parameters:
  source:
    description: "(Required) The URL or local path on the instance to the application
      .zip file."
    type: String
mainSteps:
- action: aws:psModule
  name: psModule
  inputs:
    source: "{{ source }}"
```

------
#### [ JSON ]

```
{
  "schemaVersion": "2.2",
  "description": "aws:psModule",
  "parameters": {
    "source": {
      "description": "(Required) The URL or local path on the instance to the application .zip file.",
      "type": "String"
    }
  },
  "mainSteps": [
    {
      "action": "aws:psModule",
      "name": "psModule",
      "inputs": {
        "source": "{{ source }}"
      }
    }
  ]
}
```

------

#### Schema 1.2


------
#### [ YAML ]

```
---
runtimeConfig:
  aws:psModule:
    properties:
    - runCommand: "{{ commands }}"
      source: "{{ source }}"
      sourceHash: "{{ sourceHash }}"
      workingDirectory: "{{ workingDirectory }}"
      timeoutSeconds: "{{ executionTimeout }}"
```

------
#### [ JSON ]

```
{
   "runtimeConfig":{
      "aws:psModule":{
         "properties":[
            {
               "runCommand":"{{ commands }}",
               "source":"{{ source }}",
               "sourceHash":"{{ sourceHash }}",
               "workingDirectory":"{{ workingDirectory }}",
               "timeoutSeconds":"{{ executionTimeout }}"
            }
         ]
      }
   }
}
```

------

### Properties


**runCommand**  
The PowerShell command to run after the module is installed.  
Type: StringList  
Required: No

**source**  
The URL or local path on the instance to the application `.zip` file.  
Type: String  
Required: Yes

**sourceHash**  
The SHA256 hash of the `.zip` file.  
Type: String  
Required: No

**timeoutSeconds**  
The time in seconds for a command to be completed before it's considered to have failed.  
Type: String  
Required: No

**workingDirectory**  
The path to the working directory on your instance.  
Type: String  
Required: No

## `aws:refreshAssociation`


(Schema version 2.0 or later) Refresh (force apply) an association on demand. This action will change the system state based on what is defined in the selected association or all associations bound to the targets. This plugin runs on Linux and Microsoft Windows Server operating systems.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:refreshAssociation
parameters:
  associationIds:
    description: "(Optional) List of association IDs. If empty, all associations bound
      to the specified target are applied."
    type: StringList
mainSteps:
- action: aws:refreshAssociation
  name: refreshAssociation
  inputs:
    associationIds:
    - "{{ associationIds }}"
```

------
#### [ JSON ]

```
{
  "schemaVersion": "2.2",
  "description": "aws:refreshAssociation",
  "parameters": {
    "associationIds": {
      "description": "(Optional) List of association IDs. If empty, all associations bound to the specified target are applied.",
      "type": "StringList"
    }
  },
  "mainSteps": [
    {
      "action": "aws:refreshAssociation",
      "name": "refreshAssociation",
      "inputs": {
        "associationIds": [
          "{{ associationIds }}"
        ]
      }
    }
  ]
}
```

------

### Inputs


**associationIds**  
List of association IDs. If empty, all associations bound to the specified target are applied.  
Type: StringList  
Required: No

## `aws:runDockerAction`


(Schema version 2.0 or later) Run Docker actions on containers. This plugin runs on Linux and Microsoft Windows Server operating systems.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
mainSteps:
- action: aws:runDockerAction
  name: RunDockerAction
  inputs:
    action: "{{ action }}"
    container: "{{ container }}"
    image: "{{ image }}"
    memory: "{{ memory }}"
    cpuShares: "{{ cpuShares }}"
    volume: "{{ volume }}"
    cmd: "{{ cmd }}"
    env: "{{ env }}"
    user: "{{ user }}"
    publish: "{{ publish }}"
    workingDirectory: "{{ workingDirectory }}"
    timeoutSeconds: "{{ timeoutSeconds }}"
```

------
#### [ JSON ]

```
{
   "mainSteps":[
      {
         "action":"aws:runDockerAction",
         "name":"RunDockerAction",
         "inputs":{
            "action":"{{ action }}",
            "container":"{{ container }}",
            "image":"{{ image }}",
            "memory":"{{ memory }}",
            "cpuShares":"{{ cpuShares }}",
            "volume":"{{ volume }}",
            "cmd":"{{ cmd }}",
            "env":"{{ env }}",
            "user":"{{ user }}",
            "publish":"{{ publish }}",
            "workingDirectory": "{{ workingDirectory }}",
            "timeoutSeconds": "{{ timeoutSeconds }}"
         }
      }
   ]
}
```

------

### Inputs


**action**  
The type of action to perform.  
Type: String  
Required: Yes

**container**  
The Docker container ID.  
Type: String  
Required: No

**image**  
The Docker image name.  
Type: String  
Required: No

**cmd**  
The container command.  
Type: String  
Required: No

**memory**  
The container memory limit.  
Type: String  
Required: No

**cpuShares**  
The container CPU shares (relative weight).  
Type: String  
Required: No

**volume**  
The container volume mounts.  
Type: StringList  
Required: No

**env**  
The container environment variables.  
Type: String  
Required: No

**user**  
The container user name.  
Type: String  
Required: No

**publish**  
The container published ports.  
Type: String  
Required: No

**workingDirectory**  
The path to the working directory on your managed node.  
Type: String  
Required: No

**timeoutSeconds**  
The time in seconds for a command to be completed before it's considered to have failed.  
Type: String  
Required: No

## `aws:runDocument`


(Schema version 2.0 or later) Runs SSM documents stored in Systems Manager or on a local share. You can use this plugin with the [`aws:downloadContent`](#aws-downloadContent) plugin to download an SSM document from a remote location to a local share, and then run it. This plugin is supported on Linux and Windows Server operating systems. This plugin doesn't support running the `AWS-UpdateSSMAgent` document or any document that uses the `aws:updateSsmAgent` plugin.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:runDocument
parameters:
  documentType:
    description: "(Required) The document type to run."
    type: String
    allowedValues:
    - LocalPath
    - SSMDocument
mainSteps:
- action: aws:runDocument
  name: runDocument
  inputs:
    documentType: "{{ documentType }}"
```

------
#### [ JSON ]

```
{
  "schemaVersion": "2.2",
  "description": "aws:runDocument",
  "parameters": {
    "documentType": {
      "description": "(Required) The document type to run.",
      "type": "String",
      "allowedValues": [
        "LocalPath",
        "SSMDocument"
      ]
    }
  },
  "mainSteps": [
    {
      "action": "aws:runDocument",
      "name": "runDocument",
      "inputs": {
        "documentType": "{{ documentType }}"
      }
    }
  ]
}
```

------

### Inputs


**documentType**  
The document type to run. You can run local documents (`LocalPath`) or documents stored in Systems Manager (`SSMDocument`).  
Type: String  
Required: Yes

**documentPath**  
The path to the document. If `documentType` is `LocalPath`, then specify the path to the document on the local share. If `documentType` is `SSMDocument`, then specify the name of the document.  
Type: String  
Required: No

**documentParameters**  
Parameters for the document.  
Type: StringMap  
Required: No
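
As a sketch, the following inputs run the AWS managed document `AWS-RunPowerShellScript` and pass it a parameter value; the command shown is illustrative, and the exact encoding of `documentParameters` may vary by document version:

```
"inputs": {
  "documentType": "SSMDocument",
  "documentPath": "AWS-RunPowerShellScript",
  "documentParameters": "{\"commands\": [\"Get-Service\"]}"
}
```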

## `aws:runPowerShellScript`


Run PowerShell scripts or specify the path to a script to run. This plugin runs on Microsoft Windows Server and Linux operating systems.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:runPowerShellScript
parameters:
  commands:
    type: String
    description: "(Required) The commands to run or the path to an existing script
      on the instance."
    default: Write-Host "Hello World"
mainSteps:
- action: aws:runPowerShellScript
  name: runPowerShellScript
  inputs:
    timeoutSeconds: '60'
    runCommand:
    - "{{ commands }}"
```

------
#### [ JSON ]

```
{
  "schemaVersion": "2.2",
  "description": "aws:runPowerShellScript",
  "parameters": {
    "commands": {
      "type": "String",
      "description": "(Required) The commands to run or the path to an existing script on the instance.",
      "default": "Write-Host \"Hello World\""
    }
  },
  "mainSteps": [
    {
      "action": "aws:runPowerShellScript",
      "name": "runPowerShellScript",
      "inputs": {
        "timeoutSeconds": "60",
        "runCommand": [
          "{{ commands }}"
        ]
      }
    }
  ]
}
```

------

#### Schema 1.2


------
#### [ YAML ]

```
---
runtimeConfig:
  aws:runPowerShellScript:
    properties:
    - id: 0.aws:runPowerShellScript
      runCommand: "{{ commands }}"
      workingDirectory: "{{ workingDirectory }}"
      timeoutSeconds: "{{ executionTimeout }}"
```

------
#### [ JSON ]

```
{
   "runtimeConfig":{
      "aws:runPowerShellScript":{
         "properties":[
            {
               "id":"0.aws:runPowerShellScript",
               "runCommand":"{{ commands }}",
               "workingDirectory":"{{ workingDirectory }}",
               "timeoutSeconds":"{{ executionTimeout }}"
            }
         ]
      }
   }
}
```

------

### Properties


**runCommand**  
Specify the commands to run or the path to an existing script on the instance.  
Type: StringList  
Required: Yes

**timeoutSeconds**  
The time in seconds for a command to be completed before it's considered to have failed. When the timeout is reached, Systems Manager stops the command execution.  
Type: String  
Required: No

**workingDirectory**  
The path to the working directory on your instance.  
Type: String  
Required: No
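
Because `runCommand` accepts either inline commands or the path to a script that already exists on the node, you can combine it with `workingDirectory` to run a pre-staged script. The following `mainSteps` sketch uses hypothetical paths for illustration:

```
{
  "mainSteps": [
    {
      "action": "aws:runPowerShellScript",
      "name": "runExistingScript",
      "inputs": {
        "workingDirectory": "C:\\Scripts",
        "timeoutSeconds": "120",
        "runCommand": [
          ".\\Install-MyApp.ps1"
        ]
      }
    }
  ]
}
```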

## `aws:runShellScript`


Run Linux shell scripts or specify the path to a script to run. This plugin only runs on Linux operating systems.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:runShellScript
parameters:
  commands:
    type: String
    description: "(Required) The commands to run or the path to an existing script
      on the instance."
    default: echo Hello World
mainSteps:
- action: aws:runShellScript
  name: runShellScript
  inputs:
    timeoutSeconds: '60'
    runCommand:
    - "{{ commands }}"
```

------
#### [ JSON ]

```
{
  "schemaVersion": "2.2",
  "description": "aws:runShellScript",
  "parameters": {
    "commands": {
      "type": "String",
      "description": "(Required) The commands to run or the path to an existing script on the instance.",
      "default": "echo Hello World"
    }
  },
  "mainSteps": [
    {
      "action": "aws:runShellScript",
      "name": "runShellScript",
      "inputs": {
        "timeoutSeconds": "60",
        "runCommand": [
          "{{ commands }}"
        ]
      }
    }
  ]
}
```

------

#### Schema 1.2


------
#### [ YAML ]

```
---
runtimeConfig:
  aws:runShellScript:
    properties:
    - runCommand: "{{ commands }}"
      workingDirectory: "{{ workingDirectory }}"
      timeoutSeconds: "{{ executionTimeout }}"
```

------
#### [ JSON ]

```
{
   "runtimeConfig":{
      "aws:runShellScript":{
         "properties":[
            {
               "runCommand":"{{ commands }}",
               "workingDirectory":"{{ workingDirectory }}",
               "timeoutSeconds":"{{ executionTimeout }}"
            }
         ]
      }
   }
}
```

------

### Properties


**runCommand**  
Specify the commands to run or the path to an existing script on the instance.  
Type: StringList  
Required: Yes

**timeoutSeconds**  
The time in seconds for a command to be completed before it's considered to have failed. When the timeout is reached, Systems Manager stops the command execution.  
Type: String  
Required: No

**workingDirectory**  
The path to the working directory on your instance.  
Type: String  
Required: No
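
As with the PowerShell plugin, `runCommand` entries run in order from the specified working directory, so you can chain setup steps before a script runs. The following `mainSteps` sketch uses hypothetical paths for illustration:

```
{
  "mainSteps": [
    {
      "action": "aws:runShellScript",
      "name": "runExistingScript",
      "inputs": {
        "workingDirectory": "/opt/myapp",
        "timeoutSeconds": "120",
        "runCommand": [
          "chmod +x ./install.sh",
          "./install.sh"
        ]
      }
    }
  ]
}
```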

## `aws:softwareInventory`


(Schema version 2.0 or later) Gather metadata about applications, files, and configurations on your managed instances. This plugin runs on Linux and Microsoft Windows Server operating systems. When you configure inventory collection, you start by creating an AWS Systems Manager State Manager association. Systems Manager collects the inventory data when the association is run. If you don't create the association first and attempt to invoke the `aws:softwareInventory` plugin, the system returns the following error:

```
The aws:softwareInventory plugin can only be invoked via ssm-associate.
```

An instance can have only one inventory association configured at a time. If you configure an instance with two or more associations, the inventory association doesn't run and no inventory data is collected. For more information about collecting inventory, see [AWS Systems Manager Inventory](systems-manager-inventory.md).

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
mainSteps:
- action: aws:softwareInventory
  name: collectSoftwareInventoryItems
  inputs:
    applications: "{{ applications }}"
    awsComponents: "{{ awsComponents }}"
    networkConfig: "{{ networkConfig }}"
    files: "{{ files }}"
    services: "{{ services }}"
    windowsRoles: "{{ windowsRoles }}"
    windowsRegistry: "{{ windowsRegistry }}"
    windowsUpdates: "{{ windowsUpdates }}"
    instanceDetailedInformation: "{{ instanceDetailedInformation }}"
    customInventory: "{{ customInventory }}"
```

------
#### [ JSON ]

```
{
   "mainSteps":[
      {
         "action":"aws:softwareInventory",
         "name":"collectSoftwareInventoryItems",
         "inputs":{
            "applications":"{{ applications }}",
            "awsComponents":"{{ awsComponents }}",
            "networkConfig":"{{ networkConfig }}",
            "files":"{{ files }}",
            "services":"{{ services }}",
            "windowsRoles":"{{ windowsRoles }}",
            "windowsRegistry":"{{ windowsRegistry }}",
            "windowsUpdates":"{{ windowsUpdates }}",
            "instanceDetailedInformation":"{{ instanceDetailedInformation }}",
            "customInventory":"{{ customInventory }}"
         }
      }
   ]
}
```

------

### Inputs


**applications**  
(Optional) Collect metadata for installed applications.  
Type: String  
Required: No

**awsComponents**  
(Optional) Collect metadata for AWS components such as `amazon-ssm-agent`.  
Type: String  
Required: No

**files**  
(Optional, requires SSM Agent version 2.2.64.0 or later) Collect metadata for files, including file names, the time files were created, the time files were last modified and accessed, and file sizes, to name a few. For more information about collecting file inventory, see [Working with file and Windows registry inventory](inventory-file-and-registry.md).  
Type: String  
Required: No

**networkConfig**  
(Optional) Collect metadata for network configurations.  
Type: String  
Required: No

**billingInfo**  
(Optional) Collect metadata for platform details associated with the billing code of the AMI.  
Type: String  
Required: No

**windowsUpdates**  
(Optional) Collect metadata for all Windows updates.  
Type: String  
Required: No

**instanceDetailedInformation**  
(Optional) Collect more instance information than is provided by the default inventory plugin (`aws:instanceInformation`), including CPU model, speed, and the number of cores, to name a few.  
Type: String  
Required: No

**services**  
(Optional, Windows OS only, requires SSM Agent version 2.2.64.0 or later) Collect metadata for service configurations.  
Type: String  
Required: No

**windowsRegistry**  
(Optional, Windows OS only, requires SSM Agent version 2.2.64.0 or later) Collect Windows Registry keys and values. You can choose a key path and collect all keys and values recursively. You can also collect a specific registry key and its value for a specific path. Inventory collects the key path, name, type, and the value. For more information about collecting Windows Registry inventory, see [Working with file and Windows registry inventory](inventory-file-and-registry.md).  
Type: String  
Required: No

**windowsRoles**  
(Optional, Windows OS only, requires SSM Agent version 2.2.64.0 or later) Collect metadata for Microsoft Windows role configurations.  
Type: String  
Required: No

**customInventory**  
(Optional) Collect custom inventory data. For more information about custom inventory, see [Working with custom inventory](inventory-custom.md).  
Type: String  
Required: No

**customInventoryDirectory**  
(Optional) Collect custom inventory data from the specified directory. For more information about custom inventory, see [Working with custom inventory](inventory-custom.md).  
Type: String  
Required: No
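
In practice, each of these inputs is typically passed the string value `Enabled` or `Disabled` to turn that collection type on or off, as in the AWS-managed `AWS-GatherSoftwareInventory` document. The following minimal sketch enables only a subset of collection types:

```
{
  "mainSteps": [
    {
      "action": "aws:softwareInventory",
      "name": "collectSoftwareInventoryItems",
      "inputs": {
        "applications": "Enabled",
        "networkConfig": "Enabled",
        "windowsUpdates": "Disabled"
      }
    }
  ]
}
```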

## `aws:updateAgent`


Update the EC2Config service to the latest version or specify an older version. This plugin only runs on Microsoft Windows Server operating systems. For more information about the EC2Config service, see [Configuring a Windows Instance using the EC2Config service (legacy)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2config-service.html) in the *Amazon EC2 User Guide*.

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:updateAgent
mainSteps:
- action: aws:updateAgent
  name: updateAgent
  inputs:
    agentName: Ec2Config
    source: https://s3.{Region}.amazonaws.com/aws-ssm-{Region}/manifest.json
```

------
#### [ JSON ]

```
{
  "schemaVersion": "2.2",
  "description": "aws:updateAgent",
  "mainSteps": [
    {
      "action": "aws:updateAgent",
      "name": "updateAgent",
      "inputs": {
        "agentName": "Ec2Config",
        "source": "https://s3.{Region}.amazonaws.com/aws-ssm-{Region}/manifest.json"
      }
    }
  ]
}
```

------

#### Schema 1.2


------
#### [ YAML ]

```
---
runtimeConfig:
  aws:updateAgent:
    properties:
      agentName: Ec2Config
      source: https://s3.{Region}.amazonaws.com/aws-ssm-{Region}/manifest.json
      allowDowngrade: "{{ allowDowngrade }}"
      targetVersion: "{{ version }}"
```

------
#### [ JSON ]

```
{
   "runtimeConfig":{
      "aws:updateAgent":{
         "properties":{
            "agentName":"Ec2Config",
            "source":"https://s3.{Region}.amazonaws.com/aws-ssm-{Region}/manifest.json",
            "allowDowngrade":"{{ allowDowngrade }}",
            "targetVersion":"{{ version }}"
         }
      }
   }
}
```

------

### Properties


**agentName**  
EC2Config. This is the name of the agent that runs the EC2Config service.  
Type: String  
Required: Yes

**allowDowngrade**  
Allow the EC2Config service to be downgraded to an earlier version. If set to false, the service can be upgraded to newer versions only (default). If set to true, specify the earlier version.   
Type: Boolean  
Required: No

**source**  
The location where Systems Manager copies the version of EC2Config to install. You can't change this location.  
Type: String  
Required: Yes

**targetVersion**  
A specific version of the EC2Config service to install. If not specified, the service will be updated to the latest version.  
Type: String  
Required: No
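
To roll back to a specific EC2Config version rather than update to the latest, you can pass `targetVersion` together with `allowDowngrade`, as the schema 1.2 example above does through parameters. The following schema 2.2 sketch hard-codes hypothetical values for illustration; the version number is an assumption, not a recommendation:

```
{
  "mainSteps": [
    {
      "action": "aws:updateAgent",
      "name": "updateAgent",
      "inputs": {
        "agentName": "Ec2Config",
        "source": "https://s3.{Region}.amazonaws.com/aws-ssm-{Region}/manifest.json",
        "allowDowngrade": "true",
        "targetVersion": "4.9.4222"
      }
    }
  ]
}
```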

## `aws:updateSsmAgent`


Update the SSM Agent to the latest version or specify an older version. This plugin runs on Linux and Windows Server operating systems. For more information, see [Working with SSM Agent](ssm-agent.md). 

### Syntax


#### Schema 2.2


------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: aws:updateSsmAgent
parameters:
  allowDowngrade:
    default: 'false'
    description: "(Optional) Allow the Amazon SSM Agent service to be downgraded to
      an earlier version. If set to false, the service can be upgraded to newer versions
      only (default). If set to true, specify the earlier version."
    type: String
    allowedValues:
    - 'true'
    - 'false'
mainSteps:
- action: aws:updateSsmAgent
  name: updateSSMAgent
  inputs:
    agentName: amazon-ssm-agent
    source: https://s3.{Region}.amazonaws.com/amazon-ssm-{Region}/ssm-agent-manifest.json
    allowDowngrade: "{{ allowDowngrade }}"
```

------
#### [ JSON ]

```
{
  "schemaVersion": "2.2",
  "description": "aws:updateSsmAgent",
  "parameters": {
    "allowDowngrade": {
      "default": "false",
      "description": "(Optional) Allow the Amazon SSM Agent service to be downgraded to an earlier version. If set to false, the service can be upgraded to newer versions only (default). If set to true, specify the earlier version.",
      "type": "String",
      "allowedValues": [
        "true",
        "false"
      ]
    }
  },
  "mainSteps": [
    {
      "action": "aws:updateSsmAgent",
      "name": "updateSSMAgent",
      "inputs": {
        "agentName": "amazon-ssm-agent",
        "source": "https://s3.{Region}.amazonaws.com/amazon-ssm-{Region}/ssm-agent-manifest.json",
        "allowDowngrade": "{{ allowDowngrade }}"
      }
    }
  ]
}
```

------

#### Schema 1.2


------
#### [ YAML ]

```
---
runtimeConfig:
  aws:updateSsmAgent:
    properties:
    - agentName: amazon-ssm-agent
      source: https://s3.{Region}.amazonaws.com/aws-ssm-{Region}/manifest.json
      allowDowngrade: "{{ allowDowngrade }}"
```

------
#### [ JSON ]

```
{
   "runtimeConfig":{
      "aws:updateSsmAgent":{
         "properties":[
            {
               "agentName":"amazon-ssm-agent",
               "source":"https://s3.{Region}.amazonaws.com/aws-ssm-{Region}/manifest.json",
               "allowDowngrade":"{{ allowDowngrade }}"
            }
         ]
      }
   }
}
```

------

### Properties


**agentName**  
amazon-ssm-agent. This is the name of the Systems Manager agent that processes requests and runs commands on the instance.  
Type: String  
Required: Yes

**allowDowngrade**  
Allow the SSM Agent to be downgraded to an earlier version. If set to false, the agent can be upgraded to newer versions only (default). If set to true, specify the earlier version.   
Type: Boolean  
Required: Yes

**source**  
The location where Systems Manager copies the SSM Agent version to install. You can't change this location.  
Type: String  
Required: Yes

**targetVersion**  
A specific version of SSM Agent to install. If not specified, the agent will be updated to the latest version.  
Type: String  
Required: No
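
To pin SSM Agent to a specific version rather than update to the latest, combine `targetVersion` with `allowDowngrade`. The following schema 2.2 sketch uses a hypothetical version number for illustration:

```
{
  "mainSteps": [
    {
      "action": "aws:updateSsmAgent",
      "name": "updateSSMAgent",
      "inputs": {
        "agentName": "amazon-ssm-agent",
        "source": "https://s3.{Region}.amazonaws.com/amazon-ssm-{Region}/ssm-agent-manifest.json",
        "allowDowngrade": "true",
        "targetVersion": "3.2.1630.0"
      }
    }
  ]
}
```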

# Creating SSM document content


If the AWS Systems Manager public documents don't perform all the actions you want to perform on your AWS resources, you can create your own SSM documents. You can also clone SSM documents using the console. Cloning documents copies content from an existing document to a new document that you can modify. When creating or cloning a document, the content of the document must not exceed 64KB. This quota also includes the content specified for input parameters at runtime. When you create a new `Command` or `Policy` document, we recommend that you use schema version 2.2 or later so you can take advantage of the latest features, such as document editing, automatic versioning, sequencing, and more.

## Writing SSM document content


To create your own SSM document content, it's important to understand the different schemas, features, plugins, and syntax available for SSM documents. We recommend becoming familiar with the following resources.
+  [Writing your own AWS Systems Manager documents](https://aws.amazon.com/blogs/mt/writing-your-own-aws-systems-manager-documents/) 
+  [Data elements and parameters](documents-syntax-data-elements-parameters.md) 
+  [Schemas, features, and examples](documents-schemas-features.md) 
+  [Command document plugin reference](documents-command-ssm-plugin-reference.md) 
+  [Systems Manager Automation actions reference](automation-actions.md) 
+  [Automation system variables](automation-variables.md) 
+  [Additional runbook examples](automation-document-examples.md) 
+  [Working with Systems Manager Automation runbooks](https://docs.aws.amazon.com/toolkit-for-vscode/latest/userguide/systems-manager-automation-docs.html) using the AWS Toolkit for Visual Studio Code 
+  [Visual design experience for Automation runbooks](automation-visual-designer.md) 
+  [Using scripts in runbooks](automation-document-script-considerations.md) 

AWS pre-defined SSM documents might perform some of the actions you require. You can call these documents by using the `aws:runDocument`, `aws:runCommand`, or `aws:executeAutomation` plugins within your custom SSM document, depending on the document type. You can also copy portions of those documents into a custom SSM document, and edit the content to meet your requirements.

**Tip**  
When creating SSM document content, you might change the content and update your SSM document several times while testing. The following commands update the SSM document with your latest content, and update the document's default version to the latest version of the document.  
The Linux and Windows commands use the `jq` command line tool to filter the JSON response data.

```
latestDocVersion=$(aws ssm update-document \
    --content file://path/to/file/documentContent.json \
    --name "ExampleDocument" \
    --document-format JSON \
    --document-version '$LATEST' \
    | jq -r '.DocumentDescription.LatestVersion')

aws ssm update-document-default-version \
    --name "ExampleDocument" \
    --document-version $latestDocVersion
```

```
REM In a batch file, replace %i with %%i.
for /f "delims=" %i in ('aws ssm update-document --content file://C:\path\to\file\documentContent.json --name "ExampleDocument" --document-format JSON --document-version "$LATEST" ^| jq -r ".DocumentDescription.LatestVersion"') do set latestDocVersion=%i

aws ssm update-document-default-version --name "ExampleDocument" --document-version %latestDocVersion%
```

```
$content = Get-Content -Path "C:\path\to\file\documentContent.json" | Out-String
$latestDocVersion = Update-SSMDocument `
    -Content $content `
    -Name "ExampleDocument" `
    -DocumentFormat "JSON" `
    -DocumentVersion '$LATEST' `
    | Select-Object -ExpandProperty LatestVersion

Update-SSMDocumentDefaultVersion `
    -Name "ExampleDocument" `
    -DocumentVersion $latestDocVersion
```

### Security best practices for SSM documents


When creating SSM documents, follow these security best practices to help prevent command injection and ensure secure parameter handling:
+ Use environment variable interpolation for string parameters that will be used in commands or scripts. Add the `interpolationType` property with value `ENV_VAR` to your string parameters:

  ```
  {
      "command": {
          "type": "String",
          "description": "Command to execute",
          "interpolationType": "ENV_VAR"
      }
  }
  ```

  You can further improve the security of your SSM documents by specifying that double-quote marks aren’t accepted in values delivered by interpolation:

  ```
  {
      "command": {
          "type": "String",
          "description": "Command to execute",
          "interpolationType": "ENV_VAR",
          "allowedPattern": "^[^\"]*$"
      }
  }
  ```
+ When using interpreted languages like Python, Ruby, or Node.js, reference parameters using the appropriate environment variable syntax:

  ```
  # Python example
  import os
  command = os.environ['SSM_Message']
  ```
+ For backwards compatibility with older SSM Agent versions (prior to version 3.3.2746.0), include fallback logic for environment variables:

  ```
  if [ -z "${SSM_command+x}" ]; then
      export SSM_command="{{command}}"
  fi
  ```
+ Combine environment variable interpolation with `allowedPattern` for additional input validation. In the following example, the `allowedPattern` value `^[a-zA-Z0-9_-]+$` restricts the string value to alphanumeric characters, underscores, and hyphens, which prevents quotation marks and other shell metacharacters:

  ```
  {
      "command": {
          "type": "String",
          "interpolationType": "ENV_VAR",
          "allowedPattern": "^[a-zA-Z0-9_-]+$"
      }
  }
  ```
+ Before implementing your SSM document, verify the following security considerations:
  + All string parameters that accept user input use environment variable interpolation when appropriate.
  + Input validation is implemented using `allowedPattern` where possible.
  + The document includes appropriate error handling for parameter processing.
  + Backwards compatibility is maintained for environments using older SSM Agent versions.
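
To see why an `allowedPattern` value blocks injection attempts, you can test candidate parameter values against the pattern locally before adding it to a document. The following Python sketch mirrors the pattern from the example above; full-string matching here approximates how pattern validation behaves, and the sample values are hypothetical:

```
import re

def is_allowed(value: str, allowed_pattern: str) -> bool:
    """Return True if the entire value matches the allowedPattern regex."""
    return re.fullmatch(allowed_pattern, value) is not None

# Pattern from the example above: alphanumerics, underscores, and hyphens only.
pattern = r"^[a-zA-Z0-9_-]+$"

print(is_allowed("restart_service-01", pattern))  # a benign value passes
print(is_allowed('foo"; rm -rf /', pattern))      # an injection attempt is rejected
```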

For information about AWS service-owned resources that Systems Manager accesses and how to configure data perimeter policies, see [Data perimeters in AWS Systems Manager](data-perimeters.md).

## Cloning an SSM document


You can clone AWS Systems Manager documents using the Systems Manager Documents console to create SSM documents. Cloning SSM documents copies content from an existing document to a new document that you can modify. You can't clone a document larger than 64KB.

**To clone an SSM document**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. In the search box, enter the name of the document you want to clone.

1. Choose the name of the document you want to clone, and then choose **Clone document** in the **Actions** dropdown. 

1. Modify the document as you prefer, and then choose **Create document** to save the document. 

After writing your SSM document content, you can use your content to create an SSM document using one of the following methods.

**Topics**
+ [Writing SSM document content](#writing-ssm-doc-content)
+ [Cloning an SSM document](#cloning-ssm-document)
+ [Creating composite documents](#documents-creating-composite)

## Creating composite documents


A *composite* AWS Systems Manager (SSM) document is a custom document that performs a series of actions by running one or more secondary SSM documents. Composite documents promote *infrastructure as code* by allowing you to create a standard set of SSM documents for common tasks such as boot-strapping software or domain-joining instances. You can then share these documents across AWS accounts in the same AWS Region to reduce SSM document maintenance and ensure consistency.

For example, you can create a composite document that performs the following actions:

1. Installs all patches in the allow list.

1. Installs antivirus software.

1. Downloads scripts from GitHub and runs them.

In this example, your custom SSM document includes the following plugins to perform these actions:

1. The `aws:runDocument` plugin to run the `AWS-RunPatchBaseline` document, which installs all allow listed patches.

1. The `aws:runDocument` plugin to run the `AWS-InstallApplication` document, which installs the antivirus software.

1. The `aws:downloadContent` plugin to download scripts from GitHub and run them.

Composite and secondary documents can be stored in Systems Manager, GitHub (public and private repositories), or Amazon S3. Composite documents and secondary documents can be created in JSON or YAML. 

**Note**  
Composite documents can only run to a maximum depth of three documents. That is, a composite document can call a child document, and that child document can call one final document.

To create a composite document, add the [`aws:runDocument`](documents-command-ssm-plugin-reference.md#aws-rundocument) plugin in a custom SSM document and specify the required inputs. The following is an example of a composite document that performs the following actions:

1. Runs the [`aws:downloadContent`](documents-command-ssm-plugin-reference.md#aws-downloadContent) plugin to download an SSM document from a GitHub public repository to a local directory called bootstrap. The SSM document is called StateManagerBootstrap.yml (a YAML document).

1. Runs the `aws:runDocument` plugin to run the StateManagerBootstrap.yml document. No parameters are specified.

1. Runs the `aws:runDocument` plugin to run the pre-defined `AWS-ConfigureDocker` SSM document. The specified parameters install Docker on the instance.

```
{
  "schemaVersion": "2.2",
  "description": "My composite document for bootstrapping software and installing Docker.",
  "parameters": {
  },
  "mainSteps": [
    {
      "action": "aws:downloadContent",
      "name": "downloadContent",
      "inputs": {
        "sourceType": "GitHub",
        "sourceInfo": "{\"owner\":\"TestUser1\",\"repository\":\"TestPublic\", \"path\":\"documents/bootstrap/StateManagerBootstrap.yml\"}",
        "destinationPath": "bootstrap"
      }
    },
    {
      "action": "aws:runDocument",
      "name": "runDocument",
      "inputs": {
        "documentType": "LocalPath",
        "documentPath": "bootstrap",
        "documentParameters": "{}"
      }
    },
    {
      "action": "aws:runDocument",
      "name": "configureDocker",
      "inputs": {
        "documentType": "SSMDocument",
        "documentPath": "AWS-ConfigureDocker",
        "documentParameters": "{\"action\":\"Install\"}"
      }
    }
  ]
}
```

**More info**  
+ For information about rebooting servers and instances when using Run Command to call scripts, see [Handling reboots when running commands](send-commands-reboot.md).
+ For more information about the plugins you can add to a custom SSM document, see [Command document plugin reference](documents-command-ssm-plugin-reference.md).
+ If you simply want to run a document from a remote location (without creating a composite document), see [Running documents from remote locations](documents-running-remote-github-s3.md).

# Working with documents


This section includes information about how to use and work with SSM documents.

**Topics**
+ [Compare SSM document versions](comparing-versions.md)
+ [Create an SSM document](create-ssm-console.md)
+ [Deleting custom SSM documents](deleting-documents.md)
+ [Running documents from remote locations](documents-running-remote-github-s3.md)
+ [Sharing SSM documents](documents-ssm-sharing.md)
+ [Searching for SSM documents](ssm-documents-searching.md)

# Compare SSM document versions


You can compare the differences in content between versions of AWS Systems Manager (SSM) documents in the Systems Manager Documents console. When comparing versions of an SSM document, differences between the content of the versions are highlighted.

**To compare SSM document content (console)**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. In the documents list, choose the document whose content you want to compare.

1. On the **Content** tab, select **Compare versions**, and choose the version of the document you want to compare the content to.

# Create an SSM document


After you create the content for your custom SSM document, as described in [Writing SSM document content](documents-creating-content.md#writing-ssm-doc-content), you can use the Systems Manager console to create an SSM document using your content.

**To create an SSM document**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Choose **Create command or session**.

1. Enter a descriptive name for the document.

1. (Optional) For **Target type**, specify the type of resources the document can run on.

1. In the **Document type** list, choose the type of document you want to create.

1. Delete the brackets in the **Content** field, and then paste the document content you created earlier.

1. (Optional) In the **Document tags** section, apply one or more tag key name/value pairs to the document.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a document to identify the type of tasks it runs, the type of operating systems it targets, and the environment it runs in. In this case, you could specify the following key name/value pairs:
   + `Key=TaskType,Value=MyConfigurationUpdate`
   + `Key=OS,Value=AMAZON_LINUX_2`
   + `Key=Environment,Value=Production`

1. Choose **Create document** to save the document.

# Deleting custom SSM documents


If you no longer want to use a custom SSM document, you can delete it using the AWS Systems Manager console. 

**To delete an SSM document**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Select the document you want to delete.

1. Select **Delete**. When prompted to delete the document, select **Delete**.

For examples using command line tools or SDKs to delete SSM documents, see [Use `DeleteDocument` with an AWS SDK or CLI](example_ssm_DeleteDocument_section.md).

# Running documents from remote locations


You can run AWS Systems Manager (SSM) documents from remote locations by using the `AWS-RunDocument` pre-defined SSM document. This document supports running SSM documents stored in the following locations:
+ Public and private GitHub repositories (GitHub Enterprise is not supported)
+ Amazon S3 buckets
+ Systems Manager

Although you can also run remote documents by using State Manager or Automation, both tools in AWS Systems Manager, the following procedure describes only how to run remote SSM documents by using AWS Systems Manager Run Command in the Systems Manager console.

**Note**  
`AWS-RunDocument` can be used to run only command-type SSM documents, not other types such as Automation runbooks. The `AWS-RunDocument` document uses the `aws:downloadContent` plugin. For more information about the `aws:downloadContent` plugin, see [`aws:downloadContent`](documents-command-ssm-plugin-reference.md#aws-downloadContent).

**Warning**  
`AWS-RunDocument` can execute document content from various sources (SSM documents, GitHub, S3, URLs). When executing remote documents, the IAM permissions evaluated are for `ssm:GetDocument` on the remote document and `ssm:SendCommand` on `AWS-RunDocument`. If you have IAM policies that deny access to specific SSM documents, users with `AWS-RunDocument` permissions can still execute those denied documents by passing the document content as parameters, which may not be subject to the same document-specific IAM restrictions.  
To properly restrict document execution, use one of these approaches:  
**Allowlist approved sources**: If you need to use nested document execution, restrict access to only approved sources using appropriate controls for each source type: IAM policies to control `ssm:GetDocument` for SSM document sources, IAM and Amazon S3 bucket policies for Amazon S3 sources, and network settings (such as VPC endpoints or security groups) for public Internet sources.
**Restrict access to AWS-RunDocument**: Deny `ssm:SendCommand` on `AWS-RunDocument` and any other documents that use the `aws:runDocument` plugin in your IAM policies to prevent nested document execution.
**Use permission boundaries**: Implement IAM permission boundaries to set maximum permissions for users, preventing them from executing unauthorized documents regardless of the execution method.
For more information about IAM best practices and permission boundaries, see [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) in the *AWS Identity and Access Management User Guide*.
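As an illustration of the second approach, the following sketch builds a deny statement for `ssm:SendCommand` on `AWS-RunDocument` as a Python dictionary. This is a minimal sketch, not a complete policy; the Region wildcard and empty account field in the ARN follow the usual pattern for AWS-owned documents, and you should adapt the resource list to your environment.

```python
import json

# Minimal sketch of a deny policy that blocks SendCommand against
# AWS-RunDocument. Adapt the Resource ARN to your environment; AWS-owned
# documents use an empty account field in the ARN.
deny_run_document_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ssm:SendCommand",
            "Resource": "arn:aws:ssm:*::document/AWS-RunDocument",
        }
    ],
}

# Serialize the policy for attachment to an IAM identity.
print(json.dumps(deny_run_document_policy, indent=4))
```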

**Before you begin**  
Before you run a remote document, you must complete the following tasks.
+ Create an SSM Command document and save it in a remote location. For more information, see [Creating SSM document content](documents-creating-content.md).
+ If you plan to run a remote document that is stored in a private GitHub repository, then you must create a Systems Manager `SecureString` parameter for your GitHub security access token. You can't access a remote document in a private GitHub repository by manually passing your token over SSH. The access token must be passed as a Systems Manager `SecureString` parameter. For more information about creating a `SecureString` parameter, see [Creating Parameter Store parameters in Systems Manager](sysman-paramstore-su-create.md).

## Run a remote document (console)


**To run a remote document**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Document** list, choose **`AWS-RunDocument`**.

1. In **Command parameters**, for **Source Type**, choose an option. 
   + If you choose **GitHub**, specify **Source Info** information in the following format:

     ```
     {
         "owner": "owner_name",
         "repository": "repository_name",
         "path": "path_to_document",
         "getOptions":"branch:branch_name",
         "tokenInfo": "{{ssm-secure:secure-string-token}}"
     }
     ```

     For example:

     ```
     {
         "owner":"TestUser",
         "repository":"GitHubTestExamples",
         "path":"scripts/python/test-script",
         "getOptions":"branch:exampleBranch",
         "tokenInfo":"{{ssm-secure:my-secure-string-token}}"
     }
     ```
**Note**  
`getOptions` specifies extra options for retrieving content from a branch other than master, or from a specific commit in the repository. You can omit `getOptions` if you're using the latest commit in the master branch. The `branch` option is required only if your SSM document is stored in a branch other than `master`.  
To use the version of your SSM document in a particular *commit* in your repository, use `commitID` with `getOptions` instead of `branch`. For example:  

     ```
     "getOptions": "commitID:bbc1ddb94...b76d3bEXAMPLE",
     ```
   + If you choose **S3**, specify **Source Info** information in the following format:

     ```
     {"path":"URL_to_document_in_S3"}
     ```

     For example:

     ```
     {"path":"https://s3.amazonaws.com/amzn-s3-demo-bucket/scripts/ruby/mySSMdoc.json"}
     ```
   + If you choose **SSMDocument**, specify **Source Info** information in the following format:

     ```
     {"name": "document_name"}
     ```

     For example:

     ```
     {"name": "mySSMdoc"}
     ```

1. In the **Document Parameters** field, enter parameters for the remote SSM document. For example, if you run the `AWS-RunPowerShellScript` document, you could specify:

   ```
   {"commands": ["date", "echo \"Hello World\""]}
   ```

   If you run the `AWS-ConfigureAWSPackage` document, you could specify:

   ```
   {
      "action":"Install",
      "name":"AWSPVDriver"
   }
   ```

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or edge devices manually, or specifying a resource group.
**Tip**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. For **Other parameters**:
   + For **Comment**, enter information about this command.
   + For **Timeout (seconds)**, specify the number of seconds for the system to wait before failing the overall command execution. 

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) For **Output options**, to save the command output to a file, select the **Write command output to an S3 bucket** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. Choose **Run**.

**Note**  
For information about rebooting servers and instances when using Run Command to call scripts, see [Handling reboots when running commands](send-commands-reboot.md).
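The **Source Info** values shown in the procedure above are plain JSON strings, so you can construct and validate them programmatically before pasting them into the console or passing them on the command line. The sketch below builds the three payload shapes with placeholder owner, repository, bucket, and document names.

```python
import json

# Placeholder values; substitute your own repository, bucket, and
# document names before using these payloads.
github_source = {
    "owner": "TestUser",
    "repository": "GitHubTestExamples",
    "path": "scripts/python/test-script",
    "getOptions": "branch:exampleBranch",
    "tokenInfo": "{{ssm-secure:my-secure-string-token}}",
}
s3_source = {"path": "https://s3.amazonaws.com/amzn-s3-demo-bucket/scripts/ruby/mySSMdoc.json"}
ssm_source = {"name": "mySSMdoc"}

# A serialize/parse round trip confirms each payload is well-formed JSON.
for source in (github_source, s3_source, ssm_source):
    assert json.loads(json.dumps(source)) == source
print("All Source Info payloads are valid JSON")
```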

# Sharing SSM documents


You can share AWS Systems Manager (SSM) documents privately or publicly with accounts in the same AWS Region. To privately share a document, you modify the document permissions and allow specific individuals to access it according to their AWS account ID. To publicly share an SSM document, you modify the document permissions and specify `All`. Documents can't be simultaneously shared publicly and privately.

**Warning**  
Use shared SSM documents only from trusted sources. When using any shared document, carefully review the contents of the document before using it so that you understand how it will change the configuration of your instance. For more information about shared document best practices, see [Best practices for shared SSM documents](#best-practices-shared). 

**Limitations**  
As you begin working with SSM documents, be aware of the following limitations.
+ Only the owner can share a document.
+ You must stop sharing a document before you can delete it. For more information, see [Modify permissions for a shared SSM document](#modify-permissions-shared).
+ You can share a document with a maximum of 1000 AWS accounts. You can request an increase to this limit in the [Support Center](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). For **Limit type**, choose *EC2 Systems Manager* and describe your reason for the request.
+ You can publicly share a maximum of five SSM documents. You can request an increase to this limit in the [Support Center](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). For **Limit type**, choose *EC2 Systems Manager* and describe your reason for the request.
+ Documents can be shared with other accounts in the same AWS Region only. Cross-Region sharing isn't supported.

**Important**  
In Systems Manager, an *Amazon-owned* SSM document is a document created and managed by Amazon Web Services itself. *Amazon-owned* documents include a prefix like `AWS-*` in the document name. The owner of the document is considered to be Amazon, not a specific user account within AWS. These documents are publicly available for all to use.

For more information about Systems Manager service quotas, see [AWS Systems Manager Service Quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#limits_ssm).

**Topics**
+ [Best practices for shared SSM documents](#best-practices-shared)
+ [Block public sharing for SSM documents](#block-public-access)
+ [Share an SSM document](#ssm-how-to-share)
+ [Modify permissions for a shared SSM document](#modify-permissions-shared)
+ [Using shared SSM documents](#using-shared-documents)

## Best practices for shared SSM documents


Review the following guidelines before you share or use a shared document. 

**Remove sensitive information**  
Review your AWS Systems Manager (SSM) document carefully and remove any sensitive information. For example, verify that the document doesn't include your AWS credentials. If you share a document with specific individuals, those users can view the information in the document. If you share a document publicly, anyone can view the information in the document.

**Block public sharing for documents**  
Review all publicly shared SSM documents in your account and confirm whether you want to continue sharing them. To stop sharing a document with the public, you must modify the document permission setting as described in the [Modify permissions for a shared SSM document](#modify-permissions-shared) section of this topic. Turning on the block public sharing setting doesn't affect any documents you're currently sharing with the public. Unless your use case requires you to share documents with the public, we recommend turning on the block public sharing setting for your SSM documents in the **Preferences** section of the Systems Manager Documents console. Turning on this setting prevents unwanted access to your SSM documents. The block public sharing setting is an account level setting that can differ for each AWS Region.

**Restrict Run Command actions using an IAM trust policy**  
Create a restrictive AWS Identity and Access Management (IAM) policy for users who will have access to the document. The IAM policy determines which SSM documents a user can see in either the Amazon Elastic Compute Cloud (Amazon EC2) console or by calling `ListDocuments` using the AWS Command Line Interface (AWS CLI) or AWS Tools for Windows PowerShell. The policy also restricts the actions the user can perform with SSM documents. You can create a restrictive policy so that a user can only use specific documents. For more information, see [Customer managed policy examples](security_iam_id-based-policy-examples.md#customer-managed-policies).

**Use caution when using shared SSM documents**  
Review the contents of every document that is shared with you, especially public documents, to understand the commands that will be run on your instances. A document could intentionally or unintentionally have negative repercussions after it's run. If the document references an external network, review the external source before you use the document. 

**Send commands using the document hash**  
When you share a document, the system creates a SHA-256 hash and assigns it to the document. The system also saves a snapshot of the document content. When you send a command using a shared document, you can specify the hash in your command to ensure that the following conditions are true:  
+ You're running a command from the correct Systems Manager document.
+ The content of the document hasn't changed since it was shared with you.
If the hash doesn't match the specified document or if the content of the shared document has changed, the command returns an `InvalidDocument` exception. The hash can't verify document content from external locations.
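To illustrate, SHA-256 hashing of document content can be reproduced locally with Python's `hashlib`. The document body below is a stand-in; in practice you would compare against the hash value that Systems Manager reports for the shared document.

```python
import hashlib

# Stand-in document content; use the actual shared document's content
# in practice.
document_content = '{"schemaVersion": "2.2", "description": "example"}'

# Hex-encoded SHA-256 digest of the content.
digest = hashlib.sha256(document_content.encode("utf-8")).hexdigest()
print(digest)

# Any change to the content produces a different digest, which is why a
# mismatched hash causes an InvalidDocument exception.
tampered = document_content + " "
assert hashlib.sha256(tampered.encode("utf-8")).hexdigest() != digest
```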

**Use the interpolation parameter to improve security**  
For `String` type parameters in your SSM documents, use the parameter and value `"interpolationType": "ENV_VAR"` to improve security against command injection attacks by treating parameter inputs as string literals rather than as potentially executable commands. In this case, the agent creates an environment variable named `SSM_parameter-name` with the parameter's value. We recommend updating all your existing SSM documents that include `String` type parameters to include `"interpolationType": "ENV_VAR"`. For more information, see [Writing SSM document content](documents-creating-content.md#writing-ssm-doc-content).
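A minimal sketch of a command-type document parameter that opts in to environment-variable interpolation follows. The document body is illustrative, not a complete production document; per the interpolation behavior described above, the agent would expose the parameter as `$SSM_Message` at run time.

```python
import json

# Illustrative command-type document with a String parameter that uses
# ENV_VAR interpolation. The agent exposes the value as $SSM_Message.
doc = {
    "schemaVersion": "2.2",
    "description": "Example document with ENV_VAR interpolation",
    "parameters": {
        "Message": {
            "type": "String",
            "description": "Text to echo",
            "interpolationType": "ENV_VAR",
        }
    },
    "mainSteps": [
        {
            "action": "aws:runShellScript",
            "name": "echoMessage",
            "inputs": {"runCommand": ['echo "$SSM_Message"']},
        }
    ],
}
print(json.dumps(doc, indent=2))
```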

## Block public sharing for SSM documents


Before you begin, review all publicly shared SSM documents in your AWS account and confirm whether you want to continue sharing them. To stop sharing an SSM document with the public, you must modify the document permission setting as described in the [Modify permissions for a shared SSM document](#modify-permissions-shared) section of this topic. Turning on the block public sharing setting doesn't affect any SSM documents you're currently sharing with the public. With the block public sharing setting enabled, you won’t be able to share any additional SSM documents with the public.

Unless your use case requires you to share documents with the public, we recommend turning on the block public sharing setting for your SSM documents. Turning on this setting prevents unwanted access to your SSM documents. The block public sharing setting is an account level setting that can differ for each AWS Region. Complete the following tasks to block public sharing for any SSM documents you're not currently sharing.

### Block public sharing (console)


**To block public sharing of your SSM documents**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Choose **Preferences**, and then choose **Edit** in the **Block public sharing** section.

1. Select the **Block public sharing** check box, and then choose **Save**. 

### Block public sharing (command line)


Open the AWS Command Line Interface (AWS CLI) or AWS Tools for Windows PowerShell on your local computer and run the following command to block public sharing of your SSM documents.

------
#### [ Linux & macOS ]

```
aws ssm update-service-setting  \
    --setting-id /ssm/documents/console/public-sharing-permission \
    --setting-value Disable \
    --region 'The AWS Region you want to block public sharing in'
```

------
#### [ Windows ]

```
aws ssm update-service-setting ^
    --setting-id /ssm/documents/console/public-sharing-permission ^
    --setting-value Disable ^
    --region "The AWS Region you want to block public sharing in"
```

------
#### [ PowerShell ]

```
Update-SSMServiceSetting `
    -SettingId /ssm/documents/console/public-sharing-permission `
    -SettingValue Disable `
    -Region "The AWS Region you want to block public sharing in"
```

------

Confirm the setting value was updated using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-service-setting \
    --setting-id /ssm/documents/console/public-sharing-permission \
    --region 'The AWS Region you blocked public sharing in'
```

------
#### [ Windows ]

```
aws ssm get-service-setting  ^
    --setting-id /ssm/documents/console/public-sharing-permission ^
    --region "The AWS Region you blocked public sharing in"
```

------
#### [ PowerShell ]

```
Get-SSMServiceSetting `
    -SettingId /ssm/documents/console/public-sharing-permission `
    -Region "The AWS Region you blocked public sharing in"
```

------

### Restricting access to block public sharing with IAM


You can create AWS Identity and Access Management (IAM) policies that restrict users from modifying the block public sharing setting. This prevents users from allowing unwanted access to your SSM documents. 

The following is an example of an IAM policy that prevents users from updating the block public sharing setting. To use this example, you must replace the example Amazon Web Services account ID with your own account ID.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ssm:UpdateServiceSetting",
            "Resource": "arn:aws:ssm:*:444455556666:servicesetting/ssm/documents/console/public-sharing-permission"
        }
    ]
}
```

------

## Share an SSM document


You can share AWS Systems Manager (SSM) documents by using the Systems Manager console. When sharing documents from the console, only the default version of the document can be shared. You can also share SSM documents programmatically by calling the `ModifyDocumentPermission` API operation using the AWS Command Line Interface (AWS CLI), AWS Tools for Windows PowerShell, or the AWS SDK. Before you share a document, get the AWS account IDs of the people with whom you want to share. You will specify these account IDs when you share the document.

### Share a document (console)


1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. In the documents list, choose the document you want to share, and then choose **View details**. On the **Permissions** tab, verify that you're the document owner. Only a document owner can share a document.

1. Choose **Edit**.

1. To share the command publicly, choose **Public** and then choose **Save**. To share the command privately, choose **Private**, enter the AWS account ID, choose **Add permission**, and then choose **Save**. 

### Share a document (command line)


The following procedure requires that you specify an AWS Region for your command line session.

1. Open the AWS CLI or AWS Tools for Windows PowerShell on your local computer and run the following command to specify your credentials. 

   In the following command, replace *region* with your own information. For a list of supported *region* values, see the **Region** column in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

------
#### [ Linux & macOS ]

   ```
   aws configure
   
   AWS Access Key ID: [your key]
   AWS Secret Access Key: [your key]
   Default region name: region
   Default output format [None]:
   ```

------
#### [ Windows ]

   ```
   aws configure
   
   AWS Access Key ID: [your key]
   AWS Secret Access Key: [your key]
   Default region name: region
   Default output format [None]:
   ```

------
#### [ PowerShell ]

   ```
   Set-AWSCredentials -AccessKey your key -SecretKey your key
   Set-DefaultAWSRegion -Region region
   ```

------

1. Use the following command to list all of the SSM documents that are available for you. The list includes documents that you created and documents that were shared with you.

------
#### [ Linux & macOS ]

   ```
   aws ssm list-documents
   ```

------
#### [ Windows ]

   ```
   aws ssm list-documents
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMDocumentList
   ```

------

1. Use the following command to get a specific document.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-document \
       --name document name
   ```

------
#### [ Windows ]

   ```
   aws ssm get-document ^
       --name document name
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMDocument `
       -Name document name
   ```

------

1. Use the following command to get a description of the document.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-document \
       --name document name
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-document ^
       --name document name
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMDocumentDescription `
       -Name document name
   ```

------

1. Use the following command to view the permissions for the document.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-document-permission \
       --name document name \
       --permission-type Share
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-document-permission ^
       --name document name ^
       --permission-type Share
   ```

------
#### [ PowerShell ]

   ```
   Get-SSMDocumentPermission `
       -Name document name `
       -PermissionType Share
   ```

------

1. Use the following command to modify the permissions for the document and share it. You must be the owner of the document to edit the permissions. Optionally, for documents shared with specific AWS account IDs, you can specify a version of the document you want to share using the `--shared-document-version` parameter. If you don't specify a version, the system shares the `Default` version of the document. If you share a document publicly (with `all`), all versions of the specified document are shared by default. The following example command privately shares the document with a specific individual, based on that person's AWS account ID.

------
#### [ Linux & macOS ]

   ```
   aws ssm modify-document-permission \
       --name document name \
       --permission-type Share \
       --account-ids-to-add AWS account ID
   ```

------
#### [ Windows ]

   ```
   aws ssm modify-document-permission ^
       --name document name ^
       --permission-type Share ^
       --account-ids-to-add AWS account ID
   ```

------
#### [ PowerShell ]

   ```
   Edit-SSMDocumentPermission `
       -Name document name `
       -PermissionType Share `
       -AccountIdsToAdd AWS account ID
   ```

------

1. Use the following command to share a document publicly.
**Note**  
If you share a document publicly (with `all`), all versions of the specified document are shared by default. 

------
#### [ Linux & macOS ]

   ```
   aws ssm modify-document-permission \
       --name document name \
       --permission-type Share \
       --account-ids-to-add 'all'
   ```

------
#### [ Windows ]

   ```
   aws ssm modify-document-permission ^
       --name document name ^
       --permission-type Share ^
       --account-ids-to-add "all"
   ```

------
#### [ PowerShell ]

   ```
   Edit-SSMDocumentPermission `
       -Name document name `
       -PermissionType Share `
       -AccountIdsToAdd ('all')
   ```

------

## Modify permissions for a shared SSM document


If you share a command, users can view and use that command until you either remove access to the AWS Systems Manager (SSM) document or delete the SSM document. However, you can't delete a document as long as it's shared. You must stop sharing it first and then delete it.

### Stop sharing a document (console)


**Stop sharing a document**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. In the documents list, choose the document you want to stop sharing, and then choose **View details**. On the **Permissions** tab, verify that you're the document owner. Only a document owner can stop sharing a document.

1. Choose **Edit**.

1. Choose **X** to delete the AWS account ID that should no longer have access to the command, and then choose **Save**. 

### Stop sharing a document (command line)


Open the AWS CLI or AWS Tools for Windows PowerShell on your local computer and run the following command to stop sharing a command.

------
#### [ Linux & macOS ]

```
aws ssm modify-document-permission \
    --name document name \
    --permission-type Share \
    --account-ids-to-remove 'AWS account ID'
```

------
#### [ Windows ]

```
aws ssm modify-document-permission ^
    --name document name ^
    --permission-type Share ^
    --account-ids-to-remove "AWS account ID"
```

------
#### [ PowerShell ]

```
Edit-SSMDocumentPermission `
    -Name document name `
    -PermissionType Share `
    -AccountIdsToRemove AWS account ID
```

------

## Using shared SSM documents


When you share an AWS Systems Manager (SSM) document, the system generates an Amazon Resource Name (ARN) and assigns it to the command. If you select and run a shared document from the Systems Manager console, you don't see the ARN. However, if you want to run a shared SSM document using a method other than the Systems Manager console, you must specify the full ARN of the document for the `DocumentName` request parameter. You're shown the full ARN for an SSM document when you run the command to list documents. 

**Note**  
You aren't required to specify ARNs for AWS public documents (documents that begin with `AWS-*`) or documents that you own.
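When calling the API directly, the document ARN follows the standard Systems Manager format, so you can build it from its parts. A minimal sketch; the Region, account ID, and document name below are placeholders for the sharing account's values.

```python
# Placeholder values; substitute the sharing account's Region, account ID,
# and document name.
region = "us-east-2"
account_id = "123456789012"
document_name = "documentName"

# Shared-document ARN in the form used for the DocumentName request parameter.
document_arn = f"arn:aws:ssm:{region}:{account_id}:document/{document_name}"
print(document_arn)
```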

### Use a shared SSM document (command line)


 **To list all public SSM documents** 

------
#### [ Linux & macOS ]

```
aws ssm list-documents \
    --filters Key=Owner,Values=Public
```

------
#### [ Windows ]

```
aws ssm list-documents ^
    --filters Key=Owner,Values=Public
```

------
#### [ PowerShell ]

```
$filter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
$filter.Key = "Owner"
$filter.Values = "Public"

Get-SSMDocumentList `
    -Filters @($filter)
```

------

 **To list private SSM documents that have been shared with you** 

------
#### [ Linux & macOS ]

```
aws ssm list-documents \
    --filters Key=Owner,Values=Private
```

------
#### [ Windows ]

```
aws ssm list-documents ^
    --filters Key=Owner,Values=Private
```

------
#### [ PowerShell ]

```
$filter = New-Object Amazon.SimpleSystemsManagement.Model.DocumentKeyValuesFilter
$filter.Key = "Owner"
$filter.Values = "Private"

Get-SSMDocumentList `
    -Filters @($filter)
```

------

 **To list all SSM documents available to you** 

------
#### [ Linux & macOS ]

```
aws ssm list-documents
```

------
#### [ Windows ]

```
aws ssm list-documents
```

------
#### [ PowerShell ]

```
Get-SSMDocumentList
```

------

 **To get information about an SSM document that has been shared with you** 

------
#### [ Linux & macOS ]

```
aws ssm describe-document \
    --name arn:aws:ssm:us-east-2:123456789012:document/documentName
```

------
#### [ Windows ]

```
aws ssm describe-document ^
    --name arn:aws:ssm:us-east-2:123456789012:document/documentName
```

------
#### [ PowerShell ]

```
Get-SSMDocumentDescription `
    -Name arn:aws:ssm:us-east-2:123456789012:document/documentName
```

------

 **To run a shared SSM document** 

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name arn:aws:ssm:us-east-2:123456789012:document/documentName \
    --instance-ids ID
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name arn:aws:ssm:us-east-2:123456789012:document/documentName ^
    --instance-ids ID
```

------
#### [ PowerShell ]

```
Send-SSMCommand `
    -DocumentName arn:aws:ssm:us-east-2:123456789012:document/documentName `
    -InstanceIds ID
```

------

# Searching for SSM documents


You can search the AWS Systems Manager (SSM) document store for SSM documents by using either free text search or a filter-based search. You can also mark documents as favorites to help you find frequently used SSM documents. The following sections describe how to use these features.

## Using free text search


The search box on the Systems Manager **Documents** page supports free text search. Free text search compares the search term or terms that you enter against the document name of each SSM document. If you enter a single search term, for example **ansible**, then Systems Manager returns all SSM documents whose names contain this term. If you enter multiple search terms, then Systems Manager searches by using an `OR` statement. For example, if you specify **ansible** and **linux**, then the search returns all documents with *either* keyword in their name.

If you enter a free text search term and choose a search option, such as **Platform type**, then search uses an `AND` statement and returns all documents with the keyword in their name and the specified platform type.

**Note**  
Note the following details about free text search.  
Free text search is *not* case sensitive.
Search terms require a minimum of three characters and have a maximum of 20 characters.
Free text search accepts up to five search terms.
If you enter a space between search terms, the system includes the space when searching.
You can combine free text search with other search options such as **Document type** or **Platform type**.
The **Document Name Prefix** filter and free text search can't be used together; they're mutually exclusive.
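The matching behavior described above can be sketched as a small filter: search terms combine with `OR` against document names, and a search option such as platform type narrows the results with `AND`. The document records below are made up for illustration; real searches run against the document store.

```python
# Made-up document records standing in for the real document store.
documents = [
    {"name": "Ansible-ApplyPlaybook", "platform": "Linux"},
    {"name": "Run-LinuxShellScript", "platform": "Linux"},
    {"name": "Run-WindowsUpdate", "platform": "Windows"},
]

def search(docs, terms, platform=None):
    """Case-insensitive OR across terms, AND with the platform option."""
    results = []
    for doc in docs:
        name = doc["name"].lower()
        if any(term.lower() in name for term in terms):
            if platform is None or doc["platform"] == platform:
                results.append(doc["name"])
    return results

# "ansible" OR "update" matches by name alone ...
print(search(documents, ["ansible", "update"]))
# ... while adding a platform option narrows the results with AND.
print(search(documents, ["ansible", "update"], platform="Linux"))
```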

**To search for an SSM document**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Enter your search terms in the search box, and press Enter.

### Performing free text document search by using the AWS CLI


**To perform a free text document search by using the CLI**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. To perform free text document search with a single term, run the following command. In this command, replace *search_term* with your own information.

   ```
   aws ssm list-documents --filters Key="SearchKeyword",Values="search_term"
   ```

   Here's an example.

   ```
   aws ssm list-documents --filters Key="SearchKeyword",Values="aws-asg" --region us-east-2
   ```

   To search using multiple terms, which creates an `AND` statement, run the following command. In this command, replace *search_term_1*, *search_term_2*, and *search_term_3* with your own information.

   ```
   aws ssm list-documents --filters Key="SearchKeyword",Values="search_term_1","search_term_2","search_term_3" --region us-east-2
   ```

   Here's an example.

   ```
   aws ssm list-documents --filters Key="SearchKeyword",Values="aws-asg","aws-ec2","restart" --region us-east-2
   ```

## Using filters


The Systems Manager **Documents** page automatically displays the following filters when you choose the search box. 
+ Document name prefix
+ Platform types
+ Document type
+ Tag key

![\[Filter options on SSM Documents page.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/ssm-documents-filters-1.png)


You can search for SSM documents by using a single filter. If you want to return a more specific set of SSM documents, you can apply multiple filters. Here is an example of a search that uses the **Platform types** and the **Document name prefix** filters.

![\[Applying multiple filter options on the SSM Documents page.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/ssm-documents-filters-2.png)


If you apply multiple filters, Systems Manager creates different search statements based on the filters you choose: 
+ If you apply the *same* filter multiple times, for example **Document name prefix**, then Systems Manager searches by using an `OR` statement. For example, if you specify one filter of **Document name prefix**=**AWS** and a second filter of **Document name prefix**=**Lambda**, then search returns all documents with the prefix "`AWS`" and all documents with the prefix "`Lambda`".
+ If you apply *different* filters, for example **Document name prefix** and **Platform types**, then Systems Manager searches by using an `AND` statement. For example, if you specify a **Document name prefix**=**AWS** filter and a **Platform types**=**Linux** filter, then search returns all documents with the prefix "`AWS`" that are specific to the Linux platform.

**Note**  
Searches that use filters are case sensitive. 

## Adding documents to your favorites


To help you find frequently used SSM documents, add documents to your favorites. You can favorite up to 20 documents per document type, per AWS account and AWS Region. You can choose, modify, and view your favorites on the **Documents** page in the AWS Management Console. The following procedures describe how to choose, modify, and view your favorites.

**To favorite an SSM document**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Select the star icon next to the document name you want to favorite.

**To remove an SSM document from your favorites**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Deselect the star icon next to the document name you want to remove from your favorites.

**To view your favorites in the AWS Management Console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Documents**.

1. Select the **Favorites** tab.

# Troubleshooting parameter handling issues


## Common parameter handling issues


**Environment variables not available during execution**  
**Problem:** Commands fail because environment variables (`SSM_parameterName`) are not found.  
**Possible causes:**  
+ SSM Agent version doesn't support environment variable interpolation
+ `interpolationType` is not set to `ENV_VAR`
+ Parameter name doesn't match the expected environment variable name
**Solution:**  
+ Verify SSM Agent version is 3.3.2746.0 or later
+ Add fallback logic for older agent versions:

  ```
  if [ -z "${SSM_parameterName+x}" ]; then
      export SSM_parameterName="{{parameterName}}"
  fi
  ```
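
You can try the `${parameterName+x}` presence test locally. The following sketch simulates both agent behaviors; `SSM_parameterName` here is a locally assigned stand-in, not a value set by SSM Agent.

  ```
  #!/bin/bash
  # Simulate the case where the agent did not export the variable
  unset SSM_parameterName
  if [ -z "${SSM_parameterName+x}" ]; then
      echo "variable not set; falling back"
      export SSM_parameterName="fallback-value"
  fi
  echo "using: $SSM_parameterName"
  ```

The `${var+x}` expansion is empty only when `var` is unset, so the test distinguishes "unset" from "set to an empty string."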

**Parameter values containing special characters**  
**Problem:** Commands fail when parameter values contain spaces, quotes, or other special characters.  
**Solution:**  
+ Use proper quoting when referencing environment variables:

  ```
  # Correct: quoting preserves spaces and special characters
  echo "$SSM_parameterName"
  
  # Incorrect: unquoted expansion undergoes word splitting
  echo $SSM_parameterName
  ```
+ Add input validation using `allowedPattern` to restrict special characters
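
The effect of quoting can be demonstrated locally. In this sketch, the variable is a stand-in assigned in the script, not a real interpolated parameter.

  ```
  # Stand-in for an interpolated parameter value containing runs of spaces
  SSM_parameterName='value with   spaces'
  quoted="$(echo "$SSM_parameterName")"   # whitespace preserved
  unquoted="$(echo $SSM_parameterName)"   # word splitting collapses the spaces
  echo "$quoted"
  echo "$unquoted"
  ```

The unquoted form loses the internal run of spaces because the shell splits the value into words before passing them to `echo`.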

**Inconsistent behavior across platforms**  
**Problem:** Parameter handling works differently on Linux and Windows Server systems.  
**Solution:**  
+ Use platform-specific environment variable syntax:

  ```
  # PowerShell
  $env:SSM_parameterName
  
  # Bash
  $SSM_parameterName
  ```
+ Use platform-specific precondition checks in your document

**Parameter values not properly escaped**  
**Problem:** Command injection vulnerabilities despite using environment variable interpolation.  
**Solution:**  
+ Always use proper escaping when including parameter values in commands:

  ```
  # Correct
  mysql_command="mysql -u \"$SSM_username\" -p\"$SSM_password\""
  
  # Incorrect
  mysql_command="mysql -u $SSM_username -p$SSM_password"
  ```
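
The injection risk can be reproduced locally. In the following sketch, the value assigned to `SSM_username` is a stand-in for a maliciously crafted parameter; `eval` stands in for any code path that reparses a command string.

  ```
  # Stand-in for a crafted parameter value; not a real SSM parameter
  SSM_username='admin; echo INJECTED'
  
  # Unsafe: the value is spliced into a string the shell reparses,
  # so the semicolon starts a second command
  unsafe="$(eval "echo $SSM_username")"
  
  # Safer: keep the expansion quoted so the value stays a single argument
  safe="$(eval "echo \"\$SSM_username\"")"
  
  echo "$unsafe"
  echo "$safe"
  ```

In the unsafe case the output contains the injected command's result on its own line; in the quoted case the entire value, semicolon included, is printed as one literal argument.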

## Parameter validation tips


Use these techniques to validate your parameter handling:

1. Test environment variable availability:

   ```
   #!/bin/bash
   # Print all SSM_ environment variables
   env | grep ^SSM_
   
   # Test specific parameter
   if [ -n "$SSM_parameter" ]; then
       echo "Parameter is available"
   else
       echo "Parameter is not available"
   fi
   ```

1. Verify parameter patterns:

   ```
   parameters:
     myParameter:
       type: String
       allowedPattern: "^[a-zA-Z0-9_-]+$"
       description: "Test this pattern with sample inputs"
   ```

1. Include error handling:

   ```
   if [[ ! "$SSM_parameter" =~ ^[a-zA-Z0-9_-]+$ ]]; then
       echo "Parameter validation failed"
       exit 1
   fi
   ```
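
The checks above can be combined into a small reusable helper. This is a sketch; the function name, sample values, and pattern are illustrative.

   ```
   # Validate a value against a pattern before using it in a command
   validate_param() {
       local value="$1" pattern="$2"
       if [[ "$value" =~ ^$pattern$ ]]; then
           return 0
       fi
       echo "Parameter validation failed: $value" >&2
       return 1
   }
   
   validate_param "web-server_01" "[a-zA-Z0-9_-]+" && echo "accepted"
   validate_param "bad;value" "[a-zA-Z0-9_-]+" || echo "rejected"
   ```

Anchoring the pattern with `^` and `$` ensures the entire value matches, so a valid prefix followed by shell metacharacters is still rejected.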

# AWS Systems Manager Maintenance Windows

Maintenance Windows, a tool in AWS Systems Manager, helps you define a schedule for when to perform potentially disruptive actions on your nodes such as patching an operating system, updating drivers, or installing software or patches.

**Note**  
State Manager and Maintenance Windows can perform some similar types of updates on your managed nodes. Which one you choose depends on whether you need to automate system compliance or perform high-priority, time-sensitive tasks during periods you specify.  
For more information, see [Choosing between State Manager and Maintenance Windows](state-manager-vs-maintenance-windows.md).

With Maintenance Windows, you can schedule actions on numerous other AWS resource types, such as Amazon Simple Storage Service (Amazon S3) buckets, Amazon Simple Queue Service (Amazon SQS) queues, AWS Key Management Service (AWS KMS) keys, and many more. 

For a full list of supported resource types that you can include in a maintenance window target, see [Resources you can use with AWS Resource Groups and Tag Editor](https://docs.aws.amazon.com/ARG/latest/userguide/supported-resources.html#supported-resources-console) in the *AWS Resource Groups User Guide*. To get started with Maintenance Windows, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/maintenance-windows). In the navigation pane, choose **Maintenance Windows**.

Each maintenance window has a schedule, a maximum duration, a set of registered targets (the managed nodes or other AWS resources that are acted upon), and a set of registered tasks. You can add tags to your maintenance windows when you create or update them. (Tags are keys that help identify and sort your resources within your organization.) You can also specify dates that a maintenance window shouldn't run before or after, and you can specify the international time zone on which to base the maintenance window schedule. 

For an explanation of how the various schedule-related options for maintenance windows relate to one another, see [Maintenance window scheduling and active period options](maintenance-windows-schedule-options.md).

For more information about working with the `--schedule` option, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).
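
For instance, a window that opens every Sunday at 2:00 AM for three hours might be created with a command like the following. The name and values are illustrative, and the command assumes a configured AWS CLI.

```
# Illustrative only: a 3-hour window every Sunday at 2:00 AM, with new task
# starts cut off 1 hour before the window closes
aws ssm create-maintenance-window \
    --name "my-sunday-patch-window" \
    --schedule "cron(0 2 ? * SUN *)" \
    --duration 3 \
    --cutoff 1 \
    --allow-unassociated-targets \
    --region us-east-2
```

The `--cutoff` value keeps long-running tasks from starting so late that they overrun the window's `--duration`.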

**Supported task types**  
With maintenance windows, you can run four types of tasks:
+ Commands in Run Command, a tool in Systems Manager

  For more information about Run Command, see [AWS Systems Manager Run Command](run-command.md).
+ Workflows in Automation, a tool in Systems Manager

  For more information about Automation workflows, see [AWS Systems Manager Automation](systems-manager-automation.md).
+ Functions in AWS Lambda

  For more information about Lambda functions, see [Getting started with Lambda](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html) in the *AWS Lambda Developer Guide*.
+ Tasks in AWS Step Functions
**Note**  
Maintenance window tasks support Step Functions Standard state machine workflows only. They don't support Express state machine workflows. For information about state machine workflow types, see [Standard vs. Express Workflows](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-standard-vs-express.html) in the *AWS Step Functions Developer Guide*.

  For more information about Step Functions, see the *[AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/)*.

This means you can use maintenance windows to perform tasks such as the following on your selected targets.
+ Install or update applications.
+ Apply patches.
+ Install or update SSM Agent.
+ Run PowerShell commands and Linux shell scripts by using a Systems Manager Run Command task.
+ Build Amazon Machine Images (AMIs), boot-strap software, and configure nodes by using a Systems Manager Automation task.
+ Run AWS Lambda functions that invoke additional actions, such as scanning your nodes for patch updates.
+ Run AWS Step Functions state machines to perform tasks such as removing a node from an Elastic Load Balancing environment, patching the node, and then adding the node back to the Elastic Load Balancing environment.
+ Target nodes that are offline by specifying an AWS resource group as the target.

**Note**  
One or more targets must be specified for maintenance window Run Command-type tasks. Depending on the task, targets are optional for other maintenance window task types (Automation, AWS Lambda, and AWS Step Functions). For more information about running tasks that don't specify targets, see [Registering maintenance window tasks without targets](maintenance-windows-targetless-tasks.md).

**EventBridge support**  
This Systems Manager tool is supported as an *event* type in Amazon EventBridge rules. For information, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md).

**Topics**
+ [Setting up Maintenance Windows](setting-up-maintenance-windows.md)
+ [Create and manage maintenance windows using the console](sysman-maintenance-working.md)
+ [Tutorials](maintenance-windows-tutorials.md)
+ [Using pseudo parameters when registering maintenance window tasks](maintenance-window-tasks-pseudo-parameters.md)
+ [Maintenance window scheduling and active period options](maintenance-windows-schedule-options.md)
+ [Registering maintenance window tasks without targets](maintenance-windows-targetless-tasks.md)
+ [Troubleshooting maintenance windows](troubleshooting-maintenance-windows.md)

# Setting up Maintenance Windows

Before users in your AWS account can create and schedule maintenance window tasks using Maintenance Windows, a tool in AWS Systems Manager, they must be granted the necessary permissions. In addition, you must create an IAM service role for maintenance windows and the IAM policy to attach to it.

**Before you begin**  
In addition to the permissions you configure in this section, the IAM entities (users, roles, or groups) that will work with maintenance windows should already have general maintenance window permissions. You can grant these permissions by assigning the IAM policy `AmazonSSMFullAccess` to the entities, or by assigning a custom IAM policy that provides a smaller set of Systems Manager access permissions that covers maintenance window tasks.

**Topics**
+ [Control access to maintenance windows using the console](configuring-maintenance-window-permissions-console.md)
+ [Control access to maintenance windows using the AWS CLI](configuring-maintenance-window-permissions-cli.md)

# Control access to maintenance windows using the console

The following procedures describe how to use the AWS Systems Manager console to create the required permissions and roles for maintenance windows.

**Topics**
+ [Task 1: Create a custom policy for your maintenance window service role using the console](#create-custom-policy-console)
+ [Task 2: Create a custom service role for maintenance windows using the console](#create-custom-role-console)
+ [Task 3: Grant permissions to specified users to register maintenance window tasks using the console](#allow-maintenance-window-access-console)
+ [Task 4: Prevent specified users from registering maintenance window tasks using the console](#deny-maintenance-window-access-console)

## Task 1: Create a custom policy for your maintenance window service role using the console


Maintenance window tasks require an IAM role to provide the permissions required to run on the target resources. The permissions are provided through an IAM policy attached to the role. The types of tasks you run and your other operational requirements determine the contents of this policy. We provide a base policy you can adapt to your needs. Depending on the tasks and types of tasks your maintenance windows run, you might not need all the permissions in this policy, and you might need to include additional permissions. You attach this policy to the role that you create later in [Task 2: Create a custom service role for maintenance windows using the console](#create-custom-role-console).

**To create a custom policy using the console**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**, and then choose **Create policy**.

1. In the **Policy editor** area, choose **JSON**.

1. Replace the default contents with the following:

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssm:SendCommand",
                   "ssm:CancelCommand",
                   "ssm:ListCommands",
                   "ssm:ListCommandInvocations",
                   "ssm:GetCommandInvocation",
                   "ssm:GetAutomationExecution",
                   "ssm:StartAutomationExecution",
                   "ssm:ListTagsForResource",
                   "ssm:DescribeInstanceInformation",
                   "ssm:GetParameters"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "states:DescribeExecution",
                   "states:StartExecution"
               ],
               "Resource": [
                   "arn:aws:states:*:*:execution:*:*",
                   "arn:aws:states:*:*:stateMachine:*"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "lambda:InvokeFunction"
               ],
               "Resource": [
                   "arn:aws:lambda:*:*:function:*"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "resource-groups:ListGroups",
                   "resource-groups:ListGroupResources"
               ],
               "Resource": [
                   "*"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "tag:GetResources"
               ],
               "Resource": [
                   "*"
               ]
           },
           {
               "Effect": "Allow",
               "Action": "iam:PassRole",
               "Resource": "arn:aws:iam::111122223333:role/maintenance-window-role-name",
               "Condition": {
                   "StringEquals": {
                       "iam:PassedToService": [
                           "ssm.amazonaws.com"
                       ]
                   }
               }
           }
       ]
   }
   ```

------

1. Modify the JSON content as needed for the maintenance tasks that you run in your account. The changes you make are specific to your planned operations. 

   For example:
   + You can provide Amazon Resource Names (ARNs) for specific functions and state machines instead of using wildcard (`*`) qualifiers.
   + If you don’t plan to run AWS Step Functions tasks, you can remove the `states` permissions and ARNs.
   + If you don’t plan to run AWS Lambda tasks, you can remove the `lambda` permissions and ARNs.
   + If you don't plan to run Automation tasks, you can remove the `ssm:GetAutomationExecution` and `ssm:StartAutomationExecution` permissions.
   + Add additional permissions that might be needed for the tasks to run. For example, some Automation actions work with AWS CloudFormation stacks. Therefore, the permissions `cloudformation:CreateStack`, `cloudformation:DescribeStacks`, and `cloudformation:DeleteStack` are required. 

     For another example, the Automation runbook `AWS-CopySnapshot` requires permissions to create an Amazon Elastic Block Store (Amazon EBS) snapshot. Therefore, the service role needs the permission `ec2:CreateSnapshot`. 

     For information about the role permissions needed by Automation runbooks, see the runbook descriptions in the [AWS Systems Manager Automation Runbook Reference](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-runbook-reference.html).

1. After completing the policy revisions, choose **Next**.

1. For **Policy name**, enter a name that identifies this as the policy attached to the service role you create. For example: **my-maintenance-window-role-policy**.

1. (Optional) In the **Add tags** area, add one or more tag key-value pairs to organize, track, or control access for this policy. 

1. Choose **Create policy**.

   Make a note of the name you specified for the policy. You refer to it in the next procedure, [Task 2: Create a custom service role for maintenance windows using the console](#create-custom-role-console).

## Task 2: Create a custom service role for maintenance windows using the console


The policy you created in the previous task is attached to the maintenance window service role you create in this task. When users register a maintenance window task, they specify this IAM role as part of the task configuration. The permissions in this role allow Systems Manager to run tasks in maintenance windows on your behalf.

**Important**  
Previously, the Systems Manager console provided you with the ability to choose the AWS managed IAM service-linked role `AWSServiceRoleForAmazonSSM` to use as the maintenance role for your tasks. Using this role and its associated policy, `AmazonSSMServiceRolePolicy`, for maintenance window tasks is no longer recommended. If you're using this role for maintenance window tasks now, we encourage you to stop using it. Instead, create your own IAM role that enables communication between Systems Manager and other AWS services when your maintenance window tasks run.

Use the following procedure to create a custom service role for Maintenance Windows, so that Systems Manager can run Maintenance Windows tasks on your behalf. You attach the policy you created in the previous task to the custom service role you create.

**To create a custom service role for maintenance windows using the console**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Roles**, and then choose **Create role**.

1. For **Select trusted entity**, make the following choices:

   1. For **Trusted entity type**, choose **AWS service**.

   1. For **Use case**, choose **Systems Manager**.

      The following image highlights the location of the Systems Manager option.  
![\[Systems Manager is one of the options for Use case.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/iam_use_cases_for_MWs.png)

1. Choose **Next**. 

1. In the **Permissions policies** area, in the search box, enter the name of the policy you created in [Task 1: Create a custom policy for your maintenance window service role using the console](#create-custom-policy-console), select the box next to its name, and then choose **Next**.

1. For **Role name**, enter a name that identifies this role as a Maintenance Windows role. For example: **my-maintenance-window-role**.

1. (Optional) Change the default role description to reflect the purpose of this role. For example: **Performs maintenance window tasks on your behalf**.

1. For **Step 1: Select trusted entities**, verify that the following policy is displayed in the **Trusted policy** box.

------
#### [ JSON ]

****  

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "",
         "Effect": "Allow",
         "Principal": {
           "Service": "ssm.amazonaws.com"
         },
         "Action": "sts:AssumeRole"
       }
     ]
   }
   ```

------

1. For **Step 2: Add permissions**, verify that the policy you created in [Task 1: Create a custom policy for your maintenance window service role using the console](#create-custom-policy-console) is present.

1. (Optional) In **Step 3: Add tags**, add one or more tag key-value pairs to organize, track, or control access for this role. 

1. Choose **Create role**. The system returns you to the **Roles** page.

1. Choose the name of the IAM role you just created.

1. Copy or make a note of the role name and the **ARN** value in the **Summary** area. Users in your account specify this information when they create maintenance windows.

## Task 3: Grant permissions to specified users to register maintenance window tasks using the console


Providing users with permissions to access the custom service role for maintenance windows lets them use it with their maintenance window tasks. This is in addition to the permissions you’ve already given them to work with the Systems Manager API operations for the Maintenance Windows tool. This IAM role conveys the permissions needed to run a maintenance window task. As a result, without the ability to pass these IAM permissions, a user can't register tasks with a maintenance window using your custom service role.

When you register a task with a maintenance window, you specify a service role to run the actual task operations. This is the role that the service assumes when it runs tasks on your behalf. Before that, to register the task itself, assign the IAM `PassRole` policy to an IAM entity (such as a user or group). This allows the IAM entity to specify, as part of registering those tasks with the maintenance window, the role that should be used when running tasks. For information, see [Grant a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.

**To configure permissions to allow users to register maintenance window tasks**

If an IAM entity (user, role, or group) is set up with administrator permissions, then that entity has access to Maintenance Windows. For IAM entities without administrator permissions, an administrator must grant the following permissions to the IAM entity. These are the minimum permissions required to register tasks with a maintenance window:
+ The `AmazonSSMFullAccess` managed policy, or a policy that provides comparable permissions.
+ The following `iam:PassRole` and `iam:ListRoles` permissions.

------
#### [ JSON ]

****  

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Action": "iam:PassRole",
              "Resource": "arn:aws:iam::111122223333:role/my-maintenance-window-role"
          },
          {
              "Effect": "Allow",
              "Action": "iam:ListRoles",
              "Resource": "arn:aws:iam::111122223333:role/"
          },
          {
              "Effect": "Allow",
              "Action": "iam:ListRoles",
              "Resource": "arn:aws:iam::111122223333:role/aws-service-role/ssm.amazonaws.com/"
          }
      ]
  }
  ```

------

  *my-maintenance-window-role* represents the name of the custom maintenance window service role you created earlier.

  *111122223333* represents the ID of your AWS account. Adding the `iam:ListRoles` permission for the resource `arn:aws:iam::111122223333:role/` allows a user to view and choose from customer roles in the console when they create a maintenance window task. Adding this permission for `arn:aws:iam::111122223333:role/aws-service-role/ssm.amazonaws.com/` allows a user to choose the Systems Manager service-linked role in the console when they create a maintenance window task. 

  To provide access, add permissions to your users, groups, or roles:
  + Users and groups in AWS IAM Identity Center:

    Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
  + Users managed in IAM through an identity provider:

    Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
  + IAM users:
    + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
    + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

**To configure permissions for groups that are allowed to register maintenance window tasks using the console**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **User groups**.

1. In the list of groups, select the name of the group you want to assign the `iam:PassRole` permission to, or first create a new group if necessary.

1. On the **Permissions** tab, choose **Add permissions, Create inline policy**.

1. In the **Policy editor** area, choose **JSON**, and replace the default contents of the box with the following.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "iam:PassRole",
               "Resource": "arn:aws:iam::111122223333:role/my-maintenance-window-role"
           },
           {
               "Effect": "Allow",
               "Action": "iam:ListRoles",
               "Resource": "arn:aws:iam::111122223333:role/"
           },
           {
               "Effect": "Allow",
               "Action": "iam:ListRoles",
               "Resource": "arn:aws:iam::111122223333:role/aws-service-role/ssm.amazonaws.com/"
           }
       ]
   }
   ```

------

   *my-maintenance-window-role* represents the name of the custom maintenance window role you created earlier.

   *111122223333* represents the ID of your AWS account. Adding the `iam:ListRoles` permission for the resource `arn:aws:iam::111122223333:role/` allows a user to view and choose from customer roles in the console when they create a maintenance window task. Adding this permission for `arn:aws:iam::111122223333:role/aws-service-role/ssm.amazonaws.com/` allows a user to choose the Systems Manager service-linked role in the console when they create a maintenance window task. 

1. Choose **Next**.

1. On the **Review and create** page, enter a name in the **Policy name** box to identify this `PassRole` policy, such as **my-group-iam-passrole-policy**, and then choose **Create policy**.

## Task 4: Prevent specified users from registering maintenance window tasks using the console


You can deny the `ssm:RegisterTaskWithMaintenanceWindow` permission for the users in your AWS account who you don't want to register tasks with maintenance windows. This provides an extra layer of prevention for users who shouldn’t register maintenance window tasks.

**To configure permissions for groups that aren't allowed to register maintenance window tasks using the console**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **User groups**.

1. In the list of groups, select the name of the group from which you want to deny the `ssm:RegisterTaskWithMaintenanceWindow` permission, or first create a new group if necessary.

1. On the **Permissions** tab, choose **Add permissions, Create inline policy**.

1. In the **Policy editor** area, choose **JSON**, and then replace the default contents of the box with the following.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Deny",
               "Action": "ssm:RegisterTaskWithMaintenanceWindow",
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Choose **Next**.

1. On the **Review and create** page, for **Policy name**, enter a name to identify this policy, such as **my-groups-deny-mw-tasks-policy**, and then choose **Create policy**.

# Control access to maintenance windows using the AWS CLI

The following procedures describe how to use the AWS Command Line Interface (AWS CLI) to create the required permissions and roles for Maintenance Windows, a tool in AWS Systems Manager.

**Topics**
+ [Task 1: Create trust policy and customer managed policy files in JSON format](#create-custom-policy-json-files-cli)
+ [Task 2: Create and verify a custom service role for maintenance windows using the AWS CLI](#create-custom-role-cli)
+ [Task 3: Grant permissions to specified users to register maintenance window tasks using the AWS CLI](#allow-maintenance-window-access-cli)
+ [Task 4: Prevent specified users from registering maintenance window tasks using the AWS CLI](#deny-maintenance-window-access-cli)

## Task 1: Create trust policy and customer managed policy files in JSON format


Maintenance window tasks require an IAM role to provide the permissions required to run on the target resources. The permissions are provided through an IAM policy attached to the role. The types of tasks you run and your other operational requirements determine the contents of this policy. We provide a base policy you can adapt to your needs. Depending on the tasks and types of tasks your maintenance windows run, you might not need all the permissions in this policy, and you might need to include additional permissions. 

In this task, you specify the permissions needed for your custom maintenance window role in a pair of JSON files. You attach this policy to the role that you create later in [Task 2: Create and verify a custom service role for maintenance windows using the AWS CLI](#create-custom-role-cli). 

**To create trust policy and customer managed policy files**

1. Copy and paste the following trust policy into a text file. Save this file with the following name and file extension: **mw-role-trust-policy.json**.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "sts:AssumeRole"
           }
       ]
   }
   ```

------

1. Copy and paste the following JSON policy into a different text file. In the same directory where you created the first file, save this file with the following name and file extension: **mw-role-custom-policy.json**.

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "ssm:SendCommand",
                   "ssm:CancelCommand",
                   "ssm:ListCommands",
                   "ssm:ListCommandInvocations",
                   "ssm:GetCommandInvocation",
                   "ssm:GetAutomationExecution",
                   "ssm:StartAutomationExecution",
                   "ssm:ListTagsForResource",
                   "ssm:GetParameters"
               ],
               "Resource": "*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "states:DescribeExecution",
                   "states:StartExecution"
               ],
               "Resource": [
                   "arn:aws:states:*:*:execution:*:*",
                   "arn:aws:states:*:*:stateMachine:*"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "lambda:InvokeFunction"
               ],
               "Resource": [
                   "arn:aws:lambda:*:*:function:*"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "resource-groups:ListGroups",
                   "resource-groups:ListGroupResources"
               ],
               "Resource": [
                   "*"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "tag:GetResources"
               ],
               "Resource": [
                   "*"
               ]
           },
           {
               "Effect": "Allow",
               "Action": "iam:PassRole",
               "Resource": "arn:aws:iam::111122223333:role/maintenance-window-role-name",
               "Condition": {
                   "StringEquals": {
                       "iam:PassedToService": [
                           "ssm.amazonaws.com"
                       ]
                   }
               }
           }
       ]
   }
   ```

------

1. Modify the content of `mw-role-custom-policy.json` as needed for the maintenance tasks that you run in your account. The changes you make are specific to your planned operations.

   For example:
   + You can provide Amazon Resource Names (ARNs) for specific functions and state machines instead of using wildcard (`*`) qualifiers.
   + If you don’t plan to run AWS Step Functions tasks, you can remove the `states` permissions and ARNs.
   + If you don’t plan to run AWS Lambda tasks, you can remove the `lambda` permissions and ARNs.
   + If you don't plan to run Automation tasks, you can remove the `ssm:GetAutomationExecution` and `ssm:StartAutomationExecution` permissions.
   + Add additional permissions that might be needed for the tasks to run. For example, some Automation actions work with AWS CloudFormation stacks. Therefore, the permissions `cloudformation:CreateStack`, `cloudformation:DescribeStacks`, and `cloudformation:DeleteStack` are required. 

     For another example, the Automation runbook `AWS-CopySnapshot` requires permissions to create an Amazon Elastic Block Store (Amazon EBS) snapshot. Therefore, the service role needs the permission `ec2:CreateSnapshot`. 

     For information about the role permissions needed by Automation runbooks, see the runbook descriptions in the [AWS Systems Manager Automation Runbook Reference](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-runbook-reference.html).

   Save the file again after making any needed changes.
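If you maintain several variants of this policy, you can trim the unused statements programmatically before saving the file instead of editing the JSON by hand. The following is an illustrative sketch only, not part of the procedure; the `strip_statements` helper and the minimal inline policy are hypothetical:

```python
import json

def strip_statements(policy: dict, unused_prefixes: tuple) -> dict:
    """Return a copy of an IAM policy, dropping statements whose actions
    all start with one of the given service prefixes."""
    kept = []
    for stmt in policy["Statement"]:
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if all(a.startswith(unused_prefixes) for a in actions):
            continue  # every action belongs to an unused service; drop it
        kept.append(stmt)
    return {**policy, "Statement": kept}

# Example: drop the Step Functions and Lambda statements when you don't
# plan to run those task types. (Minimal policy shown for illustration.)
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["ssm:SendCommand"], "Resource": "*"},
        {"Effect": "Allow", "Action": ["states:StartExecution"], "Resource": "*"},
        {"Effect": "Allow", "Action": ["lambda:InvokeFunction"], "Resource": "*"},
    ],
}
trimmed = strip_statements(policy, ("states:", "lambda:"))
print(json.dumps(trimmed, indent=4))
```

Remember that a sketch like this only removes permissions; any additional permissions your tasks need (such as the CloudFormation or Amazon EBS examples above) still have to be added explicitly.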

## Task 2: Create and verify a custom service role for maintenance windows using the AWS CLI


The policy you created in the previous task is attached to the maintenance window service role you create in this task. When users register a maintenance window task, they specify this IAM role as part of the task configuration. The permissions in this role allow Systems Manager to run tasks in maintenance windows on your behalf.

**Important**  
Previously, the Systems Manager console provided you with the ability to choose the AWS managed IAM service-linked role `AWSServiceRoleForAmazonSSM` to use as the maintenance role for your tasks. Using this role and its associated policy, `AmazonSSMServiceRolePolicy`, for maintenance window tasks is no longer recommended. If you're using this role for maintenance window tasks now, we encourage you to stop using it. Instead, create your own IAM role that enables communication between Systems Manager and other AWS services when your maintenance window tasks run.

In this task, you run CLI commands to create your maintenance window service role, adding the policy content from the JSON files you created.

**Create a custom service role for maintenance windows using the AWS CLI**

1. Open the AWS CLI and run the following command in the directory where you placed `mw-role-custom-policy.json` and `mw-role-trust-policy.json`. The command creates a maintenance window service role called `my-maintenance-window-role` and attaches the *trust policy* to it.

------
#### [ Linux & macOS ]

   ```
   aws iam create-role \
       --role-name "my-maintenance-window-role" \
       --assume-role-policy-document file://mw-role-trust-policy.json
   ```

------
#### [ Windows ]

   ```
   aws iam create-role ^
       --role-name "my-maintenance-window-role" ^
       --assume-role-policy-document file://mw-role-trust-policy.json
   ```

------

   The system returns information similar to the following.

   ```
   {
       "Role": {
           "AssumeRolePolicyDocument": {
               "Version": "2012-10-17", 		 	 	 		 	 	 
               "Statement": [
                   {
                       "Action": "sts:AssumeRole",
                       "Effect": "Allow",
                       "Principal": {
                           "Service": "ssm.amazonaws.com"
                       }
                   }
               ]
           },
           "RoleId": "AROAIIZKPBKS2LEXAMPLE",
           "CreateDate": "2024-08-19T03:40:17.373Z",
           "RoleName": "my-maintenance-window-role",
           "Path": "/",
           "Arn": "arn:aws:iam::123456789012:role/my-maintenance-window-role"
       }
   }
   ```
**Note**  
Make a note of the `RoleName` and the `Arn` values. You include them in the next command.

1. Run the following command to attach the *customer managed policy* to the role. Replace the *account-id* placeholder with your own AWS account ID. The managed policy must already exist in IAM; if you haven't created it yet, you can create it from the JSON file first by running `aws iam create-policy --policy-name "mw-role-custom-policy" --policy-document file://mw-role-custom-policy.json`.

------
#### [ Linux & macOS ]

   ```
   aws iam attach-role-policy \
       --role-name "my-maintenance-window-role" \
       --policy-arn "arn:aws:iam::account-id:policy/mw-role-custom-policy.json"
   ```

------
#### [ Windows ]

   ```
   aws iam attach-role-policy ^
       --role-name "my-maintenance-window-role" ^
       --policy-arn "arn:aws:iam::account-id:policy/mw-role-custom-policy.json"
   ```

------

1. Run the following command to verify that your role has been created, and that the trust policy has been attached.

   ```
   aws iam get-role --role-name my-maintenance-window-role
   ```

   The command returns information similar to the following:

   ```
   {
       "Role": {
           "Path": "/",
           "RoleName": "my-maintenance-window-role",
           "RoleId": "AROA123456789EXAMPLE",
           "Arn": "arn:aws:iam::123456789012:role/my-maintenance-window-role",
           "CreateDate": "2024-08-19T14:13:32+00:00",
           "AssumeRolePolicyDocument": {
               "Version": "2012-10-17", 		 	 	 		 	 	 
               "Statement": [
                   {
                       "Effect": "Allow",
                       "Principal": {
                           "Service": "ssm.amazonaws.com"
                       },
                       "Action": "sts:AssumeRole"
                   }
               ]
           },
           "MaxSessionDuration": 3600,
           "RoleLastUsed": {
               "LastUsedDate": "2024-08-19T14:30:44+00:00",
               "Region": "us-east-2"
           }
       }
   }
   ```

1. Run the following command to verify that the customer managed policy has been attached to the role.

   ```
   aws iam list-attached-role-policies --role-name my-maintenance-window-role
   ```

   The command returns information similar to the following:

   ```
   {
       "AttachedPolicies": [
           {
               "PolicyName": "mw-role-custom-policy",
               "PolicyArn": "arn:aws:iam::123456789012:policy/mw-role-custom-policy"
           }
       ]
   }
   ```
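If you verify role setup as part of a script rather than by reading the CLI output, you can parse the JSON instead. The following is a minimal sketch that assumes the sample output shown above; in practice you would capture the output with `subprocess` or boto3:

```python
import json

# Sample output from `aws iam list-attached-role-policies`, as shown above.
cli_output = """
{
    "AttachedPolicies": [
        {
            "PolicyName": "mw-role-custom-policy",
            "PolicyArn": "arn:aws:iam::123456789012:policy/mw-role-custom-policy"
        }
    ]
}
"""

# Collect the names of all attached policies and fail if ours is missing.
attached = {p["PolicyName"] for p in json.loads(cli_output)["AttachedPolicies"]}
if "mw-role-custom-policy" not in attached:
    raise SystemExit("custom policy is not attached to the role")
print("policy attached:", sorted(attached))
```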

## Task 3: Grant permissions to specified users to register maintenance window tasks using the AWS CLI


Providing users with permissions to access the custom service role for maintenance windows lets them use it with their maintenance window tasks. This is in addition to the permissions that you’ve already given them to work with the Systems Manager API commands for the Maintenance Windows tool. This IAM role conveys the permissions needed to run a maintenance window task. As a result, a user can't register tasks with a maintenance window using your custom service role unless they have permission to pass that role.

When you register a task with a maintenance window, you specify a service role to run the actual task operations. This is the role that the service assumes when it runs tasks on your behalf. Before that, to register the task itself, assign the IAM `PassRole` policy to an IAM entity (such as a user or group). This allows the IAM entity to specify, as part of registering those tasks with the maintenance window, the role that should be used when running tasks. For information, see [Grant a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.

**To configure permissions for users who are allowed to register maintenance window tasks using the AWS CLI**

1. Copy and paste the following AWS Identity and Access Management (IAM) policy into a text editor and save it with the following name and file extension: `mw-passrole-policy.json`.

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "iam:PassRole",
               "Resource": "arn:aws:iam::111122223333:role/my-maintenance-window-role"
           },
           {
               "Effect": "Allow",
               "Action": "iam:ListRoles",
               "Resource": "arn:aws:iam::111122223333:role/"
           },
           {
               "Effect": "Allow",
               "Action": "iam:ListRoles",
               "Resource": "arn:aws:iam::111122223333:role/aws-service-role/ssm.amazonaws.com/"
           }
       ]
   }
   ```

------

   Replace *my-maintenance-window-role* with the name of the custom maintenance window role you created earlier.

   Replace *111122223333* with the ID of your AWS account. Adding this permission for the resource `arn:aws:iam::account-id:role/` allows users in the group to view and choose from customer roles in the console when they create a maintenance window task. Adding this permission for `arn:aws:iam::account-id:role/aws-service-role/ssm.amazonaws.com/` allows users in the group to choose the Systems Manager service-linked role in the console when they create a maintenance window task. 

1. Open the AWS CLI.

1. Depending on whether you're assigning the permission to an IAM entity (user or group), run one of the following commands.
   + **For a user:**

------
#### [ Linux & macOS ]

     ```
     aws iam put-user-policy \
         --user-name "user-name" \
         --policy-name "policy-name" \
         --policy-document file://path-to-document
     ```

------
#### [ Windows ]

     ```
     aws iam put-user-policy ^
         --user-name "user-name" ^
         --policy-name "policy-name" ^
         --policy-document file://path-to-document
     ```

------

     For *user-name*, specify the user who assigns tasks to maintenance windows. For *policy-name*, specify the name you want to use to identify the policy, such as **my-iam-passrole-policy**. For *path-to-document*, specify the path to the file you saved in step 1. For example: `file://C:\Temp\mw-passrole-policy.json`
**Note**  
To grant access for a user to register tasks for maintenance windows using the Systems Manager console, you must also assign the `AmazonSSMFullAccess` policy to your user (or an IAM policy that provides a smaller set of access permissions for Systems Manager that covers maintenance window tasks). Run the following command to assign the `AmazonSSMFullAccess` policy to your user.  

     ```
     aws iam attach-user-policy \
         --policy-arn "arn:aws:iam::aws:policy/AmazonSSMFullAccess" \
         --user-name "user-name"
     ```

     ```
     aws iam attach-user-policy ^
         --policy-arn "arn:aws:iam::aws:policy/AmazonSSMFullAccess" ^
         --user-name "user-name"
     ```
   + **For an IAM group:**

------
#### [ Linux & macOS ]

     ```
     aws iam put-group-policy \
         --group-name "group-name" \
         --policy-name "policy-name" \
         --policy-document file://path-to-document
     ```

------
#### [ Windows ]

     ```
     aws iam put-group-policy ^
         --group-name "group-name" ^
         --policy-name "policy-name" ^
         --policy-document file://path-to-document
     ```

------

     For *group-name*, specify the group whose members assign tasks to maintenance windows. For *policy-name*, specify the name you want to use to identify the policy, such as **my-iam-passrole-policy**. For *path-to-document*, specify the path to the file you saved in step 1. For example: `file://C:\Temp\mw-passrole-policy.json`
**Note**  
To grant access for members of a group to register tasks for maintenance windows using the Systems Manager console, you must also assign the `AmazonSSMFullAccess` policy to your group. Run the following command to assign this policy to your group.  

     ```
     aws iam attach-group-policy \
         --policy-arn "arn:aws:iam::aws:policy/AmazonSSMFullAccess" \
         --group-name "group-name"
     ```

     ```
     aws iam attach-group-policy ^
         --policy-arn "arn:aws:iam::aws:policy/AmazonSSMFullAccess" ^
         --group-name "group-name"
     ```

1. Run the following command to verify that the policy has been assigned to the group.

------
#### [ Linux & macOS ]

   ```
   aws iam list-group-policies \
       --group-name "group-name"
   ```

------
#### [ Windows ]

   ```
   aws iam list-group-policies ^
       --group-name "group-name"
   ```

------
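The effect of the `iam:PassRole` statement in the policy above is that users can pass only the specific role ARN named in `Resource` when they register a task. As a simplified model of that check (real IAM evaluation involves more policy types and rules; the `may_pass_role` helper is illustrative only):

```python
from fnmatch import fnmatch

# Role ARNs listed in the iam:PassRole statement's Resource element.
ALLOWED_PASSROLE_RESOURCES = [
    "arn:aws:iam::111122223333:role/my-maintenance-window-role",
]

def may_pass_role(role_arn: str) -> bool:
    """Return True if the requested role ARN matches an allowed resource.
    fnmatch handles wildcard (*) patterns the same way for exact strings."""
    return any(fnmatch(role_arn, pattern) for pattern in ALLOWED_PASSROLE_RESOURCES)

print(may_pass_role("arn:aws:iam::111122223333:role/my-maintenance-window-role"))  # True
print(may_pass_role("arn:aws:iam::111122223333:role/some-other-role"))             # False
```

This is why scoping the `Resource` to a single role ARN, rather than `*`, limits which service roles users can attach to their maintenance window tasks.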

## Task 4: Prevent specified users from registering maintenance window tasks using the AWS CLI


You can deny the `ssm:RegisterTaskWithMaintenanceWindow` permission for the users in your AWS account who you don't want to register tasks with maintenance windows. This provides an extra layer of prevention for users who shouldn’t register maintenance window tasks.

Depending on whether you're denying the `ssm:RegisterTaskWithMaintenanceWindow` permission for an individual user or a group, use one of the following procedures to prevent users from registering tasks with a maintenance window. 

**To configure permissions for users who aren't allowed to register maintenance window tasks using the AWS CLI**

1. Copy and paste the following IAM policy into a text editor and save it with the following name and file extension: **deny-mw-tasks-policy.json**.

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Deny",
               "Action": "ssm:RegisterTaskWithMaintenanceWindow",
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Open the AWS CLI.

1. Depending on whether you're assigning the permission to an IAM entity (user or group), run one of the following commands.
   + **For a user:**

------
#### [ Linux & macOS ]

     ```
     aws iam put-user-policy \
         --user-name "user-name" \
         --policy-name "policy-name" \
         --policy-document file://path-to-document
     ```

------
#### [ Windows ]

     ```
     aws iam put-user-policy ^
         --user-name "user-name" ^
         --policy-name "policy-name" ^
         --policy-document file://path-to-document
     ```

------

      For *user-name*, specify the user you want to prevent from assigning tasks to maintenance windows. For *policy-name*, specify the name you want to use to identify the policy, such as **my-deny-mw-tasks-policy**. For *path-to-document*, specify the path to the file you saved in step 1. For example: `file://C:\Temp\deny-mw-tasks-policy.json`
   + **For a group:**

------
#### [ Linux & macOS ]

     ```
     aws iam put-group-policy \
         --group-name "group-name" \
         --policy-name "policy-name" \
         --policy-document file://path-to-document
     ```

------
#### [ Windows ]

     ```
     aws iam put-group-policy ^
         --group-name "group-name" ^
         --policy-name "policy-name" ^
         --policy-document file://path-to-document
     ```

------

      For *group-name*, specify the group whose members you want to prevent from assigning tasks to maintenance windows. For *policy-name*, specify the name you want to use to identify the policy, such as **my-deny-mw-tasks-policy**. For *path-to-document*, specify the path to the file you saved in step 1. For example: `file://C:\Temp\deny-mw-tasks-policy.json`

1. Run the following command to verify that the policy has been assigned to the group.

------
#### [ Linux & macOS ]

   ```
   aws iam list-group-policies \
       --group-name "group-name"
   ```

------
#### [ Windows ]

   ```
   aws iam list-group-policies ^
       --group-name "group-name"
   ```

------
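The reason this policy works even for users who also have `AmazonSSMFullAccess` is that in IAM evaluation an explicit `Deny` always overrides an `Allow`. As a simplified model of that rule (real IAM evaluation involves more policy types and condition logic; this sketch is illustrative only):

```python
def evaluate(statements, action):
    """Toy IAM evaluation: explicit Deny wins, then Allow, else implicit deny."""
    service_wildcard = action.split(":")[0] + ":*"
    effects = {
        s["Effect"]
        for s in statements
        if s["Action"] in (action, "*", service_wildcard)
    }
    if "Deny" in effects:
        return "Deny"          # an explicit Deny always wins
    if "Allow" in effects:
        return "Allow"
    return "ImplicitDeny"      # no matching statement

statements = [
    {"Effect": "Allow", "Action": "ssm:*"},  # stands in for AmazonSSMFullAccess
    {"Effect": "Deny", "Action": "ssm:RegisterTaskWithMaintenanceWindow"},
]
print(evaluate(statements, "ssm:RegisterTaskWithMaintenanceWindow"))  # Deny
print(evaluate(statements, "ssm:SendCommand"))                        # Allow
```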

# Create and manage maintenance windows using the console


This section describes how to create, configure, update, and delete maintenance windows using the AWS Systems Manager console. This section also provides information about managing the targets and tasks of a maintenance window.

**Important**  
We recommend that you initially create and configure maintenance windows in a test environment. 

**Before you begin**  
Before you create a maintenance window, you must configure access to Maintenance Windows, a tool in AWS Systems Manager. For more information, see [Setting up Maintenance Windows](setting-up-maintenance-windows.md).

**Topics**
+ [

# Create a maintenance window using the console
](sysman-maintenance-create-mw.md)
+ [

# Assign targets to a maintenance window using the console
](sysman-maintenance-assign-targets.md)
+ [

# Assign tasks to a maintenance window using the console
](sysman-maintenance-assign-tasks.md)
+ [

# Disable or enable a maintenance window using the console
](sysman-maintenance-disable.md)
+ [

# Update or delete maintenance window resources using the console
](sysman-maintenance-update.md)

# Create a maintenance window using the console

In this procedure, you create a maintenance window in Maintenance Windows, a tool in AWS Systems Manager. You can specify its basic options, such as name, schedule, and duration. In later steps, you choose the targets, or resources, that it updates and the tasks that run when the maintenance window runs.

**Note**  
For an explanation of how the various schedule-related options for maintenance windows relate to one another, see [Maintenance window scheduling and active period options](maintenance-windows-schedule-options.md).  
For more information about working with the `--schedule` option, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

**To create a maintenance window using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Maintenance Windows**. 

1. Choose **Create maintenance window**.

1. For **Name**, enter a descriptive name to help you identify this maintenance window.

1. (Optional) For **Description**, enter a description to identify how this maintenance window will be used.

1. (Optional) If you want to allow a maintenance window task to run on managed nodes, even if you haven't registered those nodes as targets, choose **Allow unregistered targets**. 

   If you choose this option, then you can choose the unregistered nodes (by node ID) when you register a task with the maintenance window. 

   If you don't choose this option, then you must choose previously registered targets when you register a task with the maintenance window.

1. Specify a schedule for the maintenance window by using one of the three scheduling options.

   For information about building cron/rate expressions, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

1. For **Duration**, enter the number of hours the maintenance window will run. The value you specify determines the specific end time for the maintenance window, based on the time it begins. No maintenance window tasks are permitted to start after the resulting end time minus the number of hours you specify for **Stop initiating tasks** in the next step.

   For example, if the maintenance window starts at 3 PM, the duration is three hours, and the **Stop initiating tasks** value is one hour, no maintenance window tasks can start after 5 PM.

1. For **Stop initiating tasks**, enter the number of hours before the end of the maintenance window that the system should stop scheduling new tasks to run.

1. (Optional) For **Window start date**, specify a date and time, in ISO-8601 Extended format, for when you want the maintenance window to become active. This allows you to delay activation of the maintenance window until the specified future date.
**Note**  
You can't specify a start date and time that occurs in the past.

1. (Optional) For **Window end date**, specify a date and time, in ISO-8601 Extended format, for when you want the maintenance window to become inactive. This allows you to set a date and time in the future after which the maintenance window no longer runs.

1. (Optional) For **Schedule timezone**, specify the time zone to use as the basis for when scheduled maintenance windows run, in Internet Assigned Numbers Authority (IANA) format. For example: "America/Los_Angeles", "Etc/UTC", or "Asia/Seoul".

   For more information about valid formats, see the [Time Zone Database](https://www.iana.org/time-zones) on the IANA website.

1. (Optional) For **Schedule offset**, enter the number of days to wait after the date and time specified by a cron or rate expression before running the maintenance window. You can specify between one and six days.
**Note**  
This option is available only if you specified a schedule by entering a cron or rate expression manually.

1. (Optional) In the **Manage tags** area, apply one or more tag key name/value pairs to the maintenance window.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a maintenance window to identify the type of tasks it runs, the types of targets, and the environment it runs in. In this case, you could specify the following key name/value pairs:
   + `Key=TaskType,Value=AgentUpdate`
   + `Key=OS,Value=Windows`
   + `Key=Environment,Value=Production`

1. Choose **Create maintenance window**. The system returns you to the maintenance window page. The state of the maintenance window you just created is **Enabled**.
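The relationship between **Duration** and **Stop initiating tasks** described in the procedure above can be sketched with a few lines of date arithmetic, using the example values (3 PM start, three-hour duration, one-hour cutoff); the date and variable names are illustrative:

```python
from datetime import datetime, timedelta

window_start = datetime(2024, 8, 19, 15, 0)   # 3:00 PM (illustrative date)
duration = timedelta(hours=3)                 # Duration setting
stop_initiating = timedelta(hours=1)          # Stop initiating tasks setting

window_end = window_start + duration               # 6:00 PM
latest_task_start = window_end - stop_initiating   # 5:00 PM
print("No new tasks start after:", latest_task_start.strftime("%I:%M %p"))
```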

# Assign targets to a maintenance window using the console

In this procedure, you register a target with a maintenance window. In other words, you specify which resources the maintenance window performs actions on.

**Note**  
If a single maintenance window task is registered with multiple targets, its task invocations occur sequentially and not in parallel. If your task must run on multiple targets at the same time, register a task for each target individually and assign each task the same priority level.

**To assign targets to a maintenance window using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Maintenance Windows**. 

1. In the list of maintenance windows, choose the maintenance window to add targets to.

1. Choose **Actions**, and then choose **Register targets**.

1. (Optional) For **Target name**, enter a name for the targets.

1. (Optional) For **Description**, enter a description.

1. (Optional) For **Owner information**, specify information to include in any Amazon EventBridge event raised while running tasks for these targets in this maintenance window.

   For information about using EventBridge to monitor Systems Manager events, see [Monitoring Systems Manager events with Amazon EventBridge](monitoring-eventbridge-events.md).

1. In the **Targets** area, choose one of the options described in the following table.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-maintenance-assign-targets.html)

1. Choose **Register target**.

If you want to assign more targets to this maintenance window, choose the **Targets** tab, and then choose **Register target**. With this option, you can choose a different means of targeting. For example, if you previously targeted nodes by node ID, you can register new targets and target nodes by specifying tags applied to managed nodes or choosing resource types from a resource group.

# Assign tasks to a maintenance window using the console

In this procedure, you add a task to a maintenance window. Tasks are the actions performed when a maintenance window runs.

The following four types of tasks can be added to a maintenance window:
+ AWS Systems Manager Run Command commands
+ Systems Manager Automation workflows
+ AWS Step Functions tasks
+ AWS Lambda functions
**Important**  
The IAM policy for Maintenance Windows requires that you add the prefix `SSM` to Lambda function (or alias) names. Before you proceed to register this type of task, update its name in AWS Lambda to include `SSM`. For example, if your Lambda function name is `MyLambdaFunction`, change it to `SSMMyLambdaFunction`.

**To assign tasks to a maintenance window**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Maintenance Windows**. 

1. In the list of maintenance windows, choose a maintenance window.

1. Choose **Actions**, and then choose the option for the type of task you want to register with the maintenance window.
   + **Register Run command task**
   + **Register Automation task**
   + **Register Lambda task**
   + **Register Step Functions task**
**Note**  
Maintenance window tasks support Step Functions Standard state machine workflows only. They don't support Express state machine workflows. For information about state machine workflow types, see [Standard vs. Express Workflows](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-standard-vs-express.html) in the *AWS Step Functions Developer Guide*.

1. (Optional) For **Name**, enter a name for the task.

1. (Optional) For **Description**, enter a description.

1. For **New task invocation cutoff**, if you don't want any new task invocations to start after the maintenance window cutoff time is reached, choose **Enabled**.

   When this option is *not* enabled, the task continues running when the cutoff time is reached and starts new task invocations until completion. 
**Note**  
The status for tasks that are not completed when the cutoff time is reached is `TIMED_OUT`. 

1. For this step, choose the tab for your selected task type.

------
#### [ Run Command ]

   1. In the **Command document** list, choose the Systems Manager Command document (SSM document) that defines the tasks to run.

   1. For **Document version**, choose the document version to use.

   1. For **Task priority**, specify a priority for this task. Zero (`0`) is the highest priority. Tasks in a maintenance window are scheduled in priority order with tasks that have the same priority scheduled in parallel.

------
#### [ Automation ]

   1.  In the **Automation document** list, choose the Automation runbook that defines the tasks to run.

   1. For **Document version**, choose the runbook version to use.

   1. For **Task priority**, specify a priority for this task. Zero (`0`) is the highest priority. Tasks in a maintenance window are scheduled in priority order with tasks that have the same priority scheduled in parallel.

------
#### [ Lambda ]

   1. In the **Lambda parameters** area, choose a Lambda function from the list.

   1. (Optional) Provide any content for **Payload**, **Client Context**, or **Qualifier** that you want to include.
**Note**  
In some cases, you can use a *pseudo parameter* as part of your `Payload` value. Then when the maintenance window task runs, it passes the correct values instead of the pseudo parameter placeholders. For information, see [Using pseudo parameters when registering maintenance window tasks](maintenance-window-tasks-pseudo-parameters.md).

   1. For **Task priority**, specify a priority for this task. Zero (`0`) is the highest priority. Tasks in a maintenance window are scheduled in priority order with tasks that have the same priority scheduled in parallel.

------
#### [ Step Functions ]

   1. In the **Step Functions parameters** area, choose a state machine from the list.

   1. (Optional) Provide a name for the state machine execution and any content for **Input** that you want to include.
**Note**  
In some cases, you can use a *pseudo parameter* as part of your `Input` value. Then when the maintenance window task runs, it passes the correct values instead of the pseudo parameter placeholders. For information, see [Using pseudo parameters when registering maintenance window tasks](maintenance-window-tasks-pseudo-parameters.md).

   1. For **Task priority**, specify a priority for this task. Zero (`0`) is the highest priority. Tasks in a maintenance window are scheduled in priority order with tasks that have the same priority scheduled in parallel.

------

1. In the **Targets** area, choose one of the following:
   + **Selecting registered target groups**: Select one or more maintenance window targets you have registered with the current maintenance window.
   + **Selecting unregistered targets**: Choose available resources one by one as targets for the task.

     If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.
   + **Task target not required**: For all task types except Run Command, the targets might already be specified elsewhere, such as in the task's own input parameters.

     Specify one or more targets for maintenance window Run Command-type tasks. Depending on the task, targets are optional for other maintenance window task types (Automation, AWS Lambda, and AWS Step Functions). For more information about running tasks that don't specify targets, see [Registering maintenance window tasks without targets](maintenance-windows-targetless-tasks.md).
**Note**  
In many cases, you don't need to explicitly specify a target for an automation task. For example, say that you're creating an Automation-type task to update an Amazon Machine Image (AMI) for Linux using the `AWS-UpdateLinuxAmi` runbook. When the task runs, the AMI is updated with the latest available Linux distribution packages and Amazon software. New instances created from the AMI already have these updates installed. Because the ID of the AMI to be updated is specified in the input parameters for the runbook, there is no need to specify a target again in the maintenance window task.

1. *Automation tasks only:*

   In the **Input parameters** area, provide values for any required or optional parameters needed to run your task.
**Note**  
In some cases, you can use a *pseudo parameter* for certain input parameter values. Then when the maintenance window task runs, it passes the correct values instead of the pseudo parameter placeholders. For information, see [Using pseudo parameters when registering maintenance window tasks](maintenance-window-tasks-pseudo-parameters.md).

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) For **IAM service role**, choose a role to provide permissions for Systems Manager to assume when running a maintenance window task.

   If you don't specify a service role ARN, Systems Manager uses a service-linked role in your account. This role is not listed in the drop-down menu. If no appropriate service-linked role for Systems Manager exists in your account, it's created when the task is registered successfully. 
**Note**  
For an improved security posture, we strongly recommend creating a custom policy and custom service role for running your maintenance window tasks. The policy can be crafted to provide only the permissions needed for your particular maintenance window tasks. For more information, see [Setting up Maintenance Windows](setting-up-maintenance-windows.md).

1. *Run Command tasks only:*

   (Optional) For **Output options**, do the following:
   + Select the **Enable writing to S3** check box to save the command output to a file. Enter the bucket and prefix (folder) names in the boxes.
   + Select the **CloudWatch output** check box to write complete output to Amazon CloudWatch Logs. Enter the name of a CloudWatch Logs log group.
**Note**  
The permissions that grant the ability to write data to an S3 bucket or CloudWatch Logs are those of the instance profile assigned to the node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md). In addition, if the specified S3 bucket or log group is in a different AWS account, verify that the instance profile associated with the node has the necessary permissions to write to that bucket.

1. *Run Command tasks only:*

   In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. *Run Command tasks only:*

   In the **Parameters** area, specify parameters for the document. 
**Note**  
In some cases, you can use a *pseudo parameter* for certain input parameter values. Then when the maintenance window task runs, it passes the correct values instead of the pseudo parameter placeholders. For information, see [Using pseudo parameters when registering maintenance window tasks](maintenance-window-tasks-pseudo-parameters.md).

1. *Run Command and Automation tasks only:*

   (Optional) In the **CloudWatch alarm** area, for **Alarm name**, choose an existing CloudWatch alarm to apply to your task for monitoring. 

   If the alarm activates, the task is stopped.
**Note**  
To attach a CloudWatch alarm to your task, the IAM principal that runs the task must have permission for the `iam:createServiceLinkedRole` action. For more information about CloudWatch alarms, see [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html).

1. Depending on your task type, choose one of the following:
   + **Register Run command task**
   + **Register Automation task**
   + **Register Lambda task**
   + **Register Step Functions task**

# Disable or enable a maintenance window using the console

You can disable or enable a maintenance window in Maintenance Windows, a tool in AWS Systems Manager. You can disable or enable one maintenance window at a time, or select multiple (or all) maintenance windows to disable or enable together.

This section describes how to disable or enable a maintenance window by using the Systems Manager console. For examples of how to do this by using the AWS Command Line Interface (AWS CLI), see [Tutorial: Update a maintenance window using the AWS CLI](maintenance-windows-cli-tutorials-update.md). 

**Topics**
+ [Disable a maintenance window using the console](#sysman-maintenance-disable-mw)
+ [Enable a maintenance window using the console](#sysman-maintenance-enable-mw)

## Disable a maintenance window using the console


You can disable a maintenance window to pause its tasks from running. The maintenance window remains available, and you can enable it again later.

**To disable a maintenance window**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Maintenance Windows**. 

1. Select the check box next to each maintenance window that you want to disable.

1. Choose **Disable maintenance window** in the **Actions** menu. The system prompts you to confirm your actions. 

## Enable a maintenance window using the console


You can enable a disabled maintenance window to resume running its tasks.

**Note**  
If the maintenance window uses a rate schedule and the start date is currently set to a past date and time, the current date and time is used as the start date for the maintenance window. You can change the start date of the maintenance window before or after enabling it. For information, see [Update or delete maintenance window resources using the console](sysman-maintenance-update.md).

**To enable a maintenance window**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Maintenance Windows**. 

1. Select the check box next to the maintenance window to enable.

1. Choose **Actions, Enable maintenance window**. The system prompts you to confirm your actions. 
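If you prefer the AWS CLI, the same toggle is available through the `update-maintenance-window` command's `--enabled` and `--no-enabled` flags. The following is a sketch that assumes the example window ID used in the CLI tutorials later in this guide; replace it with your own ID.

```shell
# Disable a maintenance window (its targets and tasks are retained)
aws ssm update-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --no-enabled

# Enable it again later
aws ssm update-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --enabled
```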

# Update or delete maintenance window resources using the console

You can update or delete a maintenance window in Maintenance Windows, a tool in AWS Systems Manager. You can also update or delete the targets or tasks of a maintenance window. If you edit the details of a maintenance window, you can change the schedule, targets, and tasks. You can also specify names and descriptions for windows, targets, and tasks, which helps you better understand their purpose, and makes it easier to manage your queue of windows.

This section describes how to update or delete a maintenance window, targets, and tasks by using the Systems Manager console. For examples of how to do this by using the AWS Command Line Interface (AWS CLI), see [Tutorial: Update a maintenance window using the AWS CLI](maintenance-windows-cli-tutorials-update.md). 

**Topics**
+ [Updating or deleting a maintenance window using the console](#sysman-maintenance-update-mw)
+ [Updating or deregistering maintenance window targets using the console](#sysman-maintenance-update-target)
+ [Updating or deregistering maintenance window tasks using the console](#sysman-maintenance-update-tasks)

## Updating or deleting a maintenance window using the console


You can update a maintenance window to change its name, description, and schedule, and whether the maintenance window should allow unregistered targets.

**To update or delete a maintenance window**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Maintenance Windows**. 

1. Select the button next to the maintenance window that you want to update or delete, and then do one of the following:
   + Choose **Delete**. The system prompts you to confirm your actions. 
   + Choose **Edit**. On the **Edit maintenance window** page, change the values and options that you want, and then choose **Save changes**.

     For information about the configuration choices you can make, see [Create a maintenance window using the console](sysman-maintenance-create-mw.md).
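To delete a maintenance window with the AWS CLI instead, you can use the `delete-maintenance-window` command. A sketch, assuming the example window ID from the CLI tutorials later in this guide:

```shell
aws ssm delete-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE"
```

On success, the command returns the `WindowId` of the deleted maintenance window.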

## Updating or deregistering maintenance window targets using the console


You can update or deregister the targets of a maintenance window. If you choose to update a maintenance window target, you can specify a new target name, description, and owner. You can also choose different targets. 

**To update or deregister the targets of a maintenance window**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Maintenance Windows**. 

1. Choose the name of the maintenance window that you want to update, choose the **Targets** tab, and then do one of the following:
   + To update targets, select the button next to the target to update, and then choose **Edit**.
   + To deregister targets, select the button next to the target to deregister, and then choose **Deregister target**. In the **Deregister maintenance windows target** dialog box, choose **Deregister**.
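The CLI equivalent of deregistering a target is `deregister-target-from-maintenance-window`. A sketch using the example window and target IDs from the CLI tutorials; the `--safe` flag makes the request fail if the target is still referenced by a task:

```shell
aws ssm deregister-target-from-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --window-target-id "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
    --safe
```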

## Updating or deregistering maintenance window tasks using the console


You can update or deregister the tasks of a maintenance window. If you choose to update, you can specify a new task name, description, and owner. For Run Command and Automation tasks, you can choose a different SSM document for the tasks. You can't, however, edit a task to change its type. For example, if you created an Automation task, you can't edit that task and change it to a Run Command task. 

**To update or deregister the tasks of a maintenance window using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Maintenance Windows**. 

1. Choose the name of the maintenance window that you want to update.

1. Choose the **Tasks** tab, and then select the button next to the task to update.

1. Do one of the following:
   + To deregister a task, choose **Deregister task**.
   + To edit the task, choose **Edit**. Change the values and options that you want, and then choose **Edit task**.
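Similarly, a task can be deregistered from the CLI with `deregister-task-from-maintenance-window`. The window ID below is the tutorial example; the task ID shown is a hypothetical placeholder for the `WindowTaskId` returned when the task was registered:

```shell
aws ssm deregister-task-from-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --window-task-id "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE"
```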

# Tutorials


The tutorials in this section show you how to perform common tasks when working with maintenance windows.

**Complete prerequisites**  
Before trying these tutorials, complete the following prerequisites.
+ **Configure the AWS CLI on your local machine** – Before you can run AWS CLI commands, you must install and configure the CLI on your local machine. For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).
+ **Verify maintenance window roles and permissions** – An AWS administrator in your account must grant you the AWS Identity and Access Management (IAM) permissions you need to manage maintenance windows using the CLI. For information, see [Setting up Maintenance Windows](setting-up-maintenance-windows.md).
+ **Create or configure an instance that is compatible with Systems Manager** – You need at least one Amazon Elastic Compute Cloud (Amazon EC2) instance that is configured for use with Systems Manager to complete the tutorials. This means that SSM Agent is installed on the instance, and an IAM instance profile for Systems Manager is attached to the instance.

  We recommend launching an instance from an AWS managed Amazon Machine Image (AMI) with the agent preinstalled. For more information, see [Find AMIs with the SSM Agent preinstalled](ami-preinstalled-agent.md).

  For information about installing SSM Agent on an instance, see the following topics:
  + [Manually installing and uninstalling SSM Agent on EC2 instances for Windows Server](manually-install-ssm-agent-windows.md)
  + [Manually installing and uninstalling SSM Agent on EC2 instances for Linux](manually-install-ssm-agent-linux.md)

  For information about configuring IAM permissions for Systems Manager to your instance, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md).
+ **Create additional resources as needed** – Run Command, a tool in Systems Manager, includes many tasks that don't require you to create resources other than those listed in this prerequisites topic. For that reason, we provide a simple Run Command task for you to use your first time through the tutorials. You also need an EC2 instance that is configured for use with Systems Manager, as described earlier in this topic. After you configure that instance, you can register a simple Run Command task. 

  The Systems Manager Maintenance Windows tool supports running the following four types of tasks: 
  + Run Command commands
  + Systems Manager Automation workflows
  + AWS Lambda functions
  + AWS Step Functions tasks

  In general, if a maintenance window task that you want to run requires additional resources, you should create them first. For example, if you want a maintenance window that runs an AWS Lambda function, create the Lambda function before you begin; for a Run Command task, create the S3 bucket that you can save command output to (if you plan to do so); and so on.

**Topics**
+ [Tutorials: Create and manage maintenance windows using the AWS CLI](maintenance-window-tutorial-cli.md)
+ [Tutorial: Create a maintenance window for patching using the console](maintenance-window-tutorial-patching.md)

# Tutorials: Create and manage maintenance windows using the AWS CLI

This section includes tutorials that help you learn how to use the AWS Command Line Interface (AWS CLI) to do the following:
+ Create and configure a maintenance window
+ View information about a maintenance window
+ View information about maintenance windows tasks and task executions
+ Update a maintenance window
+ Delete a maintenance window

**Keep track of resource IDs**  
As you complete the tasks in these AWS CLI tutorials, keep track of resource IDs generated by the commands you run. You use many of these as input for subsequent commands. For example, when you create the maintenance window, the system provides you with a maintenance window ID in the following format.

```
{
   "WindowId":"mw-0c50858d01EXAMPLE"
}
```

Make a note of the following system-generated IDs because the tutorials in this section use them:
+ `WindowId`
+ `WindowTargetId`
+ `WindowTaskId`
+ `WindowExecutionId`
+ `TaskExecutionId`
+ `InvocationId`
+ `ExecutionId`

You also need the ID of the EC2 instance that you plan to use in the tutorials. For example: `i-02573cafcfEXAMPLE`
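If you script these tutorials in a shell, one way to avoid copying IDs by hand is to capture them into variables with the CLI's `--query` and `--output text` options. A sketch for Linux and macOS, using the same options as the create step later in this section:

```shell
# Create the window and keep only the WindowId from the JSON response
WINDOW_ID=$(aws ssm create-maintenance-window \
    --name "My-First-Maintenance-Window" \
    --schedule "rate(5 minutes)" \
    --duration 2 \
    --cutoff 1 \
    --allow-unassociated-targets \
    --tags "Key=Purpose,Value=Tutorial" \
    --query "WindowId" \
    --output text)

# Reuse the captured ID in later commands
aws ssm describe-maintenance-window-targets --window-id "$WINDOW_ID"
```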

**Topics**
+ [Tutorial: Create and configure a maintenance window using the AWS CLI](maintenance-windows-cli-tutorials-create.md)
+ [Tutorial: View information about maintenance windows using the AWS CLI](maintenance-windows-cli-tutorials-describe.md)
+ [Tutorial: View information about tasks and task executions using the AWS CLI](mw-cli-tutorial-task-info.md)
+ [Tutorial: Update a maintenance window using the AWS CLI](maintenance-windows-cli-tutorials-update.md)
+ [Tutorial: Delete a maintenance window using the AWS CLI](mw-cli-tutorial-delete-mw.md)

# Tutorial: Create and configure a maintenance window using the AWS CLI

This tutorial demonstrates how to use the AWS Command Line Interface (AWS CLI) to create and configure a maintenance window, its targets, and its tasks. The main path through the tutorial consists of simple steps. You create a single maintenance window, identify a single target, and set up a simple task for the maintenance window to run. Along the way, we provide information you can use to try more complicated scenarios.

As you follow the steps in this tutorial, replace the italicized example values with your own options and IDs. For example, replace the maintenance window ID *mw-0c50858d01EXAMPLE* and the instance ID *i-02573cafcfEXAMPLE* with IDs of resources you create.

**Topics**
+ [Step 1: Create the maintenance window using the AWS CLI](mw-cli-tutorial-create-mw.md)
+ [Step 2: Register a target node with the maintenance window using the AWS CLI](mw-cli-tutorial-targets.md)
+ [Step 3: Register a task with the maintenance window using the AWS CLI](mw-cli-tutorial-tasks.md)

# Step 1: Create the maintenance window using the AWS CLI


In this step, you create a maintenance window and specify its basic options, such as name, schedule, and duration. In later steps, you choose the instance it updates and the task it runs.

In our example, you create a maintenance window that runs every five minutes. Normally, you wouldn't run a maintenance window this frequently, but this rate lets you see your tutorial results quickly. We show you how to change to a less frequent rate after the task has run successfully.

**Note**  
For an explanation of how the various schedule-related options for maintenance windows relate to one another, see [Maintenance window scheduling and active period options](maintenance-windows-schedule-options.md).  
For more information about working with the `--schedule` option, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).
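For context, `--schedule` accepts both rate and cron expressions. A few illustrative values (see the cron and rate expressions reference linked above for the full syntax):

```shell
--schedule "rate(5 minutes)"       # every 5 minutes (used in this tutorial)
--schedule "rate(7 days)"          # once a week
--schedule "cron(0 2 ? * SUN *)"   # 2:00 AM UTC every Sunday
```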

**To create a maintenance window using the AWS CLI**

1. Open the AWS Command Line Interface (AWS CLI) and run the following command on your local machine to create a maintenance window that does the following:
   + Runs every five minutes for up to two hours (as needed).
   + Prevents new tasks from starting within one hour of the end of the maintenance window operation.
   + Allows unassociated targets (instances that you haven't registered with the maintenance window).
   + Indicates through the use of custom tags that its creator intends to use it in a tutorial.

------
#### [ Linux & macOS ]

   ```
   aws ssm create-maintenance-window \
       --name "My-First-Maintenance-Window" \
       --schedule "rate(5 minutes)" \
       --duration 2 \
       --cutoff 1 \
       --allow-unassociated-targets \
       --tags "Key=Purpose,Value=Tutorial"
   ```

------
#### [ Windows ]

   ```
   aws ssm create-maintenance-window ^
       --name "My-First-Maintenance-Window" ^
       --schedule "rate(5 minutes)" ^
       --duration 2 ^
       --cutoff 1 ^
       --allow-unassociated-targets ^
       --tags "Key"="Purpose","Value"="Tutorial"
   ```

------

   The system returns information similar to the following.

   ```
   {
      "WindowId":"mw-0c50858d01EXAMPLE"
   }
   ```

1. Now run the following command to view details about this and any other maintenance windows already in your account.

   ```
   aws ssm describe-maintenance-windows
   ```

   The system returns information similar to the following.

   ```
   {
      "WindowIdentities":[
         {
               "WindowId": "mw-0c50858d01EXAMPLE",
               "Name": "My-First-Maintenance-Window",
               "Enabled": true,
               "Duration": 2,
               "Cutoff": 1,
               "NextExecutionTime": "2019-05-11T16:46:16.991Z"
         }
      ]
   }
   ```

Continue to [Step 2: Register a target node with the maintenance window using the AWS CLI](mw-cli-tutorial-targets.md).

# Step 2: Register a target node with the maintenance window using the AWS CLI


In this step, you register a target with your new maintenance window. In this case, you specify which node to update when the maintenance window runs. 

For an example of registering more than one node at a time using node IDs, examples of using tags to identify multiple nodes, and examples of specifying resource groups as targets, see [Examples: Register targets with a maintenance window](mw-cli-tutorial-targets-examples.md).

**Note**  
You should already have created an Amazon Elastic Compute Cloud (Amazon EC2) instance to use in this step, as described in the [Maintenance Windows tutorial prerequisites](maintenance-windows-tutorials.md).

**To register a target node with a maintenance window using the AWS CLI**

1. Run the following command on your local machine. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm register-target-with-maintenance-window \
       --window-id "mw-0c50858d01EXAMPLE" \
       --resource-type "INSTANCE" \
       --target "Key=InstanceIds,Values=i-02573cafcfEXAMPLE"
   ```

------
#### [ Windows ]

   ```
   aws ssm register-target-with-maintenance-window ^
       --window-id "mw-0c50858d01EXAMPLE" ^
       --resource-type "INSTANCE" ^
       --target "Key=InstanceIds,Values=i-02573cafcfEXAMPLE"
   ```

------

   The system returns information similar to the following.

   ```
   {
      "WindowTargetId":"e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"
   }
   ```

1. Now run the following command on your local machine to view details about your maintenance window target.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-maintenance-window-targets \
       --window-id "mw-0c50858d01EXAMPLE"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-maintenance-window-targets ^
       --window-id "mw-0c50858d01EXAMPLE"
   ```

------

   The system returns information similar to the following.

   ```
   {
       "Targets": [
           {
               "WindowId": "mw-0c50858d01EXAMPLE",
               "WindowTargetId": "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE",
               "ResourceType": "INSTANCE",
               "Targets": [
                   {
                       "Key": "InstanceIds",
                       "Values": [
                           "i-02573cafcfEXAMPLE"
                       ]
                   }
               ]
           }
       ]
   }
   ```

Continue to [Step 3: Register a task with the maintenance window using the AWS CLI](mw-cli-tutorial-tasks.md). 

# Examples: Register targets with a maintenance window


You can register a single node as a target using its node ID, as demonstrated in [Step 2: Register a target node with the maintenance window using the AWS CLI](mw-cli-tutorial-targets.md). You can also register one or more nodes as targets using the command formats on this page.

In general, there are two methods for identifying the nodes you want to use as maintenance window targets: specifying individual nodes, and using resource tags. The resource tags method provides more options, as shown in Examples 2 and 3. 

You can also specify one or more resource groups as the target of a maintenance window. A resource group can include nodes and many other types of supported AWS resources. Examples 4 and 5, next, demonstrate how to add resource groups to your maintenance window targets.

**Note**  
If a single maintenance window task is registered with multiple targets, its task invocations occur sequentially and not in parallel. If your task must run on multiple targets at the same time, register a task for each target individually and assign each task the same priority level.

For more information about creating and managing resource groups, see [What are resource groups?](https://docs.aws.amazon.com/ARG/latest/userguide/resource-groups.html) in the *AWS Resource Groups User Guide* and [Resource Groups and Tagging for AWS](https://aws.amazon.com/blogs/aws/resource-groups-and-tagging/) in the *AWS News Blog*.

For information about quotas for Maintenance Windows, a tool in AWS Systems Manager, in addition to those specified in the following examples, see [Systems Manager service quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#limits_ssm) in the *Amazon Web Services General Reference*.

## Example 1: Register multiple targets using node IDs


Run the following command on your local machine to register multiple nodes as targets using their node IDs. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm register-target-with-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --resource-type "INSTANCE" \
    --target "Key=InstanceIds,Values=i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE,i-07782c72faEXAMPLE"
```

------
#### [ Windows ]

```
aws ssm register-target-with-maintenance-window ^
    --window-id "mw-0c50858d01EXAMPLE" ^
    --resource-type "INSTANCE" ^
    --target "Key=InstanceIds,Values=i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE,i-07782c72faEXAMPLE"
```

------

**Recommended use**: Most useful when registering a unique group of nodes with any maintenance window for the first time and they do *not* share a common node tag.

**Quotas:** You can specify up to 50 nodes total for each maintenance window target.

## Example 2: Register targets using resource tags applied to nodes


Run the following command on your local machine to register nodes that are all already tagged with a key-value pair you have assigned. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm register-target-with-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --resource-type "INSTANCE" \
    --target "Key=tag:Region,Values=East"
```

------
#### [ Windows ]

```
aws ssm register-target-with-maintenance-window ^
    --window-id "mw-0c50858d01EXAMPLE" ^
    --resource-type "INSTANCE" ^
    --target "Key=tag:Region,Values=East"
```

------

**Recommended use**: Most useful when registering a unique group of nodes with any maintenance window for the first time and they *do* share a common node tag.

**Quotas:** You can specify up to five key-value pairs total for each target. If you specify more than one key-value pair, a node must be tagged with *all* the tag keys and values you specify to be included in the target group.

**Note**  
You can tag a group of nodes with the tag-key `Patch Group` or `PatchGroup` and assign the nodes a common key value, such as `my-patch-group`. (You must use `PatchGroup`, without a space, if you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS).) Patch Manager, a tool in Systems Manager, evaluates the `Patch Group` or `PatchGroup` key on nodes to help determine which patch baseline applies to them. If your task will run the `AWS-RunPatchBaseline` SSM document (or the legacy `AWS-ApplyPatchBaseline` SSM document), you can specify the same `Patch Group` or `PatchGroup` key-value pair when you register targets with a maintenance window. For example: `--target "Key=tag:PatchGroup,Values=my-patch-group"`. Doing so allows you to use a maintenance window to update patches on a group of nodes that are already associated with the same patch baseline. For more information, see [Patch groups](patch-manager-patch-groups.md).
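Putting the note above into a complete command, registering targets by patch group tag might look like the following (Linux and macOS form, using the tutorial's example window ID and a hypothetical `my-patch-group` tag value):

```shell
aws ssm register-target-with-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --resource-type "INSTANCE" \
    --target "Key=tag:PatchGroup,Values=my-patch-group"
```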

## Example 3: Register targets using a group of tag keys (without tag values)


Run the following command on your local machine to register nodes that all have one or more tag keys assigned to them, regardless of their key values. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

```
aws ssm register-target-with-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --resource-type "INSTANCE" \
    --target "Key=tag-key,Values=Name,Instance-Type,CostCenter"
```

------
#### [ Windows ]

```
aws ssm register-target-with-maintenance-window ^
    --window-id "mw-0c50858d01EXAMPLE" ^
    --resource-type "INSTANCE" ^
    --target "Key=tag-key,Values=Name,Instance-Type,CostCenter"
```

------

**Recommended use**: Useful when you want to target nodes by specifying multiple tag *keys* (without their values) rather than just one tag-key or a tag key-value pair.

**Quotas:** You can specify up to five tag-keys total for each target. If you specify more than one tag key, a node must be tagged with *all* the tag keys you specify to be included in the target group.

## Example 4: Register targets using a resource group name


Run the following command on your local machine to register a specified resource group, regardless of the type of resources it contains. Replace *mw-0c50858d01EXAMPLE* with your own information. If the tasks you assign to the maintenance window don't act on a type of resource included in this resource group, the system might report an error. Tasks for which a supported resource type is found continue to run despite these errors.

------
#### [ Linux & macOS ]

```
aws ssm register-target-with-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --resource-type "RESOURCE_GROUP" \
    --target "Key=resource-groups:Name,Values=MyResourceGroup"
```

------
#### [ Windows ]

```
aws ssm register-target-with-maintenance-window ^
    --window-id "mw-0c50858d01EXAMPLE" ^
    --resource-type "RESOURCE_GROUP" ^
    --target "Key=resource-groups:Name,Values=MyResourceGroup"
```

------

**Recommended use**: Useful when you want to quickly specify a resource group as a target without evaluating whether all of its resource types will be targeted by a maintenance window, or when you know that the resource group contains only the resource types that your tasks perform actions on.

**Quotas:** You can specify only one resource group as a target.

## Example 5: Register targets by filtering resource types in a resource group


Run the following command on your local machine to register only certain resource types that belong to a resource group that you specify. Replace *mw-0c50858d01EXAMPLE* with your own information. With this option, even if you add a task for a resource type that belongs to the resource group, the task won’t run if you haven’t explicitly added the resource type to the filter.

------
#### [ Linux & macOS ]

```
aws ssm register-target-with-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --resource-type "RESOURCE_GROUP" \
    --target "Key=resource-groups:Name,Values=MyResourceGroup" \
    "Key=resource-groups:ResourceTypeFilters,Values=AWS::EC2::Instance,AWS::ECS::Cluster"
```

------
#### [ Windows ]

```
aws ssm register-target-with-maintenance-window ^
    --window-id "mw-0c50858d01EXAMPLE" ^
    --resource-type "RESOURCE_GROUP" ^
    --target "Key=resource-groups:Name,Values=MyResourceGroup" ^
    "Key=resource-groups:ResourceTypeFilters,Values=AWS::EC2::Instance,AWS::ECS::Cluster"
```

------

**Recommended use**: Useful when you want to maintain strict control over the types of AWS resources your maintenance window can run actions on, or when your resource group contains a large number of resource types and you want to avoid unnecessary error reports in your maintenance window logs.

**Quotas:** You can specify only one resource group as a target.

# Step 3: Register a task with the maintenance window using the AWS CLI


In this step of the tutorial, you register an AWS Systems Manager Run Command task that runs the `df` command on your Amazon Elastic Compute Cloud (Amazon EC2) instance for Linux. The results of this standard Linux command show how much space is free and how much is used on the disk file system of your instance.

-or-

If you're targeting an Amazon EC2 instance for Windows Server instead of Linux, replace **df** in the following command with **ipconfig**. Output from this command lists details about the IP address, subnet mask, and default gateway for adapters on the target instance.

When you're ready to register other task types, or use more of the available Systems Manager Run Command options, see [Examples: Register tasks with a maintenance window](mw-cli-register-tasks-examples.md). There, we provide more information about all four task types, and some of their most important options, to help you plan for more extensive real-world scenarios. 

**To register a task with a maintenance window**

1. Run the following command on your local machine. Replace each *example resource placeholder* with your own information. The version to run from a local Windows machine includes the caret characters (`^`) that you need to continue the command across multiple lines in your command line tool.

------
#### [ Linux & macOS ]

   ```
   aws ssm register-task-with-maintenance-window \
       --window-id mw-0c50858d01EXAMPLE \
       --task-arn "AWS-RunShellScript" \
       --max-concurrency 1 --max-errors 1 \
       --priority 10 \
       --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
       --task-type "RUN_COMMAND" \
       --task-invocation-parameters '{"RunCommand":{"Parameters":{"commands":["df"]}}}'
   ```

------
#### [ Windows ]

   ```
   aws ssm register-task-with-maintenance-window ^
       --window-id mw-0c50858d01EXAMPLE ^
       --task-arn "AWS-RunShellScript" ^
       --max-concurrency 1 --max-errors 1 ^
       --priority 10 ^
       --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" ^
       --task-type "RUN_COMMAND" ^
       --task-invocation-parameters={\"RunCommand\":{\"Parameters\":{\"commands\":[\"df\"]}}}
   ```

------

   The system returns information similar to the following:

   ```
   {
       "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE"
   }
   ```

1. Now run the following command to view details about the maintenance window task you created. 

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-maintenance-window-tasks \
       --window-id mw-0c50858d01EXAMPLE
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-maintenance-window-tasks ^
       --window-id mw-0c50858d01EXAMPLE
   ```

------

   The system returns information similar to the following.

   ```
   {
       "Tasks": [
           {
               "WindowId": "mw-0c50858d01EXAMPLE",
               "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
               "TaskArn": "AWS-RunShellScript",
               "Type": "RUN_COMMAND",
               "Targets": [
                   {
                       "Key": "InstanceIds",
                       "Values": [
                           "i-02573cafcfEXAMPLE"
                       ]
                   }
               ],
               "TaskParameters": {},
               "Priority": 10,
               "ServiceRoleArn": "arn:aws:iam::123456789012:role/MyMaintenanceWindowServiceRole",
               "MaxConcurrency": "1",
               "MaxErrors": "1"
           }
       ]
   }
   ```

1. Wait until the task has had time to run, based on the schedule you specified in [Step 1: Create the maintenance window using the AWS CLI](mw-cli-tutorial-create-mw.md). For example, if you specified **--schedule "rate(5 minutes)"**, wait five minutes. Then run the following command to view information about any executions that occurred for this task. 

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-maintenance-window-executions \
       --window-id mw-0c50858d01EXAMPLE
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-maintenance-window-executions ^
       --window-id mw-0c50858d01EXAMPLE
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowExecutions": [
           {
               "WindowId": "mw-0c50858d01EXAMPLE",
               "WindowExecutionId": "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE",
               "Status": "SUCCESS",
               "StartTime": 1557593493.096,
               "EndTime": 1557593498.611
           }
       ]
   }
   ```

**Tip**  
After the task runs successfully, you can decrease the rate at which the maintenance window runs. For example, run the following command to decrease the frequency to once a week. Replace *mw-0c50858d01EXAMPLE* with your own information.  

------
#### [ Linux & macOS ]

```
aws ssm update-maintenance-window \
    --window-id mw-0c50858d01EXAMPLE \
    --schedule "rate(7 days)"
```

------
#### [ Windows ]

```
aws ssm update-maintenance-window ^
    --window-id mw-0c50858d01EXAMPLE ^
    --schedule "rate(7 days)"
```

------
For information about managing maintenance window schedules, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md) and [Maintenance window scheduling and active period options](maintenance-windows-schedule-options.md).  
For information about using the AWS Command Line Interface (AWS CLI) to modify a maintenance window, see [Tutorial: Update a maintenance window using the AWS CLI](maintenance-windows-cli-tutorials-update.md).
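A rate expression takes the form `rate(value unit)`. As an illustrative sketch (the service performs its own validation; this only mirrors the documented format), a client-side check of that shape might look like the following.

```python
import re

# Illustrative check of the rate() expression shape used by maintenance
# window schedules: rate(value unit), with a singular unit for 1 and a
# plural unit otherwise (e.g. rate(1 day), rate(7 days)).
RATE_RE = re.compile(r"^rate\((\d+) (minutes?|hours?|days?)\)$")

def is_valid_rate(expr):
    m = RATE_RE.match(expr)
    if not m:
        return False
    value, unit = int(m.group(1)), m.group(2)
    # The unit must be singular exactly when the value is 1.
    return (value == 1) == (not unit.endswith("s"))

print(is_valid_rate("rate(7 days)"))   # True
print(is_valid_rate("rate(1 days)"))   # False
```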

For practice running AWS CLI commands to view more details about your maintenance window task and its executions, continue to [Tutorial: View information about tasks and task executions using the AWS CLI](mw-cli-tutorial-task-info.md).

**Accessing tutorial command output**  
It's beyond the scope of this tutorial to use the AWS CLI to view the *output* of the Run Command task associated with your maintenance window task executions.

You could view this data, however, using the AWS CLI. (You could also view the output in the Systems Manager console, or in a log file stored in an Amazon Simple Storage Service (Amazon S3) bucket if you had configured the maintenance window to store command output there.) You would find that the output of the **df** command on an EC2 instance for Linux is similar to the following.

```
Filesystem     1K-blocks    Used Available Use% Mounted on
devtmpfs          485716       0    485716   0% /dev
tmpfs             503624       0    503624   0% /dev/shm
tmpfs             503624     328    503296   1% /run
tmpfs             503624       0    503624   0% /sys/fs/cgroup
/dev/xvda1       8376300 1464160   6912140  18% /
```
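If you script against this output, a small parser can turn it into usable data. The following is a sketch that assumes the standard six-column `df` layout:

```python
def parse_df(output):
    """Parse `df` output into {mount_point: percent_used} (sketch)."""
    usage = {}
    lines = [l for l in output.splitlines() if l.strip()]
    for line in lines[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 6:
            # Column 5 is Use% (e.g. "18%"); column 6 is the mount point.
            usage[fields[5]] = int(fields[4].rstrip("%"))
    return usage

sample = """Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8376300 1464160 6912140 18% /
tmpfs 503624 328 503296 1% /run"""
print(parse_df(sample))  # {'/': 18, '/run': 1}
```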

The output of the **ipconfig** command on an EC2 instance for Windows Server is similar to the following:

```
Windows IP Configuration


Ethernet adapter Ethernet 2:

   Connection-specific DNS Suffix  . : example.com
   IPv4 Address. . . . . . . . . . . : 10.24.34.0/23
   Subnet Mask . . . . . . . . . . . : 255.255.255.255
   Default Gateway . . . . . . . . . : 0.0.0.0

Ethernet adapter Ethernet:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . : abc1.wa.example.net

Wireless LAN adapter Local Area Connection* 1:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :

Wireless LAN adapter Wi-Fi:

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::100b:c234:66d6:d24f%4
   IPv4 Address. . . . . . . . . . . : 192.0.2.0
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.0.2.0

Ethernet adapter Bluetooth Network Connection:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :
```

# Examples: Register tasks with a maintenance window


You can use the AWS Command Line Interface (AWS CLI) to register a Run Command task (Run Command is a tool in AWS Systems Manager) with a maintenance window, as demonstrated in [Register tasks with the maintenance window](mw-cli-tutorial-tasks.md). You can also register tasks for Systems Manager Automation workflows, AWS Lambda functions, and AWS Step Functions state machines, as demonstrated later in this topic.

**Note**  
Specify one or more targets for maintenance window Run Command-type tasks. Depending on the task, targets are optional for other maintenance window task types (Automation, AWS Lambda, and AWS Step Functions). For more information about running tasks that don't specify targets, see [Registering maintenance window tasks without targets](maintenance-windows-targetless-tasks.md).

In this topic, we provide examples of using the AWS CLI command `register-task-with-maintenance-window` to register each of the four supported task types with a maintenance window. The examples are for demonstration only, but you can modify them to create working task registration commands. 

**Using the --cli-input-json option**  
To better manage your task options, you can use the command option `--cli-input-json`, with option values referenced in a JSON file. 

To use the sample JSON file content we provide in the following examples, do the following on your local machine:

1. Create a file with a name such as `MyRunCommandTask.json`, `MyAutomationTask.json`, or another name that you prefer.

1. Copy the contents of our JSON sample into the file.

1. Modify the contents of the file for your task registration, and then save the file.

1. In the same directory where you stored the file, run the following command. Substitute your file name for *MyFile.json*. 

------
#### [ Linux & macOS ]

   ```
   aws ssm register-task-with-maintenance-window \
       --cli-input-json file://MyFile.json
   ```

------
#### [ Windows ]

   ```
   aws ssm register-task-with-maintenance-window ^
       --cli-input-json file://MyFile.json
   ```

------
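Rather than editing the JSON file by hand, you can also generate it from a script. This is a sketch using placeholder IDs from this guide; the key names follow the `register-task-with-maintenance-window` request syntax:

```python
import json

# Sketch: generate a --cli-input-json file for a Run Command task instead of
# editing JSON by hand. All IDs below are placeholder examples.
task = {
    "TaskType": "RUN_COMMAND",
    "WindowId": "mw-0c50858d01EXAMPLE",
    "TaskArn": "AWS-UpdateSSMAgent",
    "MaxConcurrency": "1",
    "MaxErrors": "1",
    "Priority": 10,
    "Targets": [
        {"Key": "WindowTargetIds", "Values": ["e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"]}
    ],
}

with open("MyRunCommandTask.json", "w") as f:
    json.dump(task, f, indent=4)

# Then run:
# aws ssm register-task-with-maintenance-window --cli-input-json file://MyRunCommandTask.json
```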

**Pseudo parameters in maintenance window tasks**  
In some examples, we use *pseudo parameters* as the method to pass ID information to your tasks. For instance, `{{TARGET_ID}}` and `{{RESOURCE_ID}}` can be used to pass IDs of AWS resources to Automation, Lambda, and Step Functions tasks. For more information about pseudo parameters in `--task-invocation-parameters` content, see [Using pseudo parameters when registering maintenance window tasks](maintenance-window-tasks-pseudo-parameters.md). 

**More info**  
+ [Parameter options for the register-task-with-maintenance-windows command](mw-cli-task-options.md).
+ [https://docs.aws.amazon.com/cli/latest/reference/ssm/register-task-with-maintenance-window.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/register-task-with-maintenance-window.html) in the *AWS CLI Command Reference*
+ [https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_RegisterTaskWithMaintenanceWindow.html](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_RegisterTaskWithMaintenanceWindow.html) in the *AWS Systems Manager API Reference*

## Task registration examples


The following sections provide a sample AWS CLI command for registering a supported task type and a JSON sample that can be used with the `--cli-input-json` option.

### Register a Systems Manager Run Command task


The following examples demonstrate how to register Systems Manager Run Command tasks with a maintenance window using the AWS CLI.

------
#### [ Linux & macOS ]

```
aws ssm register-task-with-maintenance-window \
    --window-id mw-0c50858d01EXAMPLE \
    --task-arn "AWS-RunShellScript" \
    --max-concurrency 1 --max-errors 1 --priority 10 \
    --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
    --task-type "RUN_COMMAND" \
    --task-invocation-parameters '{"RunCommand":{"Parameters":{"commands":["df"]}}}'
```

------
#### [ Windows ]

```
aws ssm register-task-with-maintenance-window ^
    --window-id mw-0c50858d01EXAMPLE ^
    --task-arn "AWS-RunShellScript" ^
    --max-concurrency 1 --max-errors 1 --priority 10 ^
    --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" ^
    --task-type "RUN_COMMAND" ^
    --task-invocation-parameters "{\"RunCommand\":{\"Parameters\":{\"commands\":[\"df\"]}}}"
```

------

**JSON content to use with `--cli-input-json` file option:**

```
{
    "TaskType": "RUN_COMMAND",
    "WindowId": "mw-0c50858d01EXAMPLE",
    "Description": "My Run Command task to update SSM Agent on an instance",
    "MaxConcurrency": "1",
    "MaxErrors": "1",
    "Name": "My-Run-Command-Task",
    "Priority": 10,
    "Targets": [
        {
            "Key": "WindowTargetIds",
            "Values": [
                "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"
            ]
        }
    ],
    "TaskArn": "AWS-UpdateSSMAgent",
    "TaskInvocationParameters": {
        "RunCommand": {
            "Comment": "A TaskInvocationParameters test comment",
            "NotificationConfig": {
                "NotificationArn": "arn:aws:sns:region:123456789012:my-sns-topic-name",
                "NotificationEvents": [
                    "All"
                ],
                "NotificationType": "Invocation"
            },
            "OutputS3BucketName": "amzn-s3-demo-bucket",
            "OutputS3KeyPrefix": "S3-PREFIX",
            "TimeoutSeconds": 3600
        }
    }
}
```
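The shell-specific quoting of `--task-invocation-parameters` in the preceding commands is easy to get wrong by hand. As a sketch (not part of the documented procedure), you can build the value as a data structure and let `json.dumps` handle the escaping:

```python
import json
import shlex

# Build the invocation parameters as a plain dict, then serialize it;
# json.dumps produces exactly the string to pass on the command line.
params = {"RunCommand": {"Parameters": {"commands": ["df"]}}}
arg = json.dumps(params)
print(arg)  # {"RunCommand": {"Parameters": {"commands": ["df"]}}}

# shlex.quote wraps the argument safely for a POSIX shell command line.
print("--task-invocation-parameters " + shlex.quote(arg))
```

On Windows, paste the printed JSON into the command with the outer quoting that your shell requires.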

### Register a Systems Manager Automation task


The following examples demonstrate how to register Systems Manager Automation tasks with a maintenance window using the AWS CLI: 

**AWS CLI command:**

------
#### [ Linux & macOS ]

```
aws ssm register-task-with-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --task-arn "AWS-RestartEC2Instance" \
    --service-role-arn arn:aws:iam::123456789012:role/MyMaintenanceWindowServiceRole \
    --task-type AUTOMATION \
    --task-invocation-parameters "Automation={DocumentVersion=5,Parameters={InstanceId='{{RESOURCE_ID}}'}}" \
    --priority 0 --name "My-Restart-EC2-Instances-Automation-Task" \
    --description "Automation task to restart EC2 instances"
```

------
#### [ Windows ]

```
aws ssm register-task-with-maintenance-window ^
    --window-id "mw-0c50858d01EXAMPLE" ^
    --task-arn "AWS-RestartEC2Instance" ^
    --service-role-arn arn:aws:iam::123456789012:role/MyMaintenanceWindowServiceRole ^
    --task-type AUTOMATION ^
    --task-invocation-parameters "Automation={DocumentVersion=5,Parameters={InstanceId='{{RESOURCE_ID}}'}}" ^
    --priority 0 --name "My-Restart-EC2-Instances-Automation-Task" ^
    --description "Automation task to restart EC2 instances"
```

------

**JSON content to use with `--cli-input-json` file option:**

```
{
    "WindowId": "mw-0c50858d01EXAMPLE",
    "TaskArn": "AWS-PatchInstanceWithRollback",
    "TaskType": "AUTOMATION",
    "TaskInvocationParameters": {
        "Automation": {
            "DocumentVersion": "1",
            "Parameters": {
                "instanceId": [
                    "{{RESOURCE_ID}}"
                ]
            }
        }
    }
}
```

### Register an AWS Lambda task


The following examples demonstrate how to register Lambda function tasks with a maintenance window using the AWS CLI. 

For these examples, the user who created the Lambda function named it `SSMrestart-my-instances` and created two parameters called `instanceId` and `targetType`.

**Important**  
The IAM policy for Maintenance Windows requires that you add the prefix `SSM` to Lambda function (or alias) names. Before you proceed to register this type of task, update its name in AWS Lambda to include `SSM`. For example, if your Lambda function name is `MyLambdaFunction`, change it to `SSMMyLambdaFunction`.

**AWS CLI command:**

------
#### [ Linux & macOS ]

**Important**  
If you are using version 2 of the AWS CLI, you must include the option `--cli-binary-format raw-in-base64-out` in the following command if your Lambda payload is not base64 encoded. The `cli_binary_format` option is available only in version 2. For information about this and other AWS CLI `config` file settings, see [Supported `config` file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-settings) in the *AWS Command Line Interface User Guide*.

```
aws ssm register-task-with-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
    --priority 2 --max-concurrency 10 --max-errors 5 --name "My-Lambda-Example" \
    --description "A description for my LAMBDA example task" --task-type "LAMBDA" \
    --task-arn "arn:aws:lambda:region:123456789012:function:serverlessrepo-SSMrestart-my-instances-C4JF9EXAMPLE" \
    --task-invocation-parameters '{"Lambda":{"Payload":"{\"instanceId\":\"{{RESOURCE_ID}}\",\"targetType\":\"{{TARGET_TYPE}}\"}","Qualifier": "$LATEST"}}'
```

------
#### [ PowerShell ]

**Important**  
If you are using version 2 of the AWS CLI, you must include the option `--cli-binary-format raw-in-base64-out` in the following command if your Lambda payload is not base64 encoded. The `cli_binary_format` option is available only in version 2. For information about this and other AWS CLI `config` file settings, see [Supported `config` file settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-settings) in the *AWS Command Line Interface User Guide*.

```
aws ssm register-task-with-maintenance-window `
    --window-id "mw-0c50858d01EXAMPLE" `
    --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" `
    --priority 2 --max-concurrency 10 --max-errors 5 --name "My-Lambda-Example" `
    --description "A description for my LAMBDA example task" --task-type "LAMBDA" `
    --task-arn "arn:aws:lambda:region:123456789012:function:serverlessrepo-SSMrestart-my-instances-C4JF9EXAMPLE" `
    --task-invocation-parameters '{\"Lambda\":{\"Payload\":\"{\\\"instanceId\\\":\\\"{{RESOURCE_ID}}\\\",\\\"targetType\\\":\\\"{{TARGET_TYPE}}\\\"}\",\"Qualifier\": \"$LATEST\"}}'
```

------

**JSON content to use with `--cli-input-json` file option:**

```
{
    "WindowId": "mw-0c50858d01EXAMPLE",
    "Targets": [
        {
            "Key": "WindowTargetIds",
            "Values": [
                "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"
            ]
        }
    ],
    "TaskArn": "SSM_RestartMyInstances",
    "TaskType": "LAMBDA",
    "MaxConcurrency": "10",
    "MaxErrors": "10",
    "TaskInvocationParameters": {
        "Lambda": {
            "ClientContext": "ew0KICAi--truncated--0KIEXAMPLE",
            "Payload": "{ \"instanceId\": \"{{RESOURCE_ID}}\", \"targetType\": \"{{TARGET_TYPE}}\" }",
            "Qualifier": "$LATEST"
        }
    },
    "Name": "My-Lambda-Task",
    "Description": "A description for my LAMBDA task",
    "Priority": 5
}
```

### Register a Step Functions task


The following examples demonstrate how to register Step Functions state machine tasks with a maintenance window using the AWS CLI.

**Note**  
Maintenance window tasks support Step Functions Standard state machine workflows only. They don't support Express state machine workflows. For information about state machine workflow types, see [Standard vs. Express Workflows](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-standard-vs-express.html) in the *AWS Step Functions Developer Guide*.

For these examples, the user who created the Step Functions state machine created a state machine named `SSMMyStateMachine` with a parameter called `instanceId`.

**Important**  
The AWS Identity and Access Management (IAM) policy for Maintenance Windows requires that you prefix Step Functions state machine names with `SSM`. Before you proceed to register this type of task, you must update its name in AWS Step Functions to include `SSM`. For example, if your state machine name is `MyStateMachine`, change it to `SSMMyStateMachine`.

**AWS CLI command:**

------
#### [ Linux & macOS ]

```
aws ssm register-task-with-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
    --task-arn arn:aws:states:region:123456789012:stateMachine:SSMMyStateMachine-MggiqEXAMPLE \
    --task-type STEP_FUNCTIONS \
    --task-invocation-parameters '{"StepFunctions":{"Input":"{\"InstanceId\":\"{{RESOURCE_ID}}\"}", "Name":"{{INVOCATION_ID}}"}}' \
    --priority 0 --max-concurrency 10 --max-errors 5 \
    --name "My-Step-Functions-Task" --description "A description for my Step Functions task"
```

------
#### [ PowerShell ]

```
aws ssm register-task-with-maintenance-window `
    --window-id "mw-0c50858d01EXAMPLE" `
    --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" `
    --task-arn arn:aws:states:region:123456789012:stateMachine:SSMMyStateMachine-MggiqEXAMPLE `
    --task-type STEP_FUNCTIONS `
    --task-invocation-parameters '{\"StepFunctions\":{\"Input\":\"{\\\"InstanceId\\\":\\\"{{RESOURCE_ID}}\\\"}\", \"Name\":\"{{INVOCATION_ID}}\"}}' `
    --priority 0 --max-concurrency 10 --max-errors 5 `
    --name "My-Step-Functions-Task" --description "A description for my Step Functions task"
```

------

**JSON content to use with `--cli-input-json` file option:**

```
{
    "WindowId": "mw-0c50858d01EXAMPLE",
    "Targets": [
        {
            "Key": "WindowTargetIds",
            "Values": [
                "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"
            ]
        }
    ],
    "TaskArn": "SSM_MyStateMachine",
    "TaskType": "STEP_FUNCTIONS",
    "MaxConcurrency": "10",
    "MaxErrors": "10",
    "TaskInvocationParameters": {
        "StepFunctions": {
            "Input": "{ \"instanceId\": \"{{TARGET_ID}}\" }",
            "Name": "{{INVOCATION_ID}}"
        }
    },
    "Name": "My-Step-Functions-Task",
    "Description": "A description for my Step Functions task",
    "Priority": 5
}
```

# Parameter options for the register-task-with-maintenance-windows command


The **register-task-with-maintenance-window** command provides several options for configuring a task according to your needs. Some are required, some are optional, and some apply to only a single maintenance window task type.

This topic provides information about some of these options to help you work with samples in this tutorial section. For information about all command options, see **[https://docs.aws.amazon.com/cli/latest/reference/ssm/register-task-with-maintenance-window.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/register-task-with-maintenance-window.html)** in the *AWS CLI Command Reference*.

**Command option: `--task-arn`**  
The option `--task-arn` is used to specify the resource that the task operates on. The value that you specify depends on the type of task you're registering, as described in the following table.


**TaskArn formats for maintenance window tasks**  

| Maintenance window task type | TaskArn value | 
| --- | --- | 
|  **`RUN_COMMAND`** and **`AUTOMATION`**  |  `TaskArn` is the SSM document name or Amazon Resource Name (ARN). For example:  `AWS-RunBatchShellScript`  -or- `arn:aws:ssm:region:111122223333:document/My-Document`.  | 
|  **`LAMBDA`**  |  `TaskArn` is the function name or ARN. For example:  `SSMMy-Lambda-Function` -or- `arn:aws:lambda:region:111122223333:function:SSMMyLambdaFunction`.  The IAM policy for Maintenance Windows requires that you add the prefix `SSM` to Lambda function (or alias) names. Before you proceed to register this type of task, update its name in AWS Lambda to include `SSM`. For example, if your Lambda function name is `MyLambdaFunction`, change it to `SSMMyLambdaFunction`.   | 
|  **`STEP_FUNCTIONS`**  |  `TaskArn` is the state machine ARN. For example:  `arn:aws:states:us-east-2:111122223333:stateMachine:SSMMyStateMachine`.  The IAM policy for maintenance windows requires that you prefix Step Functions state machine names with `SSM`. Before you register this type of task, you must update its name in AWS Step Functions to include `SSM`. For example, if your state machine name is `MyStateMachine`, change it to `SSMMyStateMachine`.   | 
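Because the `SSM` prefix requirement for `LAMBDA` and `STEP_FUNCTIONS` task names is easy to overlook, a client-side pre-check can help. This sketch only normalizes the name; the actual rename must still be made in Lambda or Step Functions:

```python
def ensure_ssm_prefix(name):
    """Return a Lambda function or Step Functions state machine name that
    satisfies the SSM prefix expected by the maintenance window IAM policy.
    Illustrative only; renaming must be done in the owning service."""
    return name if name.startswith("SSM") else "SSM" + name

print(ensure_ssm_prefix("MyLambdaFunction"))   # SSMMyLambdaFunction
print(ensure_ssm_prefix("SSMMyStateMachine"))  # SSMMyStateMachine
```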

**Command option: `--service-role-arn`**  
The role for AWS Systems Manager to assume when running the maintenance window task. 

For more information, see [Setting up Maintenance Windows](setting-up-maintenance-windows.md).

**Command option: `--task-invocation-parameters`**  
The `--task-invocation-parameters` option is used to specify the parameters that are unique to each of the four task types. The supported parameters for each of the four task types are described in the following table.

**Note**  
For information about using pseudo parameters in `--task-invocation-parameters` content, such as `{{TARGET_ID}}`, see [Using pseudo parameters when registering maintenance window tasks](maintenance-window-tasks-pseudo-parameters.md). 

**Task invocation parameters options for maintenance window tasks**  


| Maintenance window task type | Available parameters  | Example | 
| --- | --- | --- | 
|  **`RUN_COMMAND`**  |  `Comment` `DocumentHash` `DocumentHashType` `NotificationConfig` `OutputS3BucketName` `OutputS3KeyPrefix` `Parameters` `ServiceRoleArn` `TimeoutSeconds`  |  <pre>"TaskInvocationParameters": {<br />        "RunCommand": {<br />            "Comment": "My Run Command task comment",<br />            "DocumentHash": "6554ed3d--truncated--5EXAMPLE",<br />            "DocumentHashType": "Sha256",<br />            "NotificationConfig": {<br />                "NotificationArn": "arn:aws:sns:region:123456789012:my-sns-topic-name",<br />                "NotificationEvents": [<br />                    "FAILURE"<br />                ],<br />                "NotificationType": "Invocation"<br />            },<br />            "OutputS3BucketName": "amzn-s3-demo-bucket",<br />            "OutputS3KeyPrefix": "S3-PREFIX",<br />            "Parameters": {<br />                "commands": [<br />                    "Get-ChildItem $env:temp -Recurse \| Remove-Item -Recurse -Force"<br />                ]<br />            },<br />            "ServiceRoleArn": "arn:aws:iam::123456789012:role/MyMaintenanceWindowServiceRole",<br />            "TimeoutSeconds": 3600<br />        }<br />    }</pre>  | 
|  **`AUTOMATION`**  |  `DocumentVersion` `Parameters`  |  <pre>"TaskInvocationParameters": {<br />        "Automation": {<br />            "DocumentVersion": "3",<br />            "Parameters": {<br />                "instanceid": [<br />                    "{{TARGET_ID}}"<br />                ]<br />            }<br />        }<br />    }</pre>  | 
|  **`LAMBDA`**  |  `ClientContext` `Payload` `Qualifier`  |  <pre>"TaskInvocationParameters": {<br />        "Lambda": {<br />            "ClientContext": "ew0KICAi--truncated--0KIEXAMPLE",<br />            "Payload": "{ \"targetId\": \"{{TARGET_ID}}\", \"targetType\": \"{{TARGET_TYPE}}\" }",<br />            "Qualifier": "$LATEST"<br />        }<br />    }</pre>  | 
|  **`STEP_FUNCTIONS`**  |  `Input` `Name`  |  <pre>"TaskInvocationParameters": {<br />        "StepFunctions": {<br />            "Input": "{ \"targetId\": \"{{TARGET_ID}}\" }",<br />            "Name": "{{INVOCATION_ID}}"<br />        }<br />    }</pre>  | 

# Tutorial: View information about maintenance windows using the AWS CLI

This tutorial includes commands to help you update or get information about your maintenance windows, tasks, executions, and invocations. The examples are organized by command to demonstrate how to use command options to filter for the type of detail you want to see.

As you follow the steps in this tutorial, replace the italicized *example values* with your own options and IDs. For example, replace the maintenance window ID *mw-0c50858d01EXAMPLE* and the instance ID *i-02573cafcfEXAMPLE* with the IDs of resources you create.

For information about setting up and configuring the AWS Command Line Interface (AWS CLI), see [Installing, updating, and uninstalling the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).

**Topics**
+ [Examples for 'describe-maintenance-windows'](#mw-cli-tutorials-describe-maintenance-windows)
+ [Examples for 'describe-maintenance-window-targets'](#mw-cli-tutorials-describe-maintenance-window-targets)
+ [Examples for 'describe-maintenance-window-tasks'](#mw-cli-tutorials-describe-maintenance-window-tasks)
+ [Examples for 'describe-maintenance-windows-for-target'](#mw-cli-tutorials-describe-maintenance-windows-for-target)
+ [Examples for 'describe-maintenance-window-executions'](#mw-cli-tutorials-describe-maintenance-window-executions)
+ [Examples for 'describe-maintenance-window-schedule'](#mw-cli-tutorials-describe-maintenance-window-schedule)

## Examples for 'describe-maintenance-windows'


**List all maintenance windows in your AWS account**  
Run the following command.

```
aws ssm describe-maintenance-windows
```

The system returns information similar to the following.

```
{
   "WindowIdentities":[
      {
         "WindowId":"mw-0c50858d01EXAMPLE",
         "Name":"My-First-Maintenance-Window",
         "Enabled":true,
         "Duration":2,
         "Cutoff":0,
         "NextExecutionTime": "2019-05-18T17:01:01.137Z"        
      },
      {
         "WindowId":"mw-9a8b7c6d5eEXAMPLE",
         "Name":"My-Second-Maintenance-Window",
         "Enabled":true,
         "Duration":4,
         "Cutoff":1,
         "NextExecutionTime": "2019-05-30T03:30:00.137Z"        
      }
   ]
}
```

**List all enabled maintenance windows**  
Run the following command.

```
aws ssm describe-maintenance-windows --filters "Key=Enabled,Values=true"
```

The system returns information similar to the following.

```
{
   "WindowIdentities":[
      {
         "WindowId":"mw-0c50858d01EXAMPLE",
         "Name":"My-First-Maintenance-Window",
         "Enabled":true,
         "Duration":2,
         "Cutoff":0,
         "NextExecutionTime": "2019-05-18T17:01:01.137Z"        
      },
      {
         "WindowId":"mw-9a8b7c6d5eEXAMPLE",
         "Name":"My-Second-Maintenance-Window",
         "Enabled":true,
         "Duration":4,
         "Cutoff":1,
         "NextExecutionTime": "2019-05-30T03:30:00.137Z"        
      }
   ]
}
```

**List all disabled maintenance windows**  
Run the following command.

```
aws ssm describe-maintenance-windows --filters "Key=Enabled,Values=false"
```

The system returns information similar to the following.

```
{
    "WindowIdentities": [
        {
            "WindowId": "mw-6e5c9d4b7cEXAMPLE",
            "Name": "My-Disabled-Maintenance-Window",
            "Enabled": false,
            "Duration": 2,
            "Cutoff": 1
        }
    ]
}
```

**List all maintenance windows with names that start with a certain prefix**  
Run the following command.

```
aws ssm describe-maintenance-windows --filters "Key=Name,Values=My"
```

The system returns information similar to the following.

```
{
    "WindowIdentities": [
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "My-First-Maintenance-Window",
            "Enabled": true,
            "Duration": 2,
            "Cutoff": 0,
            "NextExecutionTime": "2019-05-18T17:01:01.137Z"
        },
        {
            "WindowId": "mw-9a8b7c6d5eEXAMPLE",
            "Name": "My-Second-Maintenance-Window",
            "Enabled": true,
            "Duration": 4,
            "Cutoff": 1,
            "NextExecutionTime": "2019-05-30T03:30:00.137Z"
        },
        {
            "WindowId": "mw-6e5c9d4b7cEXAMPLE",
            "Name": "My-Disabled-Maintenance-Window",
            "Enabled": false,
            "Duration": 2,
            "Cutoff": 1
        }
    ]
}
```

## Examples for 'describe-maintenance-window-targets'


**Display the targets for a maintenance window matching a specific owner information value**  
Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm describe-maintenance-window-targets \
    --window-id "mw-6e5c9d4b7cEXAMPLE" \
    --filters "Key=OwnerInformation,Values=CostCenter1"
```

------
#### [ Windows ]

```
aws ssm describe-maintenance-window-targets ^
    --window-id "mw-6e5c9d4b7cEXAMPLE" ^
    --filters "Key=OwnerInformation,Values=CostCenter1"
```

------

**Note**  
The supported filter keys are `Type`, `WindowTargetId`, and `OwnerInformation`.

The system returns information similar to the following.

```
{
    "Targets": [
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "WindowTargetId": "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE",
            "ResourceType": "INSTANCE",
            "Targets": [
                {
                    "Key": "tag:Name",
                    "Values": [
                        "Production"
                    ]
                }
            ],
            "OwnerInformation": "CostCenter1",
            "Name": "Target1"
        }
    ]
}
```

## Examples for 'describe-maintenance-window-tasks'


**Show all registered tasks that invoke the SSM command document `AWS-RunPowerShellScript`**  
Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm describe-maintenance-window-tasks \
    --window-id "mw-0c50858d01EXAMPLE" \
    --filters "Key=TaskArn,Values=AWS-RunPowerShellScript"
```

------
#### [ Windows ]

```
aws ssm describe-maintenance-window-tasks ^
    --window-id "mw-0c50858d01EXAMPLE" ^
    --filters "Key=TaskArn,Values=AWS-RunPowerShellScript"
```

------

The system returns information similar to the following.

```
{
   "Tasks":[
      {
         "ServiceRoleArn": "arn:aws:iam::111122223333:role/MyMaintenanceWindowServiceRole",
         "MaxErrors":"1",
         "TaskArn":"AWS-RunPowerShellScript",
         "MaxConcurrency":"1",
         "WindowTaskId":"4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
         "TaskParameters":{
            "commands":{
               "Values":[
                  "driverquery.exe"
               ]
            }
         },
         "Priority":3,
         "Type":"RUN_COMMAND",
         "Targets":[
            {
               "TaskTargetId":"i-02573cafcfEXAMPLE",
               "TaskTargetType":"INSTANCE"
            }
         ]
      },
      {
         "ServiceRoleArn":"arn:aws:iam::111122223333:role/MyMaintenanceWindowServiceRole",
         "MaxErrors":"1",
         "TaskArn":"AWS-RunPowerShellScript",
         "MaxConcurrency":"1",
         "WindowTaskId":"4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
         "TaskParameters":{
            "commands":{
               "Values":[
                  "ipconfig"
               ]
            }
         },
         "Priority":1,
         "Type":"RUN_COMMAND",
         "Targets":[
            {
               "TaskTargetId":"i-02573cafcfEXAMPLE",
               "TaskTargetType":"WINDOW_TARGET"
            }
         ]
      }
   ]
}
```

**Show all registered tasks that have a priority of "3"**  
Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm describe-maintenance-window-tasks \
    --window-id "mw-9a8b7c6d5eEXAMPLE" \
    --filters "Key=Priority,Values=3"
```

------
#### [ Windows ]

```
aws ssm describe-maintenance-window-tasks ^
    --window-id "mw-9a8b7c6d5eEXAMPLE" ^
    --filters "Key=Priority,Values=3"
```

------

The system returns information similar to the following.

```
{
   "Tasks":[
      {
         "ServiceRoleArn":"arn:aws:iam::111122223333:role/MyMaintenanceWindowServiceRole",
         "MaxErrors":"1",
         "TaskArn":"AWS-RunPowerShellScript",
         "MaxConcurrency":"1",
         "WindowTaskId":"4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
         "TaskParameters":{
            "commands":{
               "Values":[
                  "driverquery.exe"
               ]
            }
         },
         "Priority":3,
         "Type":"RUN_COMMAND",
         "Targets":[
            {
               "TaskTargetId":"i-02573cafcfEXAMPLE",
               "TaskTargetType":"INSTANCE"
            }
         ]
      }
   ]
}
```

**Show all registered tasks that have a priority of "1" and use Run Command**  
Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm describe-maintenance-window-tasks \
    --window-id "mw-0c50858d01EXAMPLE" \
    --filters "Key=Priority,Values=1" "Key=TaskType,Values=RUN_COMMAND"
```

------
#### [ Windows ]

```
aws ssm describe-maintenance-window-tasks ^
    --window-id "mw-0c50858d01EXAMPLE" ^
    --filters "Key=Priority,Values=1" "Key=TaskType,Values=RUN_COMMAND"
```

------

The system returns information similar to the following.

```
{
    "Tasks": [
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
            "TaskArn": "AWS-RunShellScript",
            "Type": "RUN_COMMAND",
            "Targets": [
                {
                    "Key": "InstanceIds",
                    "Values": [
                        "i-02573cafcfEXAMPLE"
                    ]
                }
            ],
            "TaskParameters": {},
            "Priority": 1,
            "ServiceRoleArn": "arn:aws:iam::111122223333:role/MyMaintenanceWindowServiceRole",
            "MaxConcurrency": "1",
            "MaxErrors": "1"
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "WindowTaskId": "8a5c4629-31b0-4edd-8aea-33698EXAMPLE",
            "TaskArn": "AWS-UpdateSSMAgent",
            "Type": "RUN_COMMAND",
            "Targets": [
                {
                    "Key": "InstanceIds",
                    "Values": [
                        "i-0471e04240EXAMPLE"
                    ]
                }
            ],
            "TaskParameters": {},
            "Priority": 1,
            "ServiceRoleArn": "arn:aws:iam::111122223333:role/MyMaintenanceWindowServiceRole",
            "MaxConcurrency": "1",
            "MaxErrors": "1",
            "Name": "My-Run-Command-Task",
            "Description": "My Run Command task to update SSM Agent on an instance"
        }
    ]
}
```

## Examples for 'describe-maintenance-windows-for-target'


**List information about the maintenance windows associated with a specific node**  
Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm describe-maintenance-windows-for-target \
    --resource-type INSTANCE \
    --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" \
    --max-results 10
```

------
#### [ Windows ]

```
aws ssm describe-maintenance-windows-for-target ^
    --resource-type INSTANCE ^
    --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE" ^
    --max-results 10
```

------

The system returns information similar to the following.

```
{
    "WindowIdentities": [
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "My-First-Maintenance-Window"
        },
        {
            "WindowId": "mw-9a8b7c6d5eEXAMPLE",
            "Name": "My-Second-Maintenance-Window"
        }
    ]
}
```

## Examples for 'describe-maintenance-window-executions'


**List all maintenance window executions that ran before a certain date**  
Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm describe-maintenance-window-executions \
    --window-id "mw-9a8b7c6d5eEXAMPLE" \
    --filters "Key=ExecutedBefore,Values=2019-05-12T05:00:00Z"
```

------
#### [ Windows ]

```
aws ssm describe-maintenance-window-executions ^
    --window-id "mw-9a8b7c6d5eEXAMPLE" ^
    --filters "Key=ExecutedBefore,Values=2019-05-12T05:00:00Z"
```

------

The system returns information similar to the following.

```
{
    "WindowExecutions": [
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "WindowExecutionId": "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE",
            "Status": "FAILED",
            "StatusDetails": "The following SSM parameters are invalid: LevelUp",
            "StartTime": 1557617747.993,
            "EndTime": 1557617748.101
        },
        {
            "WindowId": "mw-9a8b7c6d5eEXAMPLE",
            "WindowExecutionId": "791b72e0-f0da-4021-8b35-f95dfEXAMPLE",
            "Status": "SUCCESS",
            "StartTime": 1557594085.428,
            "EndTime": 1557594090.978
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "WindowExecutionId": "ecec60fa-6bb0-4d26-98c7-140308EXAMPLE",
            "Status": "SUCCESS",
            "StartTime": 1557593793.483,
            "EndTime": 1557593798.978
        }
    ]
}
```
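In these outputs, `StartTime` and `EndTime` are Unix epoch seconds (with fractions). A short Python sketch converts them to readable ISO-8601 UTC timestamps and computes an execution's duration, using values from the sample output above:

```
from datetime import datetime, timezone

# StartTime/EndTime values from the sample output above (epoch seconds).
start, end = 1557593793.483, 1557593798.978

# Convert the start time to ISO-8601 UTC for readability.
start_iso = datetime.fromtimestamp(start, tz=timezone.utc).isoformat()

# Duration of the window execution, in seconds.
duration = end - start

print(start_iso)           # 2019-05-11T16:56:33.483000+00:00
print(round(duration, 3))  # 5.495
```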

**List all maintenance window executions that ran after a certain date**  
Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm describe-maintenance-window-executions \
    --window-id "mw-9a8b7c6d5eEXAMPLE" \
    --filters "Key=ExecutedAfter,Values=2018-12-31T17:00:00Z"
```

------
#### [ Windows ]

```
aws ssm describe-maintenance-window-executions ^
    --window-id "mw-9a8b7c6d5eEXAMPLE" ^
    --filters "Key=ExecutedAfter,Values=2018-12-31T17:00:00Z"
```

------

The system returns information similar to the following.

```
{
    "WindowExecutions": [
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "WindowExecutionId": "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE",
            "Status": "FAILED",
            "StatusDetails": "The following SSM parameters are invalid: LevelUp",
            "StartTime": 1557617747.993,
            "EndTime": 1557617748.101
        },
        {
            "WindowId": "mw-9a8b7c6d5eEXAMPLE",
            "WindowExecutionId": "791b72e0-f0da-4021-8b35-f95dfEXAMPLE",
            "Status": "SUCCESS",
            "StartTime": 1557594085.428,
            "EndTime": 1557594090.978
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "WindowExecutionId": "ecec60fa-6bb0-4d26-98c7-140308EXAMPLE",
            "Status": "SUCCESS",
            "StartTime": 1557593793.483,
            "EndTime": 1557593798.978
        }
    ]
}
```

## Examples for 'describe-maintenance-window-schedule'


**Display the next ten scheduled maintenance window runs for a particular node**  
Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm describe-maintenance-window-schedule \
    --resource-type INSTANCE \
    --targets "Key=InstanceIds,Values=i-07782c72faEXAMPLE" \
    --max-results 10
```

------
#### [ Windows ]

```
aws ssm describe-maintenance-window-schedule ^
    --resource-type INSTANCE ^
    --targets "Key=InstanceIds,Values=i-07782c72faEXAMPLE" ^
    --max-results 10
```

------

The system returns information similar to the following.

```
{
    "ScheduledWindowExecutions": [
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "My-First-Maintenance-Window",
            "ExecutionTime": "2019-05-18T23:35:24.902Z"
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "My-First-Maintenance-Window",
            "ExecutionTime": "2019-05-25T23:35:24.902Z"
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "My-First-Maintenance-Window",
            "ExecutionTime": "2019-06-01T23:35:24.902Z"
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "My-First-Maintenance-Window",
            "ExecutionTime": "2019-06-08T23:35:24.902Z"
        },
        {
            "WindowId": "mw-9a8b7c6d5eEXAMPLE",
            "Name": "My-Second-Maintenance-Window",
            "ExecutionTime": "2019-06-15T23:35:24.902Z"
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "My-First-Maintenance-Window",
            "ExecutionTime": "2019-06-22T23:35:24.902Z"
        },
        {
            "WindowId": "mw-9a8b7c6d5eEXAMPLE",
            "Name": "My-Second-Maintenance-Window",
            "ExecutionTime": "2019-06-29T23:35:24.902Z"
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "My-First-Maintenance-Window",
            "ExecutionTime": "2019-07-06T23:35:24.902Z"
        },
        {
            "WindowId": "mw-9a8b7c6d5eEXAMPLE",
            "Name": "My-Second-Maintenance-Window",
            "ExecutionTime": "2019-07-13T23:35:24.902Z"
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "My-First-Maintenance-Window",
            "ExecutionTime": "2019-07-20T23:35:24.902Z"
        }
    ],
    "NextToken": "AAEABUXdceT92FvtKld/dGHELj5Mi+GKW/EXAMPLE"
}
```
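A `NextToken` in the response means more results are available; pass it back in the next call until it's no longer returned. A hedged sketch of that pagination loop, with `fetch_page` standing in for the real `describe-maintenance-window-schedule` call:

```
# Stand-in pages simulating two responses from describe-maintenance-window-schedule;
# the first includes a NextToken, the last does not.
_pages = [
    {"ScheduledWindowExecutions": [{"WindowId": "mw-0c50858d01EXAMPLE"}],
     "NextToken": "AAEABUXdceT92FvtKld/EXAMPLE"},
    {"ScheduledWindowExecutions": [{"WindowId": "mw-9a8b7c6d5eEXAMPLE"}]},
]

def fetch_page(next_token=None):
    """Stand-in for the real API call; returns the next page of results."""
    return _pages[0] if next_token is None else _pages[1]

# Keep calling until the response no longer includes a NextToken.
executions, token = [], None
while True:
    page = fetch_page(token)
    executions.extend(page["ScheduledWindowExecutions"])
    token = page.get("NextToken")
    if token is None:
        break

print(len(executions))  # 2
```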

**Display the maintenance window schedule for nodes tagged with a certain key-value pair**  
Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm describe-maintenance-window-schedule \
    --resource-type INSTANCE \
    --targets "Key=tag:prod,Values=rhel7"
```

------
#### [ Windows ]

```
aws ssm describe-maintenance-window-schedule ^
    --resource-type INSTANCE ^
    --targets "Key=tag:prod,Values=rhel7"
```

------

The system returns information similar to the following.

```
{
    "ScheduledWindowExecutions": [
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "DemoRateStartDate",
            "ExecutionTime": "2019-10-20T05:34:56-07:00"
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "DemoRateStartDate",
            "ExecutionTime": "2019-10-21T05:34:56-07:00"
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "DemoRateStartDate",
            "ExecutionTime": "2019-10-22T05:34:56-07:00"
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "DemoRateStartDate",
            "ExecutionTime": "2019-10-23T05:34:56-07:00"
        },
        {
            "WindowId": "mw-0c50858d01EXAMPLE",
            "Name": "DemoRateStartDate",
            "ExecutionTime": "2019-10-24T05:34:56-07:00"
        }
    ],
    "NextToken": "AAEABccwSXqQRGKiTZ1yzGELR6cxW4W/EXAMPLE"
}
```

**Display start times for the next four runs of a maintenance window**  
Run the following command.

------
#### [ Linux & macOS ]

```
aws ssm describe-maintenance-window-schedule \
    --window-id "mw-0c50858d01EXAMPLE" \
    --max-results "4"
```

------
#### [ Windows ]

```
aws ssm describe-maintenance-window-schedule ^
    --window-id "mw-0c50858d01EXAMPLE" ^
    --max-results "4"
```

------

The system returns information similar to the following.

```
{
    "WindowSchedule": [
        {
            "ScheduledWindowExecutions": [
                {
                    "ExecutionTime": "2019-10-04T10:10:10Z",
                    "Name": "My-First-Maintenance-Window",
                    "WindowId": "mw-0c50858d01EXAMPLE"
                },
                {
                    "ExecutionTime": "2019-10-11T10:10:10Z",
                    "Name": "My-First-Maintenance-Window",
                    "WindowId": "mw-0c50858d01EXAMPLE"
                },
                {
                    "ExecutionTime": "2019-10-18T10:10:10Z",
                    "Name": "My-First-Maintenance-Window",
                    "WindowId": "mw-0c50858d01EXAMPLE"
                },
                {
                    "ExecutionTime": "2019-10-25T10:10:10Z",
                    "Name": "My-First-Maintenance-Window",
                    "WindowId": "mw-0c50858d01EXAMPLE"
                }
            ]
        }
    ]
}
```

# Tutorial: View information about tasks and task executions using the AWS CLI

This tutorial demonstrates how to use the AWS Command Line Interface (AWS CLI) to view details about your completed maintenance window tasks. 

If you're continuing directly from [Tutorial: Create and configure a maintenance window using the AWS CLI](maintenance-windows-cli-tutorials-create.md), make sure you have allowed enough time for your maintenance window to run at least once in order to see its execution results.

As you follow the steps in this tutorial, replace the example values shown in italics with your own options and IDs. For example, replace the maintenance window ID *mw-0c50858d01EXAMPLE* and the instance ID *i-02573cafcfEXAMPLE* with the IDs of resources you create.

**To view information about tasks and task executions using the AWS CLI**

1. Run the following command to view a list of task executions for a specific maintenance window.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-maintenance-window-executions \
       --window-id "mw-0c50858d01EXAMPLE"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-maintenance-window-executions ^
       --window-id "mw-0c50858d01EXAMPLE"
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowExecutions": [
           {
               "WindowId": "mw-0c50858d01EXAMPLE",
               "WindowExecutionId": "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE",
               "Status": "SUCCESS",
               "StartTime": 1557593793.483,
               "EndTime": 1557593798.978
           },
           {
               "WindowId": "mw-0c50858d01EXAMPLE",
               "WindowExecutionId": "791b72e0-f0da-4021-8b35-f95dfEXAMPLE",
               "Status": "SUCCESS",
               "StartTime": 1557593493.096,
               "EndTime": 1557593498.611
           },
           {
               "WindowId": "mw-0c50858d01EXAMPLE",
               "WindowExecutionId": "ecec60fa-6bb0-4d26-98c7-140308EXAMPLE",
               "Status": "SUCCESS",
               "StatusDetails": "No tasks to execute.",
               "StartTime": 1557593193.309,
               "EndTime": 1557593193.334
           }
       ]
   }
   ```

1. Run the following command to get information about a maintenance window task execution.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-maintenance-window-execution \
       --window-execution-id "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE"
   ```

------
#### [ Windows ]

   ```
   aws ssm get-maintenance-window-execution ^
       --window-execution-id "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE"
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowExecutionId": "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE",
       "TaskIds": [
           "c9b05aba-197f-4d8d-be34-e73fbEXAMPLE"
       ],
       "Status": "SUCCESS",
       "StartTime": 1557593493.096,
       "EndTime": 1557593498.611
   }
   ```

1. Run the following command to list the tasks run as part of a maintenance window execution.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-maintenance-window-execution-tasks \
       --window-execution-id "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-maintenance-window-execution-tasks ^
       --window-execution-id "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE"
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowExecutionTaskIdentities": [
           {
               "WindowExecutionId": "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE",
               "TaskExecutionId": "c9b05aba-197f-4d8d-be34-e73fbEXAMPLE",
               "Status": "SUCCESS",
               "StartTime": 1557593493.162,
               "EndTime": 1557593498.57,
               "TaskArn": "AWS-RunShellScript",
               "TaskType": "RUN_COMMAND"
           }
       ]
   }
   ```

1. Run the following command to get the details of a task execution.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-maintenance-window-execution-task \
       --window-execution-id "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE" \
       --task-id "c9b05aba-197f-4d8d-be34-e73fbEXAMPLE"
   ```

------
#### [ Windows ]

   ```
   aws ssm get-maintenance-window-execution-task ^
       --window-execution-id "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE" ^
       --task-id "c9b05aba-197f-4d8d-be34-e73fbEXAMPLE"
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowExecutionId": "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE",
       "TaskExecutionId": "c9b05aba-197f-4d8d-be34-e73fbEXAMPLE",
       "TaskArn": "AWS-RunShellScript",
       "ServiceRole": "arn:aws:iam::111122223333:role/MyMaintenanceWindowServiceRole",
       "Type": "RUN_COMMAND",
       "TaskParameters": [
           {
               "aws:InstanceId": {
                   "Values": [
                       "i-02573cafcfEXAMPLE"
                   ]
               },
               "commands": {
                   "Values": [
                       "df"
                   ]
               }
           }
       ],
       "Priority": 10,
       "MaxConcurrency": "1",
       "MaxErrors": "1",
       "Status": "SUCCESS",
       "StartTime": 1557593493.162,
       "EndTime": 1557593498.57
   }
   ```

1. Run the following command to get the specific task invocations performed for a task execution.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-maintenance-window-execution-task-invocations \
       --window-execution-id "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE" \
       --task-id "c9b05aba-197f-4d8d-be34-e73fbEXAMPLE"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-maintenance-window-execution-task-invocations ^
       --window-execution-id "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE" ^
       --task-id "c9b05aba-197f-4d8d-be34-e73fbEXAMPLE"
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowExecutionTaskInvocationIdentities": [
           {
               "WindowExecutionId": "14bea65d-5ccc-462d-a2f3-e99c8EXAMPLE",
               "TaskExecutionId": "c9b05aba-197f-4d8d-be34-e73fbEXAMPLE",
               "InvocationId": "c336d2ab-09de-44ba-8f6a-6136cEXAMPLE",
               "ExecutionId": "76a5a04f-caf6-490c-b448-92c02EXAMPLE",
               "TaskType": "RUN_COMMAND",
               "Parameters": "{\"documentName\":\"AWS-RunShellScript\",\"instanceIds\":[\"i-02573cafcfEXAMPLE\"],\"maxConcurrency\":\"1\",\"maxErrors\":\"1\",\"parameters\":{\"commands\":[\"df\"]}}",
               "Status": "SUCCESS",
               "StatusDetails": "Success",
               "StartTime": 1557593493.222,
               "EndTime": 1557593498.466
           }
       ]
   }
   ```
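   Note that the `Parameters` field in the invocation output is itself a JSON document serialized as a string, so it needs a second decoding pass. A Python sketch, using the value from the sample output above:

```
import json

# The Parameters value from the invocation output is a JSON document
# serialized as a string, so it must be decoded a second time.
parameters_raw = ("{\"documentName\":\"AWS-RunShellScript\","
                  "\"instanceIds\":[\"i-02573cafcfEXAMPLE\"],"
                  "\"maxConcurrency\":\"1\",\"maxErrors\":\"1\","
                  "\"parameters\":{\"commands\":[\"df\"]}}")

parameters = json.loads(parameters_raw)
print(parameters["documentName"])            # AWS-RunShellScript
print(parameters["parameters"]["commands"])  # ['df']
```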

# Tutorial: Update a maintenance window using the AWS CLI

This tutorial demonstrates how to use the AWS Command Line Interface (AWS CLI) to update a maintenance window. It also shows you how to update different task types, including those for AWS Systems Manager Run Command and Automation, AWS Lambda, and AWS Step Functions. 

The examples in this section use the following Systems Manager actions for updating a maintenance window:
+ [UpdateMaintenanceWindow](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateMaintenanceWindow.html)
+ [UpdateMaintenanceWindowTarget](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateMaintenanceWindowTarget.html)
+ [UpdateMaintenanceWindowTask](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateMaintenanceWindowTask.html)
+ [DeregisterTargetFromMaintenanceWindow](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DeregisterTargetFromMaintenanceWindow.html)

For information about using the Systems Manager console to update a maintenance window, see [Update or delete maintenance window resources using the console](sysman-maintenance-update.md). 

As you follow the steps in this tutorial, replace the example values shown in italics with your own options and IDs. For example, replace the maintenance window ID *mw-0c50858d01EXAMPLE* and the instance ID *i-02573cafcfEXAMPLE* with the IDs of resources you create.

**To update a maintenance window using the AWS CLI**

1. Open the AWS CLI and run the following command to update a target to include a name and a description.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-maintenance-window-target \
       --window-id "mw-0c50858d01EXAMPLE" \
       --window-target-id "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
       --name "My-Maintenance-Window-Target" \
       --description "Description for my maintenance window target"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-maintenance-window-target ^
       --window-id "mw-0c50858d01EXAMPLE" ^
       --window-target-id "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" ^
       --name "My-Maintenance-Window-Target" ^
       --description "Description for my maintenance window target"
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowId": "mw-0c50858d01EXAMPLE",
       "WindowTargetId": "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE",
       "Targets": [
           {
               "Key": "InstanceIds",
               "Values": [
                   "i-02573cafcfEXAMPLE"
               ]
           }
       ],
       "Name": "My-Maintenance-Window-Target",
       "Description": "Description for my maintenance window target"
   }
   ```

1. Run the following command to use the `--replace` option to remove the description field and add a second target. The description field is removed because the update omits it (a null value). Be sure to specify an additional node that has been configured for use with Systems Manager.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-maintenance-window-target \
       --window-id "mw-0c50858d01EXAMPLE" \
       --window-target-id "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
       --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE" \
       --name "My-Maintenance-Window-Target" \
       --replace
   ```

------
#### [ Windows ]

   ```
   aws ssm update-maintenance-window-target ^
       --window-id "mw-0c50858d01EXAMPLE" ^
       --window-target-id "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" ^
       --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE,i-0471e04240EXAMPLE" ^
       --name "My-Maintenance-Window-Target" ^
       --replace
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowId": "mw-0c50858d01EXAMPLE",
       "WindowTargetId": "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE",
       "Targets": [
           {
               "Key": "InstanceIds",
               "Values": [
                   "i-02573cafcfEXAMPLE",
                   "i-0471e04240EXAMPLE"
               ]
           }
       ],
       "Name": "My-Maintenance-Window-Target"
   }
   ```
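The `--replace` behavior can be modeled as overwriting the target's updatable fields wholesale rather than merging: any updatable field omitted from the request is cleared. A rough Python illustration of that model (not the service's actual implementation):

```
# Current target state, including a Description.
current = {
    "Name": "My-Maintenance-Window-Target",
    "Description": "Description for my maintenance window target",
    "Targets": [{"Key": "InstanceIds", "Values": ["i-02573cafcfEXAMPLE"]}],
}

# Fields supplied with --replace; Description is intentionally omitted.
update = {
    "Name": "My-Maintenance-Window-Target",
    "Targets": [{"Key": "InstanceIds",
                 "Values": ["i-02573cafcfEXAMPLE", "i-0471e04240EXAMPLE"]}],
}

# With --replace, the update wholesale replaces the updatable fields,
# so omitted fields (here, Description) are dropped from the new state.
new_state = dict(update)

print("Description" in new_state)  # False
```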

1. The `--start-date` option allows you to delay activation of a maintenance window until a specified future date. The `--end-date` option allows you to set a date and time in the future after which the maintenance window no longer runs. Specify both options in ISO-8601 Extended format.

   Run the following command to specify a date and time range for regularly scheduled maintenance window executions.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-maintenance-window \
       --window-id "mw-0c50858d01EXAMPLE" \
       --start-date "2020-10-01T10:10:10Z" \
       --end-date "2020-11-01T10:10:10Z"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-maintenance-window ^
       --window-id "mw-0c50858d01EXAMPLE" ^
       --start-date "2020-10-01T10:10:10Z" ^
       --end-date "2020-11-01T10:10:10Z"
   ```

------
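   Timestamps in the format expected by `--start-date` and `--end-date` can also be generated programmatically. A Python sketch that builds a one-month window like the one in the commands above:

```
from datetime import datetime, timedelta, timezone

# Build ISO-8601 timestamps like those passed to --start-date/--end-date.
start = datetime(2020, 10, 1, 10, 10, 10, tzinfo=timezone.utc)
end = start + timedelta(days=31)  # October has 31 days, so this lands on Nov 1

fmt = "%Y-%m-%dT%H:%M:%SZ"
start_date, end_date = start.strftime(fmt), end.strftime(fmt)
print(start_date)  # 2020-10-01T10:10:10Z
print(end_date)    # 2020-11-01T10:10:10Z
```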

1. Run the following command to update a Run Command task.
**Tip**  
If your target is an Amazon Elastic Compute Cloud (Amazon EC2) instance for Windows Server, change `df` to `ipconfig`, and `AWS-RunShellScript` to `AWS-RunPowerShellScript` in the following command.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-maintenance-window-task \
       --window-id "mw-0c50858d01EXAMPLE" \
       --window-task-id "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE" \
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
       --task-arn "AWS-RunShellScript" \
       --service-role-arn "arn:aws:iam::account-id:role/MaintenanceWindowsRole" \
       --task-invocation-parameters "RunCommand={Comment=Revising my Run Command task,Parameters={commands=df}}" \
       --priority 1 --max-concurrency 10 --max-errors 4 \
       --name "My-Task-Name" --description "A description for my Run Command task"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-maintenance-window-task ^
       --window-id "mw-0c50858d01EXAMPLE" ^
       --window-task-id "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE" ^
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" ^
       --task-arn "AWS-RunShellScript" ^
       --service-role-arn "arn:aws:iam::account-id:role/MaintenanceWindowsRole" ^
       --task-invocation-parameters "RunCommand={Comment=Revising my Run Command task,Parameters={commands=df}}" ^
       --priority 1 --max-concurrency 10 --max-errors 4 ^
       --name "My-Task-Name" --description "A description for my Run Command task"
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowId": "mw-0c50858d01EXAMPLE",
       "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
       "Targets": [
           {
               "Key": "WindowTargetIds",
               "Values": [
                   "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"
               ]
           }
       ],
       "TaskArn": "AWS-RunShellScript",
       "ServiceRoleArn": "arn:aws:iam::111122223333:role/MaintenanceWindowsRole",
       "TaskParameters": {},
       "TaskInvocationParameters": {
           "RunCommand": {
               "Comment": "Revising my Run Command task",
               "Parameters": {
                   "commands": [
                       "df"
                   ]
               }
           }
       },
       "Priority": 1,
       "MaxConcurrency": "10",
       "MaxErrors": "4",
       "Name": "My-Task-Name",
       "Description": "A description for my Run Command task"
   }
   ```

1. Adapt and run the following command to update a Lambda task.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-maintenance-window-task \
       --window-id mw-0c50858d01EXAMPLE \
       --window-task-id 4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE \
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
       --task-arn "arn:aws:lambda:region:111122223333:function:SSMTestLambda" \
       --service-role-arn "arn:aws:iam::account-id:role/MaintenanceWindowsRole" \
       --task-invocation-parameters '{"Lambda":{"Payload":"{\"InstanceId\":\"{{RESOURCE_ID}}\",\"targetType\":\"{{TARGET_TYPE}}\"}"}}' \
       --priority 1 --max-concurrency 10 --max-errors 5 \
       --name "New-Lambda-Task-Name" \
       --description "A description for my Lambda task"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-maintenance-window-task ^
       --window-id mw-0c50858d01EXAMPLE ^
       --window-task-id 4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE ^
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" ^
       --task-arn "arn:aws:lambda:region:111122223333:function:SSMTestLambda" ^
       --service-role-arn "arn:aws:iam::account-id:role/MaintenanceWindowsRole" ^
       --task-invocation-parameters '{"Lambda":{"Payload":"{\"InstanceId\":\"{{RESOURCE_ID}}\",\"targetType\":\"{{TARGET_TYPE}}\"}"}}' ^
       --priority 1 --max-concurrency 10 --max-errors 5 ^
       --name "New-Lambda-Task-Name" ^
       --description "A description for my Lambda task"
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowId": "mw-0c50858d01EXAMPLE",
       "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
       "Targets": [
           {
               "Key": "WindowTargetIds",
               "Values": [
                   "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"
               ]
           }
       ],
       "TaskArn": "arn:aws:lambda:us-east-2:111122223333:function:SSMTestLambda",
       "ServiceRoleArn": "arn:aws:iam::111122223333:role/MaintenanceWindowsRole",
       "TaskParameters": {},
       "TaskInvocationParameters": {
           "Lambda": {
               "Payload": "e30="
           }
       },
       "Priority": 1,
       "MaxConcurrency": "10",
       "MaxErrors": "5",
       "Name": "New-Lambda-Task-Name",
       "Description": "A description for my Lambda task"
   }
   ```

1. If you're updating a Step Functions task, adapt and run the following command to update its task invocation parameters.

------
#### [ Linux & macOS ]

   ```
   aws ssm update-maintenance-window-task \
       --window-id "mw-0c50858d01EXAMPLE" \
       --window-task-id "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE" \
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
       --task-arn "arn:aws:states:region:account-id:execution:SSMStepFunctionTest" \
       --service-role-arn "arn:aws:iam::account-id:role/MaintenanceWindowsRole" \
       --task-invocation-parameters '{"StepFunctions":{"Input":"{\"InstanceId\":\"{{RESOURCE_ID}}\"}"}}' \
       --priority 0 --max-concurrency 10 --max-errors 5 \
       --name "My-Step-Functions-Task" \
       --description "A description for my Step Functions task"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-maintenance-window-task ^
       --window-id "mw-0c50858d01EXAMPLE" ^
       --window-task-id "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE" ^
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" ^
       --task-arn "arn:aws:states:region:account-id:execution:SSMStepFunctionTest" ^
       --service-role-arn "arn:aws:iam::account-id:role/MaintenanceWindowsRole" ^
       --task-invocation-parameters '{"StepFunctions":{"Input":"{\"InstanceId\":\"{{RESOURCE_ID}}\"}"}}' ^
       --priority 0 --max-concurrency 10 --max-errors 5 ^
       --name "My-Step-Functions-Task" ^
       --description "A description for my Step Functions task"
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowId": "mw-0c50858d01EXAMPLE",
       "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
       "Targets": [
           {
               "Key": "WindowTargetIds",
               "Values": [
                   "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"
               ]
           }
       ],
       "TaskArn": "arn:aws:states:us-east-2:111122223333:execution:SSMStepFunctionTest",
       "ServiceRoleArn": "arn:aws:iam::111122223333:role/MaintenanceWindowsRole",
       "TaskParameters": {},
       "TaskInvocationParameters": {
           "StepFunctions": {
               "Input": "{\"instanceId\":\"{{RESOURCE_ID}}\"}"
           }
       },
       "Priority": 0,
       "MaxConcurrency": "10",
       "MaxErrors": "5",
       "Name": "My-Step-Functions-Task",
       "Description": "A description for my Step Functions task"
   }
   ```

1. Run the following command to unregister a target from a maintenance window. This example uses the `safe` parameter, which first checks whether the target is referenced by any tasks and blocks the operation if it is.

------
#### [ Linux & macOS ]

   ```
   aws ssm deregister-target-from-maintenance-window \
       --window-id "mw-0c50858d01EXAMPLE" \
       --window-target-id "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
       --safe
   ```

------
#### [ Windows ]

   ```
   aws ssm deregister-target-from-maintenance-window ^
       --window-id "mw-0c50858d01EXAMPLE" ^
       --window-target-id "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" ^
       --safe
   ```

------

   Because the target is still referenced by a task, the system returns an error similar to the following.

   ```
   An error occurred (TargetInUseException) when calling the DeregisterTargetFromMaintenanceWindow operation: 
   This Target cannot be deregistered because it is still referenced in Task: 4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE
   ```

1. Run the following command to unregister a target from a maintenance window even if the target is referenced by a task. You can force the unregister operation by using the `no-safe` parameter.

------
#### [ Linux & macOS ]

   ```
   aws ssm deregister-target-from-maintenance-window \
       --window-id "mw-0c50858d01EXAMPLE" \
       --window-target-id "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
       --no-safe
   ```

------
#### [ Windows ]

   ```
   aws ssm deregister-target-from-maintenance-window ^
       --window-id "mw-0c50858d01EXAMPLE" ^
       --window-target-id "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" ^
       --no-safe
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowId": "mw-0c50858d01EXAMPLE",
       "WindowTargetId": "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"
   }
   ```

1. Run the following command to update a Run Command task. This example uses a Systems Manager Parameter Store parameter called `UpdateLevel`, which is formatted as follows: '`{{ssm:UpdateLevel}}`'

------
#### [ Linux & macOS ]

   ```
   aws ssm update-maintenance-window-task \
       --window-id "mw-0c50858d01EXAMPLE" \
       --window-task-id "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE" \
       --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE"  \
       --task-invocation-parameters "RunCommand={Comment=A comment for my task update,Parameters={UpdateLevel='{{ssm:UpdateLevel}}'}}"
   ```

------
#### [ Windows ]

   ```
   aws ssm update-maintenance-window-task ^
       --window-id "mw-0c50858d01EXAMPLE" ^
       --window-task-id "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE" ^
       --targets "Key=InstanceIds,Values=i-02573cafcfEXAMPLE"  ^
       --task-invocation-parameters "RunCommand={Comment=A comment for my task update,Parameters={UpdateLevel='{{ssm:UpdateLevel}}'}}"
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowId": "mw-0c50858d01EXAMPLE",
       "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
       "Targets": [
           {
               "Key": "InstanceIds",
               "Values": [
                   "i-02573cafcfEXAMPLE"
               ]
           }
       ],
       "TaskArn": "AWS-RunShellScript",
       "ServiceRoleArn": "arn:aws:iam::111122223333:role/MyMaintenanceWindowServiceRole",
       "TaskParameters": {},
       "TaskInvocationParameters": {
           "RunCommand": {
               "Comment": "A comment for my task update",
               "Parameters": {
                   "UpdateLevel": [
                       "{{ssm:UpdateLevel}}"
                   ]
               }
           }
       },
       "Priority": 10,
       "MaxConcurrency": "1",
       "MaxErrors": "1"
   }
   ```

1. Run the following command to update an Automation task to use the `WINDOW_ID` and `WINDOW_TASK_ID` pseudo parameters in the `--task-invocation-parameters` option:

------
#### [ Linux & macOS ]

   ```
   aws ssm update-maintenance-window-task \
       --window-id "mw-0c50858d01EXAMPLE" \
       --window-task-id "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE" \
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" \
       --task-arn "AutoTestDoc" \
       --service-role-arn "arn:aws:iam::account-id:role/MyMaintenanceWindowServiceRole" \
       --task-invocation-parameters "Automation={Parameters={InstanceId='{{RESOURCE_ID}}',initiator='{{WINDOW_ID}}.Task-{{WINDOW_TASK_ID}}'}}" \
       --priority 3 --max-concurrency 10 --max-errors 5
   ```

------
#### [ Windows ]

   ```
   aws ssm update-maintenance-window-task ^
       --window-id "mw-0c50858d01EXAMPLE" ^
       --window-task-id "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE" ^
       --targets "Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE" ^
       --task-arn "AutoTestDoc" ^
       --service-role-arn "arn:aws:iam::account-id:role/MyMaintenanceWindowServiceRole" ^
       --task-invocation-parameters "Automation={Parameters={InstanceId='{{RESOURCE_ID}}',initiator='{{WINDOW_ID}}.Task-{{WINDOW_TASK_ID}}'}}" ^
       --priority 3 --max-concurrency 10 --max-errors 5
   ```

------

   The system returns information similar to the following.

   ```
   {
       "WindowId": "mw-0c50858d01EXAMPLE",
       "WindowTaskId": "4f7ca192-7e9a-40fe-9192-5cb15EXAMPLE",
       "Targets": [
           {
               "Key": "WindowTargetIds",
               "Values": [
                   "e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"
               ]
           }
       ],
       "TaskArn": "AutoTestDoc",
       "ServiceRoleArn": "arn:aws:iam::111122223333:role/MyMaintenanceWindowServiceRole",
       "TaskParameters": {},
       "TaskInvocationParameters": {
           "Automation": {
               "Parameters": {
                   "InstanceId": [
                       "{{RESOURCE_ID}}"
                   ],
                   "initiator": [
                       "{{WINDOW_ID}}.Task-{{WINDOW_TASK_ID}}"
                   ]
               }
           }
       },
       "Priority": 3,
       "MaxConcurrency": "10",
       "MaxErrors": "5",
       "Name": "My-Automation-Task",
       "Description": "A description for my Automation task"
   }
   ```
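
The nested quoting in `--task-invocation-parameters` is easy to get wrong by hand. One option is to generate the JSON value in a short script and paste (or pipe) it into the CLI command. This is a hypothetical sketch, not part of Systems Manager; the parameter names mirror the Lambda examples above.

```
import json

invocation = {
    "Lambda": {
        # The Lambda payload is itself a JSON string, so it's encoded twice.
        "Payload": json.dumps({"InstanceId": "{{RESOURCE_ID}}", "targetType": "{{TARGET_TYPE}}"})
    }
}

# The printed value can be passed directly to --task-invocation-parameters.
cli_value = json.dumps(invocation)
print(cli_value)
```

Because `json.dumps` handles the escaping, the inner quotation marks come out correctly backslash-escaped for the outer JSON document.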

# Tutorial: Delete a maintenance window using the AWS CLI

To delete a maintenance window you created in these tutorials, run the following command.

```
aws ssm delete-maintenance-window --window-id "mw-0c50858d01EXAMPLE"
```

The system returns information similar to the following.

```
{
   "WindowId":"mw-0c50858d01EXAMPLE"
}
```

# Tutorial: Create a maintenance window for patching using the console

**Important**  
You can continue to use this legacy topic to create a maintenance window for patching. However, we recommend that you use a patch policy instead. For more information, see [Patch policy configurations in Quick Setup](patch-manager-policies.md) and [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md). 

To minimize the impact on your server availability, we recommend that you configure a maintenance window to run patching during times that won't interrupt your business operations.

You must configure roles and permissions for Maintenance Windows, a tool in AWS Systems Manager, before beginning this procedure. For more information, see [Setting up Maintenance Windows](setting-up-maintenance-windows.md). 

**To create a maintenance window for patching**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Maintenance Windows**.

1. Choose **Create maintenance window**.

1. For **Name**, enter a name that designates this as a maintenance window for patching critical and important updates.

1. (Optional) For **Description**, enter a description. 

1. Choose **Allow unregistered targets** if you want to allow a maintenance window task to run on managed nodes, even if you haven't registered those nodes as targets.

   If you choose this option, then you can choose the unregistered nodes (by node ID) when you register a task with the maintenance window.

   If you don't choose this option, then you must choose previously registered targets when you register a task with the maintenance window. 

1. In the top of the **Schedule** section, specify a schedule for the maintenance window by using one of the three scheduling options.

   For information about building cron/rate expressions, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

1. For **Duration**, enter the number of hours the maintenance window will run. This value, combined with the start time, determines the end time for the maintenance window. No maintenance window tasks are permitted to start after the resulting end time minus the number of hours you specify for **Stop initiating tasks** in the next step. 

   For example, if the maintenance window starts at 3 PM, the duration is three hours, and the **Stop initiating tasks** value is one hour, no maintenance window tasks can start after 5 PM. 

1. For **Stop initiating tasks**, enter the number of hours before the end of the maintenance window that the system should stop scheduling new tasks to run. 

1. (Optional) For **Window start date**, specify a date and time, in ISO-8601 Extended format, for when you want the maintenance window to become active. This allows you to delay activation of the maintenance window until the specified future date.

1. (Optional) For **Window end date**, specify a date and time, in ISO-8601 Extended format, for when you want the maintenance window to become inactive. This allows you to set a date and time in the future after which the maintenance window no longer runs.

1. (Optional) For **Schedule timezone**, specify the time zone to base scheduled maintenance window executions on, in Internet Assigned Numbers Authority (IANA) format. For example: "America/Los_Angeles", "Etc/UTC", or "Asia/Seoul".

   For more information about valid formats, see the [Time Zone Database](https://www.iana.org/time-zones) on the IANA website.

1. (Optional) In the **Manage tags** area, apply one or more tag key name/value pairs to the maintenance window.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag this maintenance window to identify the type of tasks it runs. In this case, you could specify the following key name/value pair:
   + `Key=TaskType,Value=Patching`

1. Choose **Create maintenance window**.

1. In the maintenance windows list, choose the maintenance window you just created, and then choose **Actions**, **Register targets**.

1. (Optional) In the **Maintenance window target details** section, provide a name, a description, and owner information (your name or alias) for this target.

1. For **Target selection**, choose **Specify instance tags**.

1. For **Specify instance tags**, enter a tag key and a tag value to identify the nodes to register with the maintenance window, and then choose **Add**.

1. Choose **Register target**. The system creates a maintenance window target.

1. In the details page of the maintenance window you created, choose **Actions**, **Register Run command task**.

1. (Optional) For **Maintenance window task details**, provide a name and description for this task.

1. For **Command document**, choose `AWS-RunPatchBaseline`.

1. For **Task priority**, choose a priority. Zero (`0`) is the highest priority.

1. For **Targets**, under **Target by**, choose the maintenance window target you created earlier in this procedure.

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) For **IAM service role**, choose a role to provide permissions for Systems Manager to assume when running a maintenance window task.

   If you don't specify a service role ARN, Systems Manager uses a service-linked role in your account. If no appropriate service-linked role for Systems Manager exists in your account, it's created when the task is registered successfully.
**Note**  
For an improved security posture, we strongly recommend creating a custom policy and custom service role for running your maintenance window tasks. The policy can be crafted to provide only the permissions needed for your particular maintenance window tasks. For more information, see [Setting up Maintenance Windows](setting-up-maintenance-windows.md).

1. (Optional) For **Output options**, to save the command output to a file, select the **Enable writing output to S3** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile assigned to the managed node, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, verify that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

   To stream the output to an Amazon CloudWatch Logs log group, select the **CloudWatch output** box. Enter the log group name in the box.

1. In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. For **Parameters**:
   + For **Operation**, choose **Scan** to scan for missing patches, or choose **Install** to scan for and install missing patches.
   + You don't need to enter anything in the **Snapshot Id** field. The system automatically generates and provides this parameter.
   + You don't need to enter anything in the **Install Override List** field unless you want Patch Manager to use a different patch set than is specified for the patch baseline. For information, see [Parameter name: `InstallOverrideList`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-installoverridelist).
   + For **RebootOption**, specify whether you want nodes to reboot if patches are installed during the `Install` operation, or if Patch Manager detects other patches that were installed since the last node reboot. For information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).
   + (Optional) For **Comment**, enter a tracking note or reminder about this command.
   + For **Timeout (seconds)**, enter the number of seconds the system should wait for the operation to finish before it is considered unsuccessful.

1. Choose **Register Run command task**.

After the maintenance window task is complete, you can view patch compliance details in the Systems Manager console in the [Fleet Manager](fleet-manager.md) tool. 

You can also view compliance information in the [Patch Manager](patch-manager.md) tool, on the **Compliance reporting** tab. 

You can also use the [DescribePatchGroupState](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchGroupState.html) and [DescribeInstancePatchStatesForPatchGroup](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeInstancePatchStatesForPatchGroup.html) APIs to view compliance details. For information about patch compliance data, see [About patch compliance](compliance-about.md#compliance-monitor-patch).
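
The **Duration** and **Stop initiating tasks** arithmetic described in the procedure above can be sketched as follows. This is a hypothetical helper for reasoning about the cutoff; Systems Manager enforces this behavior for you.

```
from datetime import datetime, timedelta

def latest_task_start(window_start: datetime, duration_hours: int, cutoff_hours: int) -> datetime:
    # No task may start after the window's end time minus the cutoff.
    window_end = window_start + timedelta(hours=duration_hours)
    return window_end - timedelta(hours=cutoff_hours)

# A window opening at 3 PM with a 3-hour duration and a 1-hour cutoff:
start = datetime(2021, 1, 6, 15, 0)
print(latest_task_start(start, 3, 1))  # 2021-01-06 17:00:00 (no tasks start after 5 PM)
```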

# Patching schedules using maintenance windows


After you configure a patch baseline (and optionally a patch group), you can apply patches to your node by using a maintenance window. A maintenance window can reduce the impact on server availability by letting you specify a time to perform the patching process that doesn't interrupt business operations. A maintenance window works like this:

1. Create a maintenance window with a schedule for your patching operations.

1. Choose the targets for the maintenance window by specifying the `Patch Group` or `PatchGroup` tag for the tag name, and any value for which you have defined Amazon Elastic Compute Cloud (Amazon EC2) tags, for example, "web servers" or "US-EAST-PROD". (You must use `PatchGroup`, without a space, if you have [allowed tags in EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#allow-access-to-tags-in-IMDS).)

1. Create a new maintenance window task, and specify the `AWS-RunPatchBaseline` document. 

When you configure the task, you can choose to either scan nodes or scan and install patches on the nodes. If you choose to scan nodes, Patch Manager, a tool in AWS Systems Manager, scans each node and generates a list of missing patches for you to review.

If you choose to scan and install patches, Patch Manager scans each node and compares the list of installed patches against the list of approved patches in the baseline. Patch Manager identifies missing patches, and then downloads and installs all missing and approved patches.
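
The compare step can be pictured as a simple set difference. This is a toy illustration only; the real patch IDs and approval rules come from the patch baseline, not from hardcoded sets like these.

```
# Hypothetical patch identifiers for illustration.
approved = {"KB5001", "KB5002", "KB5003"}   # approved patches in the baseline
installed = {"KB5001"}                      # patches found on the node during the scan

# Patches that are approved but not yet installed get downloaded and installed.
missing = sorted(approved - installed)
print(missing)  # ['KB5002', 'KB5003']
```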

If you want to perform a one-time scan or install to fix an issue, you can use Run Command to call the `AWS-RunPatchBaseline` document directly.

**Important**  
After installing patches, Systems Manager reboots each node. The reboot is required to make sure that patches are installed correctly and that the installation didn't leave the node in a potentially bad state. (Exception: If the `RebootOption` parameter is set to `NoReboot` in the `AWS-RunPatchBaseline` document, the managed node isn't rebooted after Patch Manager runs. For more information, see [Parameter name: `RebootOption`](patch-manager-aws-runpatchbaseline.md#patch-manager-aws-runpatchbaseline-parameters-norebootoption).) 

# Using pseudo parameters when registering maintenance window tasks


When you register a task in Maintenance Windows, a tool in AWS Systems Manager, you specify the parameters that are unique to each of the four task types. (In CLI commands, these are provided using the `--task-invocation-parameters` option.)

 You can also reference certain values using *pseudo parameter* syntax, such as `{{RESOURCE_ID}}`, `{{TARGET_TYPE}}`, and `{{WINDOW_TARGET_ID}}`. When the maintenance window task runs, it passes the correct values instead of the pseudo parameter placeholders. The full list of pseudo parameters you can use is provided later in this topic in [Supported pseudo parameters](#pseudo-parameters).
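
Conceptually, the substitution works like a template fill-in: each `{{NAME}}` placeholder is swapped for the value the window resolves at run time. The following is a rough sketch of that idea only; the real replacement happens inside the service.

```
import re

def fill_pseudo_params(template: str, values: dict) -> str:
    # Replace each {{NAME}} placeholder with its resolved value.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)

payload = '{"instanceId": "{{RESOURCE_ID}}", "targetType": "{{TARGET_TYPE}}"}'
print(fill_pseudo_params(payload, {
    "RESOURCE_ID": "i-02573cafcfEXAMPLE",
    "TARGET_TYPE": "INSTANCE",
}))
```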

**Important**  
For the target type `RESOURCE_GROUP`, depending on the ID format needed for the task, you can choose between using `{{TARGET_ID}}` and `{{RESOURCE_ID}}` to reference the resource when your task runs. `{{TARGET_ID}}` returns the full ARN of the resource. `{{RESOURCE_ID}}` returns only a shorter name or ID of the resource, as shown in these examples.  
`{{TARGET_ID}}` format: `arn:aws:ec2:us-east-1:123456789012:instance/i-02573cafcfEXAMPLE`
`{{RESOURCE_ID}}` format: `i-02573cafcfEXAMPLE`
For target type `INSTANCE`, both the `{{TARGET_ID}}` and `{{RESOURCE_ID}}` parameters yield the instance ID only. For more information, see [Supported pseudo parameters](#pseudo-parameters).  
`{{TARGET_ID}}` and `{{RESOURCE_ID}}` can be used to pass IDs of AWS resources only to Automation, Lambda, and Step Functions tasks. These two pseudo parameters can't be used with Run Command tasks.

## Pseudo parameter examples


Suppose that your payload for an AWS Lambda task needs to reference an instance by its ID.

Whether you’re using an `INSTANCE` or a `RESOURCE_GROUP` maintenance window target, this can be achieved by using the `{{RESOURCE_ID}}` pseudo parameter. For example:

```
"TaskArn": "arn:aws:lambda:us-east-2:111122223333:function:SSMTestFunction",
    "TaskType": "LAMBDA",
    "TaskInvocationParameters": {
        "Lambda": {
            "ClientContext": "ew0KICAi--truncated--0KIEXAMPLE",
            "Payload": "{ \"instanceId\": \"{{RESOURCE_ID}}\" }",
            "Qualifier": "$LATEST"
        }
    }
```

If your Lambda task is intended to run against another supported target type in addition to Amazon Elastic Compute Cloud (Amazon EC2) instances, such as an Amazon DynamoDB table, the same syntax can be used, and `{{RESOURCE_ID}}` yields the name of the table only. However, if you require the full ARN of the table, use `{{TARGET_ID}}`, as shown in the following example.

```
"TaskArn": "arn:aws:lambda:us-east-2:111122223333:function:SSMTestFunction",
    "TaskType": "LAMBDA",
    "TaskInvocationParameters": {
        "Lambda": {
            "ClientContext": "ew0KICAi--truncated--0KIEXAMPLE",
            "Payload": "{ \"tableArn\": \"{{TARGET_ID}}\" }",
            "Qualifier": "$LATEST"
        }
    }
```

The same syntax works for targeting instances or other resource types. When multiple resource types have been added to a resource group, the task runs against each of the appropriate resources. 

**Important**  
Not all resource types that might be included in a resource group yield a value for the `{{RESOURCE_ID}}` parameter. For a list of supported resource types, see [Supported pseudo parameters](#pseudo-parameters).

As another example, to run an Automation task that stops your EC2 instances, you specify the `AWS-StopEC2Instance` Systems Manager document (SSM document) as the `TaskArn` value and use the `{{RESOURCE_ID}}` pseudo parameter:

```
"TaskArn": "AWS-StopEC2Instance",
    "TaskType": "AUTOMATION",
    "TaskInvocationParameters": {
        "Automation": {
            "DocumentVersion": "1",
            "Parameters": {
                "instanceId": [
                    "{{RESOURCE_ID}}"
                ]
            }
        }
    }
```

To run an Automation task that copies a snapshot of an Amazon Elastic Block Store (Amazon EBS) volume, you specify the `AWS-CopySnapshot` SSM document as the `TaskArn` value and use the `{{RESOURCE_ID}}` pseudo parameter.

```
"TaskArn": "AWS-CopySnapshot",
    "TaskType": "AUTOMATION",
    "TaskInvocationParameters": {
        "Automation": {
            "DocumentVersion": "1",
            "Parameters": {
                "SourceRegion": "us-east-2",
                "targetType":"RESOURCE_GROUP",
                "SnapshotId": [
                    "{{RESOURCE_ID}}"
                ]
            }
        }
    }
```

## Supported pseudo parameters


The following list describes the pseudo parameters that you can specify using the `{{PSEUDO_PARAMETER}}` syntax in the `--task-invocation-parameters` option.
+ **`WINDOW_ID`**: The ID of the target maintenance window.
+ **`WINDOW_TASK_ID`**: The ID of the window task that is running.
+ **`WINDOW_TARGET_ID`**: The ID of the window target that includes the target (target ID).
+ **`WINDOW_EXECUTION_ID`**: The ID of the current window execution.
+ **`TASK_EXECUTION_ID`**: The ID of the current task execution.
+ **`INVOCATION_ID`**: The ID of the current invocation.
+ **`TARGET_TYPE`**: The type of target. Supported types include `RESOURCE_GROUP` and `INSTANCE`.
+ **`TARGET_ID`**: 

  If the target type you specify is `INSTANCE`, the `TARGET_ID` pseudo parameter is replaced by the ID of the instance. For example, `i-078a280217EXAMPLE`.

  If the target type you specify is `RESOURCE_GROUP`, the value referenced for the task execution is the full ARN of the resource. For example: `arn:aws:ec2:us-east-1:123456789012:instance/i-078a280217EXAMPLE`. The following table provides sample `TARGET_ID` values for particular resource types in a resource group. 
**Note**  
`TARGET_ID` isn't supported for Run Command tasks.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/maintenance-window-tasks-pseudo-parameters.html)
+ **`RESOURCE_ID`**: The short ID of a resource type contained in a resource group. The following table provides sample `RESOURCE_ID` values for particular resource types in a resource group. 
**Note**  
`RESOURCE_ID` isn't supported for Run Command tasks.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/maintenance-window-tasks-pseudo-parameters.html)
**Note**  
If the AWS resource group you specify includes resource types that don't yield a `RESOURCE_ID` value, and aren't listed in the preceding table, then the `RESOURCE_ID` parameter isn't populated. An execution invocation will still occur for that resource. In these cases, use the `TARGET_ID` pseudo parameter instead, which will be replaced with the full ARN of the resource.

# Maintenance window scheduling and active period options

When you create a maintenance window, you must specify how often the maintenance window runs by using a [Cron or rate expression](reference-cron-and-rate-expressions.md). Optionally, you can specify a date range during which the maintenance window can run on its regular schedule and a time zone on which to base that regular schedule. 

Be aware, however, that the time zone option and the start date and end date options don't influence each other. Any start date and end date times that you specify (with or without an offset for your time zone) determine only the *valid period* during which the maintenance window can run on its schedule. A time zone option determines the international time zone that the maintenance window schedule is based on *during* its valid period.
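
One way to keep the two settings straight: the start and end dates only gate whether a scheduled run happens at all, while the time zone anchors the schedule itself. A minimal sketch of the gating check (the end date here is a hypothetical value; the start date matches the example timestamp below):

```
from datetime import datetime

def run_is_allowed(run_time: datetime, start: datetime, end: datetime) -> bool:
    # Start/end dates define the valid period; they don't shift the schedule.
    return start <= run_time <= end

start = datetime.fromisoformat("2021-04-07T14:29:00-08:00")
end = datetime.fromisoformat("2021-06-07T14:29:00-08:00")  # hypothetical end date
print(run_is_allowed(datetime.fromisoformat("2021-05-05T09:00:00-08:00"), start, end))  # True
```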

**Note**  
You specify start and end dates in ISO-8601 timestamp format. For example: `2021-04-07T14:29:00-08:00`  
You specify time zones in Internet Assigned Numbers Authority (IANA) format. For example: `America/Chicago`, `Europe/Berlin` or `Asia/Tokyo`

**Topics**
+ [Example 1: Specify a maintenance window start date](#schedule-example-start-date)
+ [Example 2: Specify a maintenance window start date and end date](#schedule-example-start-end-date)
+ [Example 3: Create a maintenance window that runs only once](#schedule-example-one-time)
+ [Example 4: Specify the number of schedule offset days for a maintenance window](#schedule-example-schedule-offset)

## Example 1: Specify a maintenance window start date


Say that you use the AWS Command Line Interface (AWS CLI) to create a maintenance window with the following options:
+ `--start-date 2021-01-01T00:00:00-08:00`
+ `--schedule-timezone "America/Los_Angeles"`
+ `--schedule "cron(0 09 ? * WED *)"`

For example:

------
#### [ Linux & macOS ]

```
aws ssm create-maintenance-window \
    --name "My-LAX-Maintenance-Window" \
    --allow-unassociated-targets \
    --duration 3 \
    --cutoff 1 \
    --start-date 2021-01-01T00:00:00-08:00 \
    --schedule-timezone "America/Los_Angeles" \
    --schedule "cron(0 09 ? * WED *)"
```

------
#### [ Windows ]

```
aws ssm create-maintenance-window ^
    --name "My-LAX-Maintenance-Window" ^
    --allow-unassociated-targets ^
    --duration 3 ^
    --cutoff 1 ^
    --start-date 2021-01-01T00:00:00-08:00 ^
    --schedule-timezone "America/Los_Angeles" ^
    --schedule "cron(0 09 ? * WED *)"
```

------

This means that the first run of the maintenance window won't occur until *after* its specified start date and time: 12:00 AM US Pacific Time on Friday, January 1, 2021. (This time zone is eight hours behind UTC.) In this case, the start date and time of the window period don't represent when the maintenance window first runs. Taken together, the `--schedule-timezone` and `--schedule` values mean that the maintenance window runs at 9 AM every Wednesday in the US Pacific Time Zone (represented by `America/Los_Angeles` in IANA format). The first run in the valid period is on Wednesday, January 6, 2021, at 9 AM US Pacific Time.
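
You can check this kind of scheduling arithmetic outside of AWS. The following Python sketch (illustration only, not part of Systems Manager) finds the first Wednesday 9 AM run on or after the start date:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Los_Angeles")
start = datetime(2021, 1, 1, 0, 0, tzinfo=tz)  # --start-date

# Find the first Wednesday at 9 AM on or after the start date.
run = start.replace(hour=9, minute=0, second=0, microsecond=0)
while run.weekday() != 2 or run < start:  # Monday=0, so Wednesday=2
    run += timedelta(days=1)
print(run.strftime("%A %Y-%m-%d %H:%M"))  # Wednesday 2021-01-06 09:00
```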

## Example 2: Specify a maintenance window start date and end date


Suppose that next you create a maintenance window with these options:
+ `--start-date 2019-01-01T03:15:00+09:00`
+ `--end-date 2019-06-30T06:15:00+09:00`
+ `--schedule-timezone "Asia/Tokyo"`
+ `--schedule "rate(7 days)"`

For example:

------
#### [ Linux & macOS ]

```
aws ssm create-maintenance-window \
    --name "My-NRT-Maintenance-Window" \
    --allow-unassociated-targets \
    --duration 3 \
    --cutoff 1 \
    --start-date 2019-01-01T03:15:00+09:00 \
    --end-date 2019-06-30T06:15:00+09:00 \
    --schedule-timezone "Asia/Tokyo" \
    --schedule "rate(7 days)"
```

------
#### [ Windows ]

```
aws ssm create-maintenance-window ^
    --name "My-NRT-Maintenance-Window" ^
    --allow-unassociated-targets ^
    --duration 3 ^
    --cutoff 1 ^
    --start-date 2019-01-01T03:15:00+09:00 ^
    --end-date 2019-06-30T06:15:00+09:00 ^
    --schedule-timezone "Asia/Tokyo" ^
    --schedule "rate(7 days)"
```

------

The valid period for this maintenance window begins at 3:15 AM Japan Standard Time on Tuesday, January 1, 2019, and ends at 6:15 AM Japan Standard Time on Sunday, June 30, 2019. (This time zone is nine hours ahead of UTC.) Taken together, the `--schedule-timezone` and `--schedule` values mean that the maintenance window runs at 3:15 AM every Tuesday in the Japan Standard Time Zone (represented by `Asia/Tokyo` in IANA format). This is because the maintenance window runs every seven days, and it becomes active at 3:15 AM on Tuesday, January 1. The last run is at 3:15 AM Japan Standard Time on Tuesday, June 25, 2019, the last Tuesday before the valid period ends five days later.
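
Enumerating the `rate(7 days)` runs inside the valid period confirms the last run date. A sketch for illustration only:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("Asia/Tokyo")
start = datetime(2019, 1, 1, 3, 15, tzinfo=tz)   # valid period begins
end = datetime(2019, 6, 30, 6, 15, tzinfo=tz)    # valid period ends

# rate(7 days): list every run that falls inside the valid period.
runs = []
t = start
while t <= end:
    runs.append(t)
    t += timedelta(days=7)
print(runs[-1].strftime("%A %Y-%m-%d"))  # Tuesday 2019-06-25
```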

## Example 3: Create a maintenance window that runs only once


Now you create a maintenance window with this option:
+ `--schedule "at(2020-07-07T15:55:00)"`

For example:

------
#### [ Linux & macOS ]

```
aws ssm create-maintenance-window \
    --name "My-One-Time-Maintenance-Window" \
    --schedule "at(2020-07-07T15:55:00)" \
    --duration 5 \
    --cutoff 2 \
    --allow-unassociated-targets
```

------
#### [ Windows ]

```
aws ssm create-maintenance-window ^
    --name "My-One-Time-Maintenance-Window" ^
    --schedule "at(2020-07-07T15:55:00)" ^
    --duration 5 ^
    --cutoff 2 ^
    --allow-unassociated-targets
```

------

This maintenance window runs just once, at 3:55 PM UTC on July 7, 2020. The maintenance window is allowed to run for up to five hours, as needed, but new tasks are prevented from starting two hours before the end of the maintenance window period.
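
The arithmetic behind the duration and cutoff works out as follows (a sketch for illustration only):

```python
from datetime import datetime, timedelta

start = datetime(2020, 7, 7, 15, 55)            # at(2020-07-07T15:55:00), UTC
window_end = start + timedelta(hours=5)         # --duration 5
no_new_tasks = window_end - timedelta(hours=2)  # --cutoff 2
print(window_end.time(), no_new_tasks.time())   # 20:55:00 18:55:00
```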

## Example 4: Specify the number of schedule offset days for a maintenance window


Now you create a maintenance window with this option:

```
--schedule-offset 2
```

For example:

------
#### [ Linux & macOS ]

```
aws ssm create-maintenance-window \
    --name "My-Cron-Offset-Maintenance-Window" \
    --schedule "cron(0 30 23 ? * TUE#3 *)" \
    --duration 4 \
    --cutoff 1 \
    --schedule-offset 2 \
    --allow-unassociated-targets
```

------
#### [ Windows ]

```
aws ssm create-maintenance-window ^
    --name "My-Cron-Offset-Maintenance-Window" ^
    --schedule "cron(0 30 23 ? * TUE#3 *)" ^
    --duration 4 ^
    --cutoff 1 ^
    --schedule-offset 2 ^
    --allow-unassociated-targets
```

------

A schedule offset is the number of days to wait after the date and time specified by a CRON expression before running the maintenance window.

In the preceding example, the CRON expression schedules a maintenance window to run at 11:30 PM on the third Tuesday of every month: 

```
--schedule "cron(0 30 23 ? * TUE#3 *)"
```

However, including `--schedule-offset 2` means that the maintenance window won't run until 11:30 PM two days *after* the third Tuesday of each month. 

Schedule offsets are supported for CRON expressions only. 
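
To see which calendar dates this schedule and offset produce, the following sketch (illustration only; the month chosen is arbitrary) computes the third Tuesday of a month and applies the two-day offset:

```python
from datetime import date, timedelta

def third_tuesday(year: int, month: int) -> date:
    # Walk forward from the 1st, counting Tuesdays (Monday=0, Tuesday=1).
    d = date(year, month, 1)
    count = 0
    while True:
        if d.weekday() == 1:
            count += 1
            if count == 3:
                return d
        d += timedelta(days=1)

# cron(0 30 23 ? * TUE#3 *) fires on the third Tuesday; --schedule-offset 2
# shifts the run to two days later at the same time (11:30 PM).
base = third_tuesday(2021, 1)
run = base + timedelta(days=2)
print(base, run)  # 2021-01-19 2021-01-21
```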

**More info**  
+ [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md)
+ [Create a maintenance window using the console](sysman-maintenance-create-mw.md)
+ [Tutorial: Create and configure a maintenance window using the AWS CLI](maintenance-windows-cli-tutorials-create.md)
+ [CreateMaintenanceWindow](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateMaintenanceWindow.html) in the *AWS Systems Manager API Reference*
+ [create-maintenance-window](https://docs.aws.amazon.com/cli/latest/reference/ssm/create-maintenance-window.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*
+ [Time Zone Database](https://www.iana.org/time-zones) on the IANA website

# Registering maintenance window tasks without targets


For each maintenance window you create, you can specify one or more tasks to perform when the maintenance window runs. In most cases, you must specify the resources, or targets, that the task is to run on. In some cases, however, you don't need to specify targets explicitly in the task.

You must specify one or more targets for Systems Manager Run Command-type maintenance window tasks. For the other maintenance window task types (Systems Manager Automation, AWS Lambda, and AWS Step Functions), targets are optional, depending on the nature of the task. 

For Lambda and Step Functions task types, whether a target is required depends on the content of the function or state machine you have created.

**Note**  
When a task has registered targets, Automation, AWS Lambda, and AWS Step Functions tasks resolve the targets from resource groups and tags and send one invocation per resolved resource, which results in multiple task invocations. Suppose, however, that you want only one invocation for a Lambda task that's registered with a resource group containing more than one instance. In this case, if you're working in the AWS Management Console, choose the option **Task target not required** on the **Register Lambda task** or **Edit Lambda task** page. If you're using the AWS CLI, don't specify targets with the `--targets` parameter when running the [register-task-with-maintenance-window](https://docs.aws.amazon.com/cli/latest/reference/ssm/register-task-with-maintenance-window.html) or [update-maintenance-window-task](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-maintenance-window-task.html) command.

In many cases, you don't need to explicitly specify a target for an automation task. For example, say that you're creating an Automation-type task to update an Amazon Machine Image (AMI) for Linux using the `AWS-UpdateLinuxAmi` runbook. When the task runs, the AMI is updated with the latest available Linux distribution packages and Amazon software. New instances created from the AMI already have these updates installed. Because the ID of the AMI to be updated is specified in the input parameters for the runbook, there is no need to specify a target again in the maintenance window task.

Similarly, suppose you're using the AWS Command Line Interface (AWS CLI) to register a maintenance window Automation task that uses the `AWS-RestartEC2Instance` runbook. Because the node to restart is specified in the `--task-invocation-parameters` argument, you don't need to also specify a `--targets` option. 

**Note**  
For maintenance window tasks without a target specified, you can't supply values for `--max-errors` and `--max-concurrency`. Instead, the system inserts a placeholder value of `1`, which might be reported in the response to commands such as [describe-maintenance-window-tasks](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-maintenance-window-tasks.html) and [get-maintenance-window-task](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-maintenance-window-task.html). These values don't affect the running of your task and can be ignored.

The following example demonstrates omitting the `--targets`, `--max-errors`, and `--max-concurrency` options for a targetless maintenance window task.

------
#### [ Linux & macOS ]

```
aws ssm register-task-with-maintenance-window \
    --window-id "mw-ab12cd34eEXAMPLE" \
    --service-role-arn "arn:aws:iam::123456789012:role/MaintenanceWindowAndAutomationRole" \
    --task-type "AUTOMATION" \
    --name "RestartInstanceWithoutTarget" \
    --task-arn "AWS-RestartEC2Instance" \
    --task-invocation-parameters "{\"Automation\":{\"Parameters\":{\"InstanceId\":[\"i-02573cafcfEXAMPLE\"]}}}" \
    --priority 10
```

------
#### [ Windows ]

```
aws ssm register-task-with-maintenance-window ^
    --window-id "mw-ab12cd34eEXAMPLE" ^
    --service-role-arn "arn:aws:iam::123456789012:role/MaintenanceWindowAndAutomationRole" ^
    --task-type "AUTOMATION" ^
    --name "RestartInstanceWithoutTarget" ^
    --task-arn "AWS-RestartEC2Instance" ^
    --task-invocation-parameters "{\"Automation\":{\"Parameters\":{\"InstanceId\":[\"i-02573cafcfEXAMPLE\"]}}}" ^
    --priority 10
```

------

**Note**  
For maintenance window tasks registered before December 23, 2020: If you specified targets for the task and one is no longer required, you can update the task to remove the targets by using the Systems Manager console or the [update-maintenance-window-task](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-maintenance-window-task.html) AWS CLI command.

**More info**  
+ [Error messages: "Maintenance window tasks without targets don't support MaxConcurrency values" and "Maintenance window tasks without targets don't support MaxErrors values"](troubleshooting-maintenance-windows.md#maxconcurrency-maxerrors-not-supported)

# Troubleshooting maintenance windows


Use the following information to help you troubleshoot problems with maintenance windows.

**Topics**
+ [Edit task error: On the page for editing a maintenance window task, the IAM role list returns an error message: "We couldn't find the IAM maintenance window role specified for this task. It might have been deleted, or it might not have been created yet."](#maintenance-window-role-troubleshooting)
+ [Not all maintenance window targets are updated](#targets-not-updated)
+ [Task fails with task invocation status: "The provided role does not contain the correct SSM permissions."](#incorrect-ssm-permissions)
+ [Task fails with error message: "Step fails when it is validating and resolving the step inputs"](#step-fails)
+ [Error messages: "Maintenance window tasks without targets don't support MaxConcurrency values" and "Maintenance window tasks without targets don't support MaxErrors values"](#maxconcurrency-maxerrors-not-supported)

## Edit task error: On the page for editing a maintenance window task, the IAM role list returns an error message: "We couldn't find the IAM maintenance window role specified for this task. It might have been deleted, or it might not have been created yet."


**Problem 1**: The AWS Identity and Access Management (IAM) maintenance window role you originally specified was deleted after you created the task.

**Possible fix**: Select a different IAM maintenance window role, if one exists in your account, or create a new one and select it for the task. 

**Problem 2**: If the task was created using the AWS Command Line Interface (AWS CLI), AWS Tools for Windows PowerShell, or an AWS SDK, a non-existent IAM maintenance window role name could have been specified. For example, the IAM maintenance window role could have been deleted before you created the task, or the role name could have been typed incorrectly, such as **myrole** instead of **my-role**.

**Possible fix**: Select the correct name of the IAM maintenance window role you want to use, or create a new one to specify for the task. 

## Not all maintenance window targets are updated


**Problem:** You notice that maintenance window tasks didn't run on all the resources targeted by your maintenance window. For example, in the maintenance window run results, the task for that resource is marked as failed or timed out.

**Solution:** The most common reasons for a maintenance window task not running on a target resource involve connectivity and availability. For example:
+ Systems Manager lost connection to the resource before or during the maintenance window operation.
+ The resource was offline or stopped during the maintenance window operation.

You can wait for the next scheduled maintenance window to run tasks on the resources, or you can manually run the maintenance window tasks on the resources that weren't available or were offline.

## Task fails with task invocation status: "The provided role does not contain the correct SSM permissions."


**Problem**: You have specified a maintenance window service role for a task, but the task fails to run successfully and the task invocation status reports that "The provided role does not contain the correct SSM permissions." 
+ **Solution**: In [Task 1: Create a custom policy for your maintenance window service role using the console](configuring-maintenance-window-permissions-console.md#create-custom-policy-console), we provide a basic policy you can attach to your [custom maintenance window service role](configuring-maintenance-window-permissions-console.md#create-custom-role-console). The policy includes the permissions needed for many task scenarios. However, due to the wide variety of tasks you can run, you might need to provide additional permissions in the policy for your maintenance window role.

  For example, some Automation actions work with AWS CloudFormation stacks. Therefore, you might need to add the additional permissions `cloudformation:CreateStack`, `cloudformation:DescribeStacks`, and `cloudformation:DeleteStack` to the policy for your maintenance window service role. 

  For another example, the Automation runbook `AWS-CopySnapshot` requires permissions to create an Amazon Elastic Block Store (Amazon EBS) snapshot. Therefore, you might need to add the permission `ec2:CreateSnapshot`.

  For information about the role permissions needed by an AWS managed Automation runbook, see the runbook descriptions in the [AWS Systems Manager Automation Runbook Reference](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-runbook-reference.html).

  For information about the role permissions needed by an AWS managed SSM document, review the content of the document in the [Documents](https://console.aws.amazon.com//systems-manager/documents) section of the Systems Manager console.

  For information about the role permissions needed for Step Functions tasks, Lambda tasks, and custom Automation runbooks and SSM documents, verify permission requirements with the author of those resources.

## Task fails with error message: "Step fails when it is validating and resolving the step inputs"


**Problem**: An Automation runbook or Systems Manager Command document you're using in a task requires that you specify inputs such as `InstanceId` or `SnapshotId`, but a value isn't supplied or isn't supplied correctly.
+ **Solution 1**: If your task is targeting a single resource, such as a single node or single snapshot, enter its ID in the input parameters for the task.
+ **Solution 2**: If your task is targeting multiple resources, such as creating images from multiple nodes when you use the runbook `AWS-CreateImage`, you can use one of the pseudo parameters supported for maintenance window tasks in the input parameters to represent node IDs in the command. 

  The following commands register a Systems Manager Automation task with a maintenance window using the AWS CLI. Although the `--targets` value specifies a maintenance window target ID, the Automation runbook's parameters require a node ID. In this case, the command uses the pseudo parameter `{{RESOURCE_ID}}` as the `InstanceId` value.

  **AWS CLI command:**

------
#### [ Linux & macOS ]

  The following example command restarts Amazon Elastic Compute Cloud (Amazon EC2) instances that belong to the maintenance window target group with the ID e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE.

  ```
  aws ssm register-task-with-maintenance-window \
      --window-id "mw-0c50858d01EXAMPLE" \
      --targets Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE \
      --task-arn "AWS-RestartEC2Instance" \
      --service-role-arn arn:aws:iam::123456789012:role/MyMaintenanceWindowServiceRole \
      --task-type AUTOMATION \
      --task-invocation-parameters "Automation={DocumentVersion=5,Parameters={InstanceId='{{RESOURCE_ID}}'}}" \
      --priority 0 --max-concurrency 10 --max-errors 5 --name "My-Restart-EC2-Instances-Automation-Task" \
      --description "Automation task to restart EC2 instances"
  ```

------
#### [ Windows ]

  ```
  aws ssm register-task-with-maintenance-window ^
      --window-id "mw-0c50858d01EXAMPLE" ^
      --targets Key=WindowTargetIds,Values=e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE ^
      --task-arn "AWS-RestartEC2Instance" ^
      --service-role-arn arn:aws:iam::123456789012:role/MyMaintenanceWindowServiceRole ^
      --task-type AUTOMATION ^
      --task-invocation-parameters "Automation={DocumentVersion=5,Parameters={InstanceId='{{RESOURCE_ID}}'}}" ^
      --priority 0 --max-concurrency 10 --max-errors 5 --name "My-Restart-EC2-Instances-Automation-Task" ^
      --description "Automation task to restart EC2 instances"
  ```

------

  For more information about working with pseudo parameters for maintenance window tasks, see [Using pseudo parameters when registering maintenance window tasks](maintenance-window-tasks-pseudo-parameters.md) and [Task registration examples](mw-cli-register-tasks-examples.md#task-examples).
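
Conceptually, the substitution works like simple string templating: the window target resolves to concrete resource IDs, and each ID replaces the pseudo parameter in its own task invocation. The following sketch is illustration only, not the Systems Manager implementation (the second instance ID is hypothetical):

```python
# Conceptual sketch only -- not the Systems Manager implementation.
# The window target resolves to concrete resource IDs; each ID is
# substituted for {{RESOURCE_ID}} in a separate task invocation.
template = "{{RESOURCE_ID}}"
resolved = ["i-02573cafcfEXAMPLE", "i-0b1c2d3e4fEXAMPLE"]  # hypothetical IDs
invocations = [template.replace("{{RESOURCE_ID}}", rid) for rid in resolved]
print(invocations)  # ['i-02573cafcfEXAMPLE', 'i-0b1c2d3e4fEXAMPLE']
```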

## Error messages: "Maintenance window tasks without targets don't support MaxConcurrency values" and "Maintenance window tasks without targets don't support MaxErrors values"


**Problem:** When you register a Run Command-type task, you must specify at least one target for the task to run on. For other task types (Automation, AWS Lambda, and AWS Step Functions), depending on the nature of the task, targets are optional. The options `MaxConcurrency` (the number of resources to run a task on at the same time) and `MaxErrors` (the number of failures to run the task on target resources before the task fails) aren't required or supported for maintenance window tasks that don't specify targets. The system generates these error messages if values are specified for either of these options when no task target is specified.

**Solution**: If you receive either of these errors, remove the values for concurrency and error threshold before continuing to register or update the maintenance window task.

For more information about running tasks that don't specify targets, see [Registering maintenance window tasks without targets](maintenance-windows-targetless-tasks.md) in the *AWS Systems Manager User Guide*.

# AWS Systems Manager Quick Setup

Use Quick Setup, a tool in AWS Systems Manager, to quickly configure frequently used Amazon Web Services services and features with recommended best practices. Quick Setup simplifies setting up services, including Systems Manager, by automating common or recommended tasks. These tasks include, for example, creating required AWS Identity and Access Management (IAM) instance profile roles and setting up operational best practices, such as periodic patch scans and inventory collection. There is no charge to use Quick Setup itself. However, you can incur costs based on the services that Quick Setup configures and how you use them. To get started with Quick Setup, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/quick-setup). In the navigation pane, choose **Quick Setup**.

**Note**  
If you were directed to Quick Setup to help you configure your instances to be managed by Systems Manager, complete the procedure in [Set up Amazon EC2 host management using Quick Setup](quick-setup-host-management.md).

## What are the benefits of Quick Setup?


Benefits of Quick Setup include the following:
+ **Simplify service and feature configuration**

  Quick Setup walks you through configuring operational best practices and automatically deploys those configurations. The Quick Setup dashboard displays a real-time view of your configuration deployment status. 
+ **Deploy configurations automatically across multiple accounts**

  You can use Quick Setup in an individual AWS account or across multiple AWS accounts and AWS Regions by integrating with AWS Organizations. Using Quick Setup across multiple accounts helps to ensure that your organization maintains consistent configurations.
+ **Eliminate configuration drift**

  Configuration drift occurs whenever a user makes any change to a service or feature that conflicts with the selections made through Quick Setup. Quick Setup periodically checks for configuration drift and attempts to remediate it.

## Who should use Quick Setup?


Quick Setup is most beneficial for customers who already have some experience with the services and features they're setting up and who want to simplify the setup process. If you're unfamiliar with an AWS service you're configuring with Quick Setup, we recommend that you learn more about the service and review the content in the relevant user guide before you create a configuration with Quick Setup.

## Availability of Quick Setup in AWS Regions


In the following AWS Regions, you can use all Quick Setup configuration types for an entire organization, as configured in AWS Organizations, or for only the organizational accounts and Regions you choose. You can also use Quick Setup with just a single account in these Regions.
+ US East (Ohio)
+ US East (N. Virginia)
+ US West (N. California)
+ US West (Oregon)
+ Asia Pacific (Mumbai)
+ Asia Pacific (Seoul)
+ Asia Pacific (Singapore)
+ Asia Pacific (Sydney)
+ Asia Pacific (Tokyo)
+ Canada (Central)
+ Europe (Frankfurt)
+ Europe (Stockholm)
+ Europe (Ireland)
+ Europe (London)
+ Europe (Paris)
+ South America (São Paulo)

In the following Regions, only the [Host Management](quick-setup-host-management.md) configuration type is available for individual accounts:
+ Europe (Milan)
+ Asia Pacific (Hong Kong)
+ Middle East (Bahrain)
+ China (Beijing)
+ China (Ningxia)
+ AWS GovCloud (US-East)
+ AWS GovCloud (US-West)

For a list of all supported Regions for Systems Manager, see the **Region** column in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

# Getting started with Quick Setup


Use the information in this topic to help you prepare to use Quick Setup.

**Topics**
+ [IAM roles and permissions for Quick Setup onboarding](#quick-setup-getting-started-iam)
+ [Manual onboarding for working with Quick Setup API programmatically](#quick-setup-api-manual-onboarding)

## IAM roles and permissions for Quick Setup onboarding


Quick Setup offers a new console experience and a new API. You can interact with this API using the console, the AWS CLI, AWS CloudFormation, and the AWS SDKs. If you opt in to the new experience, your existing configurations are recreated using the new API. Depending on the number of existing configurations in your account, this process can take several minutes.

To use the new Quick Setup console, you must have permissions for the following actions:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm-quicksetup:*",
                "cloudformation:DescribeStackSetOperation",
                "cloudformation:ListStacks",
                "cloudformation:DescribeStacks",
                "cloudformation:DescribeStackResources",
                "cloudformation:ListStackSetOperations",
                "cloudformation:ListStackInstances",
                "cloudformation:DescribeStackSet",
                "cloudformation:ListStackSets",
                "cloudformation:DescribeStackInstance",
                "cloudformation:DescribeOrganizationsAccess",
                "cloudformation:ActivateOrganizationsAccess",
                "cloudformation:GetTemplate",
                "cloudformation:ListStackSetOperationResults",
                "cloudformation:DescribeStackEvents",
                "cloudformation:UntagResource",
                "ec2:DescribeInstances",
                "ssm:DescribeAutomationExecutions",
                "ssm:GetAutomationExecution",
                "ssm:ListAssociations",
                "ssm:DescribeAssociation",
                "ssm:GetDocument",
                "ssm:ListDocuments",
                "ssm:DescribeDocument",
                "ssm:ListResourceDataSync",
                "ssm:DescribePatchBaselines",
                "ssm:GetPatchBaseline",
                "ssm:DescribeMaintenanceWindows",
                "ssm:DescribeMaintenanceWindowTasks",
                "ssm:GetOpsSummary",
                "organizations:DeregisterDelegatedAdministrator",
                "organizations:DescribeAccount",
                "organizations:DescribeOrganization",
                "organizations:ListDelegatedAdministrators",
                "organizations:ListRoots",
                "organizations:ListParents",
                "organizations:ListOrganizationalUnitsForParent",
                "organizations:DescribeOrganizationalUnit",
                "organizations:ListAWSServiceAccessForOrganization",
                "s3:GetBucketLocation",
                "s3:ListAllMyBuckets",
                "s3:ListBucket",
                "resource-groups:ListGroups",
                "iam:ListRoles",
                "iam:ListRolePolicies",
                "iam:GetRole",
                "iam:CreatePolicy",
                "organizations:RegisterDelegatedAdministrator",
                "organizations:EnableAWSServiceAccess",
                "cloudformation:TagResource"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudformation:RollbackStack",
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack",
                "cloudformation:DeleteStack"
            ],
            "Resource": [
                "arn:aws:cloudformation:*:*:stack/StackSet-AWS-QuickSetup-*",
                "arn:aws:cloudformation:*:*:stack/AWS-QuickSetup-*",
                "arn:aws:cloudformation:*:*:type/resource/*",
                "arn:aws:cloudformation:*:*:stack/StackSet-SSMQuickSetup"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateStackSet",
                "cloudformation:UpdateStackSet",
                "cloudformation:DeleteStackSet",
                "cloudformation:DeleteStackInstances",
                "cloudformation:CreateStackInstances",
                "cloudformation:StopStackSetOperation"
            ],
            "Resource": [
                "arn:aws:cloudformation:*:*:stackset/AWS-QuickSetup-*",
                "arn:aws:cloudformation:*:*:stackset/SSMQuickSetup",
                "arn:aws:cloudformation:*:*:type/resource/*",
                "arn:aws:cloudformation:*:*:stackset-target/AWS-QuickSetup-*:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateRole",
                "iam:DeleteRole",
                "iam:AttachRolePolicy",
                "iam:DetachRolePolicy",
                "iam:GetRolePolicy",
                "iam:PutRolePolicy"
            ],
            "Resource": [
                "arn:aws:iam::*:role/AWS-QuickSetup-*",
                "arn:aws:iam::*:role/service-role/AWS-QuickSetup-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/AWS-QuickSetup-*",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": [
                        "ssm-quicksetup.amazonaws.com",
                        "cloudformation.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:DeleteAssociation",
                "ssm:CreateAssociation",
                "ssm:StartAssociationsOnce"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ssm:StartAutomationExecution",
            "Resource": [
                "arn:aws:ssm:*:*:document/AWS-EnableExplorer",
                "arn:aws:ssm:*:*:automation-execution/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetOpsSummary",
                "ssm:CreateResourceDataSync",
                "ssm:UpdateResourceDataSync"
            ],
            "Resource": "arn:aws:ssm:*:*:resource-data-sync/AWS-QuickSetup-*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole"
            ],
            "Condition": {
                "StringEquals": {
                    "iam:AWSServiceName": [
                        "accountdiscovery.ssm.amazonaws.com",
                        "ssm.amazonaws.com",
                        "ssm-quicksetup.amazonaws.com",
                        "stacksets.cloudformation.amazonaws.com"
                    ]
                }
            },
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole"
            ],
            "Resource": "arn:aws:iam::*:role/aws-service-role/stacksets.cloudformation.amazonaws.com/AWSServiceRoleForCloudFormationStackSetsOrgAdmin"
        }
    ]
}
```

------

To restrict users to read-only permissions, allow only the `ssm-quicksetup:List*` and `ssm-quicksetup:Get*` operations for the Quick Setup API.
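
For example, a read-only identity policy might look like the following (a sketch for illustration, not an AWS managed policy):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm-quicksetup:List*",
                "ssm-quicksetup:Get*"
            ],
            "Resource": "*"
        }
    ]
}
```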

During onboarding, Quick Setup creates the following AWS Identity and Access Management (IAM) roles on your behalf:
+ `AWS-QuickSetup-LocalExecutionRole` – Grants CloudFormation permissions to use any template, excluding the patch policy template, and create the necessary resources.
+ `AWS-QuickSetup-LocalAdministrationRole` – Grants permissions to AWS CloudFormation to assume `AWS-QuickSetup-LocalExecutionRole`.
+ `AWS-QuickSetup-PatchPolicy-LocalExecutionRole` – Grants permissions to AWS CloudFormation to use the patch policy template, and create the necessary resources.
+ `AWS-QuickSetup-PatchPolicy-LocalAdministrationRole` – Grants permissions to AWS CloudFormation to assume `AWS-QuickSetup-PatchPolicy-LocalExecutionRole`.

If you're onboarding a management account—the account that you use to create an organization in AWS Organizations—Quick Setup also creates the following roles on your behalf:
+ `AWS-QuickSetup-SSM-RoleForEnablingExplorer` – Grants permissions to the `AWS-EnableExplorer` automation runbook. The `AWS-EnableExplorer` runbook configures Explorer, a tool in Systems Manager, to display information for multiple AWS accounts and AWS Regions.
+ `AWSServiceRoleForAmazonSSM` – A service-linked role that grants access to AWS resources managed and used by Systems Manager.
+ `AWSServiceRoleForAmazonSSM_AccountDiscovery` – A service-linked role that grants permissions to Systems Manager to call AWS services to discover AWS account information when synchronizing data. For more information, see [Using roles to collect AWS account information for OpsCenter and Explorer](using-service-linked-roles-service-action-2.md).

When onboarding a management account, Quick Setup enables trusted access between AWS Organizations and CloudFormation to deploy Quick Setup configurations across your organization. To enable trusted access, your management account must have administrator permissions. After onboarding, you no longer need administrator permissions. For more information, see [Enable trusted access with Organizations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-enable-trusted-access.html).

For information about AWS Organizations account types, see [AWS Organizations terminology and concepts](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html) in the *AWS Organizations User Guide*.

**Note**  
Quick Setup uses CloudFormation StackSets to deploy your configurations across AWS accounts and Regions. If the number of target accounts multiplied by the number of Regions exceeds 10,000, the configuration fails to deploy. We recommend reviewing your use case and creating configurations that use fewer targets to accommodate the growth of your organization. Stack instances aren't deployed to your organization's management account. For more information, see [Considerations when creating a stack set with service-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-getting-started-create.html?icmpid=docs_cfn_console#stacksets-orgs-considerations). 
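The limit described in the note amounts to simple arithmetic that you can check before deploying:

```
# Stack instances = target accounts x target Regions; the product must
# not exceed 10,000 or the deployment fails.
accounts=250
regions=12
instances=$((accounts * regions))
if [ "$instances" -gt 10000 ]; then
    echo "Deployment would fail: $instances stack instances"
else
    echo "OK: $instances stack instances"
fi
# prints: OK: 3000 stack instances
```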

## Manual onboarding for working with Quick Setup API programmatically


If you use the console to work with Quick Setup, the service handles the onboarding steps for you. If you plan to use the SDKs or the AWS CLI to work with the Quick Setup API, you can still complete onboarding once in the console so that you don't have to perform the steps manually. However, some customers need to complete Quick Setup onboarding programmatically, without interacting with the console. If that method fits your use case, complete the following steps. You must perform all of these steps from your AWS Organizations management account.

**To complete manual onboarding for Quick Setup**

1. Activate trusted access for CloudFormation with Organizations. This provides the management account with the permissions needed to create and manage StackSets for your organization. You can use CloudFormation's `ActivateOrganizationsAccess` API action to complete this step. For more information, see [ActivateOrganizationsAccess](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_ActivateOrganizationsAccess.html) in the *AWS CloudFormation API Reference*.
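   Using the AWS CLI, this step can be completed with a single command run from the management account:

```
# One-time activation of trusted access between CloudFormation StackSets
# and AWS Organizations.
aws cloudformation activate-organizations-access
```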

1. Enable the integration of Systems Manager with Organizations. This allows Systems Manager to create a service-linked role in all the accounts in your organization. This also allows Systems Manager to perform operations on your behalf in your organization and its accounts. You can use AWS Organizations's `EnableAWSServiceAccess` API action to complete this step. The service principal for Systems Manager is `ssm.amazonaws.com`. For more information, see [EnableAWSServiceAccess](https://docs.aws.amazon.com/organizations/latest/APIReference/API_EnableAWSServiceAccess.html) in the *AWS Organizations API Reference*.
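   The equivalent AWS CLI command is:

```
# Allow Systems Manager to create service-linked roles and operate
# across the accounts in your organization.
aws organizations enable-aws-service-access \
    --service-principal ssm.amazonaws.com
```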

1. Create the required IAM role for Explorer. This allows Quick Setup to create dashboards for your configurations so you can view deployment and association statuses. Create an IAM role and attach the `AWSSystemsManagerEnableExplorerExecutionPolicy` managed policy. Modify the trust policy for the role to match the following. Replace each *account ID* with your information.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "Service": "ssm.amazonaws.com"
               },
               "Action": "sts:AssumeRole",
               "Condition": {
                   "StringEquals": {
                       "aws:SourceAccount": "111122223333"
                   },
                   "ArnLike": {
                       "aws:SourceArn": "arn:*:ssm:*:111122223333:automation-execution/*"
                   }
               }
           }
       ]
   }
   ```

------
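With the trust policy saved to a local file, the role creation can be sketched with the AWS CLI. The file name is illustrative, and the managed policy ARN path is assumed:

```
# Create the role with the trust policy shown above, then attach the
# AWSSystemsManagerEnableExplorerExecutionPolicy managed policy.
aws iam create-role \
    --role-name AWS-QuickSetup-SSM-RoleForEnablingExplorer \
    --assume-role-policy-document file://explorer-trust-policy.json

aws iam attach-role-policy \
    --role-name AWS-QuickSetup-SSM-RoleForEnablingExplorer \
    --policy-arn arn:aws:iam::aws:policy/AWSSystemsManagerEnableExplorerExecutionPolicy
```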

1. Update the Quick Setup service setting for Explorer. You can use Quick Setup's `UpdateServiceSettings` API action to complete this step. Specify the ARN for the IAM role you created in the previous step for the `ExplorerEnablingRoleArn` request parameter. For more information, see [UpdateServiceSettings](https://docs.aws.amazon.com/quick-setup/latest/APIReference/API_UpdateServiceSettings.html) in the *Quick Setup API Reference*.
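   For example, using the AWS CLI (the parameter name is assumed from the API's `ExplorerEnablingRoleArn` request parameter):

```
# Point Quick Setup at the Explorer-enabling role created in the
# previous step.
aws ssm-quicksetup update-service-settings \
    --explorer-enabling-role-arn "arn:aws:iam::111122223333:role/AWS-QuickSetup-SSM-RoleForEnablingExplorer"
```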

1. Create the required IAM roles for CloudFormation StackSets to use. You must create an *execution* role and an *administration* role.

   1. Create the execution role. The execution role must have the `AWSQuickSetupDeploymentRolePolicy` or `AWSQuickSetupPatchPolicyDeploymentRolePolicy` managed policy attached, or both. If you're only creating patch policy configurations, you can use the `AWSQuickSetupPatchPolicyDeploymentRolePolicy` managed policy. All other configuration types use the `AWSQuickSetupDeploymentRolePolicy` policy. Modify the trust policy for the role to match the following. Replace each *account ID* and *administration role name* with your information.

------
#### [ JSON ]

****  

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::111122223333:role/administration role name"
                  },
                  "Action": "sts:AssumeRole"
              }
          ]
      }
      ```

------

   1. Create the administration role. The permissions policy must match the following. Replace each *account ID* and *execution role name* with your information.

------
#### [ JSON ]

****  

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Action": [
                      "sts:AssumeRole"
                  ],
                  "Resource": "arn:*:iam::111122223333:role/execution role name",
                  "Effect": "Allow"
              }
          ]
      }
      ```

------

      Modify the trust policy for the role to match the following. Replace each *account ID* with your information.

------
#### [ JSON ]

****  

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Service": "cloudformation.amazonaws.com"
                  },
                  "Action": "sts:AssumeRole",
                  "Condition": {
                      "StringEquals": {
                          "aws:SourceAccount": "111122223333"
                      },
                      "ArnLike": {
                          "aws:SourceArn": "arn:aws:cloudformation:*:111122223333:stackset/AWS-QuickSetup-*"
                      }
                  }
              }
          ]
      }
      ```

------
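With the policy documents shown above saved to local files, creating both roles can be sketched with the AWS CLI. The role names, file names, and managed policy ARN path are illustrative:

```
# Execution role: trusted by the administration role, with a Quick Setup
# deployment managed policy attached.
aws iam create-role \
    --role-name QuickSetup-StackSet-ExecutionRole \
    --assume-role-policy-document file://execution-trust-policy.json
aws iam attach-role-policy \
    --role-name QuickSetup-StackSet-ExecutionRole \
    --policy-arn arn:aws:iam::aws:policy/AWSQuickSetupDeploymentRolePolicy

# Administration role: trusted by CloudFormation, allowed to assume the
# execution role.
aws iam create-role \
    --role-name QuickSetup-StackSet-AdministrationRole \
    --assume-role-policy-document file://administration-trust-policy.json
aws iam put-role-policy \
    --role-name QuickSetup-StackSet-AdministrationRole \
    --policy-name AssumeQuickSetupExecutionRole \
    --policy-document file://administration-permissions-policy.json
```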

# Configuration for Assume Role for Systems Manager

## Create an assume role for Systems Manager Quick Setup


Systems Manager Quick Setup requires a role that allows Systems Manager to securely perform actions in your account. This role grants Systems Manager the permissions needed to run commands on your instances and configure EC2 instances, IAM roles, and other Systems Manager resources on your behalf.

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**, and then choose **Create policy**.

1. Create the `SsmOnboardingInlinePolicy` policy using the JSON in the Permissions policies section later in this topic. This policy allows the actions required to attach instance profile permissions to the instances you specify, such as creating instance profiles and associating them with EC2 instances.

1. Once complete, in the navigation pane, choose **Roles**, and then choose **Create role**.

1. For **Trusted entity type**, keep the default, **AWS service**.

1. Under **Use case**, choose **Systems Manager**, then choose **Next**.

1. On the **Add permissions** page, select the `SsmOnboardingInlinePolicy` policy, and then choose **Next**.

1. For **Role name**, enter a descriptive name (for example, `AmazonSSMRoleForAutomationAssumeQuickSetup`).

1. (Optional) Add tags to help identify and organize the role.

1. Choose **Create role**.

**Important**  
The role must include a trust relationship with `ssm.amazonaws.com`. This trust relationship is configured automatically when you choose **Systems Manager** as the use case.

After creating the role, you can select it when configuring Quick Setup. The role enables Systems Manager to manage EC2 instances, IAM roles, and other Systems Manager resources and run commands on your behalf while maintaining security through specific, limited permissions.

## Permissions policies


**`SsmOnboardingInlinePolicy`**  
The following policy defines the permissions for Systems Manager Quick Setup:

```
{
    "Version": "2012-10-17" 		 	 	 ,
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:ListInstanceProfilesForRole",
                "ec2:DescribeIamInstanceProfileAssociations",
                "iam:GetInstanceProfile",
                "iam:AddRoleToInstanceProfile"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AssociateIamInstanceProfile"
            ],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "Null": {
                    "ec2:InstanceProfile": "true"
                },
                "ArnLike": {
                    "ec2:NewInstanceProfile": "arn:aws:iam::*:instance-profile/[INSTANCE_PROFILE_ROLE_NAME]"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::*:role/[INSTANCE_PROFILE_ROLE_NAME]",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "ec2.amazonaws.com"
                }
            }
        }
    ]
}
```

**Trust relationship**  
*This trust policy is added automatically when you follow the preceding steps.*

```
{
    "Version": "2012-10-17" 		 	 	 ,
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ssm.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

# Using a delegated administrator for Quick Setup


After you register a delegated administrator account for Quick Setup, users with the appropriate permissions in that account can create, update, view, and delete configuration managers that target organizational units within your AWS Organizations structure. This delegated administrator account can also manage configuration managers previously created by your organization's management account.

The management account in Organizations can designate one account within your organization as a delegated administrator. When you register an account as a delegated administrator for Quick Setup, this account automatically becomes a delegated administrator for AWS CloudFormation StackSets and Systems Manager Explorer as well, since these services are required to deploy and monitor Quick Setup configurations.

**Note**  
At this time, the patch policy configuration type isn't supported by the delegated administrator for Quick Setup. Patch policy configurations for an organization must be created and maintained in the management account for an organization. For more information, see [Creating a patch policy](quick-setup-patch-manager.md#create-patch-policy).

The following topics describe how to register and deregister a delegated administrator for Quick Setup.

**Topics**
+ [Register a delegated administrator for Quick Setup](quick-setup-register-delegated-administrator.md)
+ [Deregister a delegated administrator for Quick Setup](quick-setup-deregister-delegated-administrator.md)

# Register a delegated administrator for Quick Setup


Use the following procedure to register a delegated administrator for Quick Setup.

**To register a Quick Setup delegated administrator**

1. Sign in to your AWS Organizations management account.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. Choose **Settings**.

1. In the **Delegated administrator for Quick Setup** section, verify that you have configured the required service-linked role and service access options. If necessary, choose the **Create role** and **Enable access** buttons to configure these options.

1. For **Account ID**, enter the AWS account ID. This account must be a member account in AWS Organizations.

1. Choose **Register delegated administrator**.
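If you work programmatically, registration can be sketched with the Organizations API. The Quick Setup service principal shown here is an assumption based on the service principals listed earlier in this guide:

```
# Register a member account as the delegated administrator. Run this from
# the management account. The service principal is assumed.
aws organizations register-delegated-administrator \
    --account-id 123456789012 \
    --service-principal ssm-quicksetup.amazonaws.com
```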

# Deregister a delegated administrator for Quick Setup


Use the following procedure to deregister a delegated administrator for Quick Setup.

**To deregister a Quick Setup delegated administrator**

1. Sign in to your AWS Organizations management account.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. Choose **Settings**.

1. In the **Delegated administrator for Quick Setup** section, choose **Deregister** from the **Actions** dropdown.

1. Choose **Confirm**.
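The corresponding programmatic call can be sketched with the Organizations API. As with registration, the Quick Setup service principal shown is an assumption:

```
# Deregister the delegated administrator account. Run this from the
# management account. The service principal is assumed.
aws organizations deregister-delegated-administrator \
    --account-id 123456789012 \
    --service-principal ssm-quicksetup.amazonaws.com
```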

# Learn Quick Setup terminology and details


Quick Setup, a tool in AWS Systems Manager, displays the results of all configuration managers you've created across all AWS Regions in the **Configuration managers** table on the Quick Setup home page. From this page, you can **View details** of each configuration, delete configurations from the **Actions** dropdown, or **Create** configurations. The **Configuration managers** table contains the following information:
+ **Name** – The name of the configuration manager if provided when created.
+ **Configuration type** – The configuration type chosen when creating the configuration. 
+ **Version** – The version of the configuration type currently deployed.
+ **Organizational units** – Displays the organizational units (OUs) that the configuration is deployed to if you chose a **Custom** set of targets. Organizational units and custom targets are only available to the management account of your organization. The management account is the account that you use to create an organization in AWS Organizations.
+ **Deployment type** – Indicates whether the deployment applies to the entire organization (`Organizational`) or only your account (`Local`).
+ **Regions** – The Regions that the configuration is deployed to if you chose a **Custom** set of targets or targets within your **Current account**. 
+ **Deployment status** – The deployment status indicates if AWS CloudFormation successfully deployed the target or stack instance. The target and stack instances contain the configuration options that you chose during configuration creation.
+ **Association status** – The association status is the state of all associations created by the configuration that you created. The associations for all targets must run successfully; otherwise, the status is **Failed**.

  Quick Setup creates and runs a State Manager association for each configuration target. State Manager is a tool in AWS Systems Manager.

To view configurations deployed to the Region you're currently browsing, select the **Configurations** tab.

## Configuration details


The **Configuration details** page displays information about the deployment of the configuration and its related associations. From this page, you can edit configuration options, update targets, or delete the configuration. You can also view the details of each configuration deployment to get more information about the associations. 

Depending on the type of configuration, one or more of the following status graphs are displayed:

**Configuration deployment status**  
Displays the number of deployments that have succeeded, failed, or are running or pending. Deployments occur in the specified target accounts and Regions that contain nodes affected by the configuration. 

**Configuration association status**  
Displays the number of State Manager associations that have succeeded, failed, or are pending. Quick Setup creates an association in each deployment for the configuration options selected.

**Setup status**  
Displays the number of actions performed by the configuration type and their current statuses. 

**Resource compliance**  
Displays the number of resources that are compliant with the configuration's specified policy.

The **Configuration details** table displays information about the deployment of your configuration. You can view more details about each deployment by selecting the deployment and then choosing **View details**. The details page of each deployment displays the associations deployed to the nodes in that deployment.

## Editing and deleting your configuration


You can edit configuration options of a configuration from the **Configuration details** page by choosing **Actions** and then **Edit configuration options**. When you add new options to the configuration, Quick Setup runs your deployments and creates new associations. When you remove options from a configuration, Quick Setup runs your deployments and removes any related associations.

**Note**  
You can edit Quick Setup configurations for your account at any time. To edit an **Organization** configuration, the **Configuration status** must be **Success** or **Failed**. 

You can also update the targets included in your configurations by choosing **Actions** and **Add OUs**, **Add Regions**, **Remove OUs**, or **Remove Regions**. If your account isn't configured as the management account or you created the configuration for only the current account, you can't update the target organizational units (OUs). Removing a Region or OU removes the associations from those Regions or OUs. 

Periodically, Quick Setup releases new versions of configurations. You can select the **Upgrade configuration** option to upgrade your configuration to the latest version.

You can delete a configuration from Quick Setup by choosing the configuration, then **Actions**, and then **Delete configuration**. Or, you can delete the configuration from the **Configuration details** page under the **Actions** dropdown and then **Delete configuration**. Quick Setup then prompts you to **Remove all OUs and Regions**, which might take some time to complete. Deleting a configuration also deletes all related associations. This two-step deletion process removes all deployed resources from all accounts and Regions and then deletes the configuration.
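If you manage configurations programmatically, the same deletion maps to the Quick Setup `DeleteConfigurationManager` API action. The manager ARN below is illustrative:

```
# Delete a configuration manager by ARN. Real ARNs can be retrieved with
# the ListConfigurationManagers action.
aws ssm-quicksetup delete-configuration-manager \
    --manager-arn "arn:aws:ssm-quicksetup:us-east-1:123456789012:configuration-manager/abc123-example"
```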

## Configuration compliance


You can view whether your instances are compliant with the associations created by your configurations in either Explorer or Compliance, which are both tools in AWS Systems Manager. To learn more about compliance, see [Learn details about Compliance](compliance-about.md). To learn more about viewing compliance in Explorer, see [AWS Systems Manager Explorer](Explorer.md).

# Using the Quick Setup API to manage configurations and deployments


You can use the API provided by Quick Setup to create and manage configurations and deployments using the AWS CLI or your preferred SDK. You can also use CloudFormation to create a configuration manager resource that deploys configurations. Using the API, you create configuration managers that deploy configuration *definitions*. Configuration definitions contain all of the necessary information to deploy a particular configuration type. For more information about the Quick Setup API, see the [Quick Setup API Reference](https://docs.aws.amazon.com/quick-setup/latest/APIReference/).

The following examples demonstrate how to create configuration managers using the AWS CLI and CloudFormation.

------
#### [ AWS CLI ]

```
aws ssm-quicksetup create-configuration-manager \
    --name "configuration manager name" \
    --description "Description of your configuration manager" \
    --configuration-definitions 'JSON string containing the configuration definition'
```

The following is an example JSON string containing a configuration definition for a patch policy.

```
'{"Type":"AWSQuickSetupType-PatchPolicy","LocalDeploymentAdministrationRoleArn":"arn:aws:iam::123456789012:role/AWS-QuickSetup-StackSet-Local-AdministrationRole","LocalDeploymentExecutionRoleName":"AWS-QuickSetup-StackSet-Local-ExecutionRole","Parameters":{"ConfigurationOptionsInstallNextInterval":"true","ConfigurationOptionsInstallValue":"cron(0 2 ? * SAT#1 *)","ConfigurationOptionsPatchOperation":"ScanAndInstall","ConfigurationOptionsScanNextInterval":"false","ConfigurationOptionsScanValue":"cron(0 1 * * ? *)","HasDeletedBaseline":"false","IsPolicyAttachAllowed":"true","OutputBucketRegion":"","OutputLogEnableS3":"false","OutputS3BucketName":"","OutputS3KeyPrefix":"","PatchBaselineRegion":"us-east-1","PatchBaselineUseDefault":"custom","PatchPolicyName":"dev-patch-policy","RateControlConcurrency":"5","RateControlErrorThreshold":"0%","RebootOption":"RebootIfNeeded","ResourceGroupName":"","SelectedPatchBaselines":"{\"ALMA_LINUX\":{\"value\":\"arn:aws:ssm:us-east-1:123456789012:patchbaseline/pb-0cb0c4966f86b059b\",\"label\":\"AWS-AlmaLinuxDefaultPatchBaseline\",\"description\":\"Default Patch Baseline for Alma Linux Provided by AWS.\",\"disabled\":false},\"AMAZON_LINUX_2\":{\"value\":\"arn:aws:ssm:us-east-1:123456789012:patchbaseline/pb-0be8c61cde3be63f3\",\"label\":\"AWS-AmazonLinux2DefaultPatchBaseline\",\"description\":\"Baseline containing all Security and Bugfix updates approved for Amazon Linux 2 instances\",\"disabled\":false},\"AMAZON_LINUX_2023\":{\"value\":\"arn:aws:ssm:us-east-1:123456789012:patchbaseline/pb-05c9c9bf778d4c4d0\",\"label\":\"AWS-AmazonLinux2023DefaultPatchBaseline\",\"description\":\"Default Patch Baseline for Amazon Linux 2023 Provided by AWS.\",\"disabled\":false},\"DEBIAN\":{\"value\":\"arn:aws:ssm:us-east-1:123456789012:patchbaseline/pb-09a5f8eb62bde80b1\",\"label\":\"AWS-DebianDefaultPatchBaseline\",\"description\":\"Default Patch Baseline for Debian Provided by 
AWS.\",\"disabled\":false},\"MACOS\":{\"value\":\"arn:aws:ssm:us-east-1:123456789012:patchbaseline/pb-0ee4f94581368c0d4\",\"label\":\"AWS-MacOSDefaultPatchBaseline\",\"description\":\"Default Patch Baseline for MacOS Provided by AWS.\",\"disabled\":false},\"ORACLE_LINUX\":{\"value\":\"arn:aws:ssm:us-east-1:123456789012:patchbaseline/pb-06bff38e95fe85c02\",\"label\":\"AWS-OracleLinuxDefaultPatchBaseline\",\"description\":\"Default Patch Baseline for Oracle Linux Server Provided by AWS.\",\"disabled\":false},\"REDHAT_ENTERPRISE_LINUX\":{\"value\":\"arn:aws:ssm:us-east-1:123456789012:patchbaseline/pb-0cbb3a633de00f07c\",\"label\":\"AWS-RedHatDefaultPatchBaseline\",\"description\":\"Default Patch Baseline for Redhat Enterprise Linux Provided by AWS.\",\"disabled\":false},\"ROCKY_LINUX\":{\"value\":\"arn:aws:ssm:us-east-1:123456789012:patchbaseline/pb-03ec98bc512aa3ac0\",\"label\":\"AWS-RockyLinuxDefaultPatchBaseline\",\"description\":\"Default Patch Baseline for Rocky Linux Provided by AWS.\",\"disabled\":false},\"UBUNTU\":{\"value\":\"pb-06e3563bd35503f2b\",\"label\":\"custom-UbuntuServer-Blog-Baseline\",\"description\":\"Default Patch Baseline for Ubuntu Provided by AWS.\",\"disabled\":false},\"WINDOWS\":{\"value\":\"pb-016889927b2bb8542\",\"label\":\"custom-WindowsServer-Blog-Baseline\",\"disabled\":false}}","TargetInstances":"","TargetOrganizationalUnits":"ou-9utf-example","TargetRegions":"us-east-1,us-east-2","TargetTagKey":"Patch","TargetTagValue":"true","TargetType":"Tags"}}' \
```
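After the configuration manager is created, you can confirm that it exists using the `ListConfigurationManagers` action:

```
# List the configuration managers in the current account and Region.
aws ssm-quicksetup list-configuration-managers
```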

------
#### [ CloudFormation ]

```
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SSMQuickSetupTestConfigurationManager:
    Type: "AWS::SSMQuickSetup::ConfigurationManager"
    Properties:
      Name: "MyQuickSetup"
      Description: "Test configuration manager"
      ConfigurationDefinitions:
        - Type: "AWSQuickSetupType-CFGRecording"
          LocalDeploymentAdministrationRoleArn: !Sub "arn:aws:iam::${AWS::AccountId}:role/AWS-QuickSetup-StackSet-ContractTest-AdministrationRole"
          LocalDeploymentExecutionRoleName: "AWS-QuickSetup-StackSet-ContractTest-ExecutionRole"
          Parameters:
            TargetAccounts:
              Ref: AWS::AccountId
            TargetRegions:
              Ref: AWS::Region
      Tags:
        foo1: "bar1"
```

------

# Supported Quick Setup configuration types


Quick Setup walks you through configuring operational best practices for a number of Systems Manager and other AWS services, and automatically deploying those configurations. The Quick Setup dashboard displays a real-time view of your configuration deployment status. 

You can use Quick Setup in an individual AWS account or across multiple AWS accounts and Regions by integrating with AWS Organizations. Using Quick Setup across multiple accounts helps to ensure that your organization maintains consistent configurations.

Quick Setup provides support for the following configuration types.
+ [Set up Amazon EC2 host management using Quick Setup](quick-setup-host-management.md)
+ [Set up the Default Host Management Configuration for an organization using Quick Setup](quick-setup-default-host-management-configuration.md)
+ [Create an AWS Config configuration recorder using Quick Setup](quick-setup-config.md)
+ [Deploy AWS Config conformance pack using Quick Setup](quick-setup-cpack.md)
+ [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md)
+ [Change Manager organization setup](change-manager-organization-setup.md)
+ [Set up DevOps Guru using Quick Setup](quick-setup-devops.md)
+ [Deploy Distributor packages using Quick Setup](quick-setup-distributor.md)
+ [Stop and start EC2 instances automatically on a schedule using Quick Setup](quick-setup-scheduler.md)
+ [OpsCenter organization setup](OpsCenter-quick-setup-cross-account.md)
+ [Configure AWS Resource Explorer using Quick Setup](Resource-explorer-quick-setup.md)

# Set up Amazon EC2 host management using Quick Setup

Use Quick Setup, a tool in AWS Systems Manager, to quickly configure required security roles and commonly used Systems Manager tools on your Amazon Elastic Compute Cloud (Amazon EC2) instances. You can use Quick Setup in an individual account or across multiple accounts and AWS Regions by integrating with AWS Organizations. These tools help you manage and monitor the health of your instances while providing the minimum required permissions to get started. 

If you're unfamiliar with Systems Manager services and features, we recommend that you review the *AWS Systems Manager User Guide* before creating a configuration with Quick Setup. For more information about Systems Manager, see [What is AWS Systems Manager?](what-is-systems-manager.md).

**Important**  
Quick Setup might not be the right tool to use for EC2 management if either of the following applies to you:  
+ You're trying to create an EC2 instance for the first time to try out AWS capabilities.
+ You're still new to EC2 instance management.
Instead, we recommend that you explore the following content:   
+ [Getting Started with Amazon EC2](https://aws.amazon.com/ec2/getting-started)
+ [Launch an instance using the new launch instance wizard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance-wizard.html) in the *Amazon EC2 User Guide*
+ [Tutorial: Get started with Amazon EC2 Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html) in the *Amazon EC2 User Guide*
If you're already familiar with EC2 instance management and want to streamline configuration and management for multiple EC2 instances, use Quick Setup. Whether your organization has dozens, thousands, or millions of EC2 instances, use the following Quick Setup procedure to configure multiple options for them, all at once.

**Note**  
This configuration type lets you set multiple options for an entire organization defined in AWS Organizations, only some organizational accounts and Regions, or a single account. One of these options is to check for and apply updates to SSM Agent every two weeks. If you are an organization administrator, you can also choose to update *all* EC2 instances in your organization with agent updates every two weeks using the Default Host Management Configuration type. For information, see [Set up the Default Host Management Configuration for an organization using Quick Setup](quick-setup-default-host-management-configuration.md).

## Configuring host management options for EC2 instances


To set up host management, perform the following tasks in the AWS Systems Manager Quick Setup console.

**To open the Host Management configuration page**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. On the **Host Management** card, choose **Create**.
**Tip**  
If you already have one or more configurations in your account, first choose the **Library** tab or the **Create** button in the **Configurations** section to view the cards.

**To configure Systems Manager host management options**
+ To configure Systems Manager functionality, in the **Configuration options** section, choose the options in the **Systems Manager** group that you want to enable for your configuration:

     
**Update Systems Manager (SSM) Agent every two weeks**  
Enables Systems Manager to check every two weeks for a new version of the agent. If there is a new version, then Systems Manager automatically updates the agent on your managed node to the latest released version. Quick Setup doesn't install the agent on instances where it's not already present. For information about which AMIs have SSM Agent preinstalled, see [Find AMIs with the SSM Agent preinstalled](ami-preinstalled-agent.md).  
We encourage you to choose this option to ensure that your nodes are always running the most up-to-date version of SSM Agent. For more information about SSM Agent, including information about how to manually install the agent, see [Working with SSM Agent](ssm-agent.md).  
**Collect inventory from your instances every 30 minutes**  
Enables Quick Setup to configure collection of the following types of metadata:  
  + **AWS components** – EC2 driver, agents, versions, and more.
  + **Applications** – Application names, publishers, versions, and more.
  + **Node details** – System name, operating system (OS) name, OS version, last boot, DNS, domain, work group, OS architecture, and more.
  + **Network configuration** – IP address, MAC address, DNS, gateway, subnet mask, and more. 
  + **Services** – Name, display name, status, dependent services, service type, start type, and more (Windows Server nodes only).
  + **Windows roles** – Name, display name, path, feature type, installed state, and more (Windows Server nodes only).
  + **Windows updates** – Hotfix ID, installed by, installed date, and more (Windows Server nodes only).
For more information about Inventory, a tool in AWS Systems Manager, see [AWS Systems Manager Inventory](systems-manager-inventory.md).  
The **Inventory collection** option can take up to 10 minutes to complete, even if you only selected a few nodes.  
**Scan instances for missing patches daily**  
Enables Patch Manager, a tool in Systems Manager, to scan your nodes daily and generate a report on the **Compliance** page. The report shows how many nodes are patch-compliant according to the *default patch baseline*, and includes a list of each node and its compliance status.  
For information about patching operations and patch baselines, see [AWS Systems Manager Patch Manager](patch-manager.md).   
For information about patch compliance, see the Systems Manager [Compliance](https://console.aws.amazon.com/systems-manager/compliance) page.  
For information about patching managed nodes in multiple accounts and Regions in one configuration, see [Patch policy configurations in Quick Setup](patch-manager-policies.md) and [Configure patching for instances in an organization using a Quick Setup patch policy](quick-setup-patch-manager.md).  
Systems Manager supports several methods for scanning managed nodes for patch compliance. If you implement more than one of these methods at a time, the patch compliance information you see is always the result of the most recent scan. Results from previous scans are overwritten. If the scanning methods use different patch baselines, with different approval rules, the patch compliance information can change unexpectedly. For more information, see [Identifying the execution that created patch compliance data](patch-manager-compliance-data-overwrites.md).
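After the configuration deploys, you can spot-check the results of these options from the command line. The following AWS CLI commands (a sketch; output depends on your fleet and requires AWS credentials) report the SSM Agent version and connection status for each managed node, and summarize compliance data:

```shell
# List managed nodes with their SSM Agent version and connection status.
aws ssm describe-instance-information \
    --query 'InstanceInformationList[*].[InstanceId,AgentVersion,PingStatus]' \
    --output table

# Summarize patch and association compliance counts across the fleet.
aws ssm list-compliance-summaries \
    --query 'ComplianceSummaryItems[*].[ComplianceType,CompliantSummary.CompliantCount,NonCompliantSummary.NonCompliantCount]' \
    --output table
```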

**To configure Amazon CloudWatch host management options**
+ To configure CloudWatch functionality, in the **Configuration options** section, choose the options in the **Amazon CloudWatch** group that you want to enable for your configuration:

     
**Install and configure the CloudWatch agent**  
Installs the basic configuration of the unified CloudWatch agent on your Amazon EC2 instances. The agent collects metrics and log files from your instances for Amazon CloudWatch. This information is consolidated so you can quickly determine the health of your instances. For more information about the CloudWatch agent basic configuration, see [CloudWatch agent predefined metric sets](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file-wizard.html#cloudwatch-agent-preset-metrics). There might be added cost. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).  
**Update the CloudWatch agent once every 30 days**  
Enables Systems Manager to check every 30 days for a new version of the CloudWatch agent. If there is a new version, Systems Manager updates the agent on your instance. We encourage you to choose this option to ensure that your instances are always running the most up-to-date version of the CloudWatch agent.

**To configure Amazon EC2 Launch Agent host management options**
+ To configure Amazon EC2 Launch Agent functionality, in the **Configuration options** section, choose the options in the **Amazon EC2 Launch Agent** group that you want to enable for your configuration:

     
**Update the EC2 launch agent once every 30 days**  
Enables Systems Manager to check every 30 days for a new version of the launch agent installed on your instance. If a new version is available, Systems Manager updates the agent on your instance. We encourage you to choose this option to ensure that your instances are always running the most up-to-date version of the applicable launch agent. For Amazon EC2 Windows instances, this option supports EC2Launch, EC2Launch v2, and EC2Config. For Amazon EC2 Linux instances, this option supports `cloud-init`. For Amazon EC2 Mac instances, this option supports `ec2-macos-init`. Quick Setup doesn't support updating launch agents on operating systems that those agents don't support, or on AL2023.  
For more information about these initialization agents, see the following topics:  
  +  [Configure a Windows instance using EC2Launch v2](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2launch-v2.html) 
  +  [Configure a Windows instance using EC2Launch](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2launch.html) 
  +  [Configure a Windows instance using the EC2Config service](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2config-service.html) 
  +  [cloud-init Documentation](https://cloudinit.readthedocs.io/en/22.2.2/index.html) 
  +  [ec2-macos-init](https://github.com/aws/ec2-macos-init) 

**To select the EC2 instances to be updated by the host management configuration**
+ In the **Targets** section, choose the method to determine the accounts and Regions where the configuration is to be deployed:
**Note**  
You can't create multiple Quick Setup Host Management configurations that target the same AWS Region.

------
#### [ Entire organization ]

  Your configuration is deployed to all organizational units (OUs) and AWS Regions in your organization.

**Note**  
The **Entire organization** option is only available if you're configuring host management from your organization's management account.

------
#### [ Custom ]

  1. In the **Target OUs** section, select the OUs where you want to deploy this host management configuration.

  1. In the **Target Regions** section, select the Regions where you want to deploy this host management configuration.

------
#### [ Current account ]

  Choose one of the Region options and follow the steps for that option.

   

**Current Region**  
Choose how to target instances in the current Region only:  
  + **All instances** – The host management configuration automatically targets every EC2 instance in the current Region.
  + **Tag** – Choose **Add** and enter the key and optional value that is added to the instances to be targeted.
  + **Resource group** – For **Resource group**, select an existing resource group that contains the EC2 instances to be targeted.
  + **Manual** – In the **Instances** section, select the check box of each EC2 instance to be targeted.

**Choose Regions**  
In the **Target Regions** section, select the Regions where you want to deploy this host management configuration, and then choose how to target instances in those Regions:  
  + **All instances** – All instances in the Regions you specify are targeted.
  + **Tag** – Choose **Add** and enter the key and optional value that has been added to the instances to be targeted.

------

**To specify an instance profile option**
+ ***Entire organization** and **Custom** targets only.*

  In the **Instance profile options** section, choose whether you want to add the required IAM policies to the existing instance profiles attached to your instances, or to allow Quick Setup to create the IAM policies and instance profiles with the permissions needed for the configuration you choose.

After specifying all your configuration choices, choose **Create**.

# Set up the Default Host Management Configuration for an organization using Quick Setup

With Quick Setup, a tool in AWS Systems Manager, you can activate Default Host Management Configuration for all accounts and Regions that have been added to your organization in AWS Organizations. This ensures that SSM Agent is kept up to date on all Amazon Elastic Compute Cloud (Amazon EC2) instances in the organization, and that they can connect to Systems Manager.

**Before you begin**  
Ensure that the following requirements are met before enabling this setting.
+ The latest version of SSM Agent is already installed on all EC2 instances to be managed in your organization.
+ Your EC2 instances to be managed are using Instance Metadata Service Version 2 (IMDSv2).
+ You are signed in to the management account for your organization, as specified in AWS Organizations, using an AWS Identity and Access Management (IAM) identity (user, role, or group) with administrator permissions.

**Using the default EC2 instance management role**  
Default Host Management Configuration makes use of the `default-ec2-instance-management-role` service setting for Systems Manager. This setting specifies a role, made available to all accounts in your organization, that allows SSM Agent on an instance to communicate with the Systems Manager service in the cloud.

If you have already set this role using the [update-service-setting](https://docs.aws.amazon.com/cli/latest/reference/ssm/update-service-setting.html) CLI command, Default Host Management Configuration uses that role. If you have not set this role yet, Quick Setup will create and apply the role for you.

To check whether this role has already been specified for your organization, use the [get-service-setting](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-service-setting.html) command.
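For example, you might check the setting, and optionally set it yourself, with the AWS CLI (a sketch requiring AWS credentials; `example-role-name` is a placeholder for a role you have created):

```shell
# Check whether the default EC2 instance management role is already set.
aws ssm get-service-setting \
    --setting-id /ssm/managed-instance/default-ec2-instance-management-role

# Optionally set the role explicitly (example-role-name is a placeholder).
# If you skip this, Quick Setup creates and applies the role for you.
aws ssm update-service-setting \
    --setting-id /ssm/managed-instance/default-ec2-instance-management-role \
    --setting-value example-role-name
```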

## Enable automatic updates of SSM Agent every two weeks


Use the following procedure to enable the Default Host Management Configuration option for your entire AWS Organizations organization.

**To enable automatic updates of SSM Agent every two weeks**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. On the **Default Host Management Configuration** card, choose **Create**.
**Tip**  
If you already have one or more configurations in your account, first choose the **Library** tab or the **Create** button in the **Configurations** section to view the cards.

1. In the **Configuration options** section, select **Enable automatic updates of SSM Agent every two weeks**.

1. Choose **Create**.

# Create an AWS Config configuration recorder using Quick Setup

With Quick Setup, a tool in AWS Systems Manager, you can quickly create a configuration recorder powered by AWS Config. Use the configuration recorder to detect changes in your resource configurations and capture the changes as configuration items. If you're unfamiliar with AWS Config, we recommend learning more about the service by reviewing the content in the *AWS Config Developer Guide* before creating a configuration with Quick Setup. For more information about AWS Config, see [What is AWS Config?](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) in the *AWS Config Developer Guide*.

By default, the configuration recorder records all supported resources in the AWS Region where AWS Config is running. You can customize the configuration so that only the resource types you specify are recorded. For more information, see [Selecting which resources AWS Config records](https://docs.aws.amazon.com/config/latest/developerguide/select-resources.html) in the *AWS Config Developer Guide*.
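Before creating the configuration, you can check whether a configuration recorder already exists in the current Region with the AWS CLI (a sketch; requires AWS credentials):

```shell
# Returns an empty list if no configuration recorder exists in this Region.
aws configservice describe-configuration-recorders

# Check whether an existing recorder is currently recording.
aws configservice describe-configuration-recorder-status
```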

You're charged service usage fees when AWS Config starts recording configurations. For pricing information, see [AWS Config pricing](https://aws.amazon.com/config/pricing/).

**Note**  
If you've already created a configuration recorder, Quick Setup doesn't stop recording or make any changes to resource types that you're already recording. If you choose to record additional resource types using Quick Setup, the service appends them to your existing recorder groups. Deleting the Quick Setup **Config recording** configuration type doesn't stop the configuration recorder. Changes continue to be recorded, and service usage fees apply until you stop the configuration recorder. To learn more about managing the configuration recorder, see [Managing the Configuration Recorder](https://docs.aws.amazon.com/config/latest/developerguide/stop-start-recorder.html) in the *AWS Config Developer Guide*.

To set up AWS Config recording, perform the following tasks in the AWS Systems Manager console.

**To set up AWS Config recording with Quick Setup**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. On the **Config Recording** card, choose **Create**.
**Tip**  
If you already have one or more configurations in your account, first choose the **Library** tab or the **Create** button in the **Configurations** section to view the cards.

1. In the **Configuration options** section, do the following:

   1. For **Choose the AWS resource types to record**, specify whether to record all supported resources or only the resource types you choose.

   1. For **Delivery settings**, specify whether to create a new Amazon Simple Storage Service (Amazon S3) bucket or choose an existing bucket to send configuration snapshots to.

   1. For **Notification options**, choose the notification option you prefer. AWS Config uses Amazon Simple Notification Service (Amazon SNS) to notify you about important AWS Config events related to your resources. If you choose the **Use existing SNS topics** option, you must provide the AWS account ID and name of the existing Amazon SNS topic in that account you want to use. If you target multiple AWS Regions, the topic names must be identical in each Region.

1. In the **Schedule** section, choose how frequently you want Quick Setup to remediate changes made to resources that differ from your configuration. The **Default** option runs once. If you don't want Quick Setup to remediate changes made to resources that differ from your configuration, choose **Disable remediation** under **Custom**.

1. In the **Targets** section, choose one of the following to identify the accounts and Regions for recording.
**Note**  
If you are working in a single account, options for working with organizations and organizational units (OUs) are not available. You can choose whether to apply this configuration to all AWS Regions in your account or only the Regions you select.
   + **Entire organization** – All accounts and Regions in your organization.
   + **Custom** – Only the OUs and Regions that you specify.
     + In the **Target OUs** section, select the OUs where you want to allow recording. 
     + In the **Target Regions** section, select the Regions where you want to allow recording. 
   + **Current account** – Only the Regions you specify in the account you are currently signed into are targeted. Choose one of the following:
     + **Current Region** – Only managed nodes in the Region selected in the console are targeted. 
     + **Choose Regions** – Choose the individual Regions to apply the recording configuration to.

1. Choose **Create**.

# Deploy AWS Config conformance pack using Quick Setup

A conformance pack is a collection of AWS Config rules and remediation actions. With Quick Setup, you can deploy a conformance pack as a single entity in an account and an AWS Region or across an organization in AWS Organizations. This helps you manage configuration compliance of your AWS resources at scale, from policy definition to auditing and aggregated reporting, by using a common framework and packaging model. 

To deploy conformance packs, perform the following tasks in the AWS Systems Manager Quick Setup console.

**Note**  
You must enable AWS Config recording before deploying this configuration. For more information, see [Conformance packs](https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html) in the *AWS Config Developer Guide*.

**To deploy conformance packs with Quick Setup**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. On the **Conformance Packs** card, choose **Create**.
**Tip**  
If you already have one or more configurations in your account, first choose the **Library** tab or the **Create** button in the **Configurations** section to view the cards.

1. In the **Choose conformance packs** section, choose the conformance packs you want to deploy.

1. In the **Schedule** section, choose how frequently you want Quick Setup to remediate changes made to resources that differ from your configuration. The **Default** option runs once. If you don't want Quick Setup to remediate changes made to resources that differ from your configuration, choose **Disabled** under **Custom**.

1. In the **Targets** section, choose whether to deploy conformance packs to your entire organization, some AWS Regions, or the account you're currently logged in to.

   If you choose **Entire organization**, continue to step 8.

   If you choose **Custom**, continue to step 7.

1. In the **Target Regions** section, select the check boxes of the Regions you want to deploy conformance packs to.

1. Choose **Create**.

# Configure patching for instances in an organization using a Quick Setup patch policy

With Quick Setup, a tool in AWS Systems Manager, you can create patch policies powered by Patch Manager. A patch policy defines the schedule and baseline to use when automatically patching your Amazon Elastic Compute Cloud (Amazon EC2) instances and other managed nodes. Using a single patch policy configuration, you can define patching for all accounts in multiple AWS Regions in your organization, for only the accounts and Regions you choose, or for a single account-Region pair. For more information about patch policies, see [Patch policy configurations in Quick Setup](patch-manager-policies.md).

**Prerequisite**  
To define a patch policy for a node using Quick Setup, the node must be a *managed node*. For more information about managing your nodes, see [Setting up Systems Manager unified console for an organization](systems-manager-setting-up-organizations.md).

**Important**  
**Patch compliance scanning methods** – Systems Manager supports several methods for scanning managed nodes for patch compliance. If you implement more than one of these methods at a time, the patch compliance information you see is always the result of the most recent scan. Results from previous scans are overwritten. If the scanning methods use different patch baselines, with different approval rules, the patch compliance information can change unexpectedly. For more information, see [Identifying the execution that created patch compliance data](patch-manager-compliance-data-overwrites.md).  
**Association compliance status and patch policies** – The patching status for a managed node that's under a Quick Setup patch policy matches the status of the State Manager association execution for that node. If the association execution status is `Compliant`, the patching status for the managed node is also marked `Compliant`. If the association execution status is `Non-Compliant`, the patching status for the managed node is also marked `Non-Compliant`. 

## Supported Regions for patch policy configurations


Patch policy configurations in Quick Setup are currently supported in the following Regions:
+ US East (Ohio) (us-east-2)
+ US East (N. Virginia) (us-east-1)
+ US West (N. California) (us-west-1)
+ US West (Oregon) (us-west-2)
+ Asia Pacific (Mumbai) (ap-south-1)
+ Asia Pacific (Seoul) (ap-northeast-2)
+ Asia Pacific (Singapore) (ap-southeast-1)
+ Asia Pacific (Sydney) (ap-southeast-2)
+ Asia Pacific (Tokyo) (ap-northeast-1)
+ Canada (Central) (ca-central-1)
+ Europe (Frankfurt) (eu-central-1)
+ Europe (Ireland) (eu-west-1)
+ Europe (London) (eu-west-2)
+ Europe (Paris) (eu-west-3)
+ Europe (Stockholm) (eu-north-1)
+ South America (São Paulo) (sa-east-1)

## Permissions for the patch policy S3 bucket


When you create a patch policy, Quick Setup creates an Amazon S3 bucket that contains a file named `baseline_overrides.json`. This file stores information about the patch baselines that you specified for your patch policy.

The S3 bucket is named in the format `aws-quicksetup-patchpolicy-account-id-quick-setup-configuration-id`. 

For example: `aws-quicksetup-patchpolicy-123456789012-abcde`
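The naming pattern can be sketched as a simple string template, using the documented example account ID and configuration ID:

```python
def patch_policy_bucket_name(account_id: str, config_id: str) -> str:
    """Build the S3 bucket name Quick Setup uses for a patch policy."""
    return f"aws-quicksetup-patchpolicy-{account_id}-{config_id}"

# Matches the documented example bucket name.
print(patch_policy_bucket_name("123456789012", "abcde"))
# → aws-quicksetup-patchpolicy-123456789012-abcde
```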

If you're creating a patch policy for an organization, the bucket is created in your organization's management account. 

There are two use cases when you must provide other AWS resources with permission to access this S3 bucket using AWS Identity and Access Management (IAM) policies:
+ [Case 1: Use your own instance profile or service role with your managed nodes instead of one provided by Quick Setup](#patch-policy-instance-profile-service-role)
+ [Case 2: Use VPC endpoints to connect to Systems Manager](#patch-policy-vpc)

The permissions policy you need in either case is located in the section below, [Policy permissions for Quick Setup S3 buckets](#patch-policy-bucket-permissions).

### Case 1: Use your own instance profile or service role with your managed nodes instead of one provided by Quick Setup


Patch policy configurations include an option to **Add required IAM policies to existing instance profiles attached to your instances**. 

If you don't choose this option but want Quick Setup to patch your managed nodes using this patch policy, you must ensure that the following are implemented:
+ The IAM managed policy `AmazonSSMManagedInstanceCore` must be attached to the [IAM instance profile](setup-instance-permissions.md) or [IAM service role](hybrid-multicloud-service-role.md) that's used to provide Systems Manager permissions to your managed nodes.
+ You must add permissions to access your patch policy bucket as an inline policy to the IAM instance profile or IAM service role. You can provide wildcard access to all `aws-quicksetup-patchpolicy` buckets or only the specific bucket created for your organization or account, as shown in [Policy permissions for Quick Setup S3 buckets](#patch-policy-bucket-permissions).
+ You must tag your IAM instance profile or IAM service role with the following key-value pair.

  `Key: QSConfigId-quick-setup-configuration-id, Value: quick-setup-configuration-id`

  *quick-setup-configuration-id* represents the value of the parameter applied to the AWS CloudFormation stack that is used in creating your patch policy configuration. To retrieve this ID, do the following:

  1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

  1. Select the name of the stack that is used to create your patch policy. The name is in a format such as `StackSet-AWS-QuickSetup-PatchPolicy-LA-q4bkg-52cd2f06-d0f9-499e-9818-d887cEXAMPLE`.

  1. Choose the **Parameters** tab.

  1. In the **Parameters** list, in the **Key** column, locate the key **QSConfigurationId**. In the **Value** column for its row, locate the configuration ID, such as `abcde`.

     In this example, for the tag to apply to your instance profile or service role, the key is `QSConfigId-abcde`, and the value is `abcde`.

For information about adding tags to an IAM role, see [Tagging IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags_roles.html#id_tags_roles_procs-console) and [Managing tags on instance profiles (AWS CLI or AWS API)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags_instance-profiles.html#id_tags_instance-profile_procs-cli-api) in the *IAM User Guide*.
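Using the example configuration ID `abcde` from the procedure above, tagging the role with the AWS CLI might look like the following (a sketch requiring AWS credentials; `MyInstanceRole` is a placeholder for your role name):

```shell
# Tag the role behind the instance profile so the patch policy recognizes it.
# MyInstanceRole is a placeholder; abcde is the example configuration ID.
aws iam tag-role \
    --role-name MyInstanceRole \
    --tags Key=QSConfigId-abcde,Value=abcde
```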

### Case 2: Use VPC endpoints to connect to Systems Manager


If you use VPC endpoints to connect to Systems Manager, your VPC endpoint policy for S3 must allow access to your Quick Setup patch policy S3 bucket.

For information about adding permissions to a VPC endpoint policy for S3, see [Controlling access from VPC endpoints with bucket policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies-vpc-endpoint.html) in the *Amazon S3 User Guide*.

### Policy permissions for Quick Setup S3 buckets


You can provide wildcard access to all `aws-quicksetup-patchpolicy` buckets or only to the specific bucket created for your organization or account. To provide the necessary permissions for the two cases described earlier in this topic, use either of the following formats.

------
#### [ All patch policy buckets ]

------
#### [ JSON ]


```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AccessToAllPatchPolicyRelatedBuckets",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::aws-quicksetup-patchpolicy-*"
    }
  ]
}
```

------

------
#### [ Specific patch policy bucket ]

------
#### [ JSON ]


```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AccessToMyPatchPolicyRelatedBucket",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::aws-quicksetup-patchpolicy-111122223333-quick-setup-configuration-id"
    }
  ]
}
```

------

**Note**  
After the patch policy configuration is created, you can locate the full name of your bucket in the S3 console. For example: `aws-quicksetup-patchpolicy-123456789012-abcde`

------

## Random patch baseline IDs in patch policy operations


Patching operations for patch policies use the `BaselineOverride` parameter in the `AWS-RunPatchBaseline` SSM document.

When you use `AWS-RunPatchBaseline` for patching *outside* of a patch policy, you can use `BaselineOverride` to specify a list of patch baselines to use during the operation that are different from the specified defaults. You create this list in a file named `baseline_overrides.json` and manually add it to an Amazon S3 bucket that you own, as explained in [Using the BaselineOverride parameter](patch-manager-baselineoverride-parameter.md).

For patching operations based on patch policies, however, Systems Manager automatically creates an S3 bucket and adds a `baseline_overrides.json` file to it. Then, every time Quick Setup runs a patching operation using the Run Command tool, the system generates a random ID for each patch baseline. This ID is different for every patch policy patching operation, and the patch baseline it represents is not stored or accessible to you in your account.

As a result, you will not see the ID of the patch baseline selected in your configuration in patching logs. This applies to both AWS managed patch baselines and custom patch baselines you might have selected. The baseline ID reported in the log is instead the one that was generated for that specific patching operation.

In addition, if you attempt to view details in Patch Manager about a patch baseline that was generated with a random ID, the system reports that the patch baseline doesn't exist. This is expected behavior and can be ignored.

## Creating a patch policy


To create a patch policy, perform the following tasks in the Systems Manager console.

**To create a patch policy with Quick Setup**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

   If you are setting up patching for an organization, make sure you are signed in to the management account for the organization. You can't set up the policy using the delegated administrator account or a member account.

1. In the navigation pane, choose **Quick Setup**.

1. On the **Patch Manager** card, choose **Create**.
**Tip**  
If you already have one or more configurations in your account, first choose the **Library** tab or the **Create** button in the **Configurations** section to view the cards.

1. For **Configuration name**, enter a name to help identify the patch policy.

1. In the **Scanning and installation** section, under **Patch operation**, choose whether the patch policy will **Scan** the specified targets or **Scan and install** patches on the specified targets.

1. Under **Scanning schedule**, choose **Use recommended defaults** or **Custom scan schedule**. The default scan schedule will scan your targets daily at 1:00 AM UTC.
   + If you choose **Custom scan schedule**, select the **Scanning frequency**.
   + If you choose **Daily**, enter the time, in UTC, that you want to scan your targets. 
   + If you choose **Custom CRON Expression**, enter the schedule as a **CRON expression**. For more information about formatting CRON expressions for Systems Manager, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

     Also, select **Wait to scan targets until first CRON interval** if you don't want nodes scanned immediately. By default, Patch Manager immediately scans nodes as they become targets.

1. If you chose **Scan and install**, choose the **Installation schedule** to use when installing patches to the specified targets. If you choose **Use recommended defaults**, Patch Manager will install patches weekly at 2:00 AM UTC on Sunday.
   + If you choose **Custom install schedule**, select the **Installation Frequency**.
   + If you choose **Daily**, enter the time, in UTC, that you want to install updates on your targets.
   + If you choose **Custom CRON expression**, enter the schedule as a **CRON expression**. For more information about formatting CRON expressions for Systems Manager, see [Reference: Cron and rate expressions for Systems Manager](reference-cron-and-rate-expressions.md).

     Also, clear **Wait to install updates until first CRON interval** to immediately install updates on nodes as they become targets. By default, Patch Manager waits until the first CRON interval to install updates.
   + Choose **Reboot if needed** to reboot the nodes after patch installation. Rebooting after installation is recommended but can cause availability issues.
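   The recommended default schedules described above can also be written as Systems Manager cron expressions. The following sketch (plain Python, for illustration only; the helper names are hypothetical, and the accepted field values are documented in the cron reference linked above) builds the six-field `cron()` strings that correspond to those defaults:

   ```python
   def ssm_daily(minute, hour):
       """Daily schedule. Field order: minute hour day-of-month month
       day-of-week year; day-of-week is '?' when day-of-month is set."""
       return f"cron({minute} {hour} * * ? *)"

   def ssm_weekly(minute, hour, day_of_week):
       """Weekly schedule: day-of-month is '?' because a day-of-week is given."""
       return f"cron({minute} {hour} ? * {day_of_week} *)"

   # Recommended default scan schedule: daily at 1:00 AM UTC
   print(ssm_daily(0, 1))           # cron(0 1 * * ? *)
   # Recommended default install schedule: Sundays at 2:00 AM UTC
   print(ssm_weekly(0, 2, "SUN"))   # cron(0 2 ? * SUN *)
   ```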

1. In the **Patch baseline** section, choose the patch baselines to use when scanning and updating your targets. 

   By default, Patch Manager uses the predefined patch baselines. For more information, see [Predefined baselines](patch-manager-predefined-and-custom-patch-baselines.md#patch-manager-baselines-pre-defined).

   If you choose **Custom patch baseline**, change the selected patch baseline for any operating system that you don't want to use a predefined AWS patch baseline for.
**Note**  
If you use VPC endpoints to connect to Systems Manager, make sure your VPC endpoint policy for S3 allows access to this S3 bucket. For more information, see [Permissions for the patch policy S3 bucket](#patch-policy-s3-bucket-permissions). 
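If your traffic to Amazon S3 flows through a gateway endpoint, the endpoint policy must allow reads from the patch policy bucket. The following is a minimal sketch of such a policy; the bucket name is a placeholder, so substitute the bucket that your patch policy configuration created:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPatchPolicyBucketRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
```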
**Important**  
If you are using a [patch policy configuration](patch-manager-policies.md) in Quick Setup, updates you make to custom patch baselines are synchronized with Quick Setup once an hour.   
If a custom patch baseline that was referenced in a patch policy is deleted, a banner displays on the Quick Setup **Configuration details** page for your patch policy. The banner informs you that the patch policy references a patch baseline that no longer exists and that subsequent patching operations will fail. In this case, return to the Quick Setup **Configurations** page, select the Patch Manager configuration, and choose **Actions**, **Edit configuration**. The deleted patch baseline name is highlighted, and you must select a new patch baseline for the affected operating system.

1. (Optional) In the **Patching log storage** section, select **Write output to S3 bucket** to store patching operation logs in an Amazon S3 bucket. 
**Note**  
If you are setting up a patch policy for an organization, the management account for your organization must have at least read-only permissions for this bucket. All organizational units included in the policy must have write access to the bucket. For information about granting bucket access to different accounts, see [Example 2: Bucket owner granting cross-account bucket permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html) in the *Amazon Simple Storage Service User Guide*.

1. Choose **Browse S3** to select the bucket that you want to store patch log output in. The management account must have read access to this bucket. All non-management accounts and targets configured in the **Targets** section must have write access to the provided S3 bucket for logging.

1. In the **Targets** section, choose one of the following to identify the accounts and Regions for this patch policy operation.
**Note**  
If you are working in a single account, options for working with organizations and organizational units (OUs) are not available. You can choose whether to apply this configuration to all AWS Regions in your account or only the Regions you select.  
If you previously specified a Home Region for your account and haven't onboarded to the new Quick Setup console experience, you can't exclude that Region from the **Targets** configuration.
   + **Entire organization** – All accounts and Regions in your organization.
   + **Custom** – Only the OUs and Regions that you specify.
     + In the **Target OUs** section, select the OUs where you want to set up the patch policy. 
     + In the **Target Regions** section, select the Regions where you want to apply the patch policy. 
   + **Current account** – Only the Regions you specify in the account you are currently signed into are targeted. Choose one of the following:
     + **Current Region** – Only managed nodes in the Region selected in the console are targeted. 
     + **Choose Regions** – Choose the individual Regions to apply the patch policy to.

1. For **Choose how you want to target instances**, choose one of the following to identify the nodes to patch: 
   + **All managed nodes** – All managed nodes in the selected OUs and Regions.
   + **Specify the resource group** – Choose the name of a resource group from the list to target its associated resources.
**Note**  
Currently, selecting resource groups is supported only for single account configurations. To patch resources in multiple accounts, choose a different targeting option.
   + **Specify a node tag** – Only nodes tagged with the key-value pair that you specify are patched in all accounts and Regions you have targeted. 
   + **Manual** – Choose managed nodes from all specified accounts and Regions manually from a list.
**Note**  
This option currently supports only Amazon EC2 instances. You can add a maximum of 25 instances manually in a patch policy configuration.

1. In the **Rate control** section, do the following:
   + For **Concurrency**, enter a number or percentage of nodes to run the patch policy on at the same time.
   + For **Error threshold**, enter the number or percentage of nodes that can experience an error before the patch policy fails.
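   As a sketch of how these settings interact, a percentage value resolves to a node count based on the number of targeted nodes. The helper below is hypothetical and for illustration only; Systems Manager performs this resolution internally:

   ```python
   import math

   def resolve_rate_value(value, total_nodes):
       """Resolve a rate-control setting that may be an absolute count
       ('10') or a percentage ('10%') into a whole number of nodes,
       never dropping below one node."""
       if value.endswith("%"):
           pct = int(value.rstrip("%"))
           return max(1, math.floor(total_nodes * pct / 100))
       return int(value)

   # With 200 targeted nodes, 10% concurrency patches 20 nodes at a time
   print(resolve_rate_value("10%", 200))  # 20
   print(resolve_rate_value("5", 200))    # 5
   ```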

1. (Optional) Select the **Add required IAM policies to existing instance profiles attached to your instances** check box.

   This selection applies the IAM policies created by this Quick Setup configuration to nodes that already have an instance profile attached (EC2 instances) or a service role attached (hybrid-activated nodes). We recommend this selection when your managed nodes already have an instance profile or service role attached, but it doesn't contain all the permissions required for working with Systems Manager.

   Your selection here is applied to managed nodes created later in the accounts and Regions that this patch policy configuration applies to.
**Important**  
If you don't select this check box but want Quick Setup to patch your managed nodes using this patch policy, you must do the following:  
Add permissions to your [IAM instance profile](setup-instance-permissions.md) or [IAM service role](hybrid-multicloud-service-role.md) to access the S3 bucket created for your patch policy  
Tag your IAM instance profile or IAM service role with a specific key-value pair.  
For information, see [Case 1: Use your own instance profile or service role with your managed nodes instead of one provided by Quick Setup](#patch-policy-instance-profile-service-role).

1. Choose **Create**.

   To review patching status after the patch policy is created, you can access the configuration from the [https://console.aws.amazon.com/systems-manager/quick-setup](https://console.aws.amazon.com/systems-manager/quick-setup) page.

# Set up DevOps Guru using Quick Setup

You can quickly configure DevOps Guru options by using Quick Setup. Amazon DevOps Guru is a machine learning (ML) powered service that makes it easy to improve an application's operational performance and availability. DevOps Guru detects behaviors that are different from normal operating patterns so you can identify operational issues long before they impact your customers. DevOps Guru automatically ingests operational data from your AWS applications and provides a single dashboard to visualize issues in your operational data. You can get started with DevOps Guru to improve application availability and reliability with no manual setup or machine learning expertise.

Configuring DevOps Guru with Quick Setup is available in the following AWS Regions:
+ US East (N. Virginia)
+ US East (Ohio)
+ US West (Oregon)
+ Europe (Frankfurt)
+ Europe (Ireland)
+ Europe (Stockholm)
+ Asia Pacific (Singapore)
+ Asia Pacific (Sydney)
+ Asia Pacific (Tokyo)

For pricing information, see [Amazon DevOps Guru pricing](https://aws.amazon.com/devops-guru/pricing/).

To set up DevOps Guru, perform the following tasks in the AWS Systems Manager Quick Setup console.

**To set up DevOps Guru with Quick Setup**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. On the **DevOps Guru** card, choose **Create**.
**Tip**  
If you already have one or more configurations in your account, first choose the **Library** tab or the **Create** button in the **Configurations** section to view the cards.

1. In the **Configuration options** section, choose the AWS resource types you want to analyze and your notification preferences.

   If you don't select the **Analyze all AWS resources in all the accounts in my organization** option, you can choose AWS resources to analyze later in the DevOps Guru console. DevOps Guru analyzes different AWS resource types (such as Amazon Simple Storage Service (Amazon S3) buckets and Amazon Elastic Compute Cloud (Amazon EC2) instances), which are categorized into two pricing groups. You pay for the AWS resource hours analyzed, for each active resource. A resource is only active if it produces metrics, events, or log entries within an hour. The rate you're charged for a specific AWS resource type depends on the price group.

   If you select the **Enable SNS notifications** option, an Amazon Simple Notification Service (Amazon SNS) topic is created in each AWS account in the organizational units (OUs) you target with your configuration. DevOps Guru uses the topic to notify you about important DevOps Guru events, such as the creation of a new insight. If you don't enable this option, you can add a topic later in the DevOps Guru console.

   If you select the **Enable AWS Systems Manager OpsItems** option, operational work items (OpsItems) will be created for related Amazon EventBridge events and Amazon CloudWatch alarms.

1. In the **Schedule** section, choose how frequently you want Quick Setup to remediate changes made to resources that differ from your configuration. The **Default** option runs once. If you don't want Quick Setup to remediate changes made to resources that differ from your configuration, choose **Disabled** under **Custom**.

1. In the **Targets** section, choose whether to allow DevOps Guru to analyze resources in some of your organizational units (OUs), or the account you're currently logged in to.

   If you choose **Custom**, continue to step 7.

   If you choose **Current account**, continue to step 8.

1. In the **Target OUs** and **Target Regions** sections, select the check boxes of the OUs and Regions where you want to use DevOps Guru.

1. Choose the Regions where you want to use DevOps Guru in the current account.

1. Choose **Create**.

# Deploy Distributor packages using Quick Setup

Distributor is a tool in AWS Systems Manager. A Distributor package is a collection of installable software or assets that can be deployed as a single entity. With Quick Setup, you can deploy a Distributor package in an AWS account and an AWS Region or across an organization in AWS Organizations. Currently, only the EC2Launch v2 agent, the Amazon Elastic File System (Amazon EFS) utilities package, and the Amazon CloudWatch agent can be deployed with Quick Setup. For more information about Distributor, see [AWS Systems Manager Distributor](distributor.md).

To deploy Distributor packages, perform the following tasks in the AWS Systems Manager Quick Setup console.

**To deploy Distributor packages with Quick Setup**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. On the **Distributor** card, choose **Create**.
**Tip**  
If you already have one or more configurations in your account, first choose the **Library** tab or the **Create** button in the **Configurations** section to view the cards.

1. In the **Configuration options** section, choose the package you want to deploy.

1. In the **Targets** section, choose whether to deploy the package to your entire organization, some of your organizational units (OUs), or the account you're currently logged in to.

   If you choose **Entire organization**, continue to step 7.

   If you choose **Custom**, continue to step 6.

1. In the **Target OUs** section, select the check boxes of the OUs and Regions you want to deploy the package to.

1. Choose **Create**.

# Stop and start EC2 instances automatically on a schedule using Quick Setup

With Quick Setup, a tool in AWS Systems Manager, you can configure Resource Scheduler to automate the starting and stopping of Amazon Elastic Compute Cloud (Amazon EC2) instances.

This Quick Setup configuration helps you reduce operational costs by starting and stopping instances according to the schedule that you specify. This tool helps you avoid incurring unnecessary costs for running instances when they’re not needed. 

For example, you currently might leave your instances running constantly, even though they're only used 10 hours a day, 5 days a week. Instead, you can schedule your instances to stop every day after business hours. As a result, you would save roughly 70 percent for those instances because the running time is reduced from 168 hours to 50 hours per week. There is no cost to use Quick Setup itself; however, the resources that you set up with it can incur costs based on your usage of them.
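The arithmetic behind that example can be checked directly:

```python
# Weekly running hours: always-on vs. a business-hours schedule
always_on = 24 * 7       # 168 hours per week
scheduled = 10 * 5       # 50 hours per week (10 hours/day, 5 days/week)
savings = 1 - scheduled / always_on
print(f"{savings:.0%}")  # 70%
```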

Using Resource Scheduler, you can choose to automatically stop and start instances across multiple AWS Regions and AWS accounts according to a schedule you define. The Quick Setup configuration targets Amazon EC2 instances using the tag key and value that you specify. Only the instances with a tag matching the value that you specify in your configuration are stopped or started by Resource Scheduler. Note that if the Amazon EBS volumes attached to the instance are encrypted, you must add the required permissions for the AWS KMS key to the IAM role for Resource Scheduler to start the instance.

**Maximum instances per configuration**  
An individual configuration supports scheduling up to 5,000 instances per Region. If your use case requires scheduling more than 5,000 instances in a given Region, you must create multiple configurations. Tag your instances so that each configuration manages no more than 5,000 instances. When creating multiple Resource Scheduler Quick Setup configurations, you must specify different tag key-value pairs. For example, one configuration can use the tag key `Environment` with the value `Production`, while another uses `Environment` with the value `Development`.
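To illustrate how tag-based targeting keeps configurations separate, here is a plain-Python sketch with made-up instance data; it is not the Resource Scheduler implementation:

```python
def instances_for_configuration(instances, tag_key, tag_value):
    """Return the instances whose tags match a configuration's
    key-value pair. Each configuration acts only on its own matches."""
    return [
        i for i in instances
        if i.get("Tags", {}).get(tag_key) == tag_value
    ]

fleet = [
    {"InstanceId": "i-0aaa", "Tags": {"Environment": "Production"}},
    {"InstanceId": "i-0bbb", "Tags": {"Environment": "Development"}},
    {"InstanceId": "i-0ccc", "Tags": {}},  # untagged: never scheduled
]

# The 'Production' configuration targets only the first instance
prod = instances_for_configuration(fleet, "Environment", "Production")
print([i["InstanceId"] for i in prod])  # ['i-0aaa']
```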

**Scheduling behaviors**  
The following points describe certain behaviors of schedule configurations:
+ Resource Scheduler starts tagged instances only if they are in the `stopped` state. Similarly, it stops instances only if they are in the `running` state. Resource Scheduler operates on an event-driven model and only starts or stops instances at the times that you specify. For example, say that you create a schedule that starts instances at 9 AM. Resource Scheduler starts all instances associated with the tag you specify that are in the `stopped` state at 9 AM. If the instances are manually stopped at a later time, Resource Scheduler doesn't start them again to maintain a `running` state. Similarly, if an instance is started manually after it was stopped according to your schedule, Resource Scheduler doesn't stop the instance again.
+ If you create a schedule with a start time that is later in a 24-hour day than the stop time, Resource Scheduler assumes your instances are to run overnight. For example, you create a schedule that starts instances at 9 PM and stops instances at 7 AM. Resource Scheduler starts all instances associated with the tag you specify that are in the `stopped` state at 9 PM, and stops them at 7 AM the following day. For overnight schedules, the start time applies to the days you select for your schedule. However, the stop time applies to the following day.
+ When you create a schedule configuration, the current state of your instances might be changed to match the requirements of the schedule.

  For example, say that today is a Wednesday, and you specify a schedule for your managed instances to start at 9 AM and stop at 5 PM on Tuesdays and Thursdays *only*. Because the current time is outside of the prescribed hours for the instances to be running, they are stopped after the configuration is created. The instances won't run again until the next prescribed start time, 9 AM on Thursday.

  If your instances are currently in a `stopped` state, and you specify a schedule in which they would be running at the current time, Resource Scheduler starts them after the configuration is created.

If you delete your configuration, instances are no longer stopped and started according to the previously defined schedule. In rare cases, instances might not successfully stop or start due to API operation failures.
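The overnight-window behavior described above can be sketched as a small predicate. This is an illustrative model under the stated assumptions (a start time later in the day than the stop time spans midnight), not Resource Scheduler's actual implementation:

```python
from datetime import time

def should_be_running(now, start, stop):
    """Return True if an instance should be running at 'now' for a
    schedule that starts at 'start' and stops at 'stop'. A window
    whose start is later than its stop is treated as overnight."""
    if start <= stop:                  # same-day window
        return start <= now < stop
    return now >= start or now < stop  # overnight window

# Overnight schedule: start 9 PM, stop 7 AM the following day
print(should_be_running(time(23, 0), time(21, 0), time(7, 0)))  # True
print(should_be_running(time(8, 0), time(21, 0), time(7, 0)))   # False
```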

To set up scheduling for Amazon EC2 instances, perform the following tasks in the AWS Systems Manager Quick Setup console.

**To set up instance scheduling with Quick Setup**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. On the **Resource Scheduler** card, choose **Create**.
**Tip**  
If you already have one or more configurations in your account, first choose the **Library** tab or the **Create** button in the **Configurations** section to view the cards.

1. In the **Instance tag** section, specify the tag key and value applied to the instances you want to associate with your schedule.

1. In the **Schedule options** section, specify the time zone, days, and times you want to start and stop your instances.

1. In the **Targets** section, choose whether to set scheduling for a **Custom** group of organizational units (OUs), or the **Current account** you're signed in to:
   + **Custom** – In the **Target OUs** section, select the OUs where you want to set up scheduling. Next, in the **Target Regions** section, select the Regions where you want to set up scheduling.
   + **Current account** – Select **Current Region** or **Choose Regions**. If you selected **Choose Regions**, choose the **Target Regions** where you want to set up scheduling.

1. Verify the schedule information in the **Summary** section.

1. Choose **Create**.

# Configure AWS Resource Explorer using Quick Setup

With Quick Setup, a tool in AWS Systems Manager, you can quickly configure AWS Resource Explorer to search and discover resources in your AWS account or across an entire AWS organization. You can search for your resources using metadata like names, tags, and IDs. AWS Resource Explorer provides fast responses to your search queries by using *indexes*. Resource Explorer creates and maintains indexes using a variety of data sources to gather information about resources in your AWS account. 

Quick Setup for Resource Explorer automates the index configuration process. For more information about AWS Resource Explorer, see [What is AWS Resource Explorer?](https://docs.aws.amazon.com/resource-explorer/latest/userguide/welcome.html) in the *AWS Resource Explorer User Guide*.

During Quick Setup, Resource Explorer does the following: 
+ Creates an index in every AWS Region in your AWS account.
+ Updates the index in the Region you specify to be the aggregator index for the account.
+ Creates a default view in the aggregator index Region. This view has no filters so it returns all resources found in the index.

**Minimum permissions**

To perform the steps in the following procedure, you must have the following permissions:
+ **Action**: `resource-explorer-2:*` – **Resource**: no specific resource (`*`)
+ **Action**: `iam:CreateServiceLinkedRole` – **Resource**: no specific resource (`*`)
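Expressed as an IAM policy document, the minimum permissions listed above look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "resource-explorer-2:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:CreateServiceLinkedRole",
      "Resource": "*"
    }
  ]
}
```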

**To configure Resource Explorer**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. On the **Resource Explorer** card, choose **Create**.

1. In the **Aggregator Index Region** section, choose the Region that you want to contain the **aggregator index**. Select the Region closest to the geographic location of your users.

1. (Optional) Select the **Replace existing aggregator indexes in Regions other than the one selected above** check box. 

1. In the **Targets** section, choose the target **organization** or specific **Organizational Units (OUs)** containing the resources you want to discover. 

1. In the **Regions** section, choose which **Regions** to include in the configuration. 

1. Review the configuration summary, and then choose **Create**. 

On the **Resource Explorer** page, you can monitor the configuration status.

# Troubleshooting Quick Setup results


Use the following information to help you troubleshoot problems with Quick Setup, a tool in AWS Systems Manager. This topic includes specific tasks to resolve issues based on the type of Quick Setup issue.

**Issue: Failed deployment**  
A deployment fails if the CloudFormation stack set failed during creation. Use the following steps to investigate a deployment failure.

1. Navigate to the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation). 

1. Choose the stack created by your Quick Setup configuration. The **Stack name** includes `QuickSetup` followed by the type of configuration you chose, such as `SSMHostMgmt`. 
**Note**  
CloudFormation sometimes deletes failed stack deployments. If the stack isn't available in the **Stacks** table, choose **Deleted** from the filter list.

1. View the **Status** and **Status reason**. For more information about stack statuses, see [Stack status codes](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-view-stack-data-resources.html#cfn-console-view-stack-data-resources-status-codes) in the *AWS CloudFormation User Guide*. 

1. To understand the exact step that failed, view the **Events** tab and review each event's **Status**. 

1. Review [Troubleshooting](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html) in the *AWS CloudFormation User Guide*.

1. If you are unable to resolve the deployment failure using the CloudFormation troubleshooting steps, delete the configuration and reconfigure it.

**Issue: Failed association**  
The **Configuration details** table on the **Configuration details** page of your configuration shows a **Configuration status** of **Failed** if any of the associations failed during setup. Use the following steps to troubleshoot a failed association.

1. In the **Configuration details** table, choose the failed configuration and then choose **View Details**.

1. Copy the **Association name**.

1. Navigate to **State Manager** and paste the association name into the search field. 

1. Choose the association and choose the **Execution history** tab.

1. Under **Execution ID**, choose the association execution that failed.

1. The **Association execution targets** page lists all of the nodes where the association ran. Choose the **Output** button for an execution that failed to run.

1. In the **Output** page, choose **Step - Output** to view the error message for that step in the command execution. Each step can display a different error message. Review the error messages for all steps to help troubleshoot the issue.
If viewing the step output doesn't solve the problem, you can try recreating the association. To recreate the association, first delete the failing association in State Manager. Then edit the configuration, select the option corresponding to the association that you deleted, and choose **Update**.  
To investigate **Failed** associations for an **Organization** configuration, you must sign in to the account with the failed association and use the following failed association procedure, previously described. The **Association ID** isn't a hyperlink to the target account when viewing results from the management account.

**Issue: Drift status**  
When viewing a configuration's details page, you can view the drift status of each deployment. Configuration drift occurs whenever a user makes a change to a service or feature that conflicts with the selections made through Quick Setup. If an association has changed after the initial configuration, the table displays a warning icon that indicates the number of items that have drifted. You can determine what caused the drift by hovering over the icon.

When an association is deleted in State Manager, the related deployments display a drift warning. To fix this, edit the configuration, choose the option that was removed when the association was deleted, and then choose **Update** and wait for the deployment to complete.

# AWS Systems Manager application tools

Application tools are a suite of capabilities that help you manage your applications running in AWS.

**Topics**
+ [AWS AppConfig](appconfig.md)
+ [AWS Systems Manager Application Manager](application-manager.md)
+ [AWS Systems Manager Parameter Store](systems-manager-parameter-store.md)

# AWS AppConfig


Information about AWS AppConfig has been moved into separate guides. For more information, see the following:
+ [AWS AppConfig User Guide](https://docs.aws.amazon.com/appconfig/latest/userguide/what-is-appconfig.html)
+ [AWS AppConfig API Reference](https://docs.aws.amazon.com/appconfig/2019-10-09/APIReference/Welcome.html)

# AWS Systems Manager Application Manager

Application Manager, a tool in AWS Systems Manager, helps DevOps engineers investigate and remediate issues with their AWS resources in the context of their applications and clusters. Application Manager aggregates operations information from multiple AWS services and Systems Manager tools to a single AWS Management Console.

In Application Manager, an *application* is a logical group of AWS resources that you want to operate as a unit. This logical group can represent different versions of an application, ownership boundaries for operators, or developer environments, to name a few. Application Manager support for container clusters includes both Amazon Elastic Kubernetes Service (Amazon EKS) and Amazon Elastic Container Service (Amazon ECS) clusters.

The first time you open Application Manager, the **What Application Manager can do for you** page displays. When you choose **Get started**, Application Manager automatically imports metadata about your resources that were created in other AWS services or Systems Manager tools. Application Manager then displays those resources in a list grouped by predefined categories.

For **Applications**, the list includes the following:
+ AWS CloudFormation stacks
+ Custom applications
+ AWS Launch Wizard applications
+ AppRegistry applications
+ AWS SAP Enterprise Workload applications
+ Amazon ECS clusters
+ Amazon EKS clusters

After you [set up](https://docs.aws.amazon.com/systems-manager/latest/userguide/application-manager-getting-started-related-services.html) and configure AWS services and Systems Manager tools, Application Manager displays the following types of information about your resources:
+ Information about the current state, status, and Amazon EC2 Auto Scaling health of the Amazon Elastic Compute Cloud (Amazon EC2) instances in your application
+ Alarms provided by Amazon CloudWatch
+ Compliance information provided by AWS Config and State Manager (a component of Systems Manager)
+ Kubernetes cluster information provided by Amazon EKS
+ Log data provided by AWS CloudTrail and Amazon CloudWatch Logs
+ OpsItems provided by Systems Manager OpsCenter
+ Resource details provided by the AWS services that host them
+ Container cluster information provided by Amazon ECS

To help you remediate issues with components or resources, Application Manager also provides runbooks that you can associate with your applications. To get started with Application Manager, open the [Systems Manager console](https://console.aws.amazon.com/systems-manager/appmanager). In the navigation pane, choose **Application Manager**.

## What are the benefits of using Application Manager?


Application Manager reduces the time it takes for DevOps engineers to detect and investigate issues with AWS resources. To do this, Application Manager displays many types of operations information in the context of an application in one console. Application Manager also reduces the time it takes to remediate issues by providing runbooks that perform common remediation tasks on AWS resources.

## What are the features of Application Manager?


Application Manager includes the following features:
+ **Import your AWS resources automatically**

  During initial setup, you can choose to have Application Manager automatically import and display resources in your AWS account that are based on CloudFormation stacks, AWS Resource Groups, Launch Wizard deployments, AppRegistry applications, and Amazon ECS and Amazon EKS clusters. The system displays these resources in predefined application or cluster categories. Thereafter, whenever new resources of these types are added to your AWS account, Application Manager automatically displays the new resources in the predefined application and cluster categories. 
+ **Create or edit CloudFormation stacks and templates**

  Application Manager helps you provision and manage resources for your applications by integrating with [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html). You can create, edit, and delete CloudFormation templates and stacks in Application Manager. Application Manager also includes a template library where you can clone, create, and store templates. Application Manager and CloudFormation display the same information about the current status of a stack. Templates and template updates are stored in Systems Manager until you provision the stack, at which time the changes are also displayed in CloudFormation.
+ **View information about your instances in the context of an application**

  Application Manager integrates with Amazon Elastic Compute Cloud (Amazon EC2) to display information about your instances in the context of an application. Application Manager displays instance state, status, and Amazon EC2 Auto Scaling health for a selected application in a graphical format. The **Instances** tab also includes a table with the following information for each instance in your application.
  + Instance state (Pending, Stopping, Running, Stopped)
  + Ping status for SSM Agent
  + Status and name of the last Systems Manager Automation runbook processed on the instance
  + A count of Amazon CloudWatch alarms per state:
    + `ALARM` – The metric or expression is outside of the defined threshold.
    + `OK` – The metric or expression is within the defined threshold.
    + `INSUFFICIENT_DATA` – The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state.
  + Auto Scaling group health for the parent group and individual Auto Scaling groups
+ **View operational metrics and alarms for an application or cluster**

  Application Manager integrates with [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) to provide real-time operational metrics and alarms for an application or cluster. You can drill down into your application tree to view alarms at each component level, or view alarms for an individual cluster.
+ **View log data for an application**

  Application Manager integrates with [Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) to provide log data in the context of your application without having to leave Systems Manager.
+ **View and manage OpsItems for an application or cluster** 

  Application Manager integrates with [AWS Systems Manager OpsCenter](OpsCenter.md) to provide a list of operational work items (OpsItems) for your applications and clusters. The list reflects automatically generated and manually created OpsItems. You can view details about the resource that created an OpsItem and the OpsItem status, source, and severity. 
+ **View resource compliance data for an application or cluster** 

  Application Manager integrates with [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html) to provide compliance and history details about your AWS resources according to rules you specify. Application Manager also integrates with [AWS Systems Manager State Manager](systems-manager-state.md) to provide compliance information about the state you want to maintain for your Amazon Elastic Compute Cloud (Amazon EC2) instances. 
+ **View Amazon ECS and Amazon EKS cluster infrastructure information**

  Application Manager integrates with [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/) and [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) to provide information about the health of your cluster infrastructures and a component runtime view of the compute, networking, and storage resources in a cluster.

  However, you can't manage or view operations information about your Amazon EKS pods or containers in Application Manager. You can only manage and view operations information about the infrastructure that is hosting your Amazon EKS resources.
+ **View resource cost details for an application**

  Application Manager is integrated with AWS Cost Explorer, a feature of AWS Billing and Cost Management, through the **Cost** widget. After you enable Cost Explorer in the Billing and Cost Management console, the **Cost** widget in Application Manager shows cost data for a specific non-container application or application component. You can use filters in the widget to view cost data according to different time periods, granularities, and cost types in either a bar or line chart. 
+ **View detailed resource information in one console**

  Choose a resource name listed in Application Manager and view contextual information and operations information about that resource without having to leave Systems Manager.
+ **Receive automatic resource updates for applications** 

  If you make changes to a resource in a service console, and that resource is part of an application in Application Manager, then Systems Manager automatically displays those changes. For example, if you update a stack in the CloudFormation console, and if that stack is part of an Application Manager application, then the stack updates are automatically reflected in Application Manager. 
+ **Discover Launch Wizard applications automatically**

  Application Manager is integrated with [AWS Launch Wizard](https://docs.aws.amazon.com/launchwizard/?id=docs_gateway). If you used Launch Wizard to deploy resources for an application, Application Manager can automatically import and display them in a Launch Wizard section.
+ **Monitor application resources in Application Manager by using CloudWatch Application Insights**

  Application Manager integrates with Amazon CloudWatch Application Insights. Application Insights identifies and sets up key metrics, logs, and alarms across your application resources and technology stack. Application Insights continuously monitors metrics and logs to detect and correlate anomalies and errors. When the system detects errors or anomalies, Application Insights generates CloudWatch Events that you can use to set up notifications or take actions. You can enable and view Application Insights on the **Overview** and **Monitoring** tabs in Application Manager. For more information about Application Insights, see [What is Amazon CloudWatch Application Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/appinsights-what-is.html) in the *Amazon CloudWatch User Guide*.
+ **Remediate issues with runbooks** 

  Application Manager includes predefined Systems Manager runbooks for remediating common issues with AWS resources. You can execute a runbook against all of the applicable resources in an application without having to leave Application Manager.
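As a rough illustration of the remediation workflow, the following sketch builds the request for the `ssm:StartAutomationExecution` API operation that Application Manager calls when you run a runbook. The runbook name and instance ID are hypothetical examples, and the parameter shape assumes a runbook that takes an `InstanceId` list.

```python
# Sketch: start a remediation runbook against an application resource.
# The runbook name and instance ID below are hypothetical examples.
def build_automation_request(runbook, instance_ids):
    """Build the parameters for ssm.start_automation_execution."""
    return {
        "DocumentName": runbook,
        "Parameters": {"InstanceId": list(instance_ids)},
    }

request = build_automation_request("AWS-RestartEC2Instance", ["i-0abcd1234example"])

# With boto3 installed and AWS credentials configured, the call would be:
# import boto3
# boto3.client("ssm").start_automation_execution(**request)
```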

## Is there a charge to use Application Manager?


Application Manager is available at no additional charge.

## What are the resource quotas for Application Manager?


You can view quotas for all Systems Manager tools in [Systems Manager service quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#limits_ssm) in the *Amazon Web Services General Reference*. Unless otherwise noted, each quota is Region specific.

**Topics**
+ [What are the benefits of using Application Manager?](#application-manager-learn-more-benefits)
+ [What are the features of Application Manager?](#application-manager-learn-more-features)
+ [Is there a charge to use Application Manager?](#application-manager-learn-more-cost)
+ [What are the resource quotas for Application Manager?](#application-manager-learn-more-quotas)
+ [Setting up related services](application-manager-getting-started-related-services.md)
+ [Configuring permissions for Systems Manager Application Manager](application-manager-getting-started-permissions.md)
+ [Adding applications and container clusters to Application Manager](application-manager-getting-started-adding-applications.md)
+ [Working with applications](application-manager-working-applications.md)

# Setting up related services


Application Manager, a tool in AWS Systems Manager, displays resources and information from other AWS services and Systems Manager tools. To maximize the amount of operations information displayed in Application Manager, we recommend that you set up and configure these other services or tools *before* you use Application Manager.

**Topics**
+ [Set up tasks for importing resources](#application-manager-getting-started-related-services-resources)
+ [Set up tasks for viewing operations information about resources](#application-manager-getting-started-related-services-information)

## Set up tasks for importing resources


The following setup tasks help you view AWS resources in Application Manager. After each of these tasks is completed, Systems Manager can automatically import resources into Application Manager. After your resources are imported, you can create applications in Application Manager and move your imported resources into them. This helps you view operations information in the context of an application.

**(Optional) Organize your AWS resources by using tags**  
You can assign metadata to your AWS resources in the form of tags. Each tag is a label consisting of a user-defined key and value. Tags can help you manage, identify, organize, search for, and filter resources. You can create tags to categorize resources by purpose, owner, environment, or other criteria.
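For example, you might tag the resources that make up one application with a shared key so they can be grouped and filtered together. The sketch below builds the request for the Resource Groups Tagging API's `tag_resources` operation; the ARN and tag values are hypothetical.

```python
# Sketch: build the request for resourcegroupstaggingapi.tag_resources.
# The ARN, tag keys, and tag values below are hypothetical examples.
def build_tag_request(arns, tags):
    """Build parameters for the tag_resources operation."""
    return {"ResourceARNList": list(arns), "Tags": dict(tags)}

tag_request = build_tag_request(
    ["arn:aws:ec2:us-east-1:111122223333:instance/i-0abcd1234example"],
    {"Application": "WebApp", "Environment": "Production"},
)

# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("resourcegroupstaggingapi").tag_resources(**tag_request)
```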

**(Optional) Organize your AWS resources by using [AWS Resource Groups](https://docs.aws.amazon.com/ARG/latest/userguide/welcome.html)**  
You can use resource groups to organize your AWS resources. Resource groups make it easier to manage, monitor, and automate tasks on many resources at one time.  
Application Manager automatically imports all of your resource groups and lists them in the **Custom applications** category.
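A tag-based resource group like the one sketched below would be imported into the **Custom applications** category. The group name and tag filter are hypothetical; the `TAG_FILTERS_1_0` query format follows the Resource Groups `CreateGroup` API.

```python
import json

# Sketch: a tag-based resource group for the resource-groups create_group
# operation. The group name and tag filter are hypothetical examples.
def build_group_request(name, tag_key, tag_values):
    """Build parameters for the create_group operation."""
    query = {
        "ResourceTypeFilters": ["AWS::AllSupported"],
        "TagFilters": [{"Key": tag_key, "Values": list(tag_values)}],
    }
    return {
        "Name": name,
        "ResourceQuery": {"Type": "TAG_FILTERS_1_0", "Query": json.dumps(query)},
    }

group_request = build_group_request("WebApp-Production", "Application", ["WebApp"])

# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("resource-groups").create_group(**group_request)
```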

**(Optional) Set up and deploy your AWS resources by using [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)**  
CloudFormation allows you to create and provision AWS infrastructure deployments predictably and repeatedly. It helps you use AWS services such as Amazon EC2, Amazon Elastic Block Store (Amazon EBS), Amazon Simple Notification Service (Amazon SNS), Elastic Load Balancing, and AWS Auto Scaling. With CloudFormation, you can build reliable, scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure.   
Application Manager automatically imports all of your CloudFormation resources and lists them in the **CloudFormation stacks** category. You can create CloudFormation stacks and templates in Application Manager. Stack and template changes are automatically synchronized between Application Manager and CloudFormation. You can also create applications in Application Manager and move stacks into them. This helps you view operations information for resources in your stacks in the context of an application. For pricing information, see [CloudFormation Pricing](https://aws.amazon.com/cloudformation/pricing/).
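A minimal example of the kind of stack Application Manager imports: the sketch below defines a one-resource template inline and builds the `create_stack` request. The stack name and resource are hypothetical.

```python
import json

# Sketch: a minimal one-resource template. The resulting stack would appear in
# Application Manager's CloudFormation stacks category. Names are hypothetical.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppTopic": {"Type": "AWS::SNS::Topic"},
    },
}

stack_request = {
    "StackName": "example-app-stack",
    "TemplateBody": json.dumps(template),
}

# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("cloudformation").create_stack(**stack_request)
```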

**(Optional) Set up and deploy your applications by using AWS Launch Wizard**  
Launch Wizard guides you through the process of sizing, configuring, and deploying AWS resources for third-party applications without the need to manually identify and provision individual AWS resources.  
Application Manager automatically imports all of your Launch Wizard resources and lists them in the **Launch Wizard** category. For more information about AWS Launch Wizard, see [Getting started with AWS Launch Wizard for SQL Server](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-getting-started.html). Launch Wizard is available at no additional charge. You only pay for the AWS resources that you provision to run your solution.

**(Optional) Set up and deploy your containerized applications by using [Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/) and [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)**  
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage containers on a cluster. Your containers are defined in a task definition that you use to run individual tasks or tasks within a service.   
Amazon EKS is a managed service that helps you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.   
Application Manager automatically imports all of your Amazon ECS and Amazon EKS infrastructure resources and lists them on the **Container clusters** tab. However, you can't manage or view operations information about your Amazon EKS pods or containers in Application Manager. You can only manage and view operations information about the infrastructure that is hosting your Amazon EKS resources. For pricing information, see [Amazon ECS Pricing](https://aws.amazon.com/ecs/pricing/) and [Amazon EKS Pricing](https://aws.amazon.com/eks/pricing/).

## Set up tasks for viewing operations information about resources


The following setup tasks help you view operations information about your AWS resources in Application Manager.

**(Recommended) Verify [runbook permissions](https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-setup.html)**  
You can remediate issues with AWS resources from Application Manager by using Systems Manager Automation runbooks. To use this remediation tool, you must configure or verify permissions. For pricing information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).

**(Optional) Enable [Cost Explorer](https://docs.aws.amazon.com/cost-management/latest/userguide/ce-enable.html)**  
AWS Cost Explorer is a feature of AWS Cost Management that you can use to visualize your cost data for further analysis. When you enable Cost Explorer, you can view cost information, cost history, and cost optimization for your application's resources in the Application Manager console.

**(Optional) Set up and configure Amazon CloudWatch [logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_GettingStarted.html) and [alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/GettingStarted.html)**  
CloudWatch is a monitoring and management service that provides data and actionable insights for AWS, hybrid, and multicloud applications and infrastructure resources. With CloudWatch, you can collect and access all your performance and operational data in the form of logs and metrics from a single platform. To view CloudWatch logs and alarms for your resources in Application Manager, you must set up and configure CloudWatch. For pricing information, see [CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/).  
CloudWatch Logs support applies to applications only, not to clusters.

**(Optional) Set up and configure [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/getting-started.html)**  
AWS Config provides a detailed view of the resources associated with your AWS account, including how they're configured, how they're related to one another, and how the configurations and their relationships have changed over time. You can use AWS Config to evaluate the configuration settings of your AWS resources. You do this by creating AWS Config rules, which represent your ideal configuration settings. While AWS Config continually tracks the configuration changes that occur among your resources, it checks whether these changes violate any of the conditions in your rules. If a resource violates a rule, AWS Config flags the resource and the rule as *noncompliant*. Application Manager displays compliance information about AWS Config rules. To view this data in Application Manager, you must set up and configure AWS Config. For pricing information, see [AWS Config Pricing](https://aws.amazon.com/config/pricing/).
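For instance, tooling outside the console could surface the same noncompliance data by calling the Config API directly. The sketch below builds a request for `describe_compliance_by_config_rule`, filtered to noncompliant rules.

```python
# Sketch: list only the AWS Config rules that are currently noncompliant.
# "NON_COMPLIANT" is a documented ComplianceTypes value in the Config API.
def build_compliance_request(compliance_types=("NON_COMPLIANT",)):
    """Build parameters for config.describe_compliance_by_config_rule."""
    return {"ComplianceTypes": list(compliance_types)}

compliance_request = build_compliance_request()

# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("config").describe_compliance_by_config_rule(**compliance_request)
```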

**(Optional) Create State Manager [associations](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-state.html)**  
You can use Systems Manager State Manager to create a configuration that you assign to your managed nodes. The configuration, called an *association*, defines the state that you want to maintain on your nodes. To view association compliance data in Application Manager, you must configure one or more State Manager associations. State Manager is offered at no additional charge.
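As one example of such an association, the sketch below targets nodes by a tag and applies the AWS-managed `AWS-UpdateSSMAgent` document on a schedule. The tag key, tag value, and schedule are hypothetical.

```python
# Sketch: a State Manager association that keeps SSM Agent updated on nodes
# tagged for an application. The tag and schedule are hypothetical examples.
def build_association_request(tag_key, tag_values):
    """Build parameters for ssm.create_association."""
    return {
        "Name": "AWS-UpdateSSMAgent",  # AWS-managed document
        "Targets": [{"Key": f"tag:{tag_key}", "Values": list(tag_values)}],
        "ScheduleExpression": "rate(14 days)",
    }

association_request = build_association_request("Application", ["WebApp"])

# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("ssm").create_association(**association_request)
```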

**(Optional) Set up and configure [OpsCenter](https://docs.aws.amazon.com/systems-manager/latest/userguide/OpsCenter.html)**  
You can view operational work items (OpsItems) about your resources in Application Manager by using OpsCenter. You can configure Amazon CloudWatch and Amazon EventBridge to automatically send OpsItems to OpsCenter based on alarms and events. You can also enter OpsItems manually. For pricing information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).
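To illustrate the manual path, the sketch below builds a `create_ops_item` request. The title, description, source string, and operational-data key are hypothetical examples, not values Application Manager requires.

```python
# Sketch: manually create an OpsItem. The title, description, source, and
# operational-data key below are hypothetical examples.
def build_ops_item(title, description, app_name):
    """Build parameters for ssm.create_ops_item."""
    return {
        "Title": title,
        "Description": description,
        "Source": "ApplicationManager",
        "OperationalData": {
            "appName": {"Value": app_name, "Type": "SearchableString"},
        },
    }

ops_item = build_ops_item("High error rate", "5xx errors above threshold", "WebApp")

# With boto3 installed and AWS credentials configured:
# import boto3
# boto3.client("ssm").create_ops_item(**ops_item)
```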

# Configuring permissions for Systems Manager Application Manager


You can use all features of Application Manager, a tool in AWS Systems Manager, if your AWS Identity and Access Management (IAM) entity (such as a user, group, or role) has access to the API operations listed in this topic. The API operations are separated into two tables to help you understand the different functions they perform.

The following table lists the API operations that Systems Manager calls if you choose a resource in Application Manager to view its details. For example, if Application Manager lists an Amazon EC2 Auto Scaling group, and you choose that group to view its details, then Systems Manager calls the `autoscaling:DescribeAutoScalingGroups` API operation. If you don't have any Auto Scaling groups in your account, this API operation isn't called from Application Manager.


****  

| Resource details only | 
| --- | 
|  <pre>acm:DescribeCertificate <br />acm:ListTagsForCertificate<br />autoscaling:DescribeAutoScalingGroups <br />cloudfront:GetDistribution<br />cloudfront:ListTagsForResource <br />cloudtrail:DescribeTrails<br />cloudtrail:ListTags <br />cloudtrail:LookupEvents<br />codebuild:BatchGetProjects <br />codepipeline:GetPipeline<br />codepipeline:ListTagsForResource <br />dynamodb:DescribeTable<br />dynamodb:ListTagsOfResource <br />ec2:DescribeAddresses<br />ec2:DescribeCustomerGateways <br />ec2:DescribeHosts<br />ec2:DescribeInternetGateways <br />ec2:DescribeNetworkAcls<br />ec2:DescribeNetworkInterfaces <br />ec2:DescribeRouteTables<br />ec2:DescribeSecurityGroups <br />ec2:DescribeSubnets<br />ec2:DescribeVolumes <br />ec2:DescribeVpcs <br />ec2:DescribeVpnConnections<br />ec2:DescribeVpnGateways <br />elasticbeanstalk:DescribeApplications<br />elasticbeanstalk:ListTagsForResource<br />elasticloadbalancing:DescribeInstanceHealth<br />elasticloadbalancing:DescribeListeners<br />elasticloadbalancing:DescribeLoadBalancers<br />elasticloadbalancing:DescribeTags <br />iam:GetGroup <br />iam:GetPolicy<br />iam:GetRole <br />iam:GetUser <br />lambda:GetFunction<br />rds:DescribeDBClusters <br />rds:DescribeDBInstances<br />rds:DescribeDBSecurityGroups <br />rds:DescribeDBSnapshots<br />rds:DescribeDBSubnetGroups <br />rds:DescribeEventSubscriptions<br />rds:ListTagsForResource <br />redshift:DescribeClusterParameters<br />redshift:DescribeClusterSecurityGroups<br />redshift:DescribeClusterSnapshots<br />redshift:DescribeClusterSubnetGroups <br />redshift:DescribeClusters<br />s3:GetBucketTagging</pre>  | 

The following table lists the API operations that Systems Manager uses to make changes to applications and resources listed in Application Manager or to view operations information for a selected application or resource.


****  

| Application actions and details | 
| --- | 
|  <pre><br />applicationinsights:CreateApplication<br />applicationinsights:DescribeApplication<br />applicationinsights:ListProblems<br />ce:GetCostAndUsage<br />ce:GetTags<br />ce:ListCostAllocationTags<br />ce:UpdateCostAllocationTagsStatus<br />cloudformation:CreateStack<br />cloudformation:DeleteStack<br />cloudformation:DescribeStackDriftDetectionStatus<br />cloudformation:DescribeStackEvents<br />cloudformation:DescribeStacks<br />cloudformation:DetectStackDrift<br />cloudformation:GetTemplate<br />cloudformation:GetTemplateSummary<br />cloudformation:ListStacks<br />cloudformation:UpdateStack<br />cloudwatch:DescribeAlarms<br />cloudwatch:DescribeInsightRules<br />cloudwatch:DisableAlarmActions<br />cloudwatch:EnableAlarmActions<br />cloudwatch:GetMetricData<br />cloudwatch:ListTagsForResource<br />cloudwatch:PutMetricAlarm<br />config:DescribeComplianceByConfigRule<br />config:DescribeComplianceByResource<br />config:DescribeConfigRules<br />config:DescribeRemediationConfigurations<br />config:GetComplianceDetailsByConfigRule<br />config:GetComplianceDetailsByResource<br />config:GetResourceConfigHistory<br />config:ListDiscoveredResources<br />config:PutRemediationConfigurations<br />config:SelectResourceConfig<br />config:StartConfigRulesEvaluation<br />config:StartRemediationExecution<br />ec2:DescribeInstances<br />ecs:DescribeCapacityProviders<br />ecs:DescribeClusters<br />ecs:DescribeContainerInstances<br />ecs:ListClusters<br />ecs:ListContainerInstances<br />ecs:TagResource<br />eks:DescribeCluster<br />eks:DescribeFargateProfile<br />eks:DescribeNodegroup<br />eks:ListClusters<br />eks:ListFargateProfiles<br />eks:ListNodegroups<br />eks:TagResource<br />iam:CreateServiceLinkedRole<br />iam:ListRoles<br />logs:DescribeLogGroups<br />resource-groups:CreateGroup<br />resource-groups:DeleteGroup<br />resource-groups:GetGroup<br />resource-groups:GetGroupQuery<br />resource-groups:GetTags<br />resource-groups:ListGroupResources<br />resource-groups:ListGroups<br />resource-groups:Tag<br />resource-groups:Untag<br />resource-groups:UpdateGroup<br />s3:ListAllMyBuckets<br />s3:ListBucket<br />s3:ListBucketVersions<br />servicecatalog:GetApplication<br />servicecatalog:ListApplications<br />sns:CreateTopic<br />sns:ListSubscriptionsByTopic<br />sns:ListTopics<br />sns:Subscribe<br />ssm:AddTagsToResource<br />ssm:CreateDocument<br />ssm:CreateOpsMetadata<br />ssm:DeleteDocument<br />ssm:DeleteOpsMetadata<br />ssm:DescribeAssociation<br />ssm:DescribeAutomationExecutions<br />ssm:DescribeDocument<br />ssm:DescribeDocumentPermission<br />ssm:GetDocument<br />ssm:GetInventory<br />ssm:GetOpsMetadata<br />ssm:GetOpsSummary<br />ssm:GetServiceSetting<br />ssm:ListAssociations<br />ssm:ListComplianceItems<br />ssm:ListDocuments<br />ssm:ListDocumentVersions<br />ssm:ListOpsMetadata<br />ssm:ListResourceComplianceSummaries<br />ssm:ListTagsForResource<br />ssm:ModifyDocumentPermission<br />ssm:RemoveTagsFromResource<br />ssm:StartAssociationsOnce<br />ssm:StartAutomationExecution<br />ssm:UpdateDocument<br />ssm:UpdateDocumentDefaultVersion<br />ssm:UpdateOpsItem<br />ssm:UpdateOpsMetadata<br />ssm:UpdateServiceSetting<br />tag:GetTagKeys<br />tag:GetTagValues<br />tag:TagResources<br />tag:UntagResources</pre>  | 

## Example policy for all Application Manager permissions


To configure Application Manager permissions for an IAM entity (such as a user, group, or role), create an IAM policy using the following example. This policy example includes all API operations used by Application Manager. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "acm:DescribeCertificate",
                "acm:ListTagsForCertificate",
                "applicationinsights:CreateApplication",
                "applicationinsights:DescribeApplication",
                "applicationinsights:ListProblems",
                "autoscaling:DescribeAutoScalingGroups",
                "ce:GetCostAndUsage",
                "ce:GetTags",
                "ce:ListCostAllocationTags",
                "ce:UpdateCostAllocationTagsStatus",
                "cloudformation:CreateStack",
                "cloudformation:DeleteStack",
                "cloudformation:DescribeStackDriftDetectionStatus",
                "cloudformation:DescribeStackEvents",
                "cloudformation:DescribeStacks",
                "cloudformation:DetectStackDrift",
                "cloudformation:GetTemplate",
                "cloudformation:GetTemplateSummary",
                "cloudformation:ListStacks",
                "cloudformation:ListStackResources",
                "cloudformation:UpdateStack",
                "cloudfront:GetDistribution",
                "cloudfront:ListTagsForResource",
                "cloudtrail:DescribeTrails",
                "cloudtrail:ListTags",
                "cloudtrail:LookupEvents",
                "cloudwatch:DescribeAlarms",
                "cloudwatch:DescribeInsightRules",
                "cloudwatch:DisableAlarmActions",
                "cloudwatch:EnableAlarmActions",
                "cloudwatch:GetMetricData",
                "cloudwatch:ListTagsForResource",
                "cloudwatch:PutMetricAlarm",
                "codebuild:BatchGetProjects",
                "codepipeline:GetPipeline",
                "codepipeline:ListTagsForResource",
                "config:DescribeComplianceByConfigRule",
                "config:DescribeComplianceByResource",
                "config:DescribeConfigRules",
                "config:DescribeRemediationConfigurations",
                "config:GetComplianceDetailsByConfigRule",
                "config:GetComplianceDetailsByResource",
                "config:GetResourceConfigHistory",
                "config:ListDiscoveredResources",
                "config:PutRemediationConfigurations",
                "config:SelectResourceConfig",
                "config:StartConfigRulesEvaluation",
                "config:StartRemediationExecution",
                "dynamodb:DescribeTable",
                "dynamodb:ListTagsOfResource",
                "ec2:DescribeAddresses",
                "ec2:DescribeCustomerGateways",
                "ec2:DescribeHosts",
                "ec2:DescribeInstances",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeNetworkAcls",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeRouteTables",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVolumes",
                "ec2:DescribeVpcs",
                "ec2:DescribeVpnConnections",
                "ec2:DescribeVpnGateways",
                "ecs:DescribeCapacityProviders",
                "ecs:DescribeClusters",
                "ecs:DescribeContainerInstances",
                "ecs:ListClusters",
                "ecs:ListContainerInstances",
                "ecs:TagResource",
                "eks:DescribeCluster",
                "eks:DescribeFargateProfile",
                "eks:DescribeNodegroup",
                "eks:ListClusters",
                "eks:ListFargateProfiles",
                "eks:ListNodegroups",
                "eks:TagResource",
                "elasticbeanstalk:DescribeApplications",
                "elasticbeanstalk:ListTagsForResource",
                "elasticloadbalancing:DescribeInstanceHealth",
                "elasticloadbalancing:DescribeListeners",
                "elasticloadbalancing:DescribeLoadBalancers",
                "elasticloadbalancing:DescribeTags",
                "iam:CreateServiceLinkedRole",
                "iam:GetGroup",
                "iam:GetPolicy",
                "iam:GetRole",
                "iam:GetUser",
                "iam:ListRoles",
                "lambda:GetFunction",
                "logs:DescribeLogGroups",
                "rds:DescribeDBClusters",
                "rds:DescribeDBInstances",
                "rds:DescribeDBSecurityGroups",
                "rds:DescribeDBSnapshots",
                "rds:DescribeDBSubnetGroups",
                "rds:DescribeEventSubscriptions",
                "rds:ListTagsForResource",
                "redshift:DescribeClusterParameters",
                "redshift:DescribeClusters",
                "redshift:DescribeClusterSecurityGroups",
                "redshift:DescribeClusterSnapshots",
                "redshift:DescribeClusterSubnetGroups",
                "resource-groups:CreateGroup",
                "resource-groups:DeleteGroup",
                "resource-groups:GetGroup",
                "resource-groups:GetGroupQuery",
                "resource-groups:GetTags",
                "resource-groups:ListGroupResources",
                "resource-groups:ListGroups",
                "resource-groups:Tag",
                "resource-groups:Untag",
                "resource-groups:UpdateGroup",
                "s3:GetBucketTagging",
                "s3:ListAllMyBuckets",
                "s3:ListBucket",
                "s3:ListBucketVersions",
                "servicecatalog:GetApplication",
                "servicecatalog:ListApplications",
                "sns:CreateTopic",
                "sns:ListSubscriptionsByTopic",
                "sns:ListTopics",
                "sns:Subscribe",
                "ssm:AddTagsToResource",
                "ssm:CreateDocument",
                "ssm:CreateOpsMetadata",
                "ssm:DeleteDocument",
                "ssm:DeleteOpsMetadata",
                "ssm:DescribeAssociation",
                "ssm:DescribeAutomationExecutions",
                "ssm:DescribeDocument",
                "ssm:DescribeDocumentPermission",
                "ssm:GetDocument",
                "ssm:GetInventory",
                "ssm:GetOpsMetadata",
                "ssm:GetOpsSummary",
                "ssm:GetServiceSetting",
                "ssm:ListAssociations",
                "ssm:ListComplianceItems",
                "ssm:ListDocuments",
                "ssm:ListDocumentVersions",
                "ssm:ListOpsMetadata",
                "ssm:ListResourceComplianceSummaries",
                "ssm:ListTagsForResource",
                "ssm:ModifyDocumentPermission",
                "ssm:RemoveTagsFromResource",
                "ssm:StartAssociationsOnce",
                "ssm:StartAutomationExecution",
                "ssm:UpdateDocument",
                "ssm:UpdateDocumentDefaultVersion",
                "ssm:UpdateOpsMetadata",
                "ssm:UpdateOpsItem",
                "ssm:UpdateServiceSetting",
                "tag:GetResources",
                "tag:GetTagKeys",
                "tag:GetTagValues",
                "tag:TagResources",
                "tag:UntagResources"
            ],
            "Resource": "*"
        }
    ]
}
```

------

**Note**  
You can restrict a user's ability to make changes to applications and resources in Application Manager by removing the following API operations from the IAM permissions policy attached to their user, group, or role. Removing these actions creates a read-only experience in Application Manager. The following are all of the APIs that allow users to make changes to the application or any other related resources.   

```
applicationinsights:CreateApplication
ce:UpdateCostAllocationTagsStatus
cloudformation:CreateStack
cloudformation:DeleteStack
cloudformation:UpdateStack
cloudwatch:DisableAlarmActions
cloudwatch:EnableAlarmActions
cloudwatch:PutMetricAlarm
config:PutRemediationConfigurations
config:StartConfigRulesEvaluation
config:StartRemediationExecution
ecs:TagResource
eks:TagResource
iam:CreateServiceLinkedRole
resource-groups:CreateGroup
resource-groups:DeleteGroup
resource-groups:Tag
resource-groups:Untag
resource-groups:UpdateGroup
sns:CreateTopic
sns:Subscribe
ssm:AddTagsToResource
ssm:CreateDocument
ssm:CreateOpsMetadata
ssm:DeleteDocument
ssm:DeleteOpsMetadata
ssm:ModifyDocumentPermission
ssm:RemoveTagsFromResource
ssm:StartAssociationsOnce
ssm:StartAutomationExecution
ssm:UpdateDocument
ssm:UpdateDocumentDefaultVersion
ssm:UpdateOpsMetadata
ssm:UpdateOpsItem
ssm:UpdateServiceSetting
tag:TagResources
tag:UntagResources
```
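One way to maintain such a read-only variant without editing the policy by hand is to compute it: subtract the mutating actions from the full-access action list. The sketch below shows the idea with a truncated action list; the full lists come from the policy and note above.

```python
# Sketch: derive a read-only action list by removing the mutating API
# operations listed in the note above. Only a few actions are shown here;
# in practice you would include the complete lists from this topic.
MUTATING_ACTIONS = {
    "cloudformation:CreateStack",
    "cloudformation:DeleteStack",
    "cloudformation:UpdateStack",
    "ssm:StartAutomationExecution",
    # ... the remaining actions from the note above
}

def to_read_only(actions):
    """Return the actions with all mutating operations removed, sorted."""
    return sorted(set(actions) - MUTATING_ACTIONS)

full = ["cloudformation:DescribeStacks", "cloudformation:CreateStack", "ssm:GetOpsSummary"]
print(to_read_only(full))  # ['cloudformation:DescribeStacks', 'ssm:GetOpsSummary']
```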

For information about creating and editing IAM policies, see [Creating IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*. For information about how to assign this policy to an IAM entity (such as a user, group, or role), see [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html).

# Adding applications and container clusters to Application Manager


Application Manager is a component of AWS Systems Manager. In Application Manager, an *application* is a logical group of AWS resources that you want to operate as a unit. This logical group can represent different versions of an application, ownership boundaries for operators, or developer environments, to name a few.

The first time you open Application Manager, the **What Application Manager can do for you** page displays. When you choose **Get started**, Application Manager automatically imports metadata about your resources that were created in other AWS services or Systems Manager tools. Application Manager then displays those resources in a list grouped by predefined categories.

For **Applications**, the list includes the following:
+ AWS CloudFormation stacks
+ Custom applications
+ AWS Launch Wizard applications
+ AppRegistry applications
+ AWS SAP Enterprise Workload applications
+ Amazon ECS clusters
+ Amazon EKS clusters

After import is complete, you can view operations information for an application or a specific resource in these predefined categories. Or, if you want to provide more context about a collection of resources, you can manually create an application in Application Manager. You can then add resources or groups of resources to that application. After you create an application in Application Manager, you can view operations information about your resources in the context of an application. 

## Creating an application in Application Manager


Use the following procedure to create an application in Application Manager and add resources to that application. 

**To create an application in Application Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. Choose **Create application**.

1. Choose an option from the drop-down list and complete the fields in the page that appears.
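Under the hood, a custom application groups resources, for example by shared tags. The following sketch builds a tag-based resource-group query of the kind such a grouping could use; the tag key and values are illustrative, and the commented boto3 call is an assumption about how you might create the group outside the console, not a documented Application Manager contract.

```python
import json

def build_tag_query(tag_key, tag_values):
    """Build a tag-based resource-group query (TAG_FILTERS_1_0 format)."""
    return {
        "Type": "TAG_FILTERS_1_0",
        "Query": json.dumps({
            "ResourceTypeFilters": ["AWS::AllSupported"],
            "TagFilters": [{"Key": tag_key, "Values": tag_values}],
        }),
    }

query = build_tag_query("Environment", ["Production"])
# With AWS credentials configured, a group could then be created with boto3:
# boto3.client("resource-groups").create_group(Name="my-app", ResourceQuery=query)
```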

# Working with applications


Application Manager is a component of AWS Systems Manager. This section includes topics to help you work with Application Manager applications and view operations information about your AWS resources.

**Topics**
+ [Application overview in Application Manager](application-manager-working-viewing-overview.md)
+ [Managing your application EC2 instances](application-manager-working-instances.md)
+ [Viewing resources associated with your application](application-manager-working-viewing-resources.md)
+ [Managing your application compliance](application-manager-working-viewing-resource-compliance.md)
+ [Using CloudWatch Application Insights to monitor an application](application-manager-working-viewing-monitors.md)
+ [Viewing OpsItems for an application](application-manager-working-viewing-OpsItems.md)
+ [Managing your application logs](application-manager-viewing-logs.md)
+ [Use Automation runbooks to remediate application issues](application-manager-working-runbooks.md)
+ [Tag resources in Application Manager](application-manager-working-tags.md)
+ [Working with CloudFormation templates and stacks in Application Manager](application-manager-working-stacks.md)
+ [Working with clusters in Application Manager](application-manager-working-clusters.md)

# Application overview in Application Manager


In Application Manager, a component of AWS Systems Manager, the **Overview** tab displays a summary of Amazon CloudWatch alarms, operational work items (OpsItems), CloudWatch Application Insights, and runbook history. Choose **View all** for any card to open the corresponding tab where you can view all application insights, alarms, OpsItems, or runbook history.

**About Application Insights**  
CloudWatch Application Insights identifies and sets up key metrics, logs, and alarms across your application resources and technology stack. Application Insights continuously monitors metrics and logs to detect and correlate anomalies and errors. When the system detects errors or anomalies, Application Insights generates CloudWatch Events that you can use to set up notifications or take actions. If you choose the **Edit configuration** button on the **Monitoring** tab, the system opens the CloudWatch Application Insights console. For more information about Application Insights, see [What is Amazon CloudWatch Application Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/appinsights-what-is.html) in the *Amazon CloudWatch User Guide*.

**About Cost Explorer**  
Application Manager is integrated with AWS Cost Explorer, a feature of [AWS Cost Management](https://docs.aws.amazon.com/account-billing/index.html), through the **Cost** widget and **Cost** tab. After you enable Cost Explorer in the Cost Management console, the **Cost** widget and **Cost** tab in Application Manager show cost data for a specific non-container application or application component. You can use filters in the widget or tab to view cost data according to different time periods, levels of granularity, and cost types in either a bar or line chart. 

You can enable this feature by choosing the **Go to AWS Cost Management console** button. By default, the data is filtered to the past three months. For a non-container application, if you choose the **View all** button, Application Manager opens the **Resources** tab. For container applications, the **View all** button opens the AWS Cost Explorer console.

**Actions you can perform on this page**  
You can turn on and access information about the following widgets on the **Overview** tab. When a widget is turned on, choose its **View all** button to see relevant application details for that area.
+ In the **Insights and Alarms** section, choose the number for a severity to open the **Monitoring** tab, where you can view more details about alarms of the chosen severity.
+ In the **Cost** section, choose **View all** to open the **Resources** tab, where you can view cost data for a specific application or application component.
+ In the **Compliance** section, choose **View all** to open the **Compliance** tab, where you can view compliance information from AWS Config and State Manager associations.
**Note**  
To view patch compliance details, choose the **Compliance** tab directly. Then you can view patch compliance details for the managed nodes used by the selected application. 
+ In the **Runbooks** section, choose a runbook to open it in the Systems Manager **Documents** page where you can view more details about the document.
+ In the **OpsItems** section, choose a severity to open the **OpsItems** tab where you can view all OpsItems of the chosen severity.
+ Choose a **View all** button to open the corresponding tab. You can view all alarms, OpsItems, or runbook history entries for the application.

**To open the Overview tab**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

# Managing your application EC2 instances


Application Manager integrates with Amazon Elastic Compute Cloud (Amazon EC2) to display information about your instances in the context of an application. Application Manager displays instance state, status, and Amazon EC2 Auto Scaling health for a selected application in a graphical format. The **Instances** tab also includes a table with the following information for each instance in your application:
+ Instance state (Pending, Stopping, Running, Stopped)
+ Ping status for SSM Agent
+ Status and name of the most recent Systems Manager Automation runbook processed on the instance
+ A count of Amazon CloudWatch alarms per state:
  + `ALARM` – The metric or expression is outside of the defined threshold.
  + `OK` – The metric or expression is within the defined threshold.
  + `INSUFFICIENT_DATA` – The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state.
+ Auto Scaling group health for the parent and individual Auto Scaling groups
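The per-state alarm counts shown in the table can be reproduced from CloudWatch alarm records. The following is a minimal sketch; the sample records are illustrative and only mimic the shape of `describe_alarms` output, while the commented boto3 call shows where real data would come from.

```python
from collections import Counter

def count_alarms_by_state(alarms):
    """Tally alarm records by StateValue: ALARM, OK, or INSUFFICIENT_DATA."""
    return Counter(a["StateValue"] for a in alarms)

# Illustrative records shaped like cloudwatch describe_alarms() output.
# With credentials: alarms = boto3.client("cloudwatch").describe_alarms()["MetricAlarms"]
sample = [
    {"AlarmName": "cpu-high", "StateValue": "ALARM"},
    {"AlarmName": "disk-ok", "StateValue": "OK"},
    {"AlarmName": "new-metric", "StateValue": "INSUFFICIENT_DATA"},
    {"AlarmName": "mem-ok", "StateValue": "OK"},
]
counts = count_alarms_by_state(sample)
```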

If you choose an instance in the **All instances** table, Application Manager displays information about that instance on four tabs:
+ **Details** – All of the instance details from Amazon EC2, including the Amazon Machine Image (AMI), DNS information, IP address information, and more.
+ **Health** – The current status as provided by EC2 system and instance status checks.
+ **Execution history** – Execution logs for Systems Manager Automation runbooks and API calls processed by the instance.
+ **CloudWatch alarms** – The name, state, and more for any CloudWatch alarms raised by the instance.

**Actions you can perform on this page**  
You can perform the following actions on this page:
+ Start, stop, and terminate instances.
+ Apply a Chef recipe.
+ Attach instances to, or detach instances from, an Auto Scaling group.
+ Enable automated updates for SSM Agent.

**To open the Instances tab**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application that you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

1. Choose the **Instances** tab.

**To view details for your application instances**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application that you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

1. Choose the **Instances** tab.

1. Select the button next to the instance whose details you want to view.

1. Review the instance details at the bottom of the page.

**To automatically update SSM Agent**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application that you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

1. Choose the **Instances** tab.

1. In the **Agent actions** dropdown, choose **Configure SSM Agent update**.

1. Choose **All instances** to configure automatic SSM Agent updates for all managed instances. Alternatively, choose **Instance** to configure automatic SSM Agent updates for a single instance in your application.

1. Select the **Enable automatic update** toggle.

1. In the **Specify schedule** dropdown, choose the schedule you want to use for SSM Agent updates.

1. Select **Configure**.

# Viewing resources associated with your application


In Application Manager, a component of AWS Systems Manager, the **Resources** tab displays the AWS resources in your application. If you choose a top-level component, this page displays all resources for that component and any subcomponents. If you choose a subcomponent, this page shows only the resources assigned to that subcomponent. 

**Actions you can perform on this page**  
You can perform the following actions on this page:
+ Choose a resource name to view information about it, including details provided by the console where it was created, tags, Amazon CloudWatch alarms, AWS Config details, and AWS CloudTrail log information.
+ Choose the option button beside a resource name. Then, choose the **Resource timeline** button to open the AWS Config console where you can view compliance information about a selected resource. 
+ If you enabled AWS Cost Explorer, the **Cost Explorer** section shows cost data for a specific non-container application or application component. You can enable this feature by choosing the **Go to AWS Cost Management console** button. Use the filters in this section to view cost information about your application.

**To open the Resources tab**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

1. Choose the **Resources** tab.

# Managing your application compliance


In Application Manager, a component of AWS Systems Manager, the **Configurations** page displays [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/) resource and configuration rule compliance information. This page also displays AWS Systems Manager [State Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-state.html) association compliance information. You can choose a resource, a rule, or an association to open the corresponding console for more information. This page displays compliance information from the last 90 days.

**Actions you can perform on this page**  
You can perform the following actions on this page:
+ Choose a resource name to open the AWS Config console where you can view compliance information about a selected resource.
+ Choose the option button beside a resource name. Then, choose the **Resource timeline** button to open the AWS Config console where you can view compliance information about a selected resource.
+ In the **Config rules compliance** section, you can do the following:
  + Choose a rule name to open the AWS Config console where you can view information about that rule.
  + Choose **Add rules** to open the AWS Config console where you can create a rule.
  + Choose the option button beside a rule name, choose **Actions**, and then choose **Manage remediation** to change the remediation action for a rule.
  + Choose the option button beside a rule name, choose **Actions**, and then choose **Re-evaluate** to have AWS Config run a compliance check on the selected rule.
+ In the **Association compliance** section, you can do the following:
  + Choose an association name to open the **Associations** page where you can view information about that association.
  + Choose **Create association** to open Systems Manager State Manager where you can create an association.
  + Choose the option button beside an association name and choose **Apply association** to immediately start all actions specified in the association.

**To open the Compliance tab**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

1. Choose the **Compliance** tab.

# Using CloudWatch Application Insights to monitor an application


In Application Manager, a component of AWS Systems Manager, the **Monitoring** tab displays Amazon CloudWatch Application Insights and alarm details for resources in an application.

**About Application Insights**  
CloudWatch Application Insights identifies and sets up key metrics, logs, and alarms across your application resources and technology stack. Application Insights continuously monitors metrics and logs to detect and correlate anomalies and errors. When the system detects errors or anomalies, Application Insights generates CloudWatch Events that you can use to set up notifications or take actions. If you choose the **Edit configuration** button on the **Monitoring** tab, the system opens the CloudWatch Application Insights console. For more information about Application Insights, see [What is Amazon CloudWatch Application Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/appinsights-what-is.html) in the *Amazon CloudWatch User Guide*.

**Actions you can perform on this page**  
You can perform the following actions on this page:
+ Choose a service name in the **Alarms by AWS service** section to open CloudWatch to the selected service and alarm.
+ Adjust the time period for data displayed in widgets in the **Recent alarms** section by selecting one of the predefined time period values. You can choose **custom** to define your own time period.  
![\[The time period controls on the Application Manager Monitoring tab.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/application-manager-Monitoring-1.png)
+ Hover your cursor over a widget in the **Recent alarms** section to view a data pop-up for a specific time.  
![\[An alarm widget in the Recent alarms section of the Application Manager Monitoring tab.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/application-manager-Monitoring-2.png)
+ Choose the options menu in a widget to view display options. Choose **Enlarge** to expand a widget. Choose **Refresh** to update the data in a widget. Click and drag your cursor in a widget data display to select a specific range. You can then choose **Apply time range**.  
![\[Alarm widget options in Application Manager.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/application-manager-Monitoring-3.png)
+ Choose the **Actions** menu to view alarm data **Override** options, which include the following:
  + Choose whether your widget displays live data. Live data is data published within the last minute that hasn't been fully aggregated. If live data is turned off, only data points with an aggregation period of at least one minute in the past are shown. For example, when using 5-minute periods, the data point for 12:35 would be aggregated from 12:35 to 12:40, and displayed at 12:41.

    If live data is turned on, the most recent data point is shown as soon as any data is published in the corresponding aggregation interval. Each time you refresh the display, the most recent data point might change as new data within that aggregation period is published.
  + Specify a time period for live data.
  + Link the charts in the **Recent alarms** section, so that when you zoom in or zoom out on one chart, the other chart zooms in or zooms out at the same time. You can unlink charts to limit zoom to one chart. 
  + Hide Auto Scaling alarms.  
![\[The Action menu of the Application Manager Monitoring tab.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/application-manager-Monitoring-4.png)

**To open the Monitoring tab**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

1. Choose the **Monitoring** tab.

# Viewing OpsItems for an application


In Application Manager, a component of AWS Systems Manager, the **OpsItems** tab displays operational work items (OpsItems) for resources in the selected application. You can configure Systems Manager OpsCenter to automatically create OpsItems from Amazon CloudWatch alarms and Amazon EventBridge events. You can also manually create OpsItems.

**Actions you can perform on this tab**  
You can perform the following actions on this page:
+ Filter the list of OpsItems by using the search field. You can filter by OpsItem name, ID, source ID, or severity. You can also filter the list based on status. OpsItems support the following statuses: Open, In progress, Open and In progress, Resolved, or All.
+ Change the status of an OpsItem by choosing the option button beside it and then choosing an option in the **Set status** menu. 
+ Open Systems Manager OpsCenter to create an OpsItem by choosing **Create OpsItem**.
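OpsItems can also be created programmatically with the `ssm:CreateOpsItem` action. The following sketch only assembles the request parameters; the title, description, and severity values are illustrative, and the commented boto3 call shows where the request would be sent.

```python
def build_ops_item(title, description, severity="3", source="Application Manager"):
    """Assemble parameters for an SSM CreateOpsItem request.

    Severity is a string from "1" (highest) to "4" (lowest).
    """
    return {
        "Title": title,
        "Description": description,
        "Severity": severity,
        "Source": source,
    }

params = build_ops_item("High CPU on web tier",
                        "Investigate sustained CPU alarm on the web tier.")
# With credentials: boto3.client("ssm").create_ops_item(**params)
```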

**To open the OpsItems tab**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

1. Choose the **OpsItems** tab.

# Managing your application logs


In Application Manager, a component of AWS Systems Manager, the **Logs** tab displays a list of log groups from Amazon CloudWatch Logs.

**Actions you can perform on this tab**  
You can perform the following actions on this page:
+ Choose a log group name to open it in CloudWatch Logs. You can then choose a log stream to view logs for a resource in the context of an application.
+ Choose **Create log groups** to create a log group in CloudWatch Logs.

**To open the Logs tab**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

1. Choose the **Logs** tab.

# Use Automation runbooks to remediate application issues


You can remediate issues with AWS resources from Application Manager, a tool in AWS Systems Manager, by using Automation runbooks. Automation, another tool in AWS Systems Manager, runs these runbooks for you. An Automation runbook defines the actions that Systems Manager performs on your managed instances and other AWS resources when an automation runs. A runbook contains one or more steps that run in sequential order. Each step is built around a single action. Output from one step can be used as input in a later step. 

When you choose **Start runbook** from an Application Manager application or cluster, the system displays a filtered list of available runbooks based on the type of resources in your application or cluster. When you choose the runbook you want to start, Systems Manager opens the **Execute automation document** page. 

Application Manager includes the following enhancements for working with runbooks.
+ If you choose the name of a resource in Application Manager and then choose **Execute runbook**, the system displays a filtered list of runbooks for that resource type.
+ You can initiate an automation on all resources of the same type by choosing a runbook in the list and then choosing **Run for resources of same type**. 

**Before you begin**  
Before you start a runbook from Application Manager, do the following:
+ Verify that you have the correct permissions for starting runbooks. For more information, see [Setting up Automation](automation-setup.md). 
+ Review the Automation procedure documentation about starting runbooks. For more information, see [Run an automated operation powered by Systems Manager Automation](running-simple-automations.md).
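Starting a runbook programmatically uses the `ssm:StartAutomationExecution` action listed in the permissions earlier in this guide. The sketch below only assembles the request; the runbook name and instance ID are illustrative, and the commented boto3 call shows how the request would be submitted.

```python
def build_execution_request(document_name, parameters=None):
    """Assemble parameters for an SSM StartAutomationExecution request."""
    request = {"DocumentName": document_name}
    if parameters:
        # Automation parameter values must be lists of strings.
        request["Parameters"] = {
            k: v if isinstance(v, list) else [v] for k, v in parameters.items()
        }
    return request

req = build_execution_request("AWS-RestartEC2Instance",
                              {"InstanceId": "i-0123456789abcdef0"})
# With credentials: boto3.client("ssm").start_automation_execution(**req)
```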

**To start a runbook from Application Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

1. Choose **Start runbook**. Application Manager opens the **Automation widget** pop up. For information about the options in the **Automation widget**, see [Run an automated operation powered by Systems Manager Automation](running-simple-automations.md).

# Tag resources in Application Manager


You can quickly add or delete tags on applications and AWS resources in Application Manager. Use the following procedure to add a tag to or delete a tag from an application and all AWS resources in that application.

**To add a tag to or delete a tag from an application and all resources in the application**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

1. In the **Application information** section, choose the number beneath **Application tags**. If no tags are assigned to the application, the number is zero.

1. To add a tag, choose **Add new tag**. Specify a key and an optional value. To delete a tag, choose **Remove**.

1. Choose **Save**. 

Use the following procedure to add a tag to or delete a tag from a specific resource in Application Manager.

**To add a tag to or delete a tag from a resource**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Applications** section, choose a category. If you want to open an application you created manually in Application Manager, choose **Custom applications**.

1. Choose the application in the list. Application Manager opens the **Overview** tab.

1. Choose the **Resources** tab.

1. Choose a resource name.

1. In the **Tags** section choose **Edit**. 

1. To add a tag, choose **Add new tag**. Specify a key and an optional value. To delete a tag, choose **Remove**.

1. Choose **Save**. 
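Tagging across an application's resources corresponds to the `tag:TagResources` and `tag:UntagResources` actions from the Resource Groups Tagging API. The following sketch only builds the request shape; the ARN and tag values are illustrative, and the commented boto3 call shows where the request would be sent.

```python
def build_tag_request(arns, tags):
    """Assemble a Resource Groups Tagging API TagResources request."""
    return {"ResourceARNList": list(arns), "Tags": dict(tags)}

req = build_tag_request(
    ["arn:aws:cloudformation:us-east-1:111122223333:stack/my-app/abc123"],
    {"Environment": "Production"},
)
# With credentials: boto3.client("resourcegroupstaggingapi").tag_resources(**req)
```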

# Working with CloudFormation templates and stacks in Application Manager

Application Manager, a tool in AWS Systems Manager, helps you provision and manage resources for your applications by integrating with AWS CloudFormation. You can create, edit, and delete CloudFormation templates and stacks in Application Manager. A *stack* is a collection of AWS resources that you can manage as a single unit. This means you can create, update, or delete a collection of AWS resources by using CloudFormation stacks. A *template* is a formatted text file in JSON or YAML that specifies the resources you want to provision in your stacks. 

Application Manager also includes a template library where you can clone, create, and store templates. Application Manager and CloudFormation display the same information about the current status of a stack. Templates and template updates are stored in Systems Manager until you provision the stack, at which time the changes are also displayed in CloudFormation.

After you create a stack in Application Manager, the **CloudFormation stacks** page displays helpful information about it. This includes the template used to create it, a count of [OpsItems](https://docs.aws.amazon.com/systems-manager/latest/userguide/OpsCenter.html) for resources in your stack, the [stack status](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-view-stack-data-resources.html#cfn-console-view-stack-data-resources-status-codes), and [drift status](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html).

**About Cost Explorer**  
Application Manager is integrated with AWS Cost Explorer, a feature of [AWS Cost Management](https://docs.aws.amazon.com/account-billing/index.html), through the **Cost** widget. After you enable Cost Explorer in the Cost Management console, the **Cost** widget in Application Manager shows cost data for a specific non-container application or application component. You can use filters in the widget to view cost data according to different time periods, granularities, and cost types in either a bar or line chart. 

You can enable this feature by choosing the **Go to AWS Cost Management console** button. By default, the data is filtered to the past three months. For a non-container application, if you choose the **View all** button, Application Manager opens the **Resources** tab. For container applications, the **View all** button opens the AWS Cost Explorer console.

**Note**  
Cost Explorer uses tags to track your application costs. If your CloudFormation stack-based application isn't configured with the `AppManagerCFNStackKey` tag key, Cost Explorer fails to present accurate cost data in Application Manager. When the `AppManagerCFNStackKey` tag key is not detected, you will be prompted in the console to add the tag to your CloudFormation stack to enable cost tracking. Adding it maps the tag key to the Amazon Resource Name (ARN) of your stack and enables the **Cost** widget to display accurate cost data.

**Important**  
Adding the `AppManagerCFNStackKey` tag will trigger a stack update. Any manual configurations that were performed after the stack was originally deployed will not be reflected after the user tag is added. For more information about resource update behaviors, see [Update behaviors of stack resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html) in the *AWS CloudFormation User Guide*.
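Adding the tag outside the console would amount to a stack update that reuses the previous template. The following is a hedged sketch of such an `update_stack` request; the stack name and ARN are illustrative, and the exact tag value format expected by Application Manager is an assumption based on the ARN mapping described above.

```python
def build_tag_update(stack_name, stack_arn):
    """Assemble a CloudFormation update_stack request that adds the
    AppManagerCFNStackKey tag while reusing the previous template."""
    return {
        "StackName": stack_name,
        "UsePreviousTemplate": True,
        # Assumption: the tag value maps to the stack ARN, as described above.
        "Tags": [{"Key": "AppManagerCFNStackKey", "Value": stack_arn}],
    }

req = build_tag_update(
    "my-app",
    "arn:aws:cloudformation:us-east-1:111122223333:stack/my-app/abc123",
)
# With credentials: boto3.client("cloudformation").update_stack(**req)
# Note: as described above, this triggers a stack update.
```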

## Before you begin


Use the following links to learn about CloudFormation concepts before you create, edit, or delete CloudFormation templates and stacks by using Application Manager.
+ [What is AWS CloudFormation?](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)
+ [CloudFormation best practices](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html)
+ [Learn template basics](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/gettingstarted.templatebasics.html)
+ [Working with AWS CloudFormation stacks](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacks.html)
+ [Working with AWS CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html)
+ [Sample templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-sample-templates.html)

**Topics**
+ [Before you begin](#application-manager-working-stacks-before-you-begin)
+ [Using Application Manager to manage CloudFormation templates](application-manager-working-templates-overview.md)
+ [Using Application Manager to manage CloudFormation stacks](application-manager-working-stacks-overview.md)

# Using Application Manager to manage CloudFormation templates


Application Manager, a tool in AWS Systems Manager, includes a template library and other tools to help you manage AWS CloudFormation templates. This section includes the following information.

**Topics**
+ [Working with the template library](#application-manager-working-stacks-template-library-working)
+ [Creating a template](#application-manager-working-stacks-creating-template)
+ [Editing a template](#application-manager-working-stacks-editing-template)

## Working with the template library


The Application Manager template library provides tools to help you view, create, edit, delete, and clone templates. You can also provision stacks directly from the template library. Templates are stored as Systems Manager (SSM) documents of type `CloudFormation`. By storing templates as SSM documents, you can use version controls to work with different versions of a template. You can also set permissions and share templates. After you successfully provision a stack, the stack and template are available in Application Manager and CloudFormation. 

**Before you begin**  
We recommend that you read the following topics to learn more about SSM documents before you start working with CloudFormation templates in Application Manager.
+ [AWS Systems Manager Documents](documents.md)
+ [Sharing SSM documents](documents-ssm-sharing.md)
+ [Best practices for shared SSM documents](documents-ssm-sharing.md#best-practices-shared)
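Because templates are stored as SSM documents of type `CloudFormation`, the standard document APIs apply to them. The following minimal sketch, which assumes boto3 and configured AWS credentials, lists the template documents in your account with `ListDocuments` (the function names are illustrative):

```python
def cloudformation_template_filter():
    """Filter that limits ListDocuments to CloudFormation-type SSM documents."""
    return [{"Key": "DocumentType", "Values": ["CloudFormation"]}]

def list_template_documents():
    import boto3  # deferred so the sketch can be read without AWS access configured
    ssm = boto3.client("ssm")
    names = []
    for page in ssm.get_paginator("list_documents").paginate(
            Filters=cloudformation_template_filter()):
        names.extend(doc["Name"] for doc in page["DocumentIdentifiers"])
    return names
```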

**To view the template library in Application Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. Choose **CloudFormation Template Library**.

## Creating a template


The following procedure describes how to create a CloudFormation template in Application Manager. When you create a template, you enter the stack details of the template in either JSON or YAML. If you're unfamiliar with JSON or YAML, you can use AWS Infrastructure Composer, a tool for visually creating and modifying templates. For more information, see [Create templates visually with Infrastructure Composer](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/infrastructure-composer-for-cloudformation.html) in the *AWS CloudFormation User Guide*. For information about the structure and syntax of a template, see [CloudFormation template sections](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html) in the *AWS CloudFormation User Guide*.

You can also construct a template from multiple template snippets. Template snippets are examples that demonstrate how to write templates for a particular resource. For example, you can view snippets for Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Simple Storage Service (Amazon S3) domains, CloudFormation mappings, and more. Snippets are grouped by resource. You can find general-purpose CloudFormation snippets in the [General template snippets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-general.html) section of the *AWS CloudFormation User Guide*. 

### Creating a CloudFormation template in Application Manager (console)


Use the following procedure to create a CloudFormation template in Application Manager by using the AWS Management Console.

**To create a CloudFormation template in Application Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. Choose **CloudFormation Template Library**, and then either choose **Create template** or choose an existing template and then choose **Actions**, **Clone**.

1. For **Name**, enter a name for the template that helps you identify the resources it creates or the purpose of the stack.

1. (Optional) For **Version name**, enter a name or a number to identify the template version.

1. In the **Code editor** section, choose either **YAML** or **JSON** and then either enter or copy and paste your template code.

1. (Optional) In the **Tags** section, apply one or more tag key name/value pairs to the template.

1. (Optional) In the **Permissions** section, enter an AWS account ID and choose **Add account**. This action provides read permission to the template. The account owner can provision and clone the template, but they can't edit or delete it. 

1. Choose **Create**. The template is saved in the Systems Manager (SSM) Document service.

### Creating a CloudFormation template in Application Manager (command line)


After you create the content of your CloudFormation template in JSON or YAML, you can use the AWS Command Line Interface (AWS CLI) or AWS Tools for PowerShell to save the template as an SSM document. Replace each *example resource placeholder* with your own information.

**Before you begin**  
Install and configure the AWS CLI or the AWS Tools for PowerShell, if you have not already. For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

------
#### [ Linux & macOS ]

```
aws ssm create-document \
    --content file://path/to/template_in_json_or_yaml \
    --name "a_name_for_the_template" \
    --document-type "CloudFormation" \
    --document-format "JSON_or_YAML" \
    --tags "Key=tag-key,Value=tag-value"
```

------
#### [ Windows ]

```
aws ssm create-document ^
    --content file://C:\path\to\template_in_json_or_yaml ^
    --name "a_name_for_the_template" ^
    --document-type "CloudFormation" ^
    --document-format "JSON_or_YAML" ^
    --tags "Key=tag-key,Value=tag-value"
```

------
#### [ PowerShell ]

```
$json = Get-Content -Path "C:\path\to\template_in_json_or_yaml" | Out-String
New-SSMDocument `
    -Content $json `
    -Name "a_name_for_the_template" `
    -DocumentType "CloudFormation" `
    -DocumentFormat "JSON_or_YAML" `
    -Tags "Key=tag-key,Value=tag-value"
```

------

If successful, the command returns a response similar to the following.

```
{
    "DocumentDescription": {
        "Hash": "c1d9640f15fbdba6deb41af6471d6ace0acc22f213bdd1449f03980358c2d4fb",
        "HashType": "Sha256",
        "Name": "MyTestCFTemplate",
        "Owner": "428427166869",
        "CreatedDate": "2021-06-04T09:44:18.931000-07:00",
        "Status": "Creating",
        "DocumentVersion": "1",
        "Description": "My test template",
        "PlatformTypes": [],
        "DocumentType": "CloudFormation",
        "SchemaVersion": "1.0",
        "LatestVersion": "1",
        "DefaultVersion": "1",
        "DocumentFormat": "YAML",
        "Tags": [
            {
                "Key": "Templates",
                "Value": "Test"
            }
        ]
    }
}
```
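If you script this with an SDK instead of the CLI, the same fields apply. The following hedged boto3 sketch assembles an equivalent `CreateDocument` request (the name and content are placeholders); note that the response initially reports a status of `Creating`:

```python
def create_document_request(name, content, doc_format="YAML"):
    """Assemble CreateDocument parameters for a CloudFormation template."""
    return {
        "Name": name,
        "Content": content,
        "DocumentType": "CloudFormation",
        "DocumentFormat": doc_format,  # "JSON" or "YAML"
    }

def save_template(name, content):
    import boto3  # deferred so the sketch can be read without AWS access configured
    ssm = boto3.client("ssm")
    response = ssm.create_document(**create_document_request(name, content))
    return response["DocumentDescription"]["Status"]  # typically "Creating" at first
```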

## Editing a template


Use the following procedure to edit a CloudFormation template in Application Manager. Template changes are available in CloudFormation after you provision a stack that uses the updated template.

**To edit a CloudFormation template in Application Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. Choose **CloudFormation Template Library**.

1. Choose a template, and then choose **Actions**, **Edit**. You can't change the name of a template, but you can change all other details.

1. Choose **Save**. The template is saved in the Systems Manager Document service.

# Using Application Manager to manage CloudFormation stacks


Application Manager, a tool in AWS Systems Manager, helps you provision and manage resources for your applications by integrating with AWS CloudFormation. You can create, edit, and delete CloudFormation templates and stacks in Application Manager. A *stack* is a collection of AWS resources that you can manage as a single unit. This means you can create, update, or delete a collection of AWS resources by using CloudFormation stacks. A *template* is a formatted text file in JSON or YAML that specifies the resources you want to provision in your stacks. This section includes the following information.

**Topics**
+ [Creating a stack](#application-manager-working-stacks-creating-stack)
+ [Updating a stack](#application-manager-working-stacks-editing-stack)

## Creating a stack


The following procedures describe how to create a CloudFormation stack by using Application Manager. A stack is based on a template. When you create a stack, you can either choose an existing template or create a new one. After you create the stack, the system immediately attempts to create the resources identified in the stack. After the system successfully provisions the resources, the template and the stack are available to view and edit in Application Manager and CloudFormation.

**Note**  
There is no charge to use Application Manager to create a stack, but you are charged for AWS resources created in the stack. 

### Creating a CloudFormation stack by using Application Manager (console)


Use the following procedure to create a stack by using Application Manager in the AWS Management Console.

**To create a CloudFormation stack**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. Choose **Create Application**, **CloudFormation Stack**.

1. In the **Prepare a template** section, choose an option. If you choose **Use an existing template**, you can use the tabs in the **Choose a template** section to locate the template you want. (If you choose one of the other options, complete the wizard to prepare a template.)

1. Select the button beside a template name, and then choose **Next**.

1. On the **Specify template details** page, verify the details of the template to ensure the process creates the resources you want.
   + (Optional) In the **Tags** section, apply one or more tag key name/value pairs to the template. Tags are optional metadata that you assign to a resource. By using tags, you can categorize a resource in different ways, such as by purpose, owner, or environment.
   + Choose **Next**.

1. On the **Edit stack details** page, for **Stack name**, enter a name that helps you identify the resources created by the stack or its purpose.
   + The **Parameters** section includes all optional and required parameters specified in the template. Enter a value for each field.
   + (Optional) In the **Tags** section, apply one or more tag key name/value pairs to the stack.
   + (Optional) In the **Permissions** section, specify an AWS Identity and Access Management (IAM) role name or an IAM Amazon Resource Name (ARN). The system uses the specified service role to create all resources specified in your stack. If you don't specify an IAM role, then CloudFormation uses a temporary session that the system generates from your user credentials. For more information about this IAM role, see [CloudFormation service role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-servicerole.html) in the *AWS CloudFormation User Guide*. 
   + Choose **Next**.

1. On the **Review and provision** page, review all the details of the stack. Choose an **Edit** button on this page to make changes.

1. Choose **Provision stack**.

Application Manager displays the **CloudFormation stacks** page and the status of the stack creation and deployment. If CloudFormation fails to create and provision the stack, see the following topics in the *AWS CloudFormation User Guide*. 
+ [Stack status codes](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-describing-stacks.html#w2ab1c23c15c17c11)
+ [Troubleshooting CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html)

After your stack resources are provisioned and running, users can edit resources directly by using the underlying service that created the resource. For example, a user can use the Amazon Elastic Compute Cloud (Amazon EC2) console to update a server instance that was created as part of a CloudFormation stack. Some changes may be accidental, and some may be made intentionally to respond to time-sensitive operational events. Regardless, changes made outside of CloudFormation can complicate stack update or deletion operations. You can use drift detection or *drift status* to identify stack resources to which configuration changes have been made outside of CloudFormation management. For information about drift status, see [Detecting unmanaged configuration changes to stacks and resources](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-drift.html).
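You can also run drift detection programmatically. The following sketch, assuming boto3 and appropriate CloudFormation permissions, starts a drift detection run, polls until it finishes, and tallies the per-resource drift statuses (the helper names are illustrative):

```python
from collections import Counter
import time

def summarize_drift(drift_response):
    """Tally StackResourceDriftStatus values (IN_SYNC, MODIFIED, DELETED, ...)."""
    return Counter(drift["StackResourceDriftStatus"]
                   for drift in drift_response["StackResourceDrifts"])

def detect_and_summarize(stack_name):
    import boto3  # deferred so the sketch can be read without AWS access configured
    cfn = boto3.client("cloudformation")
    detection_id = cfn.detect_stack_drift(StackName=stack_name)["StackDriftDetectionId"]
    # Poll until the detection run finishes.
    while cfn.describe_stack_drift_detection_status(
            StackDriftDetectionId=detection_id)["DetectionStatus"] == "DETECTION_IN_PROGRESS":
        time.sleep(5)
    return summarize_drift(cfn.describe_stack_resource_drifts(StackName=stack_name))
```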

### Creating a CloudFormation stack by using Application Manager (command line)


Use the following AWS Command Line Interface (AWS CLI) procedure to provision a stack by using a CloudFormation template that is stored as an SSM document in Systems Manager. Replace each *example resource placeholder* with your own information. For information about other AWS CLI procedures for creating stacks, see [Creating a stack](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-cli-creating-stack.html) in the *AWS CloudFormation User Guide*. 

**Before you begin**  
Install and configure the AWS CLI or the AWS Tools for PowerShell, if you have not already. For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

------
#### [ Linux & macOS ]

```
aws cloudformation create-stack \
    --stack-name a_name_for_the_stack \
    --template-url "ssm-doc://arn:aws:ssm:Region:account_ID:document/template_name"
```

------
#### [ Windows ]

```
aws cloudformation create-stack ^
     --stack-name a_name_for_the_stack ^
     --template-url "ssm-doc://arn:aws:ssm:Region:account_ID:document/template_name"
```

------
#### [ PowerShell ]

```
New-CFNStack `
    -StackName "a_name_for_the_stack" `
    -TemplateURL "ssm-doc://arn:aws:ssm:Region:account_ID:document/template_name"
```

------
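If you prefer an SDK, the same `ssm-doc://` template URL appears to work with boto3's `create_stack`, and a waiter can block until provisioning finishes. A minimal sketch with placeholder names:

```python
def stack_request(stack_name, template_doc_arn):
    """Assemble CreateStack parameters that point at an SSM-stored template."""
    return {"StackName": stack_name,
            "TemplateURL": f"ssm-doc://{template_doc_arn}"}

def provision_stack(stack_name, template_doc_arn):
    import boto3  # deferred so the sketch can be read without AWS access configured
    cfn = boto3.client("cloudformation")
    cfn.create_stack(**stack_request(stack_name, template_doc_arn))
    # Block until CREATE_COMPLETE (the waiter raises if the stack rolls back).
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
```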

## Updating a stack


You can deploy updates to a CloudFormation stack by directly editing the stack in Application Manager. With a direct update, you specify updates to a template or input parameters. After you save and deploy the changes, CloudFormation updates the AWS resources according to the changes you specified.

You can preview the changes that CloudFormation will make to your stack before you update it by using change sets. For more information, see [Updating stacks using change sets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html) in the *AWS CloudFormation User Guide*. 

**To update a CloudFormation stack in Application Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. Select the radio button next to an application name, and then choose **Actions**, **Update stack**.

1. On the **Specify template source** page, choose one of the following options, and then choose **Next**.
   + Choose **Use the template code currently provisioned in the stack** to view a template. Choose a template version in the **Versions** list, and then choose **Next**.
   + Choose **Switch to a different template** to choose or create a new template for the stack.

1. After you finish making changes to the template, choose **Next**.

1. On the **Edit stack details** page, you can edit parameters, tags, and permissions. You can't change the stack name. Make your changes and choose **Next**.

1. On the **Review and provision** page, review all the details of the stack, and then choose **Provision stack**.

# Working with clusters in Application Manager

This section includes topics to help you work with Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS) container clusters in Application Manager, a component of AWS Systems Manager.

**Topics**
+ [Working with Amazon ECS in Application Manager](application-manager-working-ECS.md)
+ [Working with Amazon EKS in Application Manager](application-manager-working-EKS.md)
+ [Working with runbooks for clusters](application-manager-working-runbooks-clusters.md)

# Working with Amazon ECS in Application Manager


With Application Manager, a tool in AWS Systems Manager, you can view and manage your Amazon Elastic Container Service (Amazon ECS) cluster infrastructure. Application Manager applies a tag to your Amazon ECS cluster using the Amazon Resource Name (ARN) of the cluster as the tag value. Application Manager provides a component runtime view of the compute, networking, and storage resources in a cluster.

**Note**  
You can't manage or view operations information about your containers in Application Manager. You can only manage and view operations information about the infrastructure hosting your Amazon ECS resources.

**Actions you can perform on this page**  
You can perform the following actions on this page:
+ Choose **Manage cluster** to open the cluster in Amazon ECS.
+ Choose **View all** to view a list of resources in your cluster.
+ Choose **View in CloudWatch** to view resource alarms in Amazon CloudWatch.
+ Choose **Manage nodes** or **Manage Fargate profiles** to view these resources in Amazon ECS.
+ Choose a resource ID to view detailed information about it in the console where it was created.
+ View a list of OpsItems related to your clusters.
+ View a history of runbooks that have been run on your clusters.

**To open an ECS cluster**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Container clusters** section, choose **ECS clusters**.

1. Choose a cluster in the list. Application Manager opens the **Overview** tab.

# Working with Amazon EKS in Application Manager


Application Manager, a tool in AWS Systems Manager, integrates with [Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) (Amazon EKS) to provide information about the health of your Amazon EKS cluster infrastructure. Application Manager applies a tag to your Amazon EKS cluster using the Amazon Resource Name (ARN) of the cluster as the tag value. Application Manager provides a component runtime view of the compute, networking, and storage resources in a cluster.

**Note**  
You can't manage or view operations information about your Amazon EKS pods or containers in Application Manager. You can only manage and view operations information about the infrastructure hosting your Amazon EKS resources.

**Actions you can perform on this page**  
You can perform the following actions on this page:
+ Choose **Manage cluster** to open the cluster in Amazon EKS.
+ Choose **View all** to view a list of resources in your cluster.
+ Choose **View in CloudWatch** to view resource alarms in Amazon CloudWatch.
+ Choose **Manage nodes** or **Manage Fargate profiles** to view these resources in Amazon EKS.
+ Choose a resource ID to view detailed information about it in the console where it was created.
+ View a list of OpsItems related to your clusters.
+ View a history of runbooks that have been run on your clusters.

**To open an EKS cluster**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Container clusters** section, choose **EKS clusters**.

1. Choose a cluster in the list. Application Manager opens the **Overview** tab.

# Working with runbooks for clusters


You can remediate issues with AWS resources from Application Manager, a tool in AWS Systems Manager, by using Systems Manager Automation runbooks. When you choose **Start runbook** from an Application Manager cluster, the system displays a filtered list of runbooks based on the type of resources in your cluster. When you choose the runbook you want to start, Systems Manager opens the **Execute automation document** page. 

**Before you begin**  
Before you start a runbook from Application Manager, do the following:
+ Verify that you have the correct permissions for starting runbooks. For more information, see [Setting up Automation](automation-setup.md). 
+ Review the Automation procedure documentation about starting runbooks. For more information, see [Run an automated operation powered by Systems Manager Automation](running-simple-automations.md).
+ If you intend to start runbooks on multiple resources at one time, review the documentation about using targets and rate controls. For more information, see [Run automated operations at scale](running-automations-scale.md).

**To start a runbook for clusters from Application Manager**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Application Manager**.

1. In the **Container clusters** section, choose a container type. 

1. Choose the cluster in the list. Application Manager opens the **Overview** tab.

1. On the **Runbooks** tab, choose **Start runbook**. Application Manager opens the **Execute automation document** page in a new tab. For information about the options in the **Execute automation document** page, see [Run an automated operation powered by Systems Manager Automation](running-simple-automations.md).

# AWS Systems Manager Parameter Store

Parameter Store, a tool in AWS Systems Manager, provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can store values as plain text or encrypted data. You can reference Systems Manager parameters in your scripts, commands, SSM documents, and configuration and automation workflows by using the unique name that you specified when you created the parameter. To get started with Parameter Store, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/parameters). In the navigation pane, choose **Parameter Store**.

Parameter Store is also integrated with Secrets Manager. You can retrieve Secrets Manager secrets when using other AWS services that already support references to Parameter Store parameters. For more information, see [Referencing AWS Secrets Manager secrets from Parameter Store parameters](integration-ps-secretsmanager.md).
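One documented integration point is the reserved `/aws/reference/secretsmanager/` path, which lets a `GetParameter` call resolve a Secrets Manager secret. A minimal boto3 sketch (the secret name is a placeholder):

```python
def secret_reference(secret_name):
    """Parameter name that resolves to a Secrets Manager secret."""
    return f"/aws/reference/secretsmanager/{secret_name}"

def get_secret_via_parameter_store(secret_name):
    import boto3  # deferred so the sketch can be read without AWS access configured
    ssm = boto3.client("ssm")
    response = ssm.get_parameter(Name=secret_reference(secret_name),
                                 WithDecryption=True)  # required for secret references
    return response["Parameter"]["Value"]
```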

**Note**  
To implement password rotation lifecycles, use AWS Secrets Manager. You can rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle using Secrets Manager. For more information, see [What is AWS Secrets Manager?](https://docs.aws.amazon.com//secretsmanager/latest/userguide/intro.html) in the *AWS Secrets Manager User Guide*.

## How can Parameter Store benefit my organization?


Parameter Store offers these benefits:
+ Use a secure, scalable, hosted secrets management service with no servers to manage.
+ Improve your security posture by separating your data from your code.
+ Store configuration data and encrypted strings in hierarchies and track versions.
+ Control and audit access at granular levels.
+ Store parameters reliably because Parameter Store is hosted in multiple Availability Zones in an AWS Region.

## Who should use Parameter Store?

+ Any AWS customer who wants to have a centralized way to manage configuration data.
+ Software developers who want to store different logins and reference streams.
+ Administrators who want to receive notifications when their secrets and passwords are or aren't changed.

## What are the features of Parameter Store?

+ **Change notification**

  You can configure change notifications and invoke automated actions for both parameters and parameter policies. For more information, see [Setting up notifications or triggering actions based on Parameter Store events](sysman-paramstore-cwe.md).
+ **Organize parameters**

  You can tag your parameters individually to help you identify one or more parameters based on the tags you've assigned to them. For example, you can tag parameters for specific environments or departments. 
+ **Label versions**

  You can associate an alias for versions of your parameter by creating labels. Labels can help you remember the purpose of a parameter version when there are multiple versions. 
+ **Data validation**

  You can create parameters whose values reference Amazon Elastic Compute Cloud (Amazon EC2) resources, and Parameter Store validates that each parameter references the expected resource type, that the resource exists, and that the account has permission to use the resource. For example, you can create a parameter with an Amazon Machine Image (AMI) ID as its value and the `aws:ec2:image` data type. Parameter Store then performs an asynchronous validation operation to make sure that the parameter value meets the formatting requirements for an AMI ID and that the specified AMI is available in your AWS account. 
+ **Reference secrets**

  Parameter Store is integrated with AWS Secrets Manager so that you can retrieve Secrets Manager secrets when using other AWS services that already support references to Parameter Store parameters. 
+ **Share parameters with other accounts**

  You can optionally centralize configuration data in a single AWS account and share parameters with other accounts that need to access them.
+ **Accessible from other AWS services**

  You can use Parameter Store parameters with other Systems Manager tools and AWS services to retrieve secrets and configuration data from a central store. Parameters work with Systems Manager tools such as Run Command, Automation, and State Manager, tools in AWS Systems Manager. You can also reference parameters in a number of other AWS services, including the following:
  + Amazon Elastic Compute Cloud (Amazon EC2)
  + Amazon Elastic Container Service (Amazon ECS)
  + AWS Secrets Manager
  + AWS Lambda
  + AWS CloudFormation
  + AWS CodeBuild
  + AWS CodePipeline
  + AWS CodeDeploy
+ **Integrate with other AWS services**

  Configure integration with the following AWS services for encryption, notification, monitoring, and auditing:
  + AWS Key Management Service (AWS KMS)
  + Amazon Simple Notification Service (Amazon SNS)
  + Amazon CloudWatch: For more information, see [Configuring EventBridge rules for parameters and parameter policies](sysman-paramstore-cwe.md#cwe-parameter-changes). 
  + Amazon EventBridge: For more information, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md) and [Reference: Amazon EventBridge event patterns and types for Systems Manager](reference-eventbridge-events.md). 
  + AWS CloudTrail: For more information, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md).
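Several of these features map to single API calls. For example, version labeling can be sketched with boto3 as follows; the parameter name and label are placeholders, and a `name:label` selector retrieves the labeled version:

```python
def labeled_name(parameter_name, label):
    """Selector that retrieves the parameter version carrying a given label."""
    return f"{parameter_name}:{label}"

def promote_label(parameter_name, version, label):
    import boto3  # deferred so the sketch can be read without AWS access configured
    ssm = boto3.client("ssm")
    # Attach the label to a specific parameter version.
    ssm.label_parameter_version(Name=parameter_name,
                                ParameterVersion=version,
                                Labels=[label])
    # Later, read whatever version currently carries the label.
    return ssm.get_parameter(Name=labeled_name(parameter_name, label))
```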

## What is a parameter?


A Parameter Store parameter is any piece of data that is saved in Parameter Store, such as a block of text, a list of names, a password, an AMI ID, a license key, and so on. You can centrally and securely reference this data in your scripts, commands, and SSM documents.

When you reference a parameter, you specify the parameter name by using the following convention.

`{{ssm:parameter-name}}`

**Note**  
Parameters can't be referenced or nested in the values of other parameters. You can't include `{{}}` or `{{ssm:parameter-name}}` in a parameter value.

Parameter Store provides support for three types of parameters: `String`, `StringList`, and `SecureString`. 

With one exception, when you create or update a parameter, you enter the parameter value as plaintext, and Parameter Store performs no validation on the text you enter. For `String` parameters, however, you can specify the data type as `aws:ec2:image`, and Parameter Store validates that the value you enter is the proper format for an Amazon EC2 AMI; for example: `ami-12345abcdeEXAMPLE`.
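The following sketch, assuming boto3, creates such a parameter with the `aws:ec2:image` data type. The local regex is only an illustrative format check added for this example; the authoritative validation is the asynchronous check that Parameter Store performs:

```python
import re

# Illustrative approximation of the AMI ID format (not AWS's actual validator).
AMI_ID_PATTERN = re.compile(r"^ami-[0-9a-f]{8,17}$")

def looks_like_ami_id(value):
    """Local sanity check before handing the value to Parameter Store."""
    return bool(AMI_ID_PATTERN.match(value))

def put_ami_parameter(name, ami_id):
    import boto3  # deferred so the sketch can be read without AWS access configured
    ssm = boto3.client("ssm")
    ssm.put_parameter(Name=name, Value=ami_id, Type="String",
                      DataType="aws:ec2:image")  # triggers async AMI validation
```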

### Parameter type: String


By default, the value of a `String` parameter consists of any block of text you enter. For example:
+ `abc123`
+ `Example Corp`
+ `<img src="images/bannerImage1.png"/>`

### Parameter type: StringList


The values of `StringList` parameters contain a comma-separated list of values, as shown in the following examples.

`Monday,Wednesday,Friday`

`CSV,TSV,CLF,ELF,JSON`

### Parameter type: SecureString


The value of a `SecureString` parameter is any sensitive data that needs to be stored and referenced in a secure manner. If you have data that you don't want users to alter or reference in plaintext, such as passwords or license keys, create those parameters using the `SecureString` data type.

**Important**  
Don't store sensitive data in a `String` or `StringList` parameter. For all sensitive data that must remain encrypted, use only the `SecureString` parameter type.  
For more information, see [Creating a SecureString parameter using the AWS CLI](param-create-cli.md#param-create-cli-securestring).

We recommend using `SecureString` parameters for the following scenarios:
+ You want to use data/parameters across AWS services without exposing the values as plaintext in commands, functions, agent logs, or CloudTrail logs.
+ You want to control who has access to sensitive data.
+ You want to be able to audit when sensitive data is accessed (CloudTrail).
+ You want to encrypt your sensitive data, and you want to bring your own encryption keys to manage access.

**Important**  
Only the *value* of a `SecureString` parameter is encrypted. Parameter names, descriptions, and other properties aren't encrypted.

You can use the `SecureString` parameter type for textual data that you want to encrypt, such as passwords, application secrets, confidential configuration data, or any other types of data that you want to protect. `SecureString` data is encrypted and decrypted using an AWS KMS key. You can use either a default KMS key provided by AWS or create and use your own AWS KMS key. (Use your own AWS KMS key if you want to restrict user access to `SecureString` parameters. For more information, see [IAM permissions for using AWS default keys and customer managed keys](sysman-paramstore-access.md#ps-kms-permissions).)
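To create a `SecureString` parameter encrypted with your own customer managed key, pass a `KeyId` to `PutParameter`. A minimal boto3 sketch (the parameter name, value, and key alias are placeholders):

```python
def secure_parameter_request(name, value, kms_key_id=None):
    """Assemble PutParameter arguments for an encrypted parameter.

    Omitting kms_key_id falls back to the AWS managed key for Parameter Store.
    """
    request = {"Name": name, "Value": value, "Type": "SecureString"}
    if kms_key_id:
        request["KeyId"] = kms_key_id  # key ID, ARN, or alias
    return request

def put_secure_parameter(name, value, kms_key_id=None):
    import boto3  # deferred so the sketch can be read without AWS access configured
    ssm = boto3.client("ssm")
    ssm.put_parameter(**secure_parameter_request(name, value, kms_key_id))
```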

You can also use `SecureString` parameters with other AWS services. In the following example, the Lambda function retrieves a `SecureString` parameter by using the [GetParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameters.html) API.

```
import boto3

ssm = boto3.client('ssm', region_name='us-east-2')

def get_parameter_value(name):
    # WithDecryption=True returns the decrypted SecureString value
    response = ssm.get_parameters(Names=[name], WithDecryption=True)
    return response['Parameters'][0]['Value']

def lambda_handler(event, context):
    value = get_parameter_value('LambdaSecureString')
    print("value1 = " + value)
    return value
```

**AWS KMS encryption and pricing**  
If you choose the `SecureString` parameter type when you create your parameter, Systems Manager uses AWS KMS to encrypt the parameter value.

**Important**  
Parameter Store only supports [symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-choose-key-spec.html#symmetric-cmks). You can't use an [asymmetric encryption KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) to encrypt your parameters. For help determining whether a KMS key is symmetric or asymmetric, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

There is no charge from Parameter Store to create a `SecureString` parameter, but charges for use of AWS KMS encryption do apply. For information, see [AWS Key Management Service pricing](https://aws.amazon.com/kms/pricing).

For more information about AWS managed keys and customer managed keys, see [AWS Key Management Service Concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) in the *AWS Key Management Service Developer Guide*. For more information about Parameter Store and AWS KMS encryption, see [How AWS Systems Manager Parameter Store Uses AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/services-parameter-store.html).

**Note**  
To view an AWS managed key, use the AWS KMS `DescribeKey` operation. This AWS Command Line Interface (AWS CLI) example uses `DescribeKey` to view an AWS managed key.  

```
aws kms describe-key --key-id alias/aws/ssm
```

**More info**  
+ [Creating a SecureString parameter in Parameter Store and joining a node to a Domain (PowerShell)](sysman-param-securestring-walkthrough.md)
+ [Use Parameter Store to Securely Access Secrets and Config Data in CodeDeploy](https://aws.amazon.com/blogs/mt/use-parameter-store-to-securely-access-secrets-and-config-data-in-aws-codedeploy/)
+ [Interesting Articles on Amazon EC2 Systems Manager Parameter Store](https://aws.amazon.com/blogs/mt/interesting-articles-on-ec2-systems-manager-parameter-store/)

## Parameter size limits


Parameter Store has different size limits for parameter values depending on the parameter tier you use:
+ **Standard parameters**: Maximum value size of 4 KB
+ **Advanced parameters**: Maximum value size of 8 KB

If you need to store parameter values larger than 4 KB, you must use the advanced parameter tier. Advanced parameters provide additional capabilities but incur charges on your AWS account. For more information about parameter tiers and their features, see [Managing parameter tiers](parameter-store-advanced-parameters.md).

For a complete list of Parameter Store quotas and limits, see [AWS Systems Manager endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#parameter-store) in the *AWS General Reference*.

# Setting up Parameter Store


Before setting up parameters in Parameter Store, a tool in AWS Systems Manager, first configure AWS Identity and Access Management (IAM) policies that provide users in your account with permission to perform the actions you specify. 

This section includes information about how to manually configure these policies using the IAM console, and how to assign them to users and user groups. You can also create and assign policies to control which parameter actions can be run on a managed node. 

This section also includes information about how to create Amazon EventBridge rules that let you receive notifications about changes to Systems Manager parameters. You can also use EventBridge rules to invoke other actions in AWS based on changes in Parameter Store.

**Topics**
+ [Restricting access to Parameter Store parameters using IAM policies](sysman-paramstore-access.md)
+ [Managing parameter tiers](parameter-store-advanced-parameters.md)
+ [Increasing or resetting Parameter Store throughput](parameter-store-throughput.md)
+ [Setting up notifications or triggering actions based on Parameter Store events](sysman-paramstore-cwe.md)

# Restricting access to Parameter Store parameters using IAM policies


You restrict access to AWS Systems Manager parameters by using AWS Identity and Access Management (IAM). More specifically, you create IAM policies that restrict access to the following API operations:
+ [DeleteParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DeleteParameter.html)
+ [DeleteParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DeleteParameters.html)
+ [DescribeParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeParameters.html)
+ [GetParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameter.html)
+ [GetParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameters.html)
+ [GetParameterHistory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameterHistory.html)
+ [GetParametersByPath](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParametersByPath.html)
+ [PutParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutParameter.html)

When using IAM policies to restrict access to Systems Manager parameters, we recommend that you create and use *restrictive* IAM policies. For example, the following policy allows a user to call the `DescribeParameters` and `GetParameters` API operations for a limited set of resources. This means that the user can get information about and use all parameters that begin with `prod-`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeParameters"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameters"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/prod-*"
        }
    ]
}
```

------

**Important**  
If a user has access to a path, then the user can access all levels of that path. For example, if a user has permission to access path `/a`, then the user can also access `/a/b`. Even if a user has explicitly been denied access in IAM for parameter `/a/b`, they can still call the `GetParametersByPath` API operation recursively for `/a` and view `/a/b`.
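This hierarchical behavior mirrors how `GetParametersByPath` itself works with `Recursive=True`. The following Python (boto3) sketch lists every parameter under a path; the path and region are assumptions:

```python
def is_under_path(name, path):
    """True if a parameter name falls anywhere under a hierarchy path."""
    return name.startswith(path.rstrip("/") + "/")

def list_parameters_under(path, region="us-east-2"):
    import boto3  # imported here; calling the API requires AWS credentials
    ssm = boto3.client("ssm", region_name=region)
    names = []
    # Recursive=True returns /a/b and deeper levels for Path="/a"
    for page in ssm.get_paginator("get_parameters_by_path").paginate(
        Path=path, Recursive=True
    ):
        names.extend(p["Name"] for p in page["Parameters"])
    return names
```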

For trusted administrators, you can provide access to all Systems Manager parameter API operations by using a policy similar to the following example. This policy gives the user full access to all production parameters that begin with `dbserver-prod-`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:PutParameter",
                "ssm:DeleteParameter",
                "ssm:GetParameterHistory",
                "ssm:GetParametersByPath",
                "ssm:GetParameters",
                "ssm:GetParameter",
                "ssm:DeleteParameters"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/dbserver-prod-*"
        },
        {
            "Effect": "Allow",
            "Action": "ssm:DescribeParameters",
            "Resource": "*"
        }
    ]
}
```

------

## Denying permissions


Each Systems Manager API operation is distinct, and you can allow or deny permission for each operation individually. An explicit deny in any policy overrides an allow.

**Note**  
The default AWS Key Management Service (AWS KMS) key has `Decrypt` permission for all IAM principals within the AWS account. If you want to have different access levels to `SecureString` parameters in your account, we don't recommend that you use the default key.

If you want all API operations retrieving parameter values to have the same behavior, then you can use a pattern like `GetParameter*` in a policy. The following example shows how to deny `GetParameter`, `GetParameters`, `GetParameterHistory`, and `GetParametersByPath` for all parameters beginning with `prod-`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "ssm:GetParameter*"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/prod-*"
        }
    ]
}
```

------

The following example shows how to deny some commands while allowing the user to perform other commands on all parameters that begin with `prod-*`.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "ssm:PutParameter",
                "ssm:DeleteParameter",
                "ssm:DeleteParameters",
                "ssm:DescribeParameters"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParametersByPath",
                "ssm:GetParameters",
                "ssm:GetParameter",
                "ssm:GetParameterHistory"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/prod-*"
        }
    ]
}
```

------

**Note**  
The parameter history includes all parameter versions, including the current one. Therefore, if a user is denied permission for `GetParameter`, `GetParameters`, and `GetParametersByPath` but is allowed permission for `GetParameterHistory`, they can still view the current parameter value, including `SecureString` values, by using `GetParameterHistory`.
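A Python (boto3) sketch of why this matters: the history entry with the highest version number is the current value, so `GetParameterHistory` alone is enough to read it. The parameter name and region are assumptions:

```python
def latest_version(history):
    """Return the history entry with the highest Version number."""
    return max(history, key=lambda entry: entry["Version"])

def current_value_via_history(name, region="us-east-2"):
    import boto3  # imported here; calling the API requires AWS credentials
    ssm = boto3.client("ssm", region_name=region)
    entries = []
    for page in ssm.get_paginator("get_parameter_history").paginate(
        Name=name, WithDecryption=True
    ):
        entries.extend(page["Parameters"])
    return latest_version(entries)["Value"]
```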

## Allowing only specific parameters to run on nodes


You can control access so that managed nodes can run only parameters that you specify.

If you choose the `SecureString` parameter type when you create your parameter, Systems Manager uses AWS KMS to encrypt the parameter value. AWS KMS encrypts the value by using either an AWS managed key or a customer managed key. For more information about AWS KMS and AWS KMS keys, see the *[AWS Key Management Service Developer Guide](https://docs.aws.amazon.com/kms/latest/developerguide/)*.

You can view the AWS managed key by running the following command from the AWS CLI.

```
aws kms describe-key --key-id alias/aws/ssm
```

The following example allows nodes to get a parameter value only for parameters that begin with `prod-`. If the parameter is a `SecureString` parameter, then the node decrypts the string using AWS KMS.

**Note**  
Instance policies, like in the following example, are assigned to the instance role in IAM. For more information about configuring access to Systems Manager features, including how to assign policies to users and instances, see [Managing EC2 instances with Systems Manager](systems-manager-setting-up-ec2.md).

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameters"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:parameter/prod-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:111122223333:key/4914ec06-e888-4ea5-a371-5b88eEXAMPLE"
            ]
        }
    ]
}
```

------

## IAM permissions for using AWS default keys and customer managed keys


Parameter Store `SecureString` parameters are encrypted and decrypted using AWS KMS keys. You can choose to encrypt your `SecureString` parameters using either a customer managed key or the default KMS key provided by AWS.

When using a customer managed key, the IAM policy that grants a user access to a parameter or parameter path must provide explicit `kms:Encrypt` permissions for the key. For example, the following policy allows a user to create, update, and view `SecureString` parameters that begin with `prod-` in the specified AWS Region and AWS account.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:PutParameter",
                "ssm:GetParameter",
                "ssm:GetParameters"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:111122223333:parameter/prod-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:Encrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-12345EXAMPLE"
            ]
        }
    ]
}
```

------

**Note**  
The `kms:GenerateDataKey` permission is required for creating encrypted advanced parameters using the specified customer managed key. 

By contrast, all users within the customer account have access to the default AWS managed key. If you use this default key to encrypt `SecureString` parameters and don't want certain users to work with `SecureString` parameters, those users' IAM policies must explicitly deny access to the default key, as demonstrated in the following policy example.

**Note**  
You can locate the Amazon Resource Name (ARN) of the default key in the AWS KMS console on the [AWS managed keys](https://console.aws.amazon.com/kms/home#/kms/defaultKeys) page. The default key is identified with `aws/ssm` in the **Alias** column.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "kms:Decrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:111122223333:key/abcd1234-ab12-cd34-ef56-abcdeEXAMPLE"
            ]
        }
    ]
}
```

------

If you require fine-grained access control over the `SecureString` parameters in your account, you should use a customer managed key to protect and restrict access to these parameters. We also recommend using AWS CloudTrail to monitor `SecureString` parameter activities.
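One way to follow that monitoring recommendation is to query CloudTrail Event history for recent parameter reads. A Python (boto3) sketch; filtering on the `GetParameter` event name and the region are assumptions, and a full audit would also cover `GetParameters` and related operations:

```python
def summarize_event(event):
    """Reduce a CloudTrail event record to the fields useful for auditing."""
    return {"time": event.get("EventTime"), "user": event.get("Username")}

def recent_parameter_reads(region="us-east-2", max_results=50):
    import boto3  # imported here; calling the API requires AWS credentials
    cloudtrail = boto3.client("cloudtrail", region_name=region)
    response = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "GetParameter"}
        ],
        MaxResults=max_results,
    )
    return [summarize_event(e) for e in response["Events"]]
```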

For more information, see the following topics:
+ [Policy evaluation logic](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html) in the *IAM User Guide*
+ [Using key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the *AWS Key Management Service Developer Guide*
+ [Viewing events with CloudTrail Event history](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html) in the *AWS CloudTrail User Guide*

# Managing parameter tiers


Parameter Store, a tool in AWS Systems Manager, includes *standard parameters* and *advanced parameters*. You individually configure parameters to use either the standard-parameter tier (the default tier) or the advanced-parameter tier.

You can change a standard parameter to an advanced parameter at any time, but you can’t revert an advanced parameter to a standard parameter. This is because reverting an advanced parameter to a standard parameter would cause the system to truncate the size of the parameter from 8 KB to 4 KB, resulting in data loss. Reverting would also remove any policies attached to the parameter. Also, advanced parameters use a different form of encryption than standard parameters. For more information, see [How AWS Systems Manager Parameter Store uses AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/services-parameter-store.html) in the *AWS Key Management Service Developer Guide*.

If you no longer need an advanced parameter, or if you no longer want to incur charges for an advanced parameter, delete it and recreate it as a new standard parameter. 

The following table describes the differences between the tiers.


****  

|  | Standard | Advanced | 
| --- | --- | --- | 
|  Total number of parameters allowed (per AWS account and AWS Region)  |  10,000  |  100,000  | 
|  Maximum size of a parameter value  |  4 KB  |  8 KB  | 
|  Parameter policies available  |  No  |  Yes. For more information, see [Assigning parameter policies in Parameter Store](parameter-store-policies.md).  | 
|  Cost  |  No additional charge  |  Charges apply. For more information, see [AWS Systems Manager Pricing for Parameter Store](https://aws.amazon.com/systems-manager/pricing/#Parameter_Store).  | 

**Topics**
+ [Specifying a default parameter tier](#ps-default-tier)
+ [Changing a standard parameter to an advanced parameter](#parameter-store-advanced-parameters-enabling)

## Specifying a default parameter tier


In requests to create or update a parameter (that is, the [PutParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutParameter.html) operation), you can specify the parameter tier to use in the request. The following is an example, using the AWS Command Line Interface (AWS CLI).

------
#### [ Linux & macOS ]

```
aws ssm put-parameter \
    --name "default-instance-type" \
    --type "String" \
    --value "t2.micro" \
    --tier "Standard"
```

------
#### [ Windows ]

```
aws ssm put-parameter ^
    --name "default-instance-type" ^
    --type "String" ^
    --value "t2.micro" ^
    --tier "Standard"
```

------

Whenever you specify a tier in the request, Parameter Store creates or updates the parameter according to your request. However, if you don't explicitly specify a tier in a request, the Parameter Store default tier setting determines which tier the parameter is created in.

The default tier when you begin using Parameter Store is the standard-parameter tier. If you use the advanced-parameter tier, you can specify one of the following as the default:
+ **Advanced**: With this option, Parameter Store evaluates all requests as advanced parameters. 
+ **Intelligent-Tiering**: With this option, Parameter Store evaluates each request to determine if the parameter is standard or advanced. 

  If the request doesn't include any options that require an advanced parameter, the parameter is created in the standard-parameter tier. If one or more options requiring an advanced parameter are included in the request, Parameter Store creates a parameter in the advanced-parameter tier.

**Benefits of Intelligent-Tiering**  
The following are reasons you might choose Intelligent-Tiering as the default tier.

**Cost control** – Intelligent-Tiering helps control your parameter-related costs by always creating standard parameters unless an advanced parameter is absolutely necessary. 

**Automatic upgrade to the advanced-parameter tier** – When you make a change to your code that requires upgrading a standard parameter to an advanced parameter, Intelligent-Tiering handles the conversion for you. You don't need to change your code to handle the upgrade.

Here are some examples of automatic upgrades:
+ Your AWS CloudFormation templates provision numerous parameters when they're run. When this process causes you to reach the 10,000 parameter quota in the standard-parameter tier, Intelligent-Tiering automatically upgrades you to the advanced-parameter tier, and your CloudFormation processes aren't interrupted.
+ You store a certificate value in a parameter, rotate the certificate value regularly, and the content is less than the 4 KB quota of the standard-parameter tier. If a replacement certificate value exceeds 4 KB, Intelligent-Tiering automatically upgrades the parameter to the advanced-parameter tier.
+ You want to associate numerous existing standard parameters with a parameter policy, which requires the advanced-parameter tier. Instead of having to include the option `--tier Advanced` in all the calls to update the parameters, Intelligent-Tiering automatically upgrades the parameters to the advanced-parameter tier. The Intelligent-Tiering option upgrades parameters from standard to advanced whenever criteria for the advanced-parameter tier are introduced.

Options that require an advanced parameter include the following:
+ The content size of the parameter is more than 4 KB.
+ The parameter uses a parameter policy.
+ More than 10,000 parameters already exist in your AWS account in the current AWS Region.
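The criteria above can be sketched as pure logic. Treating the 4 KB limit as bytes of the UTF-8 encoding is an assumption for illustration:

```python
STANDARD_MAX_BYTES = 4 * 1024   # 4 KB standard-tier value limit
STANDARD_MAX_COUNT = 10_000     # standard-tier parameter quota per Region

def requires_advanced(value, has_policy=False, existing_count=0):
    """Mirror the conditions that require the advanced-parameter tier."""
    return (
        len(value.encode("utf-8")) > STANDARD_MAX_BYTES
        or has_policy
        or existing_count >= STANDARD_MAX_COUNT
    )

def intelligent_tier(value, has_policy=False, existing_count=0):
    """The tier Intelligent-Tiering would choose for a request."""
    if requires_advanced(value, has_policy, existing_count):
        return "Advanced"
    return "Standard"
```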

**Default Tier Options**  
The tier options you can specify as the default include the following.
+ **Standard** – The standard-parameter tier is the default tier when you begin to use Parameter Store. Using the standard-parameter tier, you can create 10,000 parameters for each AWS Region in an AWS account. The content size of each parameter can equal a maximum of 4 KB. Standard parameters don't support parameter policies. There is no additional charge to use the standard-parameter tier. Choosing **Standard** as the default tier means that Parameter Store always attempts to create a standard parameter for requests that don't specify a tier. 
+ **Advanced** – Use the advanced-parameter tier to create a maximum of 100,000 parameters for each AWS Region in an AWS account. The content size of each parameter can equal a maximum of 8 KB. Advanced parameters support parameter policies. To share a parameter, it must be in the advanced parameter tier. There is a charge to use the advanced-parameter tier. For more information, see [AWS Systems Manager Pricing for Parameter Store](https://aws.amazon.com/systems-manager/pricing/#Parameter_Store). Choosing **Advanced** as the default tier means that Parameter Store always attempts to create an advanced parameter for requests that don't specify a tier.
**Note**  
When you choose the advanced-parameter tier, you explicitly authorize AWS to charge your account for any advanced parameters you create.
+ **Intelligent-Tiering** – With the Intelligent-Tiering option, Parameter Store determines whether to use the standard-parameter tier or the advanced-parameter tier based on the content of the request. For example, if you run a command to create a parameter with content under 4 KB, there are fewer than 10,000 parameters in the current AWS Region in your AWS account, and you don't specify a parameter policy, a standard parameter is created. If the content is more than 4 KB, if you already have more than 10,000 parameters in the current AWS Region in your AWS account, or if you specify a parameter policy, an advanced parameter is created. 
**Note**  
When you choose Intelligent-Tiering, you explicitly authorize AWS to charge your account for any advanced parameters that are created. 

You can change the Parameter Store default tier setting at any time.

### Configuring permissions to specify a Parameter Store default tier


Verify that you have permission in AWS Identity and Access Management (IAM) to change the default parameter tier in Parameter Store by doing one of the following:
+ Make sure that you attach the `AdministratorAccess` policy to your IAM entity (such as user, group, or role).
+ Make sure that you have permission to change the default tier setting by using the following API operations:
  + [GetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetServiceSetting.html)
  + [UpdateServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateServiceSetting.html)
  + [ResetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ResetServiceSetting.html)

Grant the following permissions to the IAM entity to allow a user to view and change the default tier setting for parameters in a specific AWS Region in an AWS account.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetServiceSetting"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:UpdateServiceSetting"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:servicesetting/ssm/parameter-store/default-parameter-tier"
        }
    ]
}
```

------

Administrators can specify read-only permission by assigning the following permissions.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetServiceSetting"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "ssm:ResetServiceSetting",
                "ssm:UpdateServiceSetting"
            ],
            "Resource": "*"
        }
    ]
}
```

------

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

### Specifying or changing the Parameter Store default tier using the console


The following procedure shows how to use the Systems Manager console to specify or change the default parameter tier for the current AWS account and AWS Region. 

**Tip**  
If you haven't created a parameter yet, you can use the AWS Command Line Interface (AWS CLI) or AWS Tools for Windows PowerShell to change the default parameter tier. For information, see [Specifying or changing the Parameter Store default tier using the AWS CLI](#parameter-store-tier-changing-cli) and [Specifying or changing the Parameter Store default tier (PowerShell)](#parameter-store-tier-changing-ps).

**To specify or change the Parameter Store default tier**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose the **Settings** tab.

1. Choose **Change default tier**.

1. Choose one of the following options.
   + **Standard**
   + **Advanced**
   + **Intelligent-Tiering**

   For information about these options, see [Specifying a default parameter tier](#ps-default-tier).

1. Review the message, and choose **Confirm**.

If you want to change the default tier setting later, repeat this procedure and specify a different default tier option.

### Specifying or changing the Parameter Store default tier using the AWS CLI


The following procedure shows how to use the AWS CLI to change the default parameter tier setting for the current AWS account and AWS Region.

**To specify or change the Parameter Store default tier using the AWS CLI**

1. Open the AWS CLI and run the following command to change the default parameter tier setting for a specific AWS Region in an AWS account.

   ```
   aws ssm update-service-setting --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/parameter-store/default-parameter-tier --setting-value tier-option
   ```

   *region* represents the identifier for an AWS Region supported by AWS Systems Manager, such as `us-east-2` for the US East (Ohio) Region. For a list of supported *region* values, see the **Region** column in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

   *tier-option* values include `Standard`, `Advanced`, and `Intelligent-Tiering`. For information about these options, see [Specifying a default parameter tier](#ps-default-tier).

   There is no output if the command succeeds.

1. Run the following command to view the current default parameter tier service settings for Parameter Store in the current AWS account and AWS Region.

   ```
   aws ssm get-service-setting --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/parameter-store/default-parameter-tier
   ```

   The system returns information similar to the following.

   ```
   {
       "ServiceSetting": {
           "SettingId": "/ssm/parameter-store/default-parameter-tier",
           "SettingValue": "Advanced",
           "LastModifiedDate": 1556551683.923,
           "LastModifiedUser": "arn:aws:sts::123456789012:assumed-role/Administrator/Jasper",
           "ARN": "arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/parameter-store/default-parameter-tier",
           "Status": "Customized"
       }
   }
   ```

If you want to change the default tier setting again, repeat this procedure and specify a different `SettingValue` option.

### Specifying or changing the Parameter Store default tier (PowerShell)


The following procedure shows how to use the Tools for Windows PowerShell to change the default parameter tier setting for a specific AWS Region in an Amazon Web Services account.

**To specify or change the Parameter Store default tier using PowerShell**

1. Change the Parameter Store default tier in the current AWS account and AWS Region using the AWS Tools for PowerShell (Tools for PowerShell).

   ```
   Update-SSMServiceSetting -SettingId "arn:aws:ssm:region:account-id:servicesetting/ssm/parameter-store/default-parameter-tier" -SettingValue "tier-option" -Region region
   ```

   *region* represents the identifier for an AWS Region supported by AWS Systems Manager, such as `us-east-2` for the US East (Ohio) Region. For a list of supported *region* values, see the **Region** column in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

   *tier-option* values include `Standard`, `Advanced`, and `Intelligent-Tiering`. For information about these options, see [Specifying a default parameter tier](#ps-default-tier).

   There is no output if the command succeeds.

1. Run the following command to view the current default parameter tier service settings for Parameter Store in the current AWS account and AWS Region.

   ```
   Get-SSMServiceSetting -SettingId "arn:aws:ssm:region:account-id:servicesetting/ssm/parameter-store/default-parameter-tier" -Region region
   ```

   *region* represents the identifier for an AWS Region supported by AWS Systems Manager, such as `us-east-2` for the US East (Ohio) Region. For a list of supported *region* values, see the **Region** column in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

   The system returns information similar to the following.

   ```
   ARN : arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/parameter-store/default-parameter-tier
   LastModifiedDate : 4/29/2019 3:35:44 PM
   LastModifiedUser : arn:aws:sts::123456789012:assumed-role/Administrator/Jasper
   SettingId        : /ssm/parameter-store/default-parameter-tier
   SettingValue     : Advanced
   Status           : Customized
   ```

If you want to change the default tier setting again, repeat this procedure and specify a different `SettingValue` option.

## Changing a standard parameter to an advanced parameter


Use the following procedure to change an existing standard parameter to an advanced parameter. For information about how to create a new advanced parameter, see [Creating Parameter Store parameters in Systems Manager](sysman-paramstore-su-create.md).

**To change a standard parameter to an advanced parameter**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose a parameter, and then choose **Edit**.

1. For **Description**, enter information about this parameter.

1. Choose **Advanced**.

1. For **Value**, enter the value of this parameter. Advanced parameters have a maximum value limit of 8 KB.

1. Choose **Save changes**.

# Increasing or resetting Parameter Store throughput


Increasing Parameter Store throughput increases the maximum number of transactions per second (TPS) that Parameter Store, a tool in AWS Systems Manager, can process. Increased throughput allows you to operate Parameter Store at higher volumes to support applications and workloads that need concurrent access to multiple parameters. You can increase the quota up to the maximum throughput limit on the **Settings** tab.

The Parameter Store throughput setting applies to all transactions created by all IAM users in the current AWS account and AWS Region. The throughput setting applies to standard and advanced parameters. 

**Note**  
Typically, updates are immediately visible in Service Quotas. In rare cases, it can take up to 24 hours for an update to be reflected.

For more information about the default and maximum throughput limits, see [AWS Systems Manager endpoints and quotas](https://docs.aws.amazon.com//general/latest/gr/ssm.html#limits_ssm).

Increasing the throughput quota incurs a charge on your AWS account. For more information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).

**Topics**
+ [Configuring permissions to change Parameter Store throughput](#parameter-store-throughput-permissions)
+ [Increasing or resetting throughput using the console](#parameter-store-throughput-increasing)
+ [Increasing or resetting throughput using the AWS CLI](#parameter-store-throughput-increasing-cli)
+ [Increasing or resetting throughput (PowerShell)](#parameter-store-throughput-increasing-ps)

## Configuring permissions to change Parameter Store throughput


Verify that you have permission in IAM to change Parameter Store throughput by doing one of the following:
+ Make sure that the `AdministratorAccess` policy is attached to your IAM entity (user, group, or role).
+ Make sure that you have permission to change the throughput service setting by using the following API operations:
  + [GetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetServiceSetting.html)
  + [UpdateServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateServiceSetting.html)
  + [ResetServiceSetting](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ResetServiceSetting.html)

Grant the following permissions to the IAM entity to allow a user to view and change the parameter-throughput setting for parameters in a specific AWS Region in an AWS account.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetServiceSetting"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:UpdateServiceSetting"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:servicesetting/ssm/parameter-store/high-throughput-enabled"
        }
    ]
}
```

------

Administrators can specify read-only permission by assigning the following permissions.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetServiceSetting"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "ssm:ResetServiceSetting",
                "ssm:UpdateServiceSetting"
            ],
            "Resource": "*"
        }
    ]
}
```

------

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

## Increasing or resetting throughput using the console


The following procedure shows how to use the Systems Manager console to increase the number of transactions per second that Parameter Store can process for the current AWS account and AWS Region. It also shows how to revert to the standard settings if you no longer need increased throughput or no longer want to incur charges.

**To increase or reset Parameter Store throughput using the console**
**Tip**  
If you haven't created a parameter yet, you can use the AWS Command Line Interface (AWS CLI) or AWS Tools for Windows PowerShell to increase throughput. For information, see [Increasing or resetting throughput using the AWS CLI](#parameter-store-throughput-increasing-cli) and [Increasing or resetting throughput (PowerShell)](#parameter-store-throughput-increasing-ps).

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose the **Settings** tab.

1. To increase throughput, choose **Set limit**.

   -or-

    To revert to the default limit, choose **Reset limit**.

1. If you are increasing the limit, do the following: 
   + Select the check box for **I accept that changing this setting incurs charges on my AWS account**.
   + Choose **Set limit**.

   -or-

   If you are resetting the limit to the default, do the following:
   + Select the check box for **I accept that resetting to the default throughput limit causes Parameter Store to process fewer transactions per second**.
   + Choose **Reset limit**.

## Increasing or resetting throughput using the AWS CLI


The following procedure shows how to use the AWS CLI to increase the number of transactions per second that Parameter Store can process for the current AWS account and AWS Region. You can also revert to the default limit.

**To increase Parameter Store throughput using the AWS CLI**

1. Open the AWS CLI and run the following command to increase the transactions per second that Parameter Store can process in the current AWS account and AWS Region.

   ```
   aws ssm update-service-setting --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/parameter-store/high-throughput-enabled --setting-value true
   ```

   There is no output if the command succeeds.

1. Run the following command to view the current throughput service settings for Parameter Store in the current AWS account and AWS Region.

   ```
   aws ssm get-service-setting --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/parameter-store/high-throughput-enabled
   ```

   The system returns information similar to the following:

   ```
   {
       "ServiceSetting": {
           "SettingId": "/ssm/parameter-store/high-throughput-enabled",
           "SettingValue": "true",
           "LastModifiedDate": 1556551683.923,
           "LastModifiedUser": "arn:aws:sts::123456789012:assumed-role/Administrator/Jasper",
           "ARN": "arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/parameter-store/high-throughput-enabled",
           "Status": "Customized"
       }
   }
   ```

If you no longer need increased throughput, or if you no longer want to incur charges, you can revert to the standard settings. To revert your settings, run the following command.

```
aws ssm reset-service-setting --setting-id arn:aws:ssm:region:account-id:servicesetting/ssm/parameter-store/high-throughput-enabled
```

The system returns information similar to the following:
```
{
    "ServiceSetting": {
        "SettingId": "/ssm/parameter-store/high-throughput-enabled",
        "SettingValue": "false",
        "LastModifiedDate": 1555532818.578,
        "LastModifiedUser": "System",
        "ARN": "arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/parameter-store/high-throughput-enabled",
        "Status": "Default"
    }
}
```
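
The `--setting-id` values used throughout these commands follow one fixed ARN shape that varies only by Region, account, and setting path. The following helper builds them programmatically; it is an illustrative sketch only, not part of any AWS SDK.

```python
def service_setting_arn(region: str, account_id: str, setting_path: str) -> str:
    """Build a Systems Manager service-setting ARN, for example for use as
    --setting-id in the commands above. setting_path is the portion after
    "servicesetting/", such as "ssm/parameter-store/high-throughput-enabled"."""
    return f"arn:aws:ssm:{region}:{account_id}:servicesetting/{setting_path}"

# For example, the throughput setting ARN for account 123456789012 in us-east-2:
print(service_setting_arn("us-east-2", "123456789012",
                          "ssm/parameter-store/high-throughput-enabled"))
```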

## Increasing or resetting throughput (PowerShell)


The following procedure shows how to use the Tools for Windows PowerShell to increase the number of transactions per second that Parameter Store can process for the current AWS account and AWS Region. You can also revert to the default limit.

**To increase Parameter Store throughput using PowerShell**

1. Increase Parameter Store throughput in the current AWS account and AWS Region using the AWS Tools for PowerShell (Tools for PowerShell).

   ```
   Update-SSMServiceSetting -SettingId "arn:aws:ssm:region:account-id:servicesetting/ssm/parameter-store/high-throughput-enabled" -SettingValue "true" -Region region
   ```

   There is no output if the command succeeds.

1. Run the following command to view the current throughput service settings for Parameter Store in the current AWS account and AWS Region.

   ```
   Get-SSMServiceSetting -SettingId "arn:aws:ssm:region:account-id:servicesetting/ssm/parameter-store/high-throughput-enabled" -Region region
   ```

   The system returns information similar to the following:

   ```
   ARN              : arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/parameter-store/high-throughput-enabled
   LastModifiedDate : 4/29/2019 3:35:44 PM
   LastModifiedUser : arn:aws:sts::123456789012:assumed-role/Administrator/Jasper
   SettingId        : /ssm/parameter-store/high-throughput-enabled
   SettingValue     : true
   Status           : Customized
   ```

If you no longer need increased throughput, or if you no longer want to incur charges, you can revert to the standard settings. To revert your settings, run the following command.

```
Reset-SSMServiceSetting -SettingId "arn:aws:ssm:region:account-id:servicesetting/ssm/parameter-store/high-throughput-enabled" -Region region
```

The system returns information similar to the following:

```
ARN              : arn:aws:ssm:us-east-2:123456789012:servicesetting/ssm/parameter-store/high-throughput-enabled
LastModifiedDate : 4/17/2019 8:26:58 PM
LastModifiedUser : System
SettingId        : /ssm/parameter-store/high-throughput-enabled
SettingValue     : false
Status           : Default
```

# Setting up notifications or triggering actions based on Parameter Store events

The topics in this section explain how to use Amazon EventBridge and Amazon Simple Notification Service (Amazon SNS) to notify you about changes to AWS Systems Manager parameters. You can create an EventBridge rule to notify you when a parameter or a parameter label version is created, updated, or deleted. Events are emitted on a best effort basis. You can be notified about changes or status related to parameter policies, such as when a parameter expires, is going to expire, or hasn't changed for a specified period of time.

**Note**  
Parameter policies are available for parameters that use the advanced parameters tier. Charges apply. For more information, see [Assigning parameter policies in Parameter Store](parameter-store-policies.md) and [Managing parameter tiers](parameter-store-advanced-parameters.md).

The topics in this section also explain how to initiate other actions on a target for specific parameter events. For example, you can run an AWS Lambda function to recreate a parameter automatically when it expires or is deleted. You can set up a notification to invoke a Lambda function when your database password is updated. The Lambda function can force your database connections to reset or reconnect with the new password. EventBridge also supports running Run Command commands and Automation executions, and actions in many other AWS services. Run Command and Automation are both tools in AWS Systems Manager. For more information, see the *[Amazon EventBridge User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/)*.

**Before You Begin**  
Create any resources you need to specify the target action for the rule you create. For example, if the rule you create is for sending a notification, first create an Amazon SNS topic. For more information, see [Getting started with Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-getting-started.html) in the *Amazon Simple Notification Service Developer Guide*.

## Configuring EventBridge rules for parameters and parameter policies


This topic explains the following:
+ How to create an EventBridge rule that invokes a target based on events that happen to one or more parameters in your AWS account.
+ How to create EventBridge rules that invoke targets based on events that happen to one or more parameter policies in your AWS account. When you create an advanced parameter, you specify when a parameter expires, when to receive notification before a parameter expires, and how long to wait before notification should be sent that a parameter hasn't changed. You set up notification for these events using the following procedure. For more information, see [Assigning parameter policies in Parameter Store](parameter-store-policies.md) and [Managing parameter tiers](parameter-store-advanced-parameters.md).

**To configure an EventBridge rule for a Systems Manager parameter or parameter policy**

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**, and then choose **Create rule**.

   -or-

   If the EventBridge home page opens first, choose **Create rule**.

1. Enter a name and description for the rule.

   A rule can't have the same name as another rule in the same Region and on the same event bus.

1. For **Event bus**, choose the event bus that you want to associate with this rule. If you want this rule to initiate on matching events that come from your own AWS account, select **default**. When an AWS service in your account emits an event, it always goes to your account's default event bus.

1. For **Rule type**, leave the default **Rule with an event pattern** selected.

1. Choose **Next**.

1. For **Event source**, leave the default **AWS events or EventBridge partner events** selected. You can skip the **Sample event** section.

1. For **Event pattern**, do the following:
   + Choose **Custom patterns (JSON editor)**.
   + For **Event pattern**, paste one of the following content in the box, depending on whether you are creating a rule for a parameter or a parameter policy:

------
#### [ Parameter ]

     ```
     {
         "source": [
             "aws.ssm"
         ],
         "detail-type": [
             "Parameter Store Change"
         ],
         "detail": {
             "name": [
                 "parameter-1-name",
                 "/parameter-2-name/level-2",
                 "/parameter-3-name/level-2/level-3"
             ],
             "operation": [
                 "Create",
                 "Update",
                 "Delete",
                 "LabelParameterVersion"
             ]
         }
     }
     ```

------
#### [ Parameter policy ]

     ```
     {
         "source": [
             "aws.ssm"
         ],
         "detail-type": [
             "Parameter Store Policy Action"
         ],
         "detail": {
             "parameter-name": [
                 "parameter-1-name",
                 "/parameter-2-name/level-2",
                 "/parameter-3-name/level-2/level-3"
             ],
             "policy-type": [
                 "Expiration",
                 "ExpirationNotification",
                 "NoChangeNotification"
             ]
         }
     }
     ```

------
   + Modify the contents for the parameters and the operations you want to act on, as shown in the following samples. 

------
#### [ Parameter ]

     With this example, an action is taken when either of the parameters named `/Oncall` or `/Project/Teamlead` is updated:

     ```
     {
         "source": [
             "aws.ssm"
         ],
         "detail-type": [
             "Parameter Store Change"
         ],
         "detail": {
             "name": [
                 "/Oncall",
                 "/Project/Teamlead"
             ],
             "operation": [
                 "Update"
             ]
         }
     }
     ```

------
#### [ Parameter policy ]

     With this example, an action is taken whenever the parameter named `/OncallDuties` expires and is deleted:

     ```
     {
         "source": [
             "aws.ssm"
         ],
         "detail-type": [
             "Parameter Store Policy Action"
         ],
         "detail": {
             "parameter-name": [
                 "/OncallDuties"
             ],
             "policy-type": [
                 "Expiration"
             ]
         }
     }
     ```

------

1. Choose **Next**.

1. For **Target 1**, choose a target type and a supported resource. For example, if you choose **SNS topic**, make a selection for **Topic**. If you choose **CodePipeline**, enter a pipeline ARN for **Pipeline ARN**. Provide additional configuration values as required.
**Tip**  
Choose **Add another target** if you require additional targets for the rule.

1. Choose **Next**.

1. (Optional) Enter one or more tags for the rule. For more information, see [Amazon EventBridge tags](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-tagging.html) in the *Amazon EventBridge User Guide*.

1. Choose **Next**.

1. Choose **Create rule**.
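
The custom event patterns above select events by exact match on each listed field: an event matches when, for every field named in the pattern, the event's value appears in the pattern's list. The following sketch illustrates that basic matching behavior; it is an illustration of the filtering semantics only, not the EventBridge implementation, and content filters such as prefix matching are not modeled.

```python
def pattern_matches(pattern: dict, event: dict) -> bool:
    """Return True when every field named in the pattern exists in the event
    and the event's value is one of the pattern's listed values. Nested
    objects (such as "detail") are matched recursively."""
    for key, allowed in pattern.items():
        if key not in event:
            return False
        if isinstance(allowed, dict):
            if not pattern_matches(allowed, event[key]):
                return False
        elif event[key] not in allowed:
            return False
    return True

# The earlier sample pattern for /Oncall and /Project/Teamlead updates:
pattern = {
    "source": ["aws.ssm"],
    "detail-type": ["Parameter Store Change"],
    "detail": {"name": ["/Oncall", "/Project/Teamlead"], "operation": ["Update"]},
}
update_event = {
    "source": "aws.ssm",
    "detail-type": "Parameter Store Change",
    "detail": {"name": "/Oncall", "operation": "Update"},
}
delete_event = {**update_event, "detail": {"name": "/Oncall", "operation": "Delete"}}
```

With these inputs, `pattern_matches(pattern, update_event)` is true, while `pattern_matches(pattern, delete_event)` is false because `Delete` isn't in the pattern's `operation` list.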

**More info**  
+ [Use parameter labels for easy configuration update across environments](https://aws.amazon.com/blogs/mt/use-parameter-labels-for-easy-configuration-update-across-environments/)
+ [Tutorial: Use EventBridge to relay events to AWS Systems Manager  Run Command](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-ec2-run-command.html) in the *Amazon EventBridge User Guide*
+ [Tutorial: Set AWS Systems Manager Automation as an EventBridge target](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-ssm-automation-as-target.html) in the *Amazon EventBridge User Guide*

# Working with Parameter Store


This section describes how to organize, create, and tag parameters, and how to create different versions of parameters.

You can use the AWS Systems Manager console, the AWS Command Line Interface (AWS CLI), the AWS Tools for PowerShell, and the AWS SDKs to create and work with parameters. For more information about parameters, see [What is a parameter?](systems-manager-parameter-store.md#what-is-a-parameter).

**Topics**
+ [Creating Parameter Store parameters in Systems Manager](sysman-paramstore-su-create.md)
+ [Searching for Parameter Store parameters in Systems Manager](parameter-search.md)
+ [Assigning parameter policies in Parameter Store](parameter-store-policies.md)
+ [Working with parameter hierarchies in Parameter Store](sysman-paramstore-hierarchies.md)
+ [Preventing access to Parameter Store API operations](parameter-store-policy-conditions.md)
+ [Working with parameter labels in Parameter Store](sysman-paramstore-labels.md)
+ [Working with parameter versions in Parameter Store](sysman-paramstore-versions.md)
+ [Working with shared parameters in Parameter Store](parameter-store-shared-parameters.md)
+ [Working with parameters in Parameter Store using Run Command commands](sysman-param-runcommand.md)
+ [Using native parameter support in Parameter Store for Amazon Machine Image IDs](parameter-store-ec2-aliases.md)
+ [Deleting parameters from Parameter Store](deleting-parameters.md)

# Creating Parameter Store parameters in Systems Manager

Use the information in the following topics to help you create Systems Manager parameters using the AWS Systems Manager console, the AWS Command Line Interface (AWS CLI), or AWS Tools for Windows PowerShell (Tools for Windows PowerShell).

This section demonstrates how to create, store, and run parameters with Parameter Store in a test environment. It also demonstrates how to use Parameter Store with other Systems Manager tools in AWS services. For more information, see [What is a parameter?](systems-manager-parameter-store.md#what-is-a-parameter)

## Understanding requirements and constraints for parameter names


Use the information in this topic to help you specify valid values for parameter names when you create a parameter. 

This information supplements the details in the topic [PutParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutParameter.html) in the *AWS Systems Manager API Reference*, which also provides information about the values **AllowedPattern**, **Description**, **KeyId**, **Overwrite**, **Type**, and **Value**.

The requirements and constraints for parameter names include the following:
+ **Case sensitivity**: Parameter names are case sensitive.
+ **Spaces**: Parameter names can't include spaces.
+ **Valid characters**: Parameter names can consist of the following symbols and letters only: `a-zA-Z0-9_.-`

  In addition, the slash character ( / ) is used to delineate hierarchies in parameter names. For example: `/Dev/Production/East/Project-ABC/MyParameter`
+ **Valid AMI format**: When you choose `aws:ec2:image` as the data type for a `String` parameter, the ID you enter must validate for the AMI ID format `ami-12345abcdeEXAMPLE`.
+ **Fully qualified**: When you create or reference a parameter in a hierarchy, include a leading forward slash character (/). When you reference a parameter that is part of a hierarchy, specify the entire hierarchy path including the initial slash (/).
  + Fully qualified parameter names: `MyParameter1`, `/MyParameter2`, `/Dev/Production/East/Project-ABC/MyParameter`
  + Not fully qualified parameter name: `MyParameter3/L1`
+ **Length**: The maximum length for a parameter name that you specify is 1011 characters. This count of 1011 characters includes the characters in the ARN that precede the name you specify, such as the 45 characters in `arn:aws:ssm:us-east-2:111122223333:parameter/`.
+ **Prefixes**: A parameter name can't be prefixed with "`aws`" or "`ssm`" (case-insensitive). For example, attempts to create parameters with the following names fail with an exception:
  + `awsTestParameter`
  + `SSM-testparameter`
  + `/aws/testparam1`
**Note**  
When you specify a parameter in an SSM document, command, or script, include `ssm` as part of the syntax. For example, `{{ssm:parameter-name}}` and `{{ ssm:parameter-name }}`, such as `{{ssm:MyParameter}}` and `{{ ssm:MyParameter }}`.
+ **Uniqueness**: A parameter name must be unique within an AWS Region. For example, Systems Manager treats the following as separate parameters, if they exist in the same Region:
  + `/Test/TestParam1`
  + `/TestParam1`

  The following examples are also unique:
  + `/Test/TestParam1/Logpath1`
  + `/Test/TestParam1`

  The following examples, however, if in the same Region, aren't unique:
  + `/TestParam1`
  + `TestParam1`
+ **Hierarchy depth**: If you specify a parameter hierarchy, the hierarchy can have a maximum depth of fifteen levels. You can define a parameter at any level of the hierarchy. Both of the following examples are structurally valid:
  + `/Level-1/L2/L3/L4/L5/L6/L7/L8/L9/L10/L11/L12/L13/L14/parameter-name`
  + `parameter-name`

  Attempting to create the following parameter would fail with a `HierarchyLevelLimitExceededException` exception:
  + `/Level-1/L2/L3/L4/L5/L6/L7/L8/L9/L10/L11/L12/L13/L14/L15/L16/parameter-name`
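
Taken together, these constraints can be checked locally before calling `PutParameter`. The following sketch implements the checks described above; it is illustrative only (the authoritative validation is performed by the `PutParameter` API), and the default ARN prefix is the sample account and Region used earlier.

```python
import re

def is_valid_parameter_name(
    name: str,
    arn_prefix: str = "arn:aws:ssm:us-east-2:111122223333:parameter/",
) -> bool:
    """Check a candidate parameter name against the constraints above."""
    # Valid characters only (slashes delineate hierarchy levels); this
    # also rejects spaces and empty names.
    if not re.fullmatch(r"[a-zA-Z0-9_.\-/]+", name):
        return False
    levels = [level for level in name.split("/") if level]
    # The name can't be prefixed with "aws" or "ssm" (case-insensitive).
    if not levels or levels[0].lower().startswith(("aws", "ssm")):
        return False
    # A hierarchy can have a maximum depth of fifteen levels.
    if len(levels) > 15:
        return False
    # The 1011-character limit counts the ARN characters before the name.
    if len(arn_prefix) + len(name.lstrip("/")) > 1011:
        return False
    return True
```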

**Important**  
If a user has access to a path, then the user can access all levels of that path. For example, if a user has permission to access path `/a`, then the user can also access `/a/b`. Even if a user has explicitly been denied access in AWS Identity and Access Management (IAM) for parameter `/a/b`, they can still call the [GetParametersByPath](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParametersByPath.html) API operation recursively for `/a` and view `/a/b`.

**Topics**
+ [Understanding requirements and constraints for parameter names](#sysman-parameter-name-constraints)
+ [Creating a Parameter Store parameter using the console](parameter-create-console.md)
+ [Creating a Parameter Store parameter using the AWS CLI](param-create-cli.md)
+ [Creating a Parameter Store parameter using Tools for Windows PowerShell](param-create-ps.md)

# Creating a Parameter Store parameter using the console

You can use the AWS Systems Manager console to create `String`, `StringList`, and `SecureString` parameter types. After deleting a parameter, wait at least 30 seconds before creating a parameter with the same name.

**Note**  
Parameters are only available in the AWS Region where they were created.

The following procedure walks you through the process of creating a parameter in the Parameter Store console. You can create `String`, `StringList`, and `SecureString` parameter types from the console.

**To create a parameter**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose **Create parameter**.

1. In the **Name** box, enter a hierarchy and a name. For example, enter **/Test/helloWorld**.

   For more information about parameter hierarchies, see [Working with parameter hierarchies in Parameter Store](sysman-paramstore-hierarchies.md).

1. In the **Description** box, type a description that identifies this parameter as a test parameter.

1. For **Parameter tier**, choose either **Standard** or **Advanced**. For more information about advanced parameters, see [Managing parameter tiers](parameter-store-advanced-parameters.md).

1. For **Type**, choose **String**, **StringList**, or **SecureString**.
   + If you choose **String**, the **Data type** field is displayed. If you're creating a parameter to hold the resource ID for an Amazon Machine Image (AMI), select `aws:ec2:image`. Otherwise, keep the default `text` selected.
   + If you choose **SecureString**, the **KMS Key ID** field is displayed. If you don't provide an AWS Key Management Service AWS KMS key ID, an AWS KMS key Amazon Resource Name (ARN), an alias name, or an alias ARN, then the system uses `alias/aws/ssm`, which is the AWS managed key for Systems Manager. If you don't want to use this key, then you can use a customer managed key. For more information about AWS managed keys and customer managed keys, see [AWS Key Management Service Concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) in the *AWS Key Management Service Developer Guide*. For more information about Parameter Store and AWS KMS encryption, see [How AWS Systems Manager Parameter Store Uses AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/services-parameter-store.html).
**Important**  
Parameter Store only supports [symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-choose-key-spec.html#symmetric-cmks). You can't use an [asymmetric encryption KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) to encrypt your parameters. For help determining whether a KMS key is symmetric or asymmetric, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.
   + When creating a `SecureString` parameter in the console by using the `key-id` parameter with either a customer managed key alias name or an alias ARN, specify the prefix `alias/` before the alias. Following is an ARN example.

     ```
     arn:aws:kms:us-east-2:123456789012:alias/abcd1234-ab12-cd34-ef56-abcdeEXAMPLE
     ```

     Following is an alias name example.

     ```
     alias/MyAliasName
     ```

1. In the **Value** box, type a value. For example, type **This is my first parameter** or **ami-0dbf5ea29aEXAMPLE**.
**Note**  
Parameters can't be referenced or nested in the values of other parameters. You can't include `{{}}` or `{{ssm:parameter-name}}` in a parameter value.  
If you chose **SecureString**, the value of the parameter is masked by default (`******`) when you view it later on the parameter **Overview** tab, as shown in the following illustration. Choose **Show** to display the parameter value.  

![\[A SecureString parameter's value is masked with the option to display the value.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/ps-overview-show-secstring.png)


1. (Optional) In the **Tags** area, apply one or more tag key-value pairs to the parameter.

   Tags are optional metadata that you assign to a resource. Tags allow you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a Systems Manager parameter to identify the type of resource to which it applies, the environment, or the type of configuration data referenced by the parameter. In this case, you could specify the following key-value pairs:
   + `Key=Resource,Value=S3bucket`
   + `Key=OS,Value=Windows`
   + `Key=ParameterType,Value=LicenseKey`

1. Choose **Create parameter**. 

1. In the parameters list, choose the name of the parameter you just created. Verify the details on the **Overview** tab. If you created a `SecureString` parameter, choose **Show** to view the unencrypted value.

**Note**  
You can’t change an advanced parameter to a standard parameter. If you no longer need an advanced parameter, or if you no longer want to incur charges for an advanced parameter, delete it and recreate it as a new standard parameter.

**Note**  
You can't change the type of an existing parameter (for example, from `String` to `SecureString`) using the console. To change a parameter's type, you must use the AWS CLI or AWS Tools for PowerShell with the `--overwrite` option. For more information, see [Creating a Parameter Store parameter using the AWS CLI](param-create-cli.md).

# Creating a Parameter Store parameter using the AWS CLI

You can use the AWS Command Line Interface (AWS CLI) to create `String`, `StringList`, and `SecureString` parameter types. After deleting a parameter, wait at least 30 seconds before creating another parameter with the same name.

Parameters can't be referenced or nested in the values of other parameters. You can't include `{{}}` or `{{ssm:parameter-name}}` in a parameter value.

**Note**  
Parameters are only available in the AWS Region where they were created.

**Topics**
+ [Creating a `String` parameter using the AWS CLI](#param-create-cli-string)
+ [Creating a StringList parameter using the AWS CLI](#param-create-cli-stringlist)
+ [Creating a SecureString parameter using the AWS CLI](#param-create-cli-securestring)
+ [Creating a multi-line parameter using the AWS CLI](#param-create-cli-multiline)

## Creating a `String` parameter using the AWS CLI


1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to create a `String`-type parameter. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "parameter-name" \
       --value "parameter-value" \
       --type String \
       --tags "Key=tag-key,Value=tag-value"
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "parameter-name" ^
       --value "parameter-value" ^
       --type String ^
       --tags "Key=tag-key,Value=tag-value"
   ```

------

   -or-

   Run the following command to create a parameter that contains an Amazon Machine Image (AMI) ID as the parameter value.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "parameter-name" \
       --value "an-AMI-id" \
       --type String \
       --data-type "aws:ec2:image" \
       --tags "Key=tag-key,Value=tag-value"
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "parameter-name" ^
       --value "an-AMI-id" ^
       --type String ^
       --data-type "aws:ec2:image" ^
       --tags "Key=tag-key,Value=tag-value"
   ```

------

   The `--name` option supports hierarchies. For information about hierarchies, see [Working with parameter hierarchies in Parameter Store](sysman-paramstore-hierarchies.md).

   The `--data-type` option must be specified only if you are creating a parameter that contains an AMI ID. It validates that the parameter value you enter is a properly formatted Amazon Elastic Compute Cloud (Amazon EC2) AMI ID. For all other parameters, the default data type is `text` and it's optional to specify a value. For more information, see [Using native parameter support in Parameter Store for Amazon Machine Image IDs](parameter-store-ec2-aliases.md).
**Important**  
If successful, the command returns the version number of the parameter. **Exception**: If you have specified `aws:ec2:image` as the data type, a new version number in the response doesn't mean that the parameter value has been validated yet. For more information, see [Using native parameter support in Parameter Store for Amazon Machine Image IDs](parameter-store-ec2-aliases.md).
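Because `aws:ec2:image` validation completes asynchronously on the AWS side, a quick local format check can catch obvious typos before you call `put-parameter`. The following is a minimal sketch under stated assumptions: the `is_ami_id` helper and its regex are illustrative only, based on the documented `ami-` prefix plus 8 or 17 hexadecimal characters, and are not part of the Systems Manager tooling.

```
# Hypothetical local pre-check for AMI ID format before calling
# put-parameter with --data-type "aws:ec2:image". The regex is an
# assumption, not an official AWS validator.
is_ami_id() {
  echo "$1" | grep -Eq '^ami-[0-9a-f]{8}([0-9a-f]{9})?$'
}

if is_ami_id "ami-0abcdef1234567890"; then
  echo "looks like an AMI ID"
else
  echo "not an AMI ID"
fi
```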

   The following example adds two key-value pair tags to a parameter. 

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name parameter-name \
       --value "parameter-value" \
       --type "String" \
       --tags '[{"Key":"Region","Value":"East"},{"Key":"Environment", "Value":"Production"}]'
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name parameter-name ^
       --value "parameter-value" ^
       --type "String" ^
       --tags [{\"Key\":\"Region1\",\"Value\":\"East1\"},{\"Key\":\"Environment1\",\"Value\":\"Production1\"}]
   ```

------

   The following example uses a parameter hierarchy in the name to create a plaintext `String` parameter. It returns the version number of the parameter. For more information about parameter hierarchies, see [Working with parameter hierarchies in Parameter Store](sysman-paramstore-hierarchies.md).

------
#### [ Linux & macOS ]

   **Parameter not in a hierarchy**

   ```
   aws ssm put-parameter \
       --name "golden-ami" \
       --type "String" \
       --value "ami-12345abcdeEXAMPLE"
   ```

   **Parameter in a hierarchy**

   ```
   aws ssm put-parameter \
       --name "/amis/linux/golden-ami" \
       --type "String" \
       --value "ami-12345abcdeEXAMPLE"
   ```

------
#### [ Windows ]

   **Parameter not in a hierarchy**

   ```
   aws ssm put-parameter ^
       --name "golden-ami" ^
       --type "String" ^
       --value "ami-12345abcdeEXAMPLE"
   ```

   **Parameter in a hierarchy**

   ```
   aws ssm put-parameter ^
       --name "/amis/windows/golden-ami" ^
       --type "String" ^
       --value "ami-12345abcdeEXAMPLE"
   ```

------

1. Run the following command to view the latest parameter value and verify the details of your parameter. The following example uses a parameter named `/Test/IAD/helloWorld`.

   ```
   aws ssm get-parameters --names "/Test/IAD/helloWorld"
   ```

   The system returns information like the following.

   ```
   {
       "InvalidParameters": [],
       "Parameters": [
           {            
               "Name": "/Test/IAD/helloWorld",
               "Type": "String",
               "Value": "My updated parameter value",
               "Version": 2,
               "LastModifiedDate": "2020-02-25T15:55:33.677000-08:00",
               "ARN": "arn:aws:ssm:us-east-2:123456789012:parameter/Test/IAD/helloWorld"            
           }
       ]
   }
   ```
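If you save a `get-parameters` response to a file, you can extract fields from it offline without calling AWS again. This sketch uses a hypothetical local file, `response.json`, containing a trimmed copy of the response above; only the shell and `python3` are assumed.

```
# Write a trimmed, hypothetical copy of the get-parameters response locally.
cat > response.json <<'EOF'
{"InvalidParameters": [], "Parameters": [{"Name": "/Test/IAD/helloWorld", "Value": "My updated parameter value"}]}
EOF

# Pull out just the Value field from the saved response.
python3 -c 'import json; d = json.load(open("response.json")); print(d["Parameters"][0]["Value"])'
```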

Run the following command to change the parameter value. It returns the version number of the parameter.

```
aws ssm put-parameter --name "/Test/IAD/helloWorld" --value "My updated 1st parameter" --type String --overwrite
```

Run the following command to view the parameter value history.

```
aws ssm get-parameter-history --name "/Test/IAD/helloWorld"
```

Run the following command to use this parameter in a command.

```
aws ssm send-command --document-name "AWS-RunShellScript" --parameters '{"commands":["echo {{ssm:/Test/IAD/helloWorld}}"]}' --targets "Key=InstanceIds,Values=instance-ids"
```

Run the following command to retrieve only the parameter value.

```
aws ssm get-parameter --name testDataTypeParameter --query "Parameter.Value"
```

Run the following command to retrieve only the parameter value by using `get-parameters`.

```
aws ssm get-parameters --names "testDataTypeParameter" --query "Parameters[*].Value"
```

Run the following command to view the parameter metadata.

```
aws ssm describe-parameters --filters "Key=Name,Values=/Test/IAD/helloWorld"
```

**Note**  
*Name* must be capitalized.

The system returns information like the following.

```
{
    "Parameters": [
        {
            "Name": "helloworld",
            "Type": "String",
            "LastModifiedUser": "arn:aws:iam::123456789012:user/JohnDoe",
            "LastModifiedDate": 1494529763.156,
            "Version": 1,
            "Tier": "Standard",
            "Policies": []           
        }
    ]
}
```

## Creating a StringList parameter using the AWS CLI


1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to create a parameter. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "parameter-name" \
       --value "a-comma-separated-list-of-values" \
       --type StringList \
       --tags "Key=tag-key,Value=tag-value"
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "parameter-name" ^
       --value "a-comma-separated-list-of-values" ^
       --type StringList ^
       --tags "Key=tag-key,Value=tag-value"
   ```

------
**Note**  
If successful, the command returns the version number of the parameter.

   Here is a `StringList` example that uses a parameter hierarchy.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name /IAD/ERP/Oracle/addUsers \
       --value "Milana,Mariana,Mark,Miguel" \
       --type StringList
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name /IAD/ERP/Oracle/addUsers ^
       --value "Milana,Mariana,Mark,Miguel" ^
       --type StringList
   ```

------
**Note**  
Items in a `StringList` must be separated by a comma (,). You can't use other punctuation or special characters to escape items in the list. If you have a parameter value that requires a comma, then use the `String` type.
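The comma rule above is easy to check locally before you choose a parameter type. The sketch below is illustrative only: it counts the items the service would see by splitting the value on commas, which also shows why an item that itself needs an embedded comma can't be stored in a `StringList`.

```
# Count the items a StringList value would be split into. Each comma
# starts a new item, so a value whose items need embedded commas must
# use the String type instead.
value="Milana,Mariana,Mark,Miguel"
count=$(printf '%s\n' "$value" | awk -F',' '{print NF}')
echo "StringList items: $count"
```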

1. Run the `get-parameters` command to verify the details of the parameter. For example:

   ```
   aws ssm get-parameters --names "/IAD/ERP/Oracle/addUsers"
   ```

## Creating a SecureString parameter using the AWS CLI


Use the following procedure to create a `SecureString` parameter. Replace each *example resource placeholder* with your own information.

**Important**  
Only the *value* of a `SecureString` parameter is encrypted. Parameter names, descriptions, and other properties aren't encrypted.

**Important**  
Parameter Store only supports [symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-choose-key-spec.html#symmetric-cmks). You can't use an [asymmetric encryption KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) to encrypt your parameters. For help determining whether a KMS key is symmetric or asymmetric, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run **one** of the following commands to create a parameter that uses the `SecureString` data type.

------
#### [ Linux & macOS ]

   **Create a `SecureString` parameter using the default AWS managed key**

   ```
   aws ssm put-parameter \
       --name "parameter-name" \
       --value "parameter-value" \
       --type "SecureString"
   ```

   **Create a `SecureString` parameter that uses the default AWS managed key and tags**

   ```
   aws ssm put-parameter \
       --name "parameter-name" \
       --value "a-parameter-value, for example P@ssW%rd#1" \
       --type "SecureString" \
       --tags "Key=tag-key,Value=tag-value"
   ```

   **Create a `SecureString` parameter that uses a customer managed key**

   ```
   aws ssm put-parameter \
       --name "parameter-name" \
       --value "a-parameter-value, for example P@ssW%rd#1" \
       --type "SecureString" \
       --key-id "your-kms-key-id-or-alias" \
       --tags "Key=tag-key,Value=tag-value"
   ```

------
#### [ Windows ]

   **Create a `SecureString` parameter using the default AWS managed key**

   ```
   aws ssm put-parameter ^
       --name "parameter-name" ^
       --value "parameter-value" ^
       --type "SecureString"
   ```

   **Create a `SecureString` parameter that uses the default AWS managed key and tags**

   ```
   aws ssm put-parameter ^
       --name "parameter-name" ^
       --value "a-parameter-value, for example P@ssW%rd#1" ^
       --type "SecureString" ^
       --tags "Key=tag-key,Value=tag-value"
   ```

   **Create a `SecureString` parameter that uses a customer managed key**

   ```
   aws ssm put-parameter ^
       --name "parameter-name" ^
       --value "a-parameter-value, for example P@ssW%rd#1" ^
       --type "SecureString" ^
       --key-id "your-kms-key-id-or-alias" ^
       --tags "Key=tag-key,Value=tag-value"
   ```

------

   If you create a `SecureString` parameter by using the AWS managed key in your account and Region, then you *don't* have to provide a value for the `--key-id` parameter.
**Note**  
To use the AWS KMS key assigned to your AWS account and AWS Region, remove the `key-id` parameter from the command. For more information about AWS KMS keys, see [AWS Key Management Service Concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) in the *AWS Key Management Service Developer Guide*.

   To use a customer managed key instead of the AWS managed key assigned to your account, specify the key by using the `--key-id` parameter. The parameter supports the following KMS parameter formats.
   + Key Amazon Resource Name (ARN) example:

      `arn:aws:kms:us-east-2:123456789012:key/key-id`
   + Alias ARN example:

     `arn:aws:kms:us-east-2:123456789012:alias/alias-name`
   + Key ID example:

     `12345678-1234-1234-1234-123456789012`
   + Alias Name example:

     `alias/MyAliasName`
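As a quick sanity check, the four supported formats above can be told apart with simple pattern matching. The `kms_key_format` helper below is a hypothetical sketch; its glob patterns are loose approximations for illustration, not a validator.

```
# Hypothetical helper that labels which of the four supported --key-id
# formats a given string uses. First matching pattern wins; the patterns
# are illustrative assumptions, not exhaustive checks.
kms_key_format() {
  case "$1" in
    arn:aws:kms:*:key/*)   echo "key ARN" ;;
    arn:aws:kms:*:alias/*) echo "alias ARN" ;;
    alias/*)               echo "alias name" ;;
    *-*-*-*-*)             echo "key ID" ;;
    *)                     echo "unknown" ;;
  esac
}

kms_key_format "alias/MyAliasName"                      # alias name
kms_key_format "12345678-1234-1234-1234-123456789012"   # key ID
```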

   You can create a customer managed key by using the AWS Management Console or the AWS KMS API. The following AWS CLI command creates a customer managed key in the current AWS Region of your AWS account. For more information, see [CreateKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_CreateKey.html) in the *AWS Key Management Service API Reference*.

   ```
   aws kms create-key
   ```

   Use a command in the following format to create a `SecureString` parameter using the key you just created.

   The following example uses an obfuscated name (`3l3vat3131`) for a password parameter and an AWS KMS key.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name /Finance/Payroll/3l3vat3131 \
       --value "P@sSwW)rd" \
       --type SecureString \
       --key-id arn:aws:kms:us-east-2:123456789012:key/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name /Finance/Payroll/3l3vat3131 ^
       --value "P@sSwW)rd" ^
       --type SecureString ^
       --key-id arn:aws:kms:us-east-2:123456789012:key/1a2b3c4d-1a2b-1a2b-1a2b-1a2b3c4d5e
   ```

------

1. Run the following command to verify the details of the parameter.

   If you don't specify the `with-decryption` parameter, or if you specify the `no-with-decryption` parameter, the command returns the encrypted ciphertext value instead of the plaintext value.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-parameters \
       --names "the-parameter-name-you-specified" \
       --with-decryption
   ```

------
#### [ Windows ]

   ```
   aws ssm get-parameters ^
       --names "the-parameter-name-you-specified" ^
       --with-decryption
   ```

------

1. Run the following command to view the parameter metadata.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-parameters \
       --filters "Key=Name,Values=the-name-that-you-specified"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-parameters ^
       --filters "Key=Name,Values=the-name-that-you-specified"
   ```

------

1. Run the following command to change the parameter value if you're **not** using a customer managed AWS KMS key.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "the-name-that-you-specified" \
       --value "a-new-parameter-value" \
       --type "SecureString" \
       --overwrite
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "the-name-that-you-specified" ^
       --value "a-new-parameter-value" ^
       --type "SecureString" ^
       --overwrite
   ```

------

   -or-

   Run one of the following commands to change the parameter value if you **are** using a customer managed AWS KMS key.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "the-name-that-you-specified" \
       --value "a-new-parameter-value" \
       --type "SecureString" \
       --key-id "the-KMSkey-ID" \
       --overwrite
   ```

   ```
   aws ssm put-parameter \
       --name "the-name-that-you-specified" \
       --value "a-new-parameter-value" \
       --type "SecureString" \
       --key-id "alias/the-key-alias" \
       --overwrite
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "the-name-that-you-specified" ^
       --value "a-new-parameter-value" ^
       --type "SecureString" ^
       --key-id "the-KMSkey-ID" ^
       --overwrite
   ```

   ```
   aws ssm put-parameter ^
       --name "the-name-that-you-specified" ^
       --value "a-new-parameter-value" ^
       --type "SecureString" ^
       --key-id "alias/the-key-alias" ^
       --overwrite
   ```

------

1. Run the following command to view the latest parameter value.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-parameters \
       --names "the-name-that-you-specified" \
       --with-decryption
   ```

------
#### [ Windows ]

   ```
   aws ssm get-parameters ^
       --names "the-name-that-you-specified" ^
       --with-decryption
   ```

------

1. Run the following command to view the parameter value history.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-parameter-history \
       --name "the-name-that-you-specified"
   ```

------
#### [ Windows ]

   ```
   aws ssm get-parameter-history ^
       --name "the-name-that-you-specified"
   ```

------

**Note**  
You can manually create a parameter with an encrypted value. In this case, because the value is already encrypted, you don’t have to choose the `SecureString` parameter type. If you do choose `SecureString`, your parameter is doubly encrypted.

By default, all `SecureString` values are displayed as cipher-text. To decrypt a `SecureString` value, a user must have permission to call the AWS KMS [Decrypt](https://docs.aws.amazon.com/kms/latest/APIReference/API_Decrypt.html) API operation. For information about configuring AWS KMS access control, see [Authentication and Access Control for AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/control-access.html) in the *AWS Key Management Service Developer Guide*.

**Important**  
If you change the KMS key alias for the KMS key used to encrypt a parameter, then you must also update the key alias the parameter uses to reference AWS KMS. This only applies to the KMS key alias; the key ID that an alias attaches to stays the same unless you delete the whole key.

## Creating a multi-line parameter using the AWS CLI


You can use the AWS CLI to create a parameter with line breaks. Use line breaks to break up the text in longer parameter values for better legibility or, for example, update multi-paragraph parameter content for a web page. You can include the content in a JSON file and use the `--cli-input-json` option, using line break characters like `\n`, as shown in the following example.

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to create a multi-line parameter. 

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "MultiLineParameter" \
       --type String \
       --cli-input-json file://MultiLineParameter.json
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "MultiLineParameter" ^
       --type String ^
       --cli-input-json file://MultiLineParameter.json
   ```

------

   The following example shows the contents of the file `MultiLineParameter.json`.

   ```
   {
       "Value": "<para>Paragraph One</para>\n<para>Paragraph Two</para>\n<para>Paragraph Three</para>"
   }
   ```

The saved parameter value is stored as follows.

```
<para>Paragraph One</para>
<para>Paragraph Two</para>
<para>Paragraph Three</para>
```
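One way to produce `MultiLineParameter.json` from a script is shown below. Because the string is single-quoted, the shell leaves the `\n` escapes literal, so they reach `put-parameter` intact as JSON line-break escapes. The file name matches the example above; everything else is a plain shell assumption.

```
# Generate the input file for --cli-input-json. Single quotes keep the
# \n sequences literal in the file rather than expanding them.
printf '%s\n' '{"Value": "<para>Paragraph One</para>\n<para>Paragraph Two</para>\n<para>Paragraph Three</para>"}' > MultiLineParameter.json

cat MultiLineParameter.json
```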

# Creating a Parameter Store parameter using Tools for Windows PowerShell

You can use AWS Tools for Windows PowerShell to create `String`, `StringList`, and `SecureString` parameter types. After deleting a parameter, wait at least 30 seconds before creating another parameter with the same name.

Parameters can't be referenced or nested in the values of other parameters. You can't include `{{}}` or `{{ssm:parameter-name}}` in a parameter value.

**Note**  
Parameters are only available in the AWS Region where they were created.

**Topics**
+ [Creating a String parameter (Tools for Windows PowerShell)](#param-create-ps-string)
+ [Creating a StringList parameter (Tools for Windows PowerShell)](#param-create-ps-stringlist)
+ [Creating a SecureString parameter (Tools for Windows PowerShell)](#param-create-ps-securestring)

## Creating a String parameter (Tools for Windows PowerShell)


1. Install and configure the AWS Tools for PowerShell (Tools for Windows PowerShell), if you haven't already.

   For information, see [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Run the following command to create a parameter that contains a plain text value. Replace each *example resource placeholder* with your own information.

   ```
   Write-SSMParameter `
       -Name "parameter-name" `
       -Value "parameter-value" `
       -Type "String"
   ```

   -or-

   Run the following command to create a parameter that contains an Amazon Machine Image (AMI) ID as the parameter value.
**Note**  
To create a parameter with a tag, first create an `Amazon.SimpleSystemsManagement.Model.Tag` object as a variable. Here is an example.  

   ```
   $tag = New-Object Amazon.SimpleSystemsManagement.Model.Tag
   $tag.Key = "tag-key"
   $tag.Value = "tag-value"
   ```

   ```
   Write-SSMParameter `
       -Name "parameter-name" `
       -Value "an-AMI-id" `
       -Type "String" `
       -DataType "aws:ec2:image" `
       -Tags $tag
   ```

   The `-DataType` option must be specified only if you are creating a parameter that contains an AMI ID. For all other parameters, the default data type is `text`. For more information, see [Using native parameter support in Parameter Store for Amazon Machine Image IDs](parameter-store-ec2-aliases.md).

   Here is an example that uses a parameter hierarchy.

   ```
   Write-SSMParameter `
       -Name "/IAD/Web/SQL/IPaddress" `
       -Value "99.99.99.99" `
       -Type "String" `
       -Tags $tag
   ```

1. Run the following command to verify the details of the parameter.

   ```
   (Get-SSMParameterValue -Name "the-parameter-name-you-specified").Parameters
   ```

## Creating a StringList parameter (Tools for Windows PowerShell)


1. Install and configure the AWS Tools for PowerShell (Tools for Windows PowerShell), if you haven't already.

   For information, see [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Run the following command to create a StringList parameter. Replace each *example resource placeholder* with your own information.
**Note**  
To create a parameter with a tag, first create an `Amazon.SimpleSystemsManagement.Model.Tag` object as a variable. Here is an example.  

   ```
   $tag = New-Object Amazon.SimpleSystemsManagement.Model.Tag
   $tag.Key = "tag-key"
   $tag.Value = "tag-value"
   ```

   ```
   Write-SSMParameter `
       -Name "parameter-name" `
       -Value "a-comma-separated-list-of-values" `
       -Type "StringList" `
       -Tags $tag
   ```

   If successful, the command returns the version number of the parameter.

   Here is an example.

   ```
   Write-SSMParameter `
       -Name "stringlist-parameter" `
       -Value "Milana,Mariana,Mark,Miguel" `
       -Type "StringList" `
       -Tags $tag
   ```
**Note**  
Items in a `StringList` must be separated by a comma (,). You can't use other punctuation or special characters to escape items in the list. If you have a parameter value that requires a comma, then use the `String` type.

1. Run the following command to verify the details of the parameter.

   ```
   (Get-SSMParameterValue -Name "the-parameter-name-you-specified").Parameters
   ```

## Creating a SecureString parameter (Tools for Windows PowerShell)


Before you create a `SecureString` parameter, read about the requirements for this type of parameter. For more information, see [Creating a SecureString parameter using the AWS CLI](param-create-cli.md#param-create-cli-securestring).

**Important**  
Only the *value* of a `SecureString` parameter is encrypted. Parameter names, descriptions, and other properties aren't encrypted.

**Important**  
Parameter Store only supports [symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/symm-asymm-choose-key-spec.html#symmetric-cmks). You can't use an [asymmetric encryption KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) to encrypt your parameters. For help determining whether a KMS key is symmetric or asymmetric, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

1. Install and configure the AWS Tools for PowerShell (Tools for Windows PowerShell), if you haven't already.

   For information, see [Installing the AWS Tools for PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html).

1. Run the following command to create a parameter. Replace each *example resource placeholder* with your own information.
**Note**  
To create a parameter with a tag, first create an `Amazon.SimpleSystemsManagement.Model.Tag` object as a variable. Here is an example.  

   ```
   $tag = New-Object Amazon.SimpleSystemsManagement.Model.Tag
   $tag.Key = "tag-key"
   $tag.Value = "tag-value"
   ```

   ```
   Write-SSMParameter `
       -Name "parameter-name" `
       -Value "parameter-value" `
       -Type "SecureString"  `
       -KeyId "an AWS KMS key ID, an AWS KMS key ARN, an alias name, or an alias ARN" `
       -Tags $tag
   ```

   If successful, the command returns the version number of the parameter.
**Note**  
To use the AWS managed key assigned to your account, remove the `-KeyId` parameter from the command.

   Here is an example that uses an obfuscated name (3l3vat3131) for a password parameter and an AWS managed key.

   ```
   Write-SSMParameter `
       -Name "/Finance/Payroll/3l3vat3131" `
       -Value "P@sSwW)rd" `
       -Type "SecureString" `
       -Tags $tag
   ```

1. Run the following command to verify the details of the parameter.

   ```
   (Get-SSMParameterValue -Name "the-parameter-name-you-specified" -WithDecryption $true).Parameters
   ```

By default, all `SecureString` values are displayed as cipher-text. To decrypt a `SecureString` value, a user must have permission to call the AWS KMS [Decrypt](https://docs.aws.amazon.com/kms/latest/APIReference/API_Decrypt.html) API operation. For information about configuring AWS KMS access control, see [Authentication and Access Control for AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/control-access.html) in the *AWS Key Management Service Developer Guide*.

**Important**  
If you change the KMS key alias for the KMS key used to encrypt a parameter, then you must also update the key alias the parameter uses to reference AWS KMS. This only applies to the KMS key alias; the key ID that an alias attaches to stays the same unless you delete the whole key.

# Searching for Parameter Store parameters in Systems Manager

When you have a large number of parameters in your account, it can be difficult to find information about one or more specific parameters. In this case, you can use filter tools to search for the parameters you need, according to search criteria you specify. You can use the AWS Systems Manager console, the AWS Command Line Interface (AWS CLI), the AWS Tools for PowerShell, or the [DescribeParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeParameters.html) API operation to search for parameters.

**Topics**
+ [Searching for a parameter using the console](#parameter-search-console)
+ [Searching for a parameter using the AWS CLI](#parameter-search-cli)

## Searching for a parameter using the console


1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose the search box, and then choose how you want to search. For example, by `Type` or `Name`.

1. Provide information for the search type you selected. For example:
   + If you're searching by `Type`, choose from `String`, `StringList`, or `SecureString`.
   + If you're searching by `Name`, choose `contains`, `equals`, or `begins-with`, and then enter all or part of a parameter name.
**Note**  
In the console, the default search type for `Name` is `contains`.

1. Press **Enter**.

The list of parameters is updated with the results of your search.

**Note**  
Your search might contain more results than are displayed on the first page of results. Use the right arrow (**>**) at the top of the parameter list (if available) to view the next set of results.

## Searching for a parameter using the AWS CLI


Use the `describe-parameters` command to view information about one or more parameters in the AWS CLI. 

The following examples demonstrate various options you can use to view information about the parameters in your AWS account. For more information about these options, see [https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-parameters.html](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-parameters.html) in the *AWS Command Line Interface User Guide*.

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Replace the sample values in the following commands with values reflecting parameters that have been created in your account.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-parameters \
       --parameter-filters "Key=Name,Values=MyParameterName"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-parameters ^
       --parameter-filters "Key=Name,Values=MyParameterName"
   ```

------
**Note**  
For `describe-parameters`, the default search type for `Name` is `Equals`. In your parameter filters, specifying `"Key=Name,Values=MyParameterName"` is the same as specifying `"Key=Name,Option=Equals,Values=MyParameterName"`.

   ```
   aws ssm describe-parameters \
       --parameter-filters "Key=Name,Option=Contains,Values=Product"
   ```

   ```
   aws ssm describe-parameters \
       --parameter-filters "Key=Type,Values=String"
   ```

   ```
   aws ssm describe-parameters \
       --parameter-filters "Key=Path,Values=/Production/West"
   ```

   ```
   aws ssm describe-parameters \
       --parameter-filters "Key=Tier,Values=Standard"
   ```

   ```
   aws ssm describe-parameters \
       --parameter-filters "Key=tag:tag-key,Values=tag-value"
   ```

   ```
   aws ssm describe-parameters \
       --parameter-filters "Key=KeyId,Values=key-id"
   ```
**Note**  
In the last example, *key-id* represents the ID of an AWS Key Management Service (AWS KMS) key used to encrypt a `SecureString` parameter created in your account. Alternatively, you can enter **alias/aws/ssm** to use the default AWS KMS key for your account. For more information, see [Creating a SecureString parameter using the AWS CLI](param-create-cli.md#param-create-cli-securestring).

   If successful, the command returns output similar to the following.

   ```
   {
       "Parameters": [
           {
               "Name": "/Production/West/Manager",
               "Type": "String",
               "LastModifiedDate": 1573438580.703,
               "LastModifiedUser": "arn:aws:iam::111122223333:user/Mateo.Jackson",
               "Version": 1,
               "Tier": "Standard",
               "Policies": []
           },
           {
               "Name": "/Production/West/TeamLead",
               "Type": "String",
               "LastModifiedDate": 1572363610.175,
               "LastModifiedUser": "arn:aws:iam::111122223333:user/Mateo.Jackson",
               "Version": 1,
               "Tier": "Standard",
               "Policies": []
           },
           {
               "Name": "/Production/West/HR",
               "Type": "String",
               "LastModifiedDate": 1572363680.503,
               "LastModifiedUser": "arn:aws:iam::111122223333:user/Mateo.Jackson",
               "Version": 1,
               "Tier": "Standard",
               "Policies": []
           }
       ]
   }
   ```
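
The `LastModifiedDate` values in this output are Unix epoch timestamps. The following is a small sketch (illustrative only, not part of the documented procedure) that parses the output and converts them to readable UTC dates:

```python
import json
from datetime import datetime, timezone

# Abridged describe-parameters output, as shown above.
output = '''{"Parameters": [
    {"Name": "/Production/West/Manager", "Type": "String",
     "LastModifiedDate": 1573438580.703, "Version": 1, "Tier": "Standard"}
]}'''

# LastModifiedDate is a Unix epoch timestamp; convert it to a readable UTC date.
rows = []
for param in json.loads(output)["Parameters"]:
    modified = datetime.fromtimestamp(param["LastModifiedDate"], tz=timezone.utc)
    rows.append(f'{param["Name"]}: last modified {modified:%Y-%m-%d %H:%M} UTC')

print("\n".join(rows))
```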

# Assigning parameter policies in Parameter Store

Parameter policies help you manage a growing set of parameters by allowing you to assign specific criteria to a parameter such as an expiration date or *time to live*. Parameter policies are especially helpful in forcing you to update or delete passwords and configuration data stored in Parameter Store, a tool in AWS Systems Manager. Parameter Store offers the following types of policies: `Expiration`, `ExpirationNotification`, and `NoChangeNotification`.

**Note**  
To implement password rotation lifecycles, use AWS Secrets Manager. You can rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle using Secrets Manager. For more information, see [What is AWS Secrets Manager?](https://docs.aws.amazon.com//secretsmanager/latest/userguide/intro.html) in the *AWS Secrets Manager User Guide*.

Parameter Store enforces parameter policies by using asynchronous, periodic scans. After you create a policy, you don't need to perform additional actions to enforce the policy. Parameter Store independently performs the action defined by the policy according to the criteria you specified.

**Note**  
Parameter policies are available for parameters that use the advanced parameters tier. For more information, see [Managing parameter tiers](parameter-store-advanced-parameters.md).

A parameter policy is a JSON array, as shown in the following table. You can assign a policy when you create a new advanced parameter, or you can apply a policy by updating a parameter. Parameter Store supports the following types of parameter policies.


| Policy | Details | Examples | 
| --- | --- | --- | 
|  **Expiration**  |  This policy deletes the parameter. You can specify a specific date and time by using either the `ISO_INSTANT` format or the `ISO_OFFSET_DATE_TIME` format. To change when you want the parameter to be deleted, update the policy. Updating a *parameter* doesn't affect the expiration date or time of the policy attached to it. When the expiration date and time is reached, Parameter Store deletes the parameter.  This example uses the `ISO_INSTANT` format. You can also specify a date and time by using the `ISO_OFFSET_DATE_TIME` format. Here is an example: `2019-11-01T22:13:48.87+10:30:00` .   |  <pre>{<br />    "Type": "Expiration",<br />    "Version": "1.0",<br />    "Attributes": {<br />        "Timestamp": "2018-12-02T21:34:33.000Z"<br />    }<br />}</pre>  | 
|  **ExpirationNotification**  |  This policy initiates an event in Amazon EventBridge (EventBridge) that notifies you about the expiration. By using this policy, you can receive notifications before the expiration time is reached, in units of days or hours.  |  <pre>{<br />    "Type": "ExpirationNotification",<br />    "Version": "1.0",<br />    "Attributes": {<br />        "Before": "15",<br />        "Unit": "Days"<br />    }<br />}</pre>  | 
|  **NoChangeNotification**  |  This policy initiates an event in EventBridge if a parameter has *not* been modified for a specified period of time. This policy type is useful when, for example, a password needs to be changed within a period of time. This policy determines when to send a notification by reading the `LastModifiedTime` attribute of the parameter. If you change or edit a parameter, the system resets the notification time period based on the new value of `LastModifiedTime`.  |  <pre>{<br />    "Type": "NoChangeNotification",<br />    "Version": "1.0",<br />    "Attributes": {<br />        "After": "20",<br />        "Unit": "Days"<br />    }<br />}</pre>  | 

You can assign multiple policies to a parameter. For example, you can assign `Expiration` and `ExpirationNotification` policies so that the system initiates an EventBridge event to notify you about the impending deletion of a parameter. You can assign a maximum of ten (10) policies to a parameter.
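
A parameter policy, as noted, is a JSON array that you pass to `PutParameter` as a string. The following sketch (the helper name and day counts are illustrative, not part of the Systems Manager API) builds such an array with one policy of each type using Python's standard library:

```python
import json
from datetime import datetime, timedelta, timezone

def build_policies(expire_after_days, notify_before_days, no_change_after_days):
    """Build a parameter-policies JSON string suitable for PutParameter."""
    # Expiration timestamps use the ISO_INSTANT format, e.g. 2018-12-02T21:34:33.000Z.
    expires = datetime.now(timezone.utc) + timedelta(days=expire_after_days)
    policies = [
        {"Type": "Expiration", "Version": "1.0",
         "Attributes": {"Timestamp": expires.strftime("%Y-%m-%dT%H:%M:%S.000Z")}},
        {"Type": "ExpirationNotification", "Version": "1.0",
         "Attributes": {"Before": str(notify_before_days), "Unit": "Days"}},
        {"Type": "NoChangeNotification", "Version": "1.0",
         "Attributes": {"After": str(no_change_after_days), "Unit": "Days"}},
    ]
    return json.dumps(policies)

print(build_policies(15, 5, 60))
```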

The following example shows the request syntax for a [PutParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutParameter.html) API request that assigns four policies to a new `SecureString` parameter named `ProdDB3`.

```
{
    "Name": "ProdDB3",
    "Description": "Parameter with policies",
    "Value": "P@ssW*rd21",
    "Type": "SecureString",
    "Overwrite": "True",
    "Policies": [
        {
            "Type": "Expiration",
            "Version": "1.0",
            "Attributes": {
                "Timestamp": "2018-12-02T21:34:33.000Z"
            }
        },
        {
            "Type": "ExpirationNotification",
            "Version": "1.0",
            "Attributes": {
                "Before": "30",
                "Unit": "Days"
            }
        },
        {
            "Type": "ExpirationNotification",
            "Version": "1.0",
            "Attributes": {
                "Before": "15",
                "Unit": "Days"
            }
        },
        {
            "Type": "NoChangeNotification",
            "Version": "1.0",
            "Attributes": {
                "After": "20",
                "Unit": "Days"
            }
        }
    ]
}
```

## Adding policies to an existing parameter


This section includes information about how to add policies to an existing parameter by using the AWS Systems Manager console, the AWS Command Line Interface (AWS CLI), and the AWS Tools for Windows PowerShell. For information about how to create a new parameter that includes policies, see [Creating Parameter Store parameters in Systems Manager](sysman-paramstore-su-create.md).

**Topics**
+ [Adding policies to an existing parameter using the console](#sysman-paramstore-policy-create-console)
+ [Adding policies to an existing parameter using the AWS CLI](#sysman-paramstore-policy-create-cli)
+ [Adding policies to an existing parameter (Tools for Windows PowerShell)](#sysman-paramstore-policy-create-ps)

### Adding policies to an existing parameter using the console


Use the following procedure to add policies to an existing parameter by using the Systems Manager console.

**To add policies to an existing parameter**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose the option next to the parameter that you want to update to include policies, and then choose **Edit**.

1. Choose **Advanced**.

1. (Optional) In the **Parameter policies** section, choose **Enabled**. You can specify an expiration date and one or more notification policies for this parameter.

1. Choose **Save changes**.

**Important**  
Parameter Store preserves policies on a parameter until you either overwrite the policies with new policies or remove the policies. 
To remove all policies from an existing parameter, edit the parameter and apply an empty policy by using brackets and curly braces, as follows: `[{}]`
If you add a new policy to a parameter that already has policies, then Systems Manager overwrites the policies attached to the parameter. The existing policies are deleted. If you want to add a new policy to a parameter that already has one or more policies, copy and paste the original policies, type the new policy, and then save your changes.

### Adding policies to an existing parameter using the AWS CLI


Use the following procedure to add policies to an existing parameter by using the AWS CLI.

**To add policies to an existing parameter**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to add policies to an existing parameter. Replace each *example resource placeholder* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "parameter name" \
       --value 'parameter value' \
       --type parameter type \
       --overwrite \
       --policies "[{policies-enclosed-in-brackets-and-curly-braces}]"
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "parameter name" ^
       --value 'parameter value' ^
       --type parameter type ^
       --overwrite ^
       --policies "[{policies-enclosed-in-brackets-and-curly-braces}]"
   ```

------

   Here is an example that includes an expiration policy that deletes the parameter at midnight (GMT) on May 13, 2020. The example also includes a notification policy that generates an EventBridge event five (5) days before the parameter is deleted. Last, it includes a `NoChangeNotification` policy that generates an EventBridge event if no changes are made to this parameter within 60 days. The example uses an obfuscated name (`3l3vat3131`) for a password and an AWS Key Management Service (AWS KMS) key. For more information about AWS KMS keys, see [AWS Key Management Service Concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) in the *AWS Key Management Service Developer Guide*.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "/Finance/Payroll/3l3vat3131" \
       --value "P@sSwW)rd" \
       --type "SecureString" \
       --overwrite \
       --policies "[{\"Type\":\"Expiration\",\"Version\":\"1.0\",\"Attributes\":{\"Timestamp\":\"2020-05-13T00:00:00.000Z\"}},{\"Type\":\"ExpirationNotification\",\"Version\":\"1.0\",\"Attributes\":{\"Before\":\"5\",\"Unit\":\"Days\"}},{\"Type\":\"NoChangeNotification\",\"Version\":\"1.0\",\"Attributes\":{\"After\":\"60\",\"Unit\":\"Days\"}}]"
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "/Finance/Payroll/3l3vat3131" ^
       --value "P@sSwW)rd" ^
       --type "SecureString" ^
       --overwrite ^
       --policies "[{\"Type\":\"Expiration\",\"Version\":\"1.0\",\"Attributes\":{\"Timestamp\":\"2020-05-13T00:00:00.000Z\"}},{\"Type\":\"ExpirationNotification\",\"Version\":\"1.0\",\"Attributes\":{\"Before\":\"5\",\"Unit\":\"Days\"}},{\"Type\":\"NoChangeNotification\",\"Version\":\"1.0\",\"Attributes\":{\"After\":\"60\",\"Unit\":\"Days\"}}]"
   ```

------
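
The escaped quotation marks in the `--policies` value are easy to get wrong when typed by hand. One way to generate the argument (a sketch, not part of the documented procedure) is to serialize the policy list with Python's `json.dumps` and shell-quote the result:

```python
import json
import shlex

policies = [
    {"Type": "Expiration", "Version": "1.0",
     "Attributes": {"Timestamp": "2020-05-13T00:00:00.000Z"}},
    {"Type": "ExpirationNotification", "Version": "1.0",
     "Attributes": {"Before": "5", "Unit": "Days"}},
    {"Type": "NoChangeNotification", "Version": "1.0",
     "Attributes": {"After": "60", "Unit": "Days"}},
]

# json.dumps emits valid JSON; shlex.quote wraps it so the whole array is
# passed to the AWS CLI as a single argument on Linux and macOS shells.
arg = shlex.quote(json.dumps(policies))
print(f"--policies {arg}")
```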

1. Run the following command to verify the details of the parameter. Replace *parameter name* with your own information.

------
#### [ Linux & macOS ]

   ```
   aws ssm describe-parameters  \
       --parameter-filters "Key=Name,Values=parameter name"
   ```

------
#### [ Windows ]

   ```
   aws ssm describe-parameters  ^
       --parameter-filters "Key=Name,Values=parameter name"
   ```

------

**Important**  
Parameter Store retains policies for a parameter until you either overwrite the policies with new policies or remove the policies. 
To remove all policies from an existing parameter, edit the parameter and apply an empty policy of brackets and curly braces. Replace each *example resource placeholder* with your own information. For example:  

  ```
  aws ssm put-parameter \
      --name parameter name \
      --type parameter type  \
      --value 'parameter value' \
      --policies "[{}]"
  ```

  ```
  aws ssm put-parameter ^
      --name parameter name ^
      --type parameter type  ^
      --value 'parameter value' ^
      --policies "[{}]"
  ```
If you add a new policy to a parameter that already has policies, then Systems Manager overwrites the policies attached to the parameter. The existing policies are deleted. If you want to add a new policy to a parameter that already has one or more policies, copy and paste the original policies, type the new policy, and then save your changes.

### Adding policies to an existing parameter (Tools for Windows PowerShell)


Use the following procedure to add policies to an existing parameter by using Tools for Windows PowerShell. Replace each *example resource placeholder* with your own information.

**To add policies to an existing parameter**

1. Open Tools for Windows PowerShell and run the following command to specify your credentials. You must either have administrator permissions in Amazon Elastic Compute Cloud (Amazon EC2), or you must have been granted the appropriate permission in AWS Identity and Access Management (IAM). 

   ```
   Set-AWSCredentials `
       -AccessKey access-key-name `
       -SecretKey secret-key-name
   ```

1. Run the following command to set the Region for your PowerShell session. The example uses the US East (Ohio) Region (us-east-2).

   ```
   Set-DefaultAWSRegion `
       -Region us-east-2
   ```

1. Run the following command to add policies to an existing parameter. Replace each *example resource placeholder* with your own information.

   ```
   Write-SSMParameter `
       -Name "parameter name" `
       -Value "parameter value" `
       -Type "parameter type" `
       -Policies "[{policies-enclosed-in-brackets-and-curly-braces}]" `
       -Overwrite
   ```

   Here is an example that includes an expiration policy that deletes the parameter at midnight (GMT) on May 13, 2018. The example also includes a notification policy that generates an EventBridge event five (5) days before the parameter is deleted. Last, it includes a `NoChangeNotification` policy that generates an EventBridge event if no changes are made to this parameter within 60 days. The example uses an obfuscated name (`3l3vat3131`) for a password and an AWS managed key.

   ```
   Write-SSMParameter `
       -Name "/Finance/Payroll/3l3vat3131" `
       -Value "P@sSwW)rd" `
       -Type "SecureString" `
       -Policies "[{\"Type\":\"Expiration\",\"Version\":\"1.0\",\"Attributes\":{\"Timestamp\":\"2018-05-13T00:00:00.000Z\"}},{\"Type\":\"ExpirationNotification\",\"Version\":\"1.0\",\"Attributes\":{\"Before\":\"5\",\"Unit\":\"Days\"}},{\"Type\":\"NoChangeNotification\",\"Version\":\"1.0\",\"Attributes\":{\"After\":\"60\",\"Unit\":\"Days\"}}]" `
       -Overwrite
   ```

1. Run the following command to verify the details of the parameter. Replace *parameter name* with your own information.

   ```
   (Get-SSMParameterValue -Name "parameter name").Parameters
   ```

**Important**  
Parameter Store preserves policies on a parameter until you either overwrite the policies with new policies or remove the policies. 
To remove all policies from an existing parameter, edit the parameter and apply an empty policy of brackets and curly braces. For example:  

  ```
  Write-SSMParameter `
      -Name "parameter name" `
      -Value "parameter value" `
      -Type "parameter type" `
      -Policies "[{}]"
  ```
If you add a new policy to a parameter that already has policies, then Systems Manager overwrites the policies attached to the parameter. The existing policies are deleted. If you want to add a new policy to a parameter that already has one or more policies, copy and paste the original policies, type the new policy, and then save your changes.

# Working with parameter hierarchies in Parameter Store

Managing dozens or hundreds of parameters as a flat list is time consuming and prone to errors. It can also be difficult to identify the correct parameter for a task. This means you might accidentally use the wrong parameter, or you might create multiple parameters that use the same configuration data. 

You can use parameter hierarchies to help you organize and manage parameters. A hierarchy is a parameter name that includes a path that you define by using forward slashes (/).

**Topics**
+ [Understanding parameter hierarchy through examples](#ps-hierarchy-examples)
+ [Querying parameters in a hierarchy](#ps-hierarchy-queries)
+ [Managing parameters using hierarchies using the AWS CLI](#sysman-paramstore-walk-hierarchy)

## Understanding parameter hierarchy through examples


The following example uses three hierarchy levels in the parameter name to identify the environment, type of computer, application, and data:

`/Environment/Type of computer/Application/Data`

`/Dev/DBServer/MySQL/db-string13`

You can create a hierarchy with a maximum of 15 levels. We suggest that you create hierarchies that reflect an existing hierarchical structure in your environment, as shown in the following examples:
+ Your [Continuous integration](https://aws.amazon.com/devops/continuous-integration/) and [Continuous delivery](https://aws.amazon.com/devops/continuous-delivery/) environment (CI/CD workflows)

  `/Dev/DBServer/MySQL/db-string`

  `/Staging/DBServer/MySQL/db-string`

  `/Prod/DBServer/MySQL/db-string`
+ Your applications that use containers

  ```
  /MyApp/.NET/Libraries/my-password
  ```
+ Your business organization

  `/Finance/Accountants/UserList`

  `/Finance/Analysts/UserList`

  `/HR/Employees/EU/UserList`

Parameter hierarchies standardize the way you create parameters and make it easier to manage parameters over time. A parameter hierarchy can also help you identify the correct parameter for a configuration task. This helps you to avoid creating multiple parameters with the same configuration data. 

You can create a hierarchy that allows you to share parameters across different environments, as shown in the following example, which uses a password shared by the development and staging environments.

`/DevTest/MyApp/database/my-password`

You could then create a unique password for your production environment, as shown in the following example:

`/prod/MyApp/database/my-password`

You aren't required to specify a parameter hierarchy. You can create parameters at level one. These are called *root* parameters. For backward compatibility, all parameters created in Parameter Store before hierarchies were released are root parameters. The system treats both of the following parameters as root parameters.

`/parameter-name`

`parameter-name`
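
The level-counting rules above can be sketched in a few lines (an illustration, not service code). Note that a leading slash alone doesn't create a hierarchy: `/parameter-name` and `parameter-name` are both root parameters.

```python
def hierarchy_levels(name: str) -> int:
    """Count the hierarchy levels in a parameter name.

    A leading slash alone doesn't create a hierarchy, so '/parameter-name'
    and 'parameter-name' are both root parameters (zero levels).
    """
    return name.strip("/").count("/")

def validate_name(name: str) -> None:
    # Parameter Store allows a maximum of 15 hierarchy levels.
    if hierarchy_levels(name) > 15:
        raise ValueError(f"too many hierarchy levels in {name!r}")

print(hierarchy_levels("/Dev/DBServer/MySQL/db-string13"))  # prints 3
```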

## Querying parameters in a hierarchy


Another benefit of using hierarchies is the ability to query for all parameters *under* a certain level in a hierarchy by using the [GetParametersByPath](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParametersByPath.html) API operation. For example, if you run the following command from the AWS Command Line Interface (AWS CLI), the system returns all parameters under the `Oncall` level:

```
aws ssm get-parameters-by-path --path /Dev/Web/Oncall
```

Sample output:

```
{
    "Parameters": [
        {
            "Name": "/Dev/Web/Oncall/Week1",
            "Type": "String",
            "Value": "John Doe",
            "Version": 1,
            "LastModifiedDate": "2024-11-22T07:18:53.510000-08:00",
            "ARN": "arn:aws:ssm:us-east-2:123456789012:parameter/Dev/Web/Oncall/Week1",
            "DataType": "text"
        },
        {
            "Name": "/Dev/Web/Oncall/Week2",
            "Type": "String",
            "Value": "Mary Major",
            "Version": 1,
            "LastModifiedDate": "2024-11-22T07:21:25.325000-08:00",
            "ARN": "arn:aws:ssm:us-east-2:123456789012:parameter/Dev/Web/Oncall/Week2",
            "DataType": "text"
        }
    ]
}
```

To view decrypted `SecureString` parameters in a hierarchy, you specify the path and the `--with-decryption` parameter, as shown in the following example.

```
aws ssm get-parameters-by-path --path /Prod/ERP/SAP --with-decryption
```
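
To see how path queries behave, here is a simplified local model of the `GetParametersByPath` selection logic (illustration only; the real operation also handles pagination, decryption, and filters). Without the `--recursive` option, only parameters directly under the path are returned:

```python
def parameters_by_path(parameters, path, recursive=False):
    """Return parameter names under `path`, mimicking GetParametersByPath."""
    prefix = path.rstrip("/") + "/"
    selected = []
    for name in parameters:
        if not name.startswith(prefix):
            continue
        # Without recursive=True, only direct children of the path match.
        if recursive or "/" not in name[len(prefix):]:
            selected.append(name)
    return selected

names = ["/Dev/Web/Oncall/Week1", "/Dev/Web/Oncall/Week2", "/Dev/Web/Oncall/EU/Week1"]
print(parameters_by_path(names, "/Dev/Web/Oncall"))
```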

## Managing parameters using hierarchies using the AWS CLI


This procedure shows how to work with parameters and parameter hierarchies by using the AWS CLI.

**To manage parameters using hierarchies**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to create a parameter that uses the `String` parameter type and an allowed pattern. The allowed pattern in this example means the value for the parameter must be between 1 and 4 digits long.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "/MyService/Test/MaxConnections" \
       --value 100 --allowed-pattern "\d{1,4}" \
       --type String
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "/MyService/Test/MaxConnections" ^
       --value 100 --allowed-pattern "\d{1,4}" ^
       --type String
   ```

------

   The command returns the version number of the parameter.

1. Run the following command to *attempt* to overwrite the parameter you just created with a new value.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "/MyService/Test/MaxConnections" \
       --value 10,000 \
       --type String \
       --overwrite
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "/MyService/Test/MaxConnections" ^
       --value 10,000 ^
       --type String ^
       --overwrite
   ```

------

   The system returns the following error because the new value doesn't meet the requirements of the allowed pattern you specified in the previous step.

   ```
   An error occurred (ParameterPatternMismatchException) when calling the PutParameter operation: Parameter value, cannot be validated against allowedPattern: \d{1,4}
   ```
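
You can reproduce this validation locally: the value must match `allowedPattern` in its entirety, so `100` passes `\d{1,4}` while `10,000` fails. The following is a quick sketch using Python's `re` module (an approximation of the check, not the service's actual implementation):

```python
import re

def matches_allowed_pattern(value: str, pattern: str) -> bool:
    # Parameter Store validates the whole value against the pattern,
    # equivalent to a full match rather than a substring search.
    return re.fullmatch(pattern, value) is not None

print(matches_allowed_pattern("100", r"\d{1,4}"))     # True
print(matches_allowed_pattern("10,000", r"\d{1,4}"))  # False: comma isn't a digit
```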

1. Run the following command to create a `SecureString` parameter that uses an AWS managed key. The allowed pattern in this example means the user can specify any character, and the value must be between 8 and 20 characters.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "/MyService/Test/my-password" \
       --value "p#sW*rd33" \
       --allowed-pattern ".{8,20}" \
       --type SecureString
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "/MyService/Test/my-password" ^
       --value "p#sW*rd33" ^
       --allowed-pattern ".{8,20}" ^
       --type SecureString
   ```

------

1. Run the following commands to create more parameters that use the hierarchy structure from the previous step.

------
#### [ Linux & macOS ]

   ```
   aws ssm put-parameter \
       --name "/MyService/Test/DBname" \
       --value "SQLDevDb" \
       --type String
   ```

   ```
   aws ssm put-parameter \
       --name "/MyService/Test/user" \
       --value "SA" \
       --type String
   ```

   ```
   aws ssm put-parameter \
       --name "/MyService/Test/userType" \
       --value "SQLuser" \
       --type String
   ```

------
#### [ Windows ]

   ```
   aws ssm put-parameter ^
       --name "/MyService/Test/DBname" ^
       --value "SQLDevDb" ^
       --type String
   ```

   ```
   aws ssm put-parameter ^
       --name "/MyService/Test/user" ^
       --value "SA" ^
       --type String
   ```

   ```
   aws ssm put-parameter ^
       --name "/MyService/Test/userType" ^
       --value "SQLuser" ^
       --type String
   ```

------

1. Run the following command to get the values of two parameters.

------
#### [ Linux & macOS ]

   ```
   aws ssm get-parameters \
       --names "/MyService/Test/user" "/MyService/Test/userType"
   ```

------
#### [ Windows ]

   ```
   aws ssm get-parameters ^
       --names "/MyService/Test/user" "/MyService/Test/userType"
   ```

------

1. Run the following command to query for all parameters under a single level. 

------
#### [ Linux & macOS ]

   ```
   aws ssm get-parameters-by-path \
       --path "/MyService/Test"
   ```

------
#### [ Windows ]

   ```
   aws ssm get-parameters-by-path ^
       --path "/MyService/Test"
   ```

------

1. Run the following command to delete two parameters.

------
#### [ Linux & macOS ]

   ```
   aws ssm delete-parameters \
       --names "/MyService/Test/user" "/MyService/Test/userType"
   ```

------
#### [ Windows ]

   ```
   aws ssm delete-parameters ^
       --names "/MyService/Test/user" "/MyService/Test/userType"
   ```

------

# Preventing access to Parameter Store API operations


Using service-specific *[conditions](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html)* supported by Systems Manager for AWS Identity and Access Management (IAM) policies, you can explicitly allow or deny access to Parameter Store API operations and content. By using these conditions, you can allow only certain IAM entities (users and roles) in your organization to call certain API actions, or prevent certain IAM entities from running them. This includes actions run through the Parameter Store console, the AWS Command Line Interface (AWS CLI), and SDKs. 

Systems Manager currently supports three conditions that are specific to Parameter Store.

**Topics**
+ [Preventing changes to existing parameters using `ssm:Overwrite`](#overwrite-condition)
+ [Preventing creation or updates to parameters that use a parameter policy using `ssm:Policies`](#parameter-policies-condition)
+ [Preventing access to levels in a hierarchical parameter using `ssm:Recursive`](#recursive-condition)

## Preventing changes to existing parameters using `ssm:Overwrite`


Use the `ssm:Overwrite` condition to control whether IAM entities can update existing parameters.

In the following sample policy, the `"Allow"` statement grants permission to create parameters by running the `PutParameter` API operation in the AWS account 111122223333 in the US East (N. Virginia) Region (us-east-1). 

However, the `"Deny"` statement prevents entities from changing the values of *existing* parameters because the `Overwrite` option is explicitly denied for the `PutParameter` operation. Therefore, entities that are assigned this policy can create parameters, but can't make changes to existing parameters.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:PutParameter"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "ssm:PutParameter"
            ],
            "Condition": {
                "StringEquals": {
                    "ssm:Overwrite": [
                        "true"
                    ]
                }
            },
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/*"
        }
    ]
}
```

------

## Preventing creation or updates to parameters that use a parameter policy using `ssm:Policies`


Use the `ssm:Policies` condition to control whether entities can create parameters that include a parameter policy and update existing parameters that include a parameter policy.

In the following policy example, the `"Allow"` statement grants general permission to create parameters, but the `"Deny"` statement prevents entities from creating or updating parameters that include a parameter policy in the AWS account 111122223333 in the US East (N. Virginia) Region (us-east-1). Entities can still create or update parameters that aren't assigned a parameter policy.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:PutParameter"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "ssm:PutParameter"
            ],
            "Condition": {
                "StringEquals": {
                    "ssm:Policies": [
                        "true"
                    ]
                }
            },
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/*"
        }
    ]
}
```

------

## Preventing access to levels in a hierarchical parameter using `ssm:Recursive`


Use the `ssm:Recursive` condition to control whether IAM entities can view or reference levels in a hierarchical parameter. You can provide or restrict access to all parameters beyond a specific level of a hierarchy. 

In the following example policy, the `"Allow"` statement provides access to Parameter Store operations on all parameters (`parameter/*`) for the AWS account 111122223333 in the US East (N. Virginia) Region (us-east-1). 

The `"Deny"` statement then prevents IAM entities from recursively viewing or retrieving parameters below the `/Code/Departments/` level. Entities can, however, still create or update parameters in that path. The example illustrates that recursively denying access below a certain level in a parameter hierarchy takes precedence over more permissive access granted in the same policy.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:*"
            ],
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "ssm:GetParametersByPath"
            ],
            "Condition": {
                "StringEquals": {
                    "ssm:Recursive": [
                        "true"
                    ]
                }
            },
            "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/Code/Departments/*"
        }
    ]
}
```

------

**Important**  
If a user has access to a path, then the user can access all levels of that path. For example, if a user has permission to access path `/a`, then the user can also access `/a/b`. This is true unless the user has explicitly been denied access to parameter `/a/b` in IAM, as illustrated in the preceding example.
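The inheritance rule described in this note can be modeled as a simple prefix check. The following Python sketch is an illustrative model only, not how IAM actually evaluates policies:

```python
def can_access(param, allowed_paths, denied_paths):
    """Illustrative model of hierarchical parameter access:
    access to a path grants access to everything beneath it,
    but an explicit deny on a deeper path wins."""
    def under(path, prefix):
        return path == prefix or path.startswith(prefix.rstrip("/") + "/")
    if any(under(param, d) for d in denied_paths):
        return False  # an explicit deny always wins
    return any(under(param, a) for a in allowed_paths)

# Access to /a implies access to /a/b ...
print(can_access("/a/b", ["/a"], []))        # True
# ... unless /a/b is explicitly denied.
print(can_access("/a/b", ["/a"], ["/a/b"]))  # False
```

Note that the prefix check requires a full path segment, so access to `/a` doesn't accidentally grant access to a sibling such as `/ab`.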

# Working with parameter labels in Parameter Store

A parameter label is a user-defined alias to help you manage different versions of a parameter. When you modify a parameter, AWS Systems Manager automatically saves a new version and increments the version number by one. A label can help you remember the purpose of a parameter version when there are multiple versions.

For example, let's say you have a parameter called `/MyApp/DB/ConnectionString`. The value of the parameter is a connection string to a MySQL server in a local database in a test environment. After you finish updating the application, you want the parameter to use a connection string for a production database. You change the value of `/MyApp/DB/ConnectionString`. Systems Manager automatically creates version two with the new connection string. To help you remember the purpose of each version, you attach a label to each parameter. For version one, you attach the label *Test* and for version two you attach the label *Production*.

You can move labels from one version of a parameter to another version. For example, if you create version 3 of the `/MyApp/DB/ConnectionString` parameter with a connection string for a new production database, then you can move the *Production* label from version 2 of the parameter to version 3 of the parameter.

Parameter labels are a lightweight alternative to parameter tags. Your organization might have strict guidelines for tags that must be applied to different AWS resources. In contrast, a label is simply a text association for a specific version of a parameter. 

Similar to tags, you can query parameters by using labels. You can view a list of specific parameter versions that all use the same label if you query your parameter set by using the [GetParametersByPath](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParametersByPath.html) API operation, as described later in this section.

**Note**  
If you run a command that specifies a version of a parameter that doesn't exist, the command fails. It doesn't fall back to the latest or default value of the parameter.

**Label requirements and restrictions**

Parameter labels have the following requirements and restrictions:
+ A version of a parameter can have a maximum of 10 labels.
+ You can't attach the same label to different versions of the same parameter. For example, if version 1 of the parameter has the label *Production*, then you can't attach *Production* to version 2.
+ You can move a label from one version of a parameter to another.
+ You can't create a label when you create a parameter. You must attach a label to a specific version of a parameter.
+ If you no longer want to use a parameter label, then you can move it to a different version of a parameter or delete it.
+ A label can have a maximum of 100 characters.
+ Labels can contain letters (case sensitive), numbers, periods (.), hyphens (-), or underscores (_).
+ Labels can't begin with a number, "aws", or "ssm" (not case sensitive). If a label doesn't meet these requirements, then the label isn't attached to the parameter version and the system displays it in the list of `InvalidLabels`.
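If you want to catch invalid label names before calling the service, the requirements above can be checked client-side. The following Python sketch is illustrative only; Parameter Store remains the authority and reports rejected names in the `InvalidLabels` list:

```python
import re

def validate_label(label):
    """Client-side check of the label requirements listed above."""
    if not label or len(label) > 100:
        return False
    # Letters (case sensitive), numbers, periods, hyphens, underscores only.
    if not re.fullmatch(r"[A-Za-z0-9._-]+", label):
        return False
    # Must not begin with a number, "aws", or "ssm" (not case sensitive).
    if label[0].isdigit() or label.lower().startswith(("aws", "ssm")):
        return False
    return True

print(validate_label("Production"))  # True
print(validate_label("2nd-try"))     # False (begins with a number)
print(validate_label("ssm-label"))   # False (begins with "ssm")
```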

**Topics**
+ [Working with parameter labels using the console](#sysman-paramstore-labels-console)
+ [Working with parameter labels using the AWS CLI](#sysman-paramstore-labels-cli)

## Working with parameter labels using the console


This section describes how to perform the following tasks by using the Systems Manager console.
+ [Creating a parameter label using the console](#sysman-paramstore-labels-console-create)
+ [Viewing labels attached to a parameter using the console](#sysman-paramstore-labels-console-view)
+ [Moving a parameter label using the console](#sysman-paramstore-labels-console-move)
+ [Deleting parameter labels using the console](#systems-manager-parameter-store-labels-console-delete)

### Creating a parameter label using the console


The following procedure describes how to attach a label to a specific version of an *existing* parameter by using the Systems Manager console. You can't attach a label when you create a parameter.

**To attach a label to a parameter version**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose the name of a parameter to open the details page for that parameter.

1. Choose the **History** tab.

1. Choose the parameter version for which you want to attach a label.

1. Choose **Manage labels**.

1. Choose **Add new label**. 

1. In the text box, enter the label name. To add more labels, choose **Add new label**. You can attach a maximum of ten labels.

1. When you're finished, choose **Save changes**. 

### Viewing labels attached to a parameter using the console


A parameter version can have a maximum of ten labels. The following procedure describes how to view all labels attached to a parameter version by using the Systems Manager console.

**To view labels attached to a parameter version**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose the name of a parameter to open the details page for that parameter.

1. Choose the **History** tab.

1. Locate the parameter version for which you want to view all attached labels. The **Labels** column shows all labels attached to the parameter version.

### Moving a parameter label using the console


The following procedure describes how to move a parameter label to a different version of the same parameter by using the Systems Manager console.

**To move a label to a different parameter version**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose the name of a parameter to open the details page for that parameter.

1. Choose the **History** tab.

1. Choose the parameter version for which you want to move the label.

1. Choose **Manage labels**.

1. Choose **Add new label**. 

1. In the text box, enter the label name.

1. When you're finished, choose **Save changes**. 

### Deleting parameter labels using the console


The following procedure describes how to delete one or multiple parameter labels using the Systems Manager console.

**To delete labels from a parameter**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose the name of a parameter to open the details page for that parameter.

1. Choose the **History** tab.

1. Choose the parameter version for which you want to delete labels.

1. Choose **Manage labels**.

1. Choose **Remove** next to each label you want to delete. 

1. When you're finished, choose **Save changes**.

1. Confirm that your changes are correct, enter `Confirm` in the text box, and choose **Confirm**.

## Working with parameter labels using the AWS CLI


This section describes how to perform the following tasks by using the AWS Command Line Interface (AWS CLI).
+ [Creating a new parameter label using the AWS CLI](#sysman-paramstore-labels-cli-create)
+ [Viewing labels for a parameter using the AWS CLI](#sysman-paramstore-labels-cli-view)
+ [Viewing a list of parameters that are assigned a label using the AWS CLI](#sysman-paramstore-labels-cli-view-param)
+ [Moving a parameter label using the AWS CLI](#sysman-paramstore-labels-cli-move)
+ [Deleting parameter labels using the AWS CLI](#systems-manager-parameter-store-labels-cli-delete)

### Creating a new parameter label using the AWS CLI


The following procedure describes how to attach a label to a specific version of an *existing* parameter by using the AWS CLI. You can't attach a label when you create a parameter.

**To create a parameter label**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to view a list of parameters for which you have permission to attach a label.
**Note**  
Parameters are only available in the AWS Region where they were created. If you don't see a parameter for which you want to attach a label, then verify your Region.

   ```
   aws ssm describe-parameters
   ```

   Note the name of a parameter for which you want to attach a label.

1. Run the following command to view all versions of the parameter.

   ```
   aws ssm get-parameter-history --name "parameter-name"
   ```

   Note the parameter version for which you want to attach a label.

1. Run the following command to retrieve information about a parameter by version number.

   ```
   aws ssm get-parameters --names "parameter-name:version-number" 
   ```

   Here is an example.

   ```
   aws ssm get-parameters --names "/Production/SQLConnectionString:3" 
   ```

1. Run one of the following commands to attach a label to a version of a parameter. If you attach multiple labels, separate label names with a space.

   **Attach a label to the latest version of a parameter**

   ```
   aws ssm label-parameter-version --name parameter-name  --labels label-name
   ```

   **Attach a label to a specific version of a parameter**

   ```
   aws ssm label-parameter-version --name parameter-name --parameter-version version-number --labels label-name
   ```

   Here are some examples.

   ```
   aws ssm label-parameter-version --name /config/endpoint --labels production east-region finance
   ```

   ```
   aws ssm label-parameter-version --name /config/endpoint --parameter-version 3 --labels MySQL-test
   ```
**Note**  
If the output shows the label you created in the `InvalidLabels` list, then the label doesn't meet the requirements described earlier in this topic. Review the requirements and try again. If the `InvalidLabels` list is empty, then your label was successfully applied to the version of the parameter.

1. You can view the details of the parameter by using either a version number or a label name. Run the following command and specify the label you created in the previous step.

   ```
   aws ssm get-parameter --name parameter-name:label-name --with-decryption
   ```

   The command returns information like the following.

   ```
   {
       "Parameter": {
           "Version": version-number, 
           "Type": "parameter-type", 
           "Name": "parameter-name", 
           "Value": "parameter-value", 
           "Selector": ":label-name"
       }
   }
   ```
**Note**  
*Selector* in the output is either the version number or the label that you specified in the `Name` input field.

### Viewing labels for a parameter using the AWS CLI


You can use the [GetParameterHistory](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameterHistory.html) API operation to view the full history and all labels attached to a specified parameter. Or, you can use the [GetParametersByPath](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParametersByPath.html) API operation to view a list of all parameters that are assigned a specific label. 

**To view labels for a parameter by using the GetParameterHistory API operation**

1. Run the following command to view a list of parameters for which you can view labels.
**Note**  
Parameters are only available in the Region where they were created. If you don't see a parameter for which you want to move a label, then verify your Region.

   ```
   aws ssm describe-parameters
   ```

   Note the name of the parameter whose labels you want to view.

1. Run the following command to view all versions of the parameter.

   ```
   aws ssm get-parameter-history --name parameter-name --with-decryption
   ```

   The system returns information like the following.

   ```
   {
       "Parameters": [
           {
               "Name": "/Config/endpoint", 
               "LastModifiedDate": 1528932105.382, 
               "Labels": [
                   "Deprecated"
               ], 
               "Value": "MyTestService-June-Release.example.com", 
               "Version": 1, 
               "LastModifiedUser": "arn:aws:iam::123456789012:user/test", 
               "Type": "String"
           }, 
           {
               "Name": "/Config/endpoint", 
               "LastModifiedDate": 1528932111.222, 
               "Labels": [
                   "Current"
               ], 
               "Value": "MyTestService-July-Release.example.com", 
               "Version": 2, 
               "LastModifiedUser": "arn:aws:iam::123456789012:user/test", 
               "Type": "String"
           }
       ]
   }
   ```

### Viewing a list of parameters that are assigned a label using the AWS CLI


You can use the [GetParametersByPath](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParametersByPath.html) API operation to view a list of all parameters in a path that are assigned a specific label. 

Run the following command to view a list of parameters in a path that are assigned a specific label. Replace each *example resource placeholder* with your own information.

```
aws ssm get-parameters-by-path \
    --path parameter-path \
    --parameter-filters Key=Label,Values=label-name,Option=Equals \
    --max-results a-number \
    --with-decryption --recursive
```

The system returns information like the following. For this example, the user searched under the `/Config` path.

```
{
    "Parameters": [
        {
            "Version": 3, 
            "Type": "SecureString", 
            "Name": "/Config/DBpwd", 
            "Value": "MyS@perGr&pass33"
        }, 
        {
            "Version": 2, 
            "Type": "String", 
            "Name": "/Config/DBusername", 
            "Value": "TestUserDB"
        }, 
        {
            "Version": 2, 
            "Type": "String", 
            "Name": "/Config/endpoint", 
            "Value": "MyTestService-July-Release.example.com"
        }
    ]
}
```

### Moving a parameter label using the AWS CLI


The following procedure describes how to move a parameter label to a different version of the same parameter.

**To move a parameter label**

1. Run the following command to view all versions of the parameter. Replace *parameter name* with your own information.

   ```
   aws ssm get-parameter-history \
       --name "parameter name"
   ```

   Note the parameter versions you want to move the label to and from.

1. Run the following command to assign an existing label to a different version of a parameter. Replace each *example resource placeholder* with your own information.

   ```
   aws ssm label-parameter-version \
       --name parameter name \
       --parameter-version version number \
       --labels name-of-existing-label
   ```
**Note**  
If you want to move an existing label to the latest version of a parameter, then remove `--parameter-version` from the command.
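Because a label is unique within a parameter, attaching it to one version with `label-parameter-version` moves it off any version that already carries it. That behavior can be modeled with a short Python sketch (an illustrative simulation, not service code):

```python
def label_version(history, version, label):
    """Simulate attaching a label to a parameter version: a label is
    unique per parameter, so attaching it to one version removes it
    from any other version of the same parameter."""
    for entry in history:
        if label in entry["labels"] and entry["version"] != version:
            entry["labels"].remove(label)
    for entry in history:
        if entry["version"] == version and label not in entry["labels"]:
            entry["labels"].append(label)

history = [
    {"version": 1, "labels": ["Production"]},
    {"version": 2, "labels": []},
]
label_version(history, 2, "Production")  # moves the label to version 2
```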

### Deleting parameter labels using the AWS CLI


The following procedure describes how to delete parameter labels by using the AWS CLI.

**To delete a parameter label**

1. Run the following command to view all versions of the parameter. Replace *parameter name* with your own information.

   ```
   aws ssm get-parameter-history \
       --name "parameter name"
   ```

   The system returns information like the following.

   ```
   {
       "Parameters": [
           {
               "Name": "foo",
               "DataType": "text",
               "LastModifiedDate": 1607380761.11,
               "Labels": [
                   "l3",
                   "l2"
               ],
               "Value": "test",
               "Version": 1,
               "LastModifiedUser": "arn:aws:iam::123456789012:user/test",
               "Policies": [],
               "Tier": "Standard",
               "Type": "String"
           },
           {
               "Name": "foo",
               "DataType": "text",
               "LastModifiedDate": 1607380763.11,
               "Labels": [
                   "l1"
               ],
               "Value": "test",
               "Version": 2,
               "LastModifiedUser": "arn:aws:iam::123456789012:user/test",
               "Policies": [],
               "Tier": "Standard",
               "Type": "String"
           }
       ]
   }
   ```

   Note the parameter version for which you want to delete a label or labels.

1. Run the following command to delete the labels you choose from that parameter. Replace each *example resource placeholder* with your own information.

   ```
   aws ssm unlabel-parameter-version \
       --name parameter name \
       --parameter-version version \
       --labels label 1,label 2,label 3
   ```

   The system returns information like the following.

   ```
   {
       "InvalidLabels": ["invalid"],
       "DeletedLabels": ["Prod"]
   }
   ```

# Working with parameter versions in Parameter Store

Each time you edit the value of a parameter, Parameter Store, a tool in AWS Systems Manager, creates a new *version* of the parameter and retains the previous versions. When you initially create a parameter, Parameter Store assigns version `1` to that parameter. When you change the value of the parameter, Parameter Store automatically increments the version number by one. You can view the details, including the values, of all versions in a parameter's history.

You can also specify a parameter name and a specific version number in API calls and SSM documents; for example, `ssm:MyParameter:3`. If you don't specify a version number, the system automatically uses the latest version. If you specify the number for a version that doesn't exist, the system returns an error rather than falling back to the latest or default version of the parameter.
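A version selector such as `ssm:MyParameter:3` can be split apart with a few lines of string handling. The following Python helper is a hypothetical illustration, not part of any AWS SDK, and handles numeric version suffixes only (not label selectors):

```python
def parse_selector(ref):
    """Split an 'ssm:name:version' reference into (name, version).
    Version is None when omitted, meaning the latest version is used.
    Handles numeric version suffixes only; illustrative helper."""
    body = ref[4:] if ref.startswith("ssm:") else ref
    name, sep, tail = body.rpartition(":")
    if sep and tail.isdigit():
        return name, int(tail)
    return body, None

print(parse_selector("ssm:MyParameter:3"))  # ('MyParameter', 3)
print(parse_selector("ssm:MyParameter"))    # ('MyParameter', None)
```

Using `rpartition` means hierarchical names such as `/MyApp/DB/ConnectionString:2` parse correctly, because only the last colon is treated as the version separator.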

You can use parameter versions to see the number of times a parameter changed over a period of time. Parameter versions also provide a layer of protection if a parameter value is accidentally changed. 

You can create and maintain up to 100 versions of a parameter. After you have created 100 versions of a parameter, each time you create a new version, the oldest version of the parameter is removed from history to make room for the new version. 

An exception to this is when there are already 100 parameter versions in history, and a parameter label is assigned to the oldest version of a parameter. In this case, that version isn't removed from history, and the request to create a new parameter version fails. This safeguard prevents parameter versions with mission-critical labels assigned to them from being deleted. To continue creating new versions of the parameter, first move the label from the oldest version of the parameter to a newer one for use in your operations. For information about moving parameter labels, see [Moving a parameter label using the console](sysman-paramstore-labels.md#sysman-paramstore-labels-console-move) and [Moving a parameter label using the AWS CLI](sysman-paramstore-labels.md#sysman-paramstore-labels-cli-move).
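The retention behavior described above can be modeled in a short simulation. The following Python sketch is illustrative only and isn't service code:

```python
MAX_VERSIONS = 100

def add_version(history, value):
    """Simulate Parameter Store version retention: at the cap, the
    oldest version is dropped to make room, unless it carries a label,
    in which case the new write fails. Illustrative model only."""
    if len(history) >= MAX_VERSIONS:
        oldest = history[0]
        if oldest["labels"]:
            raise RuntimeError("oldest version is labeled; move the label first")
        history.pop(0)  # make room by dropping the oldest version
    next_version = history[-1]["version"] + 1 if history else 1
    history.append({"version": next_version, "value": value, "labels": []})
```

For example, after 100 writes the next write drops version 1, but attaching a label to the oldest remaining version makes further writes fail until the label is moved.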

The following procedures show you how to edit a parameter and then verify that you created a new version. You can use the `get-parameter` and `get-parameters` commands to view parameter versions. For examples of using these commands, see [GetParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameter.html#API_GetParameter_Examples) and [GetParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameters.html#API_GetParameters_Examples) in the *AWS Systems Manager API Reference*.

**Topics**
+ [Creating a new version of a parameter using the console](#sysman-paramstore-version-console)
+ [Referencing a parameter version](#reference-parameter-version)

## Creating a new version of a parameter using the console


You can use the Systems Manager console to create a new version of a parameter and view the version history of a parameter.

**To create a new version of a parameter**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose the name of a parameter that you created earlier. For information about creating a new parameter, see [Creating Parameter Store parameters in Systems Manager](sysman-paramstore-su-create.md). 

1. Choose **Edit**.

1. In the **Value** box, enter a new value, and then choose **Save changes**.

1. Choose the name of the parameter you just updated. On the **Overview** tab, verify that the version number incremented by 1, and verify the new value.

1. To view the history of all versions of a parameter, choose the **History** tab. 

## Referencing a parameter version


You can reference specific parameter versions in commands, API calls, and SSM documents by using the following format: `ssm:parameter-name:version-number`. 

In the following example, the Amazon Elastic Compute Cloud (Amazon EC2) `run-instances` command uses version 3 of the parameter `golden-ami`. 

------
#### [ Linux & macOS ]

```
aws ec2 run-instances \
    --image-id resolve:ssm:/golden-ami:3 \
    --count 1 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-groups my-security-group
```

------
#### [ Windows ]

```
aws ec2 run-instances ^
    --image-id resolve:ssm:/golden-ami:3 ^
    --count 1 ^
    --instance-type t2.micro ^
    --key-name my-key-pair ^
    --security-groups my-security-group
```

------

**Note**  
Using `resolve` and a parameter value only works with the `--image-id` option and a parameter that contains an Amazon Machine Image (AMI) as its value. For more information, see [Using native parameter support in Parameter Store for Amazon Machine Image IDs](parameter-store-ec2-aliases.md).

Here is an example for specifying version 2 of a parameter named `MyRunCommandParameter` in an SSM document.

------
#### [ YAML ]

```
---
schemaVersion: '2.2'
description: Run a shell script or specify the commands to run.
parameters:
  commands:
    type: String
    description: "(Required) Specify a shell script or a command to run."
    displayType: textarea
    default: "{{ssm:MyRunCommandParameter:2}}"
mainSteps:
- action: aws:runShellScript
  name: RunScript
  inputs:
    runCommand:
    - "{{commands}}"
```

------
#### [ JSON ]

```
{
    "schemaVersion": "2.2",
    "description": "Run a shell script or specify the commands to run.",
    "parameters": {
        "commands": {
            "type": "String",
            "description": "(Required) Specify a shell script or a command to run.",
            "displayType": "textarea",
            "default": "{{ssm:MyRunCommandParameter:2}}"
        }
    },
    "mainSteps": [
        {
            "action": "aws:runShellScript",
            "name": "RunScript",
            "inputs": {
                "runCommand": [
                    "{{commands}}"
                ]
            }
        }
    ]
}
```

------

# Working with shared parameters in Parameter Store

Sharing advanced parameters simplifies configuration data management in a multi-account environment. You can centrally store and manage your parameters and share them with other AWS accounts that need to reference them.

Parameter Store integrates with AWS Resource Access Manager (AWS RAM) to enable advanced parameter sharing. AWS RAM is a service that enables you to share resources with other AWS accounts or through AWS Organizations.

With AWS RAM, you share resources that you own by creating a resource share. A resource share specifies the resources to share, permissions to grant, and the consumers with whom to share. Consumers can include:
+ Specific AWS accounts inside or outside of your organization in AWS Organizations
+ An organizational unit inside your organization in AWS Organizations
+ Your entire organization in AWS Organizations

For more information about AWS RAM, see the *[AWS RAM User Guide](https://docs.aws.amazon.com/ram/latest/userguide/)*.

This topic explains how to share parameters that you own, and how to use parameters that are shared with you.

**Topics**
+ [Prerequisites for sharing parameters](#prereqs)
+ [Sharing a parameter](#share)
+ [Stop sharing a shared parameter](#unshare)
+ [Identifying shared parameters](#identify)
+ [Accessing shared parameters](#accessing)
+ [Permissions sets for sharing parameters](#sharing-permissions)
+ [Maximum throughput for shared parameters](#throughput)
+ [Pricing for shared parameters](#pricing)
+ [Cross-account access for closed AWS accounts](#closed-accounts)

## Prerequisites for sharing parameters


The following prerequisites must be met before you can share parameters from your account:
+ To share a parameter, you must own it in your AWS account. You can't share a parameter that has been shared with you.
+ To share a parameter, it must be in the advanced parameter tier. For information about parameter tiers, see [Managing parameter tiers](parameter-store-advanced-parameters.md). For information about changing an existing standard parameter to an advanced parameter, see [Changing a standard parameter to an advanced parameter](parameter-store-advanced-parameters.md#parameter-store-advanced-parameters-enabling).
+ To share a `SecureString` parameter, it must be encrypted with a customer managed key, and you must share the key separately through AWS Key Management Service. AWS managed keys cannot be shared. Parameters encrypted with the default AWS managed key can be updated to use a customer managed key instead. For AWS KMS key definitions, see [AWS KMS concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt) in the *AWS Key Management Service Developer Guide*.
+ To share a parameter with your organization or an organizational unit in AWS Organizations, you must enable sharing with AWS Organizations. For more information, see [Enable Sharing with AWS Organizations](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs) in the *AWS RAM User Guide*.

## Sharing a parameter


To share a parameter, you must add it to a resource share. A resource share is an AWS RAM resource that lets you share your resources across AWS accounts. A resource share specifies the resources to share, and the consumers with whom they are shared. 

When you share a parameter that you own with other AWS accounts, you can choose from two AWS managed permissions to grant the consumers. For more information, see [Permissions sets for sharing parameters](#sharing-permissions). 

If you are part of an organization in AWS Organizations and sharing within your organization is enabled, you can grant consumers in your organization access to the shared parameter from the AWS RAM console. Otherwise, consumers receive an invitation to join the resource share and are granted access to the shared parameter after accepting the invitation.

You can share a parameter that you own using the AWS RAM console or the AWS CLI.

**Note**  
While you can share a parameter using the Systems Manager [PutResourcePolicy](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutResourcePolicy.html) API operation, we recommend using AWS Resource Access Manager (AWS RAM) instead. This is because using `PutResourcePolicy` requires the extra step of promoting the parameter to a standard Resource Share using the AWS RAM [PromoteResourceShareCreatedFromPolicy](https://docs.aws.amazon.com/ram/latest/APIReference/API_PromoteResourceShareCreatedFromPolicy.html) API operation. Otherwise, the parameter won't be returned by the Systems Manager [DescribeParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeParameters.html) API operation using the `--shared` option.

**To share a parameter that you own using the AWS RAM console**  
See [Creating a resource share in AWS RAM](https://docs.aws.amazon.com/ram/latest/userguide/working-with-sharing.html#working-with-sharing-create) in the *AWS RAM User Guide*.

Make the following selections as you complete the procedure:
+ On the Step 1 page, for **Resources**, select `Parameter Store Advanced Parameter`, and then select the check box for each parameter in the advanced parameter tier that you want to share.
+ On the Step 2 page, for **Managed permissions**, choose the permission to grant to consumers, as described in [Permissions sets for sharing parameters](#sharing-permissions) later in this topic.

Choose other options based on your parameter sharing objectives.

**To share a parameter that you own using the AWS CLI**  
Use the [create-resource-share](https://docs.aws.amazon.com/cli/latest/reference/ram/create-resource-share.html) command to add parameters to a new resource share.

Use the [associate-resource-share](https://docs.aws.amazon.com/cli/latest/reference/ram/associate-resource-share.html) command to add parameters to an existing resource share.

The following example creates a new resource share to share parameters with consumers in an organization and in an individual account.

```
aws ram create-resource-share \
    --name "MyParameter" \
    --resource-arns "arn:aws:ssm:us-east-2:123456789012:parameter/MyParameter" \
    --principals "arn:aws:organizations::123456789012:ou/o-63bEXAMPLE/ou-46xi-rEXAMPLE" "987654321098"
```

## Stop sharing a shared parameter


When you stop sharing a shared parameter, the consumer account can no longer access the parameter.

To stop sharing a parameter that you own, you must remove it from the resource share. You can do this using the AWS RAM console or the AWS CLI.

**To stop sharing a parameter that you own using the AWS RAM console**  
See [Update a resource share in AWS RAM](https://docs.aws.amazon.com/ram/latest/userguide/working-with-sharing-update.html) in the *AWS RAM User Guide*.

**To stop sharing a parameter that you own using the AWS CLI**  
Use the [disassociate-resource-share](https://docs.aws.amazon.com/cli/latest/reference/ram/disassociate-resource-share.html) command.

## Identifying shared parameters


Owners and consumers can identify shared parameters using the AWS CLI.

**To identify shared parameters using the AWS CLI**  
You can use either the Systems Manager [describe-parameters](https://docs.aws.amazon.com/cli/latest/reference/ssm/describe-parameters.html) command or the AWS RAM [list-resources](https://docs.aws.amazon.com/cli/latest/reference/ram/list-resources.html) command.

When you use the `--shared` option with `describe-parameters`, the command returns the parameters that are shared with you.

The following is an example:

```
aws ssm describe-parameters --shared
```

## Accessing shared parameters


Consumers can access shared parameters using the AWS command line tools and AWS SDKs. Parameters shared with a consumer account aren't displayed on that account's **My parameters** page.

**CLI Example: Accessing shared parameter details using the AWS CLI**  
To access shared parameter details using the AWS CLI, use the [get-parameter](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-parameter.html) or [get-parameters](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-parameters.html) command. You must specify the full parameter ARN for `--name` in order to retrieve the parameter from another account.

The following is an example.

```
aws ssm get-parameter \
    --name arn:aws:ssm:us-east-2:123456789012:parameter/MySharedParameter
```
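Because retrieval from another account requires the full ARN, it can help to assemble that ARN programmatically. The following Python sketch (the region, account ID, and parameter name are illustrative) builds an ARN in the format shown above:

```python
def shared_parameter_arn(region: str, account_id: str, name: str) -> str:
    """Build the full parameter ARN a consuming account must pass as --name."""
    # Hierarchical names like "/My/Nested" drop the leading slash after "parameter/".
    return f"arn:aws:ssm:{region}:{account_id}:parameter/{name.lstrip('/')}"

print(shared_parameter_arn("us-east-2", "123456789012", "MySharedParameter"))
```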

**Supported and unsupported integrations for shared parameters**  
Currently, you can use shared parameters in the following integration scenarios:
+ AWS CloudFormation [template parameters](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#aws-ssm-parameter-types)
+ The [AWS Parameters and Secrets Lambda extension](ps-integration-lambda-extensions.md)
+ [Amazon Elastic Compute Cloud (EC2) launch templates](https://docs.aws.amazon.com/autoscaling/ec2/userguide/using-systems-manager-parameters.html)
+ Values for `ImageID` with the [EC2 RunInstances command](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html) to create instances from an Amazon Machine Image (AMI)
+ [Retrieving parameter values in runbooks](https://repost.aws/knowledge-center/systems-manager-parameter-store) for Automation, a tool in Systems Manager

The following scenarios and integrated services do not currently support the use of shared parameters:
+ [Parameters in commands](sysman-param-runcommand.md) in Run Command, a tool in Systems Manager
+ AWS CloudFormation [dynamic references](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html)
+ The [values of environment variables](https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.env.parameter-store) in AWS CodeBuild
+ The [values of environment variables](https://docs.aws.amazon.com/apprunner/latest/dg/env-variable.html) in AWS App Runner
+ The [value of a secret](https://docs.aws.amazon.com/AmazonECS/latest/userguide/secrets-envvar-ssm-paramstore.html) in Amazon Elastic Container Service

## Permissions sets for sharing parameters


Consumer accounts receive read-only access to the parameters you share with them. A consumer can't update or delete a shared parameter, and can't share it with a third account. 

When you create a resource share in AWS Resource Access Manager for sharing your parameters, you can choose from two AWS managed permission sets to grant this read-only access:

**AWSRAMDefaultPermissionSSMParameterReadOnly**  
Allowed actions: `DescribeParameters`, `GetParameter`, `GetParameters`

**AWSRAMPermissionSSMParameterReadOnlyWithHistory**  
Allowed actions: `DescribeParameters`, `GetParameter`, `GetParameters`, `GetParameterHistory`

When you follow the steps in [Creating a resource share in AWS RAM](https://docs.aws.amazon.com/ram/latest/userguide/working-with-sharing.html#working-with-sharing-create) in the *AWS RAM User Guide*, choose `Parameter Store Advanced Parameters` as the resource type, and choose either of these managed permissions, depending on whether you want consumers to be able to view parameter history.

**Note**  
If you're retrieving shared parameters programmatically (for example, using AWS Lambda) you might need to add the `ssm:GetResourcePolicies` and `ssm:PutResourcePolicy` permissions to any IAM roles calling AWS Resource Access Manager API actions.

## Maximum throughput for shared parameters


Systems Manager limits the maximum throughput (transactions per second) for the [GetParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameter.html) and [GetParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameters.html) operations. Throughput is enforced at the individual account level, so each account that consumes a shared parameter can use its maximum allowed throughput without being affected by other accounts. For more information about maximum throughput for parameters, see the following topics:
+ [Increasing Parameter Store throughput](https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-throughput.html)
+ [Systems Manager Service quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#limits_ssm) in the *Amazon Web Services General Reference*.

## Pricing for shared parameters


Cross-account sharing is only available in the advanced parameter tier. For advanced parameters, charges are incurred at current prices for both *storage* and *API usage* of each advanced parameter. The owning account is charged for storage of the advanced parameter. Any consuming account that makes an API call to a shared advanced parameter is charged for that usage. 

For example, if Account A creates an advanced parameter, `MyAdvancedParameter`, that account is charged USD 0.05 per month to store the parameter. 

Account A then shares `MyAdvancedParameter` with Account B and Account C. During a month, the three accounts make calls to `MyAdvancedParameter`. The following table illustrates the charges they would incur for the number of calls each makes.

**Note**  
The charges in the following table are for illustration only. To verify current pricing, see [AWS Systems Manager Pricing for Parameter Store](https://aws.amazon.com/systems-manager/pricing/#Parameter_Store).


| Account | Number of calls | Charges | 
| --- | --- | --- | 
| Account A (owning account) | 10,000 calls |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-shared-parameters.html)  | 
| Account B (consuming account) | 20,000 calls |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-shared-parameters.html)  | 
| Account C (consuming account) | 30,000 calls |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-shared-parameters.html)  | 
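To see how storage and API charges combine, the following Python sketch reproduces the illustration above. The prices used (USD 0.05 per month for storage, and an assumed USD 0.05 per 10,000 API calls) are for illustration only; verify current pricing before relying on them:

```python
STORAGE_PER_MONTH = 0.05   # illustrative monthly storage charge (owning account only)
API_PRICE_PER_10K = 0.05   # assumed API charge per 10,000 calls; check current pricing

def monthly_charge(calls: int, owner: bool) -> float:
    """Estimated monthly charge for one advanced parameter, per account."""
    api = calls / 10_000 * API_PRICE_PER_10K
    return round(api + (STORAGE_PER_MONTH if owner else 0.0), 4)

for account, calls, owner in [("A", 10_000, True), ("B", 20_000, False), ("C", 30_000, False)]:
    print(f"Account {account}: USD {monthly_charge(calls, owner)}")
```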

## Cross-account access for closed AWS accounts


If the AWS account that owns a shared parameter is closed, all consuming accounts lose access to the shared parameter. If the owning account is reopened within 90 days after the account is closed, consuming accounts regain access to the previously shared parameters. For more information about reopening an account during the Post-Closure Period, see [Accessing your AWS account after you close it](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-closing.html#accessing-after-closure) in the *AWS Account Management Reference Guide*.

# Working with parameters in Parameter Store using Run Command commands

You can work with parameters in Run Command, a tool in AWS Systems Manager. For more information, see [AWS Systems Manager Run Command](run-command.md).

## Running a String parameter using the console


The following procedure walks you through the process of running a command that uses a `String` parameter. 

**To run a String parameter using Parameter Store**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Run Command**.

1. Choose **Run command**.

1. In the **Command document** list, choose `AWS-RunPowerShellScript` (Windows) or `AWS-RunShellScript` (Linux).

1. For **Command parameters**, enter **echo {{ssm:*parameter-name*}}**. For example: **echo {{ssm:/Test/helloWorld}}**. 

1. In the **Targets** section, choose the managed nodes on which you want to run this operation by specifying tags, selecting instances or edge devices manually, or specifying a resource group.
**Tip**  
If a managed node you expect to see isn't listed, see [Troubleshooting managed node availability](fleet-manager-troubleshooting-managed-nodes.md) for troubleshooting tips.

1. For **Other parameters**:
   + For **Comment**, enter information about this command.
   + For **Timeout (seconds)**, specify the number of seconds for the system to wait before failing the overall command execution. 

1. For **Rate control**:
   + For **Concurrency**, specify either a number or a percentage of managed nodes on which to run the command at the same time.
**Note**  
If you selected targets by specifying tags applied to managed nodes or by specifying AWS resource groups, and you aren't certain how many managed nodes are targeted, then restrict the number of targets that can run the document at the same time by specifying a percentage.
   + For **Error threshold**, specify when to stop running the command on other managed nodes after it fails on either a number or a percentage of nodes. For example, if you specify three errors, then Systems Manager stops sending the command when the fourth error is received. Managed nodes still processing the command might also send errors.

1. (Optional) For **Output options**, to save the command output to a file, select the **Write command output to an S3 bucket** box. Enter the bucket and prefix (folder) names in the boxes.
**Note**  
The S3 permissions that grant the ability to write the data to an S3 bucket are those of the instance profile (for EC2 instances) or IAM service role (hybrid-activated machines) assigned to the instance, not those of the IAM user performing this task. For more information, see [Configure instance permissions required for Systems Manager](setup-instance-permissions.md) or [Create an IAM service role for a hybrid environment](hybrid-multicloud-service-role.md). In addition, if the specified S3 bucket is in a different AWS account, make sure that the instance profile or IAM service role associated with the managed node has the necessary permissions to write to that bucket.

1. In the **SNS notifications** section, if you want notifications sent about the status of the command execution, select the **Enable SNS notifications** check box.

   For more information about configuring Amazon SNS notifications for Run Command, see [Monitoring Systems Manager status changes using Amazon SNS notifications](monitoring-sns-notifications.md).

1. Choose **Run**.

1. In the **Command ID** page, in the **Targets and outputs** area, select the button next to the ID of a node where you ran the command, and then choose **View output**. Verify that the output of the command is the value you provided for the parameter, such as **This is my first parameter**.

## Running a parameter using the AWS CLI


**Example 1: Simple command**  
The following example command includes a Systems Manager parameter named `DNS-IP`. The value of this parameter is simply the IP address of a node. This example uses an AWS Command Line Interface (AWS CLI) command to echo the parameter value.

------
#### [ Linux & macOS ]

```
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --document-version "1" \
    --targets "Key=instanceids,Values=i-02573cafcfEXAMPLE" \
    --parameters "commands='echo {{ssm:DNS-IP}}'" \
    --timeout-seconds 600 \
    --max-concurrency "50" \
    --max-errors "0" \
    --region us-east-2
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --document-name "AWS-RunPowerShellScript" ^
    --document-version "1" ^
    --targets "Key=instanceids,Values=i-02573cafcfEXAMPLE" ^
    --parameters "commands='echo {{ssm:DNS-IP}}'" ^
    --timeout-seconds 600 ^
    --max-concurrency "50" ^
    --max-errors "0" ^
    --region us-east-2
```

------

The command returns information like the following.

```
{
    "Command": {
        "CommandId": "c70a4671-8098-42da-b885-89716EXAMPLE",
        "DocumentName": "AWS-RunShellScript",
        "DocumentVersion": "1",
        "Comment": "",
        "ExpiresAfter": "2023-12-26T15:19:17.771000-05:00",
        "Parameters": {
            "commands": [
                "echo {{ssm:DNS-IP}}"
            ]
        },
        "InstanceIds": [],
        "Targets": [
            {
                "Key": "instanceids",
                "Values": [
                    "i-02573cafcfEXAMPLE"
                ]
            }
        ],
        "RequestedDateTime": "2023-12-26T14:09:17.771000-05:00",
        "Status": "Pending",
        "StatusDetails": "Pending",
        "OutputS3Region": "us-east-2",
        "OutputS3BucketName": "",
        "OutputS3KeyPrefix": "",
        "MaxConcurrency": "50",
        "MaxErrors": "0",
        "TargetCount": 0,
        "CompletedCount": 0,
        "ErrorCount": 0,
        "DeliveryTimedOutCount": 0,
        "ServiceRole": "",
        "NotificationConfig": {
            "NotificationArn": "",
            "NotificationEvents": [],
            "NotificationType": ""
        },
        "CloudWatchOutputConfig": {
            "CloudWatchLogGroupName": "",
            "CloudWatchOutputEnabled": false
        },
        "TimeoutSeconds": 600,
        "AlarmConfiguration": {
            "IgnorePollAlarmFailure": false,
            "Alarms": []
        },
        "TriggeredAlarms": []
    }
}
```

After a command execution completes, you can view more information about it using the following commands:
+ [get-command-invocation](https://docs.aws.amazon.com/cli/latest/reference/ssm/get-command-invocation.html) – View detailed information about the command execution.
+ [list-command-invocations](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-command-invocations.html) – View the command execution status on a specific managed node.
+ [list-commands](https://docs.aws.amazon.com/cli/latest/reference/ssm/list-commands.html) – View the command execution status across managed nodes.

**Example 2: Decrypt a `SecureString` parameter value**  
The next example command uses a `SecureString` parameter named **SecurePassword**. The command used in the `parameters` field retrieves and decrypts the value of the `SecureString` parameter, and then resets the local administrator password without having to pass the password in clear text.

------
#### [ Linux ]

```
aws ssm send-command \
        --document-name "AWS-RunShellScript" \
        --document-version "1" \
        --targets "Key=instanceids,Values=i-02573cafcfEXAMPLE" \
        --parameters '{"commands":["secure=$(aws ssm get-parameters --names SecurePassword --with-decryption --query Parameters[0].Value --output text --region us-east-2)","echo $secure | passwd myuser --stdin"]}' \
        --timeout-seconds 600 \
        --max-concurrency "50" \
        --max-errors "0" \
        --region us-east-2
```

------
#### [ Windows ]

```
aws ssm send-command ^
        --document-name "AWS-RunPowerShellScript" ^
        --document-version "1" ^
        --targets "Key=instanceids,Values=i-02573cafcfEXAMPLE" ^
        --parameters "commands=['$secure = (Get-SSMParameterValue -Names SecurePassword -WithDecryption $True).Parameters[0].Value','net user administrator $secure']" ^
        --timeout-seconds 600 ^
        --max-concurrency "50" ^
        --max-errors "0" ^
        --region us-east-2
```

------

**Example 3: Reference a parameter in an SSM document**  
You can also reference Systems Manager parameters in the *Parameters* section of an SSM document, as shown in the following example.

```
{
   "schemaVersion":"2.0",
   "description":"Sample version 2.0 document v2",
   "parameters":{
      "commands" : {
        "type": "StringList",
        "default": ["{{ssm:parameter-name}}"]
      }
    },
    "mainSteps":[
       {
          "action":"aws:runShellScript",
          "name":"runShellScript",
          "inputs":{
             "runCommand": "{{commands}}"
          }
       }
    ]
}
```

Don't confuse the similar syntax for *local parameters* used in the `runtimeConfig` section of SSM documents with Parameter Store parameters. A local parameter isn't the same as a Systems Manager parameter. You can distinguish local parameters from Systems Manager parameters by the absence of the `ssm:` prefix.

```
"runtimeConfig":{
        "aws:runShellScript":{
            "properties":[
                {
                    "id":"0.aws:runShellScript",
                    "runCommand":"{{ commands }}",
                    "workingDirectory":"{{ workingDirectory }}",
                    "timeoutSeconds":"{{ executionTimeout }}"
```
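The distinction can be checked mechanically. The following Python sketch (the helper name and sample snippet are illustrative) separates `{{ ... }}` references by the presence of the `ssm:` prefix:

```python
import re

def classify_references(document_text: str) -> dict:
    """Split {{ ... }} references into Parameter Store vs local document parameters."""
    refs = re.findall(r"\{\{\s*([^}]+?)\s*\}\}", document_text)
    return {
        "parameter_store": [r for r in refs if r.startswith("ssm:")],
        "local": [r for r in refs if not r.startswith("ssm:")],
    }

snippet = '"runCommand": "{{ commands }}", "extra": "echo {{ssm:DNS-IP}}"'
print(classify_references(snippet))
```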

**Note**  
SSM documents don't support references to `SecureString` parameters. This means that to use `SecureString` parameters with, for example, Run Command, you have to retrieve the parameter value before passing it to Run Command, as shown in the following examples.  

------
#### [ Linux & macOS ]

```
value=$(aws ssm get-parameters --names parameter-name --with-decryption)
```

```
aws ssm send-command \
    --name AWS-JoinDomain \
    --parameters password=$value \
    --instance-id instance-id
```

------
#### [ Windows ]

```
aws ssm send-command ^
    --name AWS-JoinDomain ^
    --parameters password=$value ^
    --instance-id instance-id
```

------
#### [ PowerShell ]

```
$secure = (Get-SSMParameterValue -Names parameter-name -WithDecryption $True).Parameters[0].Value | ConvertTo-SecureString -AsPlainText -Force
```

```
$cred = New-Object System.Management.Automation.PSCredential -argumentlist user-name,$secure
```

------

# Using native parameter support in Parameter Store for Amazon Machine Image IDs

When you create a `String` parameter, you can specify the *data type* as `aws:ec2:image` to ensure that the parameter value you enter is a valid Amazon Machine Image (AMI) ID format. 

Support for AMI ID formats allows you to avoid updating all your scripts and templates with a new ID each time the AMI that you want to use in your processes changes. You can create a parameter with the data type `aws:ec2:image`, and for its value, enter the ID of an AMI. This is the AMI you want to create new instances from. You then reference this parameter in your templates, commands, and scripts. 

For example, you can specify the parameter that contains your preferred AMI ID when you run the Amazon Elastic Compute Cloud (Amazon EC2) `run-instances` command.

**Note**  
The user who runs this command must have AWS Identity and Access Management (IAM) permissions that include the `ssm:GetParameters` API operation in order for the parameter value to be validated. Otherwise, the parameter creation process fails.

------
#### [ Linux & macOS ]

```
aws ec2 run-instances \
    --image-id resolve:ssm:/golden-ami \
    --count 1 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-groups my-security-group
```

------
#### [ Windows ]

```
aws ec2 run-instances ^
    --image-id resolve:ssm:/golden-ami ^
    --count 1 ^
    --instance-type t2.micro ^
    --key-name my-key-pair ^
    --security-groups my-security-group
```

------

You can also choose your preferred AMI when you create an instance using the Amazon EC2 console. For more information, see [Using a Systems Manager parameter to find an AMI](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/finding-an-ami.html#using-systems-manager-parameter-to-find-AMI) in the *Amazon EC2 User Guide*.

When it's time to use a different AMI in your instance creation workflow, you need only update the parameter with the new AMI value, and Parameter Store again validates that you have entered an ID in the proper format.
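As a rough illustration of what an AMI ID format check involves, the following Python sketch tests a value against the `ami-` prefix followed by 8 or 17 hexadecimal characters. This is an assumption-based approximation for illustration; Parameter Store's own validation also confirms that the AMI is available in your account, which no format check alone can do:

```python
import re

# Approximate AMI ID shapes: legacy 8-hex-character IDs and current 17-character IDs.
AMI_ID_PATTERN = re.compile(r"ami-[0-9a-f]{8}([0-9a-f]{9})?")

def looks_like_ami_id(value: str) -> bool:
    """Return True if the value matches the expected AMI ID format."""
    return AMI_ID_PATTERN.fullmatch(value) is not None

print(looks_like_ami_id("ami-0abcdef1234567890"))  # current 17-character style
print(looks_like_ami_id("ami-12345678"))           # legacy 8-character style
print(looks_like_ami_id("i-02573cafcfEXAMPLE"))    # an instance ID, not an AMI ID
```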

## Granting permissions to create a parameter of `aws:ec2:image` data type


Using AWS Identity and Access Management (IAM) policies, you can provide or restrict user access to Parameter Store API operations and content.

To create an `aws:ec2:image` data type parameter, the user must have both `ssm:PutParameter` and `ec2:DescribeImages` permissions.

The following example policy grants users permission to call the `PutParameter` API operation for `aws:ec2:image`. This means that the user can add a parameter of data type `aws:ec2:image` to the system. 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:PutParameter",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeImages",
            "Resource": "*"
        }
    ]
}
```

------

## How AMI format validation works


When you specify `aws:ec2:image` as the data type for a parameter, Systems Manager doesn't create the parameter immediately. It instead performs an asynchronous validation operation to ensure that the parameter value meets the formatting requirements for an AMI ID, and that the specified AMI is available in your AWS account.

A parameter version number might be generated before the validation operation is complete, so a version number alone doesn't indicate that validation has finished. 

To monitor whether your parameters are created successfully, we recommend using Amazon EventBridge to send you notifications about your `create` and `update` parameter operations. These notifications report whether a parameter operation was successful. If an operation fails, the notification includes an error message that indicates the reason for the failure, as in the following example event. 

```
{
    "version": "0",
    "id": "eed4a719-0fa4-6a49-80d8-8ac65EXAMPLE",
    "detail-type": "Parameter Store Change",
    "source": "aws.ssm",
    "account": "111122223333",
    "time": "2020-05-26T22:04:42Z",
    "region": "us-east-2",
    "resources": [
        "arn:aws:ssm:us-east-2:111122223333:parameter/golden-ami"
    ],
    "detail": {
        "exception": "Unable to Describe Resource",
        "dataType": "aws:ec2:image",
        "name": "golden-ami",
        "type": "String",
        "operation": "Create"
    }
}
```

For information about subscribing to Parameter Store events in EventBridge, see [Setting up notifications or triggering actions based on Parameter Store events](sysman-paramstore-cwe.md).

# Deleting parameters from Parameter Store

This topic describes how to delete parameters that you have created in Parameter Store, a tool in AWS Systems Manager.

**Warning**  
Deleting a parameter removes all versions of it. Once deleted, the parameter and its versions can't be restored.

**To delete a parameter using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. On the **My parameters** tab, select the check box next to each parameter to delete.

1. Choose **Delete**.

1. On the confirmation dialog, choose **Delete parameters**.

**To delete a parameter using the AWS CLI**
+ Run the following command:

  ```
  aws ssm delete-parameter --name "my-parameter"
  ```

  Replace *my-parameter* with the name of your parameter to be deleted.

  For information about all options available for use with the `delete-parameter` command, see [delete-parameter](https://docs.aws.amazon.com/cli/latest/reference/ssm/delete-parameter.html) in the AWS Systems Manager section of the *AWS CLI Command Reference*.

# Working with public parameters in Parameter Store

Some AWS services publish information about common artifacts as AWS Systems Manager *public* parameters. For example, the Amazon Elastic Compute Cloud (Amazon EC2) service publishes information about Amazon Machine Images (AMIs) as public parameters.

**Topics**
+ [Discovering public parameters in Parameter Store](parameter-store-finding-public-parameters.md)
+ [Calling AMI public parameters in Parameter Store](parameter-store-public-parameters-ami.md)
+ [Calling the ECS optimized AMI public parameter in Parameter Store](parameter-store-public-parameters-ecs.md)
+ [Calling the EKS optimized AMI public parameter in Parameter Store](parameter-store-public-parameters-eks.md)
+ [Calling public parameters for AWS services, Regions, endpoints, Availability Zones, local zones, and Wavelength Zones in Parameter Store](parameter-store-public-parameters-global-infrastructure.md)

**Related AWS blog posts**  
+ [Query for AWS Regions, Endpoints, and More Using AWS Systems Manager Parameter Store](https://aws.amazon.com/blogs/aws/new-query-for-aws-regions-endpoints-and-more-using-aws-systems-manager-parameter-store/)
+ [Query for the latest Amazon Linux AMI IDs using AWS Systems Manager Parameter Store](https://aws.amazon.com/blogs/compute/query-for-the-latest-amazon-linux-ami-ids-using-aws-systems-manager-parameter-store/)
+ [Query for the Latest Windows AMI Using AWS Systems Manager Parameter Store](https://aws.amazon.com/blogs/mt/query-for-the-latest-windows-ami-using-systems-manager-parameter-store/)

# Discovering public parameters in Parameter Store

You can search for public parameters using the Parameter Store console or the AWS Command Line Interface. 

A public parameter name begins with `aws/service/list`. The next part of the name corresponds to the service that owns that parameter. 

The following is a partial list of AWS services and other resources that provide public parameters:
+ `ami-amazon-linux-latest`
+ `ami-windows-latest`
+ `ec2-macos`
+ `appmesh`
+ `aws-for-fluent-bit`
+ `aws-sdk-pandas`
+ `bottlerocket`
+ `canonical`
+ `cloud9`
+ `datasync`
+ `deeplearning`
+ `ecs`
+ `eks`
+ `fis`
+ `freebsd`
+ `global-infrastructure`
+ `marketplace`
+ `neuron`
+ `powertools`
+ `sagemaker-distribution`
+ `storagegateway`

Not all public parameters are published to every AWS Region.

## Finding public parameters using the Parameter Store console


You must have at least one parameter in your AWS account and AWS Region before you can search for public parameters using the console.

**To find public parameters using the console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Parameter Store**.

1. Choose the **Public parameters** tab.

1. Choose the **Select a service** dropdown. Choose the service whose parameters you want to use.

1. (Optional) Filter the parameters owned by the service you selected by entering more information into the search bar.

1. Choose the public parameter you want to use. 

## Finding public parameters using the AWS CLI


Use `describe-parameters` for discovery of public parameters. 

Use `get-parameters-by-path` to get the actual path for a service listed under `/aws/service/list`. To get the service's path, remove `/list` from the path. For example, `/aws/service/list/ecs` becomes `/aws/service/ecs`.
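This path transformation is simple to automate. The following Python sketch (the function name is illustrative) converts a discovery path under `/aws/service/list` to the corresponding service path:

```python
def service_path(list_path: str) -> str:
    """Convert a /aws/service/list/... discovery path to the service's real path."""
    prefix = "/aws/service/list/"
    if not list_path.startswith(prefix):
        raise ValueError(f"not a public-parameter list path: {list_path}")
    return "/aws/service/" + list_path[len(prefix):]

print(service_path("/aws/service/list/ecs"))
```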

To retrieve a list of public parameters owned by different services in Parameter Store, run the following command.

```
aws ssm get-parameters-by-path --path /aws/service/list
```

The command returns information like the following. This example has been truncated for space.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/list/ami-al-latest",
            "Type": "String",
            "Value": "/aws/service/ami-al-latest/",
            "Version": 1,
            "LastModifiedDate": "2021-01-29T10:25:10.902000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/list/ami-al-latest",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/list/ami-windows-latest",
            "Type": "String",
            "Value": "/aws/service/ami-windows-latest/",
            "Version": 1,
            "LastModifiedDate": "2021-01-29T10:25:12.567000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/list/ami-windows-latest",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/list/aws-storage-gateway-latest",
            "Type": "String",
            "Value": "/aws/service/aws-storage-gateway-latest/",
            "Version": 1,
            "LastModifiedDate": "2021-01-29T10:25:09.903000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/list/aws-storage-gateway-latest",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/list/global-infrastructure",
            "Type": "String",
            "Value": "/aws/service/global-infrastructure/",
            "Version": 1,
            "LastModifiedDate": "2021-01-29T10:25:11.901000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/list/global-infrastructure",
            "DataType": "text"
        }
    ]
}
```

If you want to view parameters owned by a specific service, choose the service from the list that was produced after running the earlier command. Then, make a `get-parameters-by-path` call using the name of your desired service. 

For example, `/aws/service/global-infrastructure`. The path might be one-level (only calls parameters that match the exact values given) or recursive (contains elements in the path beyond what you have given). 

**Note**  
The `/aws/service/global-infrastructure` path is not supported for queries in all Regions. For information, see [Calling public parameters for AWS services, Regions, endpoints, Availability Zones, local zones, and Wavelength Zones in Parameter Store](parameter-store-public-parameters-global-infrastructure.md).

If no results are returned for the service you specify, add the `--recursive` flag and run the command again.

```
aws ssm get-parameters-by-path --path /aws/service/global-infrastructure
```

This command returns all parameters owned by `global-infrastructure`. The following is an example.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/global-infrastructure/current-region",
            "Type": "String",
            "LastModifiedDate": "2019-06-21T05:15:34.252000-07:00",
            "Version": 1,
            "Tier": "Standard",
            "Policies": [],
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/version",
            "Type": "String",
            "LastModifiedDate": "2019-02-04T06:59:32.875000-08:00",
            "Version": 1,
            "Tier": "Standard",
            "Policies": [],
            "DataType": "text"
        }
    ]
}
```

You can also view parameters owned by a specific service by using the `Option:BeginsWith` filter.

```
aws ssm describe-parameters --parameter-filters "Key=Name, Option=BeginsWith, Values=/aws/service/ami-amazon-linux-latest"
```

The command returns information like the following. This example output has been truncated for space.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-ebs",
            "Type": "String",
            "LastModifiedDate": "2021-01-26T13:39:40.686000-08:00",
            "Version": 25,
            "Tier": "Standard",
            "Policies": [],
            "DataType": "text"
        },
        {
            "Name": "/aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-gp2",
            "Type": "String",
            "LastModifiedDate": "2021-01-26T13:39:40.807000-08:00",
            "Version": 25,
            "Tier": "Standard",
            "Policies": [],
            "DataType": "text"
        },
        {
            "Name": "/aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-s3",
            "Type": "String",
            "LastModifiedDate": "2021-01-26T13:39:40.920000-08:00",
            "Version": 25,
            "Tier": "Standard",
            "Policies": [],
            "DataType": "text"
        }
    ]
}
```

**Note**  
The parameters returned when you use `Option=BeginsWith` might differ from those returned by a path query. `BeginsWith` matches any parameter name that starts with the string you supply, while a path query matches levels of the parameter hierarchy.

# Calling AMI public parameters in Parameter Store

Amazon Elastic Compute Cloud (Amazon EC2) Amazon Machine Image (AMI) public parameters are available for Amazon Linux 2, Amazon Linux 2023 (AL2023), macOS, and Windows Server from the following paths:
+ Amazon Linux 2 and Amazon Linux 2023: `/aws/service/ami-amazon-linux-latest`
+ macOS: `/aws/service/ec2-macos`
+ Windows Server: `/aws/service/ami-windows-latest`



## Calling AMI public parameters for Amazon Linux 2 and Amazon Linux 2023


You can view a list of all Amazon Linux 2 and Amazon Linux 2023 (AL2023) AMIs in the current AWS Region by using the following command in the AWS Command Line Interface (AWS CLI).

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/ami-amazon-linux-latest \
    --query 'Parameters[].Name'
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/ami-amazon-linux-latest ^
    --query Parameters[].Name
```

------

The command returns information like the following.

```
[
    "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-arm64",
    "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-x86_64",
    "/aws/service/ami-amazon-linux-latest/al2023-ami-minimal-kernel-6.1-arm64",
    "/aws/service/ami-amazon-linux-latest/al2023-ami-minimal-kernel-6.1-x86_64",
    "/aws/service/ami-amazon-linux-latest/al2023-ami-minimal-kernel-default-arm64",
    "/aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-gp2",
    "/aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-s3",
    "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-ebs",
    "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2",
    "/aws/service/ami-amazon-linux-latest/amzn2-ami-kernel-5.10-hvm-x86_64-ebs",
    "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-arm64",
    "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64",
    "/aws/service/ami-amazon-linux-latest/al2023-ami-minimal-kernel-default-x86_64",
    "/aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-ebs",
    "/aws/service/ami-amazon-linux-latest/amzn-ami-minimal-hvm-x86_64-ebs",
    "/aws/service/ami-amazon-linux-latest/amzn-ami-minimal-hvm-x86_64-s3",
    "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-arm64-gp2",
    "/aws/service/ami-amazon-linux-latest/amzn2-ami-kernel-5.10-hvm-arm64-gp2",
    "/aws/service/ami-amazon-linux-latest/amzn2-ami-kernel-5.10-hvm-x86_64-gp2",
    "/aws/service/ami-amazon-linux-latest/amzn2-ami-minimal-hvm-arm64-ebs",
    "/aws/service/ami-amazon-linux-latest/amzn2-ami-minimal-hvm-x86_64-ebs"
]
```

You can view details about these AMIs, including the AMI IDs and Amazon Resource Names (ARNs), by using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path "/aws/service/ami-amazon-linux-latest" \
    --region region
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path "/aws/service/ami-amazon-linux-latest" ^
    --region region
```

------

*region* represents the identifier for an AWS Region supported by AWS Systems Manager, such as `us-east-2` for the US East (Ohio) Region. For a list of supported *region* values, see the **Region** column in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

The command returns information like the following. This example output has been truncated for space.

```
{
    "Parameters": [
         {
            "Name": "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-arm64",
            "Type": "String",
            "Value": "ami-0b1b8b24a6c8e5d8b",
            "Version": 69,
            "LastModifiedDate": "2024-03-13T14:05:09.583000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-arm64",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-x86_64",
            "Type": "String",
            "Value": "ami-0e0bf53f6def86294",
            "Version": 69,
            "LastModifiedDate": "2024-03-13T14:05:09.890000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-x86_64",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/ami-amazon-linux-latest/al2023-ami-minimal-kernel-6.1-arm64",
            "Type": "String",
            "Value": "ami-09951bb66f9e5b5a5",
            "Version": 69,
            "LastModifiedDate": "2024-03-13T14:05:10.197000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/ami-amazon-linux-latest/al2023-ami-minimal-kernel-6.1-arm64",
            "DataType": "text"
        }
    ]
}
```

You can view details of a specific AMI by using the [GetParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameters.html) API operation with the full AMI name, including the path. Here is an example command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters \
    --names /aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-arm64 \
    --region us-east-2
```

------
#### [ Windows ]

```
aws ssm get-parameters ^
    --names /aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-arm64 ^
    --region us-east-2
```

------

The command returns the following information.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-arm64",
            "Type": "String",
            "Value": "ami-0b1b8b24a6c8e5d8b",
            "Version": 69,
            "LastModifiedDate": "2024-03-13T14:05:09.583000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-arm64",
            "DataType": "text"
        }
    ],
    "InvalidParameters": []
}
```
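To use a returned AMI ID in a script, you can parse the JSON response. The following is a minimal sketch in Python that uses the example response above as static input rather than a live API call.

```python
import json

# Parse a get-parameters response (for example, saved from
# `aws ssm get-parameters ... --output json`) and extract the AMI ID.
response_json = """
{
    "Parameters": [
        {
            "Name": "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-6.1-arm64",
            "Type": "String",
            "Value": "ami-0b1b8b24a6c8e5d8b",
            "Version": 69,
            "DataType": "text"
        }
    ],
    "InvalidParameters": []
}
"""

response = json.loads(response_json)
ami_id = response["Parameters"][0]["Value"]
print(ami_id)  # ami-0b1b8b24a6c8e5d8b
```

In the AWS CLI itself, you can achieve the same result without a separate parsing step by adding `--query "Parameters[0].Value" --output text` to the `get-parameters` command.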

## Calling AMI public parameters for macOS


You can view a list of all macOS AMIs in the current AWS Region by using the following command in the AWS CLI.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/ec2-macos \
    --query 'Parameters[].Name'
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/ec2-macos ^
    --query Parameters[].Name
```

------

The command returns information like the following. 

```
[
    "/aws/service/ec2-macos/sonoma/x86_64_mac/latest/image_id",
    "/aws/service/ec2-macos/ventura/x86_64_mac/latest/image_id",
    "/aws/service/ec2-macos/sonoma/arm64_mac/latest/image_id",
    "/aws/service/ec2-macos/ventura/arm64_mac/latest/image_id"
]
```

You can view details about these AMIs, including the AMI IDs and Amazon Resource Names (ARNs), by using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path "/aws/service/ec2-macos" \
    --region region
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path "/aws/service/ec2-macos" ^
    --region region
```

------

*region* represents the identifier for an AWS Region supported by AWS Systems Manager, such as `us-east-2` for the US East (Ohio) Region. For a list of supported *region* values, see the **Region** column in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

The command returns information like the following. This example output has been truncated for space.

```
{
    "Parameters": [
        ...sample results pending...
    ]
}
```

You can view details of a specific AMI by using the [GetParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameters.html) API operation with the full AMI name, including the path. Here is an example command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters \
    --names /aws/service/ec2-macos/...pending... \
    --region us-east-2
```

------
#### [ Windows ]

```
aws ssm get-parameters ^
    --names /aws/service/ec2-macos/...pending... ^
    --region us-east-2
```

------

The command returns the following information.

```
{
    "Parameters": [
         ...sample results pending...
    ],
    "InvalidParameters": []
}
```

## Calling AMI public parameters for Windows Server


You can view a list of all Windows Server AMIs in the current AWS Region by using the following command in the AWS CLI.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/ami-windows-latest \
    --query 'Parameters[].Name'
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/ami-windows-latest ^
    --query Parameters[].Name
```

------

The command returns information like the following. This example output has been truncated for space.

```
[
    "/aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Full-Base",
    "/aws/service/ami-windows-latest/Windows_Server-2016-English-Full-SQL_2014_SP3_Enterprise",
    "/aws/service/ami-windows-latest/Windows_Server-2016-German-Full-Base",
    "/aws/service/ami-windows-latest/Windows_Server-2016-Japanese-Full-SQL_2016_SP3_Standard",
    "/aws/service/ami-windows-latest/Windows_Server-2016-Japanese-Full-SQL_2017_Web",
    "/aws/service/ami-windows-latest/Windows_Server-2019-English-Core-EKS_Optimized-1.25",
    "/aws/service/ami-windows-latest/Windows_Server-2019-Italian-Full-Base",
    "/aws/service/ami-windows-latest/Windows_Server-2022-Japanese-Full-SQL_2019_Enterprise",
    "/aws/service/ami-windows-latest/Windows_Server-2022-Portuguese_Brazil-Full-Base",
    "/aws/service/ami-windows-latest/amzn2-ami-hvm-2.0.20191217.0-x86_64-gp2-mono",
    "/aws/service/ami-windows-latest/Windows_Server-2016-English-Deep-Learning",
    "/aws/service/ami-windows-latest/Windows_Server-2016-Japanese-Full-SQL_2016_SP3_Web",
    "/aws/service/ami-windows-latest/Windows_Server-2016-Korean-Full-Base",
    "/aws/service/ami-windows-latest/Windows_Server-2019-English-STIG-Core",
    "/aws/service/ami-windows-latest/Windows_Server-2019-French-Full-Base",
    "/aws/service/ami-windows-latest/Windows_Server-2019-Japanese-Full-SQL_2017_Enterprise",
    "/aws/service/ami-windows-latest/Windows_Server-2019-Korean-Full-Base",
    "/aws/service/ami-windows-latest/Windows_Server-2022-English-Full-SQL_2022_Web",
    "/aws/service/ami-windows-latest/Windows_Server-2022-Italian-Full-Base",
    "/aws/service/ami-windows-latest/amzn2-x86_64-SQL_2019_Express",
    "/aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Core-Base",
    "/aws/service/ami-windows-latest/Windows_Server-2016-English-Full-SQL_2019_Enterprise",
    "/aws/service/ami-windows-latest/Windows_Server-2016-English-Full-SQL_2019_Standard",
    "/aws/service/ami-windows-latest/Windows_Server-2016-Portuguese_Portugal-Full-Base",
    "/aws/service/ami-windows-latest/Windows_Server-2019-English-Core-EKS_Optimized-1.24",
    "/aws/service/ami-windows-latest/Windows_Server-2019-English-Deep-Learning",
    "/aws/service/ami-windows-latest/Windows_Server-2019-English-Full-SQL_2017_Web",
    "/aws/service/ami-windows-latest/Windows_Server-2019-Hungarian-Full-Base"
]
```

You can view details about these AMIs, including the AMI IDs and Amazon Resource Names (ARNs), by using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path "/aws/service/ami-windows-latest" \
    --region region
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path "/aws/service/ami-windows-latest" ^
    --region region
```

------

*region* represents the identifier for an AWS Region supported by AWS Systems Manager, such as `us-east-2` for the US East (Ohio) Region. For a list of supported *region* values, see the **Region** column in [Systems Manager service endpoints](https://docs.aws.amazon.com/general/latest/gr/ssm.html#ssm_region) in the *Amazon Web Services General Reference*.

The command returns information like the following. This example output has been truncated for space.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Full-Base",
            "Type": "String",
            "Value": "ami-0a30b2e65863e2d16",
            "Version": 36,
            "LastModifiedDate": "2024-03-15T15:58:37.976000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Full-Base",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/ami-windows-latest/Windows_Server-2016-English-Full-SQL_2014_SP3_Enterprise",
            "Type": "String",
            "Value": "ami-001f20c053dd120ce",
            "Version": 69,
            "LastModifiedDate": "2024-03-15T15:53:58.905000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/ami-windows-latest/Windows_Server-2016-English-Full-SQL_2014_SP3_Enterprise",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/ami-windows-latest/Windows_Server-2016-German-Full-Base",
            "Type": "String",
            "Value": "ami-063be4935453e94e9",
            "Version": 102,
            "LastModifiedDate": "2024-03-15T15:51:12.003000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/ami-windows-latest/Windows_Server-2016-German-Full-Base",
            "DataType": "text"
        }
    ]
}
```

You can view details of a specific AMI by using the [GetParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameters.html) API operation with the full AMI name, including the path. Here is an example command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters \
    --names /aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Full-Base \
    --region us-east-2
```

------
#### [ Windows ]

```
aws ssm get-parameters ^
    --names /aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Full-Base ^
    --region us-east-2
```

------

The command returns the following information.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Full-Base",
            "Type": "String",
            "Value": "ami-0a30b2e65863e2d16",
            "Version": 36,
            "LastModifiedDate": "2024-03-15T15:58:37.976000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Full-Base",
            "DataType": "text"
        }
    ],
    "InvalidParameters": []
}
```

# Calling the ECS optimized AMI public parameter in Parameter Store

Amazon Elastic Container Service (Amazon ECS) publishes the names of the latest Amazon ECS-optimized Amazon Machine Images (AMIs) as public parameters. We encourage you to use these AMIs when launching Amazon Elastic Compute Cloud (Amazon EC2) instances for your Amazon ECS clusters because the optimized AMIs include bug fixes and feature updates.

Use the following command to view the name of the latest Amazon ECS optimized AMI for Amazon Linux 2. To see commands for other operating systems, see [Retrieving Amazon ECS-Optimized AMI metadata ](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/retrieve-ecs-optimized_AMI.html) in the *Amazon Elastic Container Service Developer Guide*.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters \
    --names /aws/service/ecs/optimized-ami/amazon-linux-2/recommended
```

------
#### [ Windows ]

```
aws ssm get-parameters ^
    --names /aws/service/ecs/optimized-ami/amazon-linux-2/recommended
```

------

The command returns information like the following.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended",
            "Type": "String",
            "Value": "{\"schema_version\":1,\"image_name\":\"amzn2-ami-ecs-hvm-2.0.20210929-x86_64-ebs\",\"image_id\":\"ami-0c38a2329ed4dae9a\",\"os\":\"Amazon Linux 2\",\"ecs_runtime_version\":\"Docker version 20.10.7\",\"ecs_agent_version\":\"1.55.4\"}",
            "Version": 73,
            "LastModifiedDate": "2021-10-06T16:35:10.004000-07:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/ecs/optimized-ami/amazon-linux-2/recommended",
            "DataType": "text"
        }
    ],
    "InvalidParameters": []
}
```
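Note that the `Value` of this parameter is itself a JSON document. The following sketch decodes it to reach the AMI ID, using the `Value` string from the example output above as static input.

```python
import json

# The ECS recommended parameter's Value is a JSON-encoded document.
# Decode it to reach the AMI ID and agent version.
value = ('{"schema_version":1,'
         '"image_name":"amzn2-ami-ecs-hvm-2.0.20210929-x86_64-ebs",'
         '"image_id":"ami-0c38a2329ed4dae9a",'
         '"os":"Amazon Linux 2",'
         '"ecs_runtime_version":"Docker version 20.10.7",'
         '"ecs_agent_version":"1.55.4"}')

details = json.loads(value)
print(details["image_id"])           # ami-0c38a2329ed4dae9a
print(details["ecs_agent_version"])  # 1.55.4
```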

# Calling the EKS optimized AMI public parameter in Parameter Store

Amazon Elastic Kubernetes Service (Amazon EKS) publishes the name of the latest Amazon EKS-optimized Amazon Machine Image (AMI) as a public parameter. We encourage you to use this AMI when adding nodes to an Amazon EKS cluster, because new releases include Kubernetes patches and security updates. Previously, guaranteeing that you were using the latest AMI meant checking the Amazon EKS documentation and manually updating any deployment templates or resources with the new AMI ID.

Use the following command to view the name of the latest Amazon EKS optimized AMI for Amazon Linux 2.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters \
    --names /aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended
```

------
#### [ Windows ]

```
aws ssm get-parameters ^
    --names /aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended
```

------

The command returns information like the following.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended",
            "Type": "String",
            "Value": "{\"schema_version\":\"2\",\"image_id\":\"ami-08984d8491de17ca0\",\"image_name\":\"amazon-eks-node-1.14-v20201007\",\"release_version\":\"1.14.9-20201007\"}",
            "Version": 24,
            "LastModifiedDate": "2020-11-17T10:16:09.971000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended",
            "DataType": "text"
        }
    ],
    "InvalidParameters": []
}
```
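As with the ECS parameter, the `Value` here is JSON-encoded. One practical use is comparing the recommended `release_version` against the release your nodes are running. This sketch uses the `Value` string from the example output above; `current_release` is a hypothetical value for illustration.

```python
import json

# Decode the EKS recommended parameter's JSON-encoded Value and check
# whether a node group is on the recommended release.
value = ('{"schema_version":"2","image_id":"ami-08984d8491de17ca0",'
         '"image_name":"amazon-eks-node-1.14-v20201007",'
         '"release_version":"1.14.9-20201007"}')

recommended = json.loads(value)["release_version"]
current_release = "1.14.9-20200904"  # hypothetical release running on your nodes
needs_update = current_release != recommended
print(needs_update)  # True
```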

# Calling public parameters for AWS services, Regions, endpoints, Availability Zones, local zones, and Wavelength Zones in Parameter Store

You can call public parameters for AWS Regions, services, endpoints, Availability Zones, Local Zones, and Wavelength Zones by using the following path.

`/aws/service/global-infrastructure`

**Note**  
Currently, the path `/aws/service/global-infrastructure` is supported for queries in the following AWS Regions only:  
US East (N. Virginia) (us-east-1)
US East (Ohio) (us-east-2)
US West (N. California) (us-west-1)
US West (Oregon) (us-west-2) 
Asia Pacific (Hong Kong) (ap-east-1)
Asia Pacific (Mumbai) (ap-south-1)
Asia Pacific (Seoul) (ap-northeast-2)
Asia Pacific (Singapore) (ap-southeast-1)
Asia Pacific (Sydney) (ap-southeast-2)
Asia Pacific (Tokyo) (ap-northeast-1)
Canada (Central) (ca-central-1)
Europe (Frankfurt) (eu-central-1)
Europe (Ireland) (eu-west-1) 
Europe (London) (eu-west-2) 
Europe (Paris) (eu-west-3) 
Europe (Stockholm) (eu-north-1)
South America (São Paulo) (sa-east-1)
If you are working in a different [commercial Region](https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html#region), you can specify a supported Region in your query to view results. For example, if you are working in the Canada West (Calgary) (ca-west-1) Region, you could specify Canada (Central) (ca-central-1) in your query:  

```
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/regions \
    --region ca-central-1
```

**View active AWS Regions**  
You can view a list of all active AWS Regions by using the following command in the AWS Command Line Interface (AWS CLI).

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/regions \
    --query 'Parameters[].Name'
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/global-infrastructure/regions ^
    --query Parameters[].Name
```

------

The command returns information like the following.

```
[
    "/aws/service/global-infrastructure/regions/af-south-1",
    "/aws/service/global-infrastructure/regions/ap-east-1",
    "/aws/service/global-infrastructure/regions/ap-northeast-3",
    "/aws/service/global-infrastructure/regions/ap-south-2",
    "/aws/service/global-infrastructure/regions/ca-central-1",
    "/aws/service/global-infrastructure/regions/eu-central-2",
    "/aws/service/global-infrastructure/regions/eu-west-2",
    "/aws/service/global-infrastructure/regions/eu-west-3",
    "/aws/service/global-infrastructure/regions/us-east-1",
    "/aws/service/global-infrastructure/regions/us-gov-west-1",
    "/aws/service/global-infrastructure/regions/ap-northeast-2",
    "/aws/service/global-infrastructure/regions/ap-southeast-1",
    "/aws/service/global-infrastructure/regions/ap-southeast-2",
    "/aws/service/global-infrastructure/regions/ap-southeast-3",
    "/aws/service/global-infrastructure/regions/cn-north-1",
    "/aws/service/global-infrastructure/regions/cn-northwest-1",
    "/aws/service/global-infrastructure/regions/eu-south-1",
    "/aws/service/global-infrastructure/regions/eu-south-2",
    "/aws/service/global-infrastructure/regions/us-east-2",
    "/aws/service/global-infrastructure/regions/us-west-1",
    "/aws/service/global-infrastructure/regions/ap-northeast-1",
    "/aws/service/global-infrastructure/regions/ap-south-1",
    "/aws/service/global-infrastructure/regions/ap-southeast-4",
    "/aws/service/global-infrastructure/regions/ca-west-1",
    "/aws/service/global-infrastructure/regions/eu-central-1",
    "/aws/service/global-infrastructure/regions/il-central-1",
    "/aws/service/global-infrastructure/regions/me-central-1",
    "/aws/service/global-infrastructure/regions/me-south-1",
    "/aws/service/global-infrastructure/regions/sa-east-1",
    "/aws/service/global-infrastructure/regions/us-gov-east-1",
    "/aws/service/global-infrastructure/regions/eu-north-1",
    "/aws/service/global-infrastructure/regions/eu-west-1",
    "/aws/service/global-infrastructure/regions/us-west-2"
]
```
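Each returned name embeds the Region code as its final path segment. The following is a minimal Python sketch of reducing the names to plain Region codes; the three names used here are a subset of the example output above.

```python
# Extract the Region code (the last path segment) from each parameter name.
names = [
    "/aws/service/global-infrastructure/regions/af-south-1",
    "/aws/service/global-infrastructure/regions/ap-east-1",
    "/aws/service/global-infrastructure/regions/us-east-2",
]

regions = sorted(n.rsplit("/", 1)[-1] for n in names)
print(regions)  # ['af-south-1', 'ap-east-1', 'us-east-2']
```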

**View available AWS services**  
You can view a complete list of all available AWS services, sorted into alphabetical order, by using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/services \
    --query 'Parameters[].Name | sort(@)'
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/global-infrastructure/services ^
    --query "Parameters[].Name | sort(@)"
```

------

The command returns information like the following. This example has been truncated for space.

```
[
    "/aws/service/global-infrastructure/services/accessanalyzer",
    "/aws/service/global-infrastructure/services/account",
    "/aws/service/global-infrastructure/services/acm",
    "/aws/service/global-infrastructure/services/acm-pca",
    "/aws/service/global-infrastructure/services/ahl",
    "/aws/service/global-infrastructure/services/aiq",
    "/aws/service/global-infrastructure/services/amazonlocationservice",
    "/aws/service/global-infrastructure/services/amplify",
    "/aws/service/global-infrastructure/services/amplifybackend",
    "/aws/service/global-infrastructure/services/apigateway",
    "/aws/service/global-infrastructure/services/apigatewaymanagementapi",
    "/aws/service/global-infrastructure/services/apigatewayv2",
    "/aws/service/global-infrastructure/services/appconfig",
    "/aws/service/global-infrastructure/services/appconfigdata",
    "/aws/service/global-infrastructure/services/appflow",
    "/aws/service/global-infrastructure/services/appintegrations",
    "/aws/service/global-infrastructure/services/application-autoscaling",
    "/aws/service/global-infrastructure/services/application-insights",
    "/aws/service/global-infrastructure/services/applicationcostprofiler",
    "/aws/service/global-infrastructure/services/appmesh",
    "/aws/service/global-infrastructure/services/apprunner",
    "/aws/service/global-infrastructure/services/appstream",
    "/aws/service/global-infrastructure/services/appsync",
    "/aws/service/global-infrastructure/services/aps",
    "/aws/service/global-infrastructure/services/arc-zonal-shift",
    "/aws/service/global-infrastructure/services/artifact",
    "/aws/service/global-infrastructure/services/athena",
    "/aws/service/global-infrastructure/services/auditmanager",
    "/aws/service/global-infrastructure/services/augmentedairuntime",
    "/aws/service/global-infrastructure/services/aurora",
    "/aws/service/global-infrastructure/services/autoscaling",
    "/aws/service/global-infrastructure/services/aws-appfabric",
    "/aws/service/global-infrastructure/services/awshealthdashboard",
```

**View supported Regions for an AWS service**  
You can view a list of AWS Regions where a service is available. This example uses AWS Systems Manager (`ssm`).

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/services/ssm/regions \
    --query 'Parameters[].Value'
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/global-infrastructure/services/ssm/regions ^
    --query Parameters[].Value
```

------

The command returns information like the following.

```
[
    "ap-south-1",
    "eu-central-1",
    "eu-central-2",
    "eu-west-1",
    "eu-west-2",
    "eu-west-3",
    "il-central-1",
    "me-south-1",
    "us-east-2",
    "us-gov-west-1",
    "af-south-1",
    "ap-northeast-3",
    "ap-southeast-1",
    "ap-southeast-4",
    "ca-central-1",
    "ca-west-1",
    "cn-north-1",
    "eu-north-1",
    "eu-south-2",
    "us-west-1",
    "ap-east-1",
    "ap-northeast-1",
    "ap-northeast-2",
    "ap-southeast-2",
    "ap-southeast-3",
    "cn-northwest-1",
    "eu-south-1",
    "me-central-1",
    "us-gov-east-1",
    "us-west-2",
    "ap-south-2",
    "sa-east-1",
    "us-east-1"
]
```

**View the Regional endpoint for a service**  
You can view a Regional endpoint for a service by using the following command. This command queries the US East (Ohio) (us-east-2) Region.

------
#### [ Linux & macOS ]

```
aws ssm get-parameter \
    --name /aws/service/global-infrastructure/regions/us-east-2/services/ssm/endpoint \
    --query 'Parameter.Value'
```

------
#### [ Windows ]

```
aws ssm get-parameter ^
    --name /aws/service/global-infrastructure/regions/us-east-2/services/ssm/endpoint ^
    --query Parameter.Value
```

------

The command returns information like the following.

```
"ssm.us-east-2.amazonaws.com"
```
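The parameter name in the command above follows a fixed path scheme, so you can construct it for any Region and service pair. A minimal sketch; note that this only builds the parameter name string, and does not verify that the parameter actually exists for that pair.

```python
# Build the endpoint parameter name for a given Region and service,
# following the /aws/service/global-infrastructure path scheme.
def endpoint_parameter(region: str, service: str) -> str:
    return (f"/aws/service/global-infrastructure/regions/"
            f"{region}/services/{service}/endpoint")

print(endpoint_parameter("us-east-2", "ssm"))
# /aws/service/global-infrastructure/regions/us-east-2/services/ssm/endpoint
```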

**View complete Availability Zone details**  
You can view Availability Zones by using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/availability-zones/
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/global-infrastructure/availability-zones/
```

------

The command returns information like the following. This example has been truncated for space.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/global-infrastructure/availability-zones/afs1-az3",
            "Type": "String",
            "Value": "afs1-az3",
            "Version": 1,
            "LastModifiedDate": "2020-04-21T12:05:35.375000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/availability-zones/afs1-az3",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/availability-zones/aps1-az2",
            "Type": "String",
            "Value": "aps1-az2",
            "Version": 1,
            "LastModifiedDate": "2020-04-03T16:13:57.351000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/availability-zones/aps1-az2",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/availability-zones/apse3-az1",
            "Type": "String",
            "Value": "apse3-az1",
            "Version": 1,
            "LastModifiedDate": "2021-12-13T08:51:38.983000-05:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/availability-zones/apse3-az1",
            "DataType": "text"
        }
    ]
}
```

**View Availability Zone names only**  
You can view the names of Availability Zones only by using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/availability-zones \
    --query 'Parameters[].Name | sort(@)'
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/global-infrastructure/availability-zones ^
    --query "Parameters[].Name | sort(@)"
```

------

The command returns information like the following. This example has been truncated for space.

```
[
    "/aws/service/global-infrastructure/availability-zones/afs1-az1",
    "/aws/service/global-infrastructure/availability-zones/afs1-az2",
    "/aws/service/global-infrastructure/availability-zones/afs1-az3",
    "/aws/service/global-infrastructure/availability-zones/ape1-az1",
    "/aws/service/global-infrastructure/availability-zones/ape1-az2",
    "/aws/service/global-infrastructure/availability-zones/ape1-az3",
    "/aws/service/global-infrastructure/availability-zones/apne1-az1",
    "/aws/service/global-infrastructure/availability-zones/apne1-az2",
    "/aws/service/global-infrastructure/availability-zones/apne1-az3",
    "/aws/service/global-infrastructure/availability-zones/apne1-az4"
```

**View names of Availability Zones in a single Region**  
You can view the names of the Availability Zones in one Region (`us-east-2`, in this example) using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/regions/us-east-2/availability-zones \
    --query 'Parameters[].Name | sort(@)'
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/global-infrastructure/regions/us-east-2/availability-zones ^
    --query "Parameters[].Name | sort(@)"
```

------

The command returns information like the following.

```
[
    "/aws/service/global-infrastructure/regions/us-east-2/availability-zones/use2-az1",
    "/aws/service/global-infrastructure/regions/us-east-2/availability-zones/use2-az2",
    "/aws/service/global-infrastructure/regions/us-east-2/availability-zones/use2-az3"
]
```
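
Each returned name embeds the zone ID as its final path segment. As a quick illustration (not part of the AWS CLI output), the following Python snippet strips the path prefix to recover the bare zone IDs:

```python
# Parameter names as returned by the command above.
names = [
    "/aws/service/global-infrastructure/regions/us-east-2/availability-zones/use2-az1",
    "/aws/service/global-infrastructure/regions/us-east-2/availability-zones/use2-az2",
    "/aws/service/global-infrastructure/regions/us-east-2/availability-zones/use2-az3",
]

# The zone ID is the last segment of each parameter path.
zone_ids = [name.rsplit("/", 1)[-1] for name in names]
print(zone_ids)  # ['use2-az1', 'use2-az2', 'use2-az3']
```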

**View Availability Zone ARNs only**  
You can view the Amazon Resource Names (ARNs) of Availability Zones only by using the following command. 

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/availability-zones \
    --query 'Parameters[].ARN | sort(@)'
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/global-infrastructure/availability-zones ^
    --query "Parameters[].ARN | sort(@)"
```

------

The command returns information like the following. This example has been truncated for space.

```
[
    "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/availability-zones/afs1-az1",
    "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/availability-zones/afs1-az2",
    "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/availability-zones/afs1-az3",
    "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/availability-zones/ape1-az1",
    "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/availability-zones/ape1-az2",
    "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/availability-zones/ape1-az3",
    "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/availability-zones/apne1-az1",
```

**View local zone details**  
You can view local zones by using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/local-zones
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/global-infrastructure/local-zones
```

------

The command returns information like the following. This example has been truncated for space.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/global-infrastructure/local-zones/afs1-los1-az1",
            "Type": "String",
            "Value": "afs1-los1-az1",
            "Version": 1,
            "LastModifiedDate": "2023-01-25T11:53:11.690000-05:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/local-zones/afs1-los1-az1",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/local-zones/apne1-tpe1-az1",
            "Type": "String",
            "Value": "apne1-tpe1-az1",
            "Version": 1,
            "LastModifiedDate": "2024-03-15T12:35:41.076000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/local-zones/apne1-tpe1-az1",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/local-zones/aps1-ccu1-az1",
            "Type": "String",
            "Value": "aps1-ccu1-az1",
            "Version": 1,
            "LastModifiedDate": "2022-12-19T11:34:43.351000-05:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/local-zones/aps1-ccu1-az1",
            "DataType": "text"
        }
    ]
}
```

**View Wavelength Zone details**  
You can view Wavelength Zones by using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/wavelength-zones
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/global-infrastructure/wavelength-zones
```

------

The command returns information like the following. This example has been truncated for space.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/global-infrastructure/wavelength-zones/apne1-wl1-nrt-wlz1",
            "Type": "String",
            "Value": "apne1-wl1-nrt-wlz1",
            "Version": 3,
            "LastModifiedDate": "2020-12-15T17:16:04.715000-05:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/wavelength-zones/apne1-wl1-nrt-wlz1",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/wavelength-zones/apne2-wl1-sel-wlz1",
            "Type": "String",
            "Value": "apne2-wl1-sel-wlz1",
            "Version": 1,
            "LastModifiedDate": "2022-05-25T12:29:13.862000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/wavelength-zones/apne2-wl1-sel-wlz1",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/wavelength-zones/cac1-wl1-yto-wlz1",
            "Type": "String",
            "Value": "cac1-wl1-yto-wlz1",
            "Version": 1,
            "LastModifiedDate": "2022-04-26T09:57:44.495000-04:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/wavelength-zones/cac1-wl1-yto-wlz1",
            "DataType": "text"
        }
    ]
}
```

**View all parameters and values under a local zone**  
You can view all parameter data for a local zone by using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path "/aws/service/global-infrastructure/local-zones/use1-bos1-az1"
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path "/aws/service/global-infrastructure/local-zones/use1-bos1-az1"
```

------

The command returns information like the following. This example has been truncated for space.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/geolocationCountry",
            "Type": "String",
            "Value": "US",
            "Version": 3,
            "LastModifiedDate": "2020-12-15T14:16:17.641000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/local-zones/use1-bos1-az1/geolocationCountry",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/geolocationRegion",
            "Type": "String",
            "Value": "US-MA",
            "Version": 3,
            "LastModifiedDate": "2020-12-15T14:16:17.794000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/local-zones/use1-bos1-az1/geolocationRegion",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/location",
            "Type": "String",
            "Value": "US East (Boston)",
            "Version": 1,
            "LastModifiedDate": "2021-01-11T10:53:24.634000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/local-zones/use1-bos1-az1/location",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/network-border-group",
            "Type": "String",
            "Value": "us-east-1-bos-1",
            "Version": 3,
            "LastModifiedDate": "2020-12-15T14:16:20.641000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/local-zones/use1-bos1-az1/network-border-group",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/parent-availability-zone",
            "Type": "String",
            "Value": "use1-az4",
            "Version": 3,
            "LastModifiedDate": "2020-12-15T14:16:20.834000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/local-zones/use1-bos1-az1/parent-availability-zone",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/parent-region",
            "Type": "String",
            "Value": "us-east-1",
            "Version": 3,
            "LastModifiedDate": "2020-12-15T14:16:20.721000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/local-zones/use1-bos1-az1/parent-region",
            "DataType": "text"
        },
        {
            "Name": "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/zone-group",
            "Type": "String",
            "Value": "us-east-1-bos-1",
            "Version": 3,
            "LastModifiedDate": "2020-12-15T14:16:17.983000-08:00",
            "ARN": "arn:aws:ssm:us-east-2::parameter/aws/service/global-infrastructure/local-zones/use1-bos1-az1/zone-group",
            "DataType": "text"
        }
    ]
}
```

**View local zone parameter names only**  
You can view just the names of local zone parameters by using the following command.

------
#### [ Linux & macOS ]

```
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/local-zones/use1-bos1-az1 \
    --query 'Parameters[].Name | sort(@)'
```

------
#### [ Windows ]

```
aws ssm get-parameters-by-path ^
    --path /aws/service/global-infrastructure/local-zones/use1-bos1-az1 ^
    --query "Parameters[].Name | sort(@)"
```

------

The command returns information like the following. 

```
[
    "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/geolocationCountry",
    "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/geolocationRegion",
    "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/location",
    "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/network-border-group",
    "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/parent-availability-zone",
    "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/parent-region",
    "/aws/service/global-infrastructure/local-zones/use1-bos1-az1/zone-group"
]
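
The parameter names above can be flattened into a simple attribute map. The following Python sketch, using sample values abbreviated from the full output earlier in this section, builds a dictionary keyed by the final path segment:

```python
# (name, value) pairs, abbreviated from the get-parameters-by-path output
# for the use1-bos1-az1 Local Zone shown earlier in this section.
parameters = [
    ("/aws/service/global-infrastructure/local-zones/use1-bos1-az1/location", "US East (Boston)"),
    ("/aws/service/global-infrastructure/local-zones/use1-bos1-az1/parent-region", "us-east-1"),
    ("/aws/service/global-infrastructure/local-zones/use1-bos1-az1/zone-group", "us-east-1-bos-1"),
]

# Key each value by the attribute name at the end of the parameter path.
zone_attributes = {name.rsplit("/", 1)[-1]: value for name, value in parameters}
print(zone_attributes["parent-region"])  # us-east-1
```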
```

# Parameter Store walkthroughs


The walkthrough in this section shows you how to create and store parameters with Parameter Store, a tool in AWS Systems Manager, in a test environment, and how to use Parameter Store with other Systems Manager tools. You can also use Parameter Store with other AWS services. For more information, see [What is a parameter?](systems-manager-parameter-store.md#what-is-a-parameter).

**Topics**
+ [Creating a SecureString parameter in Parameter Store and joining a node to a Domain (PowerShell)](sysman-param-securestring-walkthrough.md)

# Creating a SecureString parameter in Parameter Store and joining a node to a Domain (PowerShell)

This walkthrough shows how to join a Windows Server node to a domain using AWS Systems Manager `SecureString` parameters and Run Command. The walkthrough uses typical domain parameters, such as the domain name and a domain user name. These values are passed as unencrypted string values. The domain password is encrypted using an AWS managed key and passed as an encrypted string. 

**Prerequisites**  
This walkthrough assumes that you already specified your domain name and DNS server IP address in the DHCP option set that is associated with your Amazon VPC. For information, see [Working with DHCP Options Sets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_DHCP_Options.html#DHCPOptionSet) in the *Amazon VPC User Guide*.

**To create a `SecureString` parameter and join a node to a domain**

1. Enter parameters into the system using AWS Tools for Windows PowerShell.

   In the following commands, replace each *user input placeholder* with your own information.

   ```
   Write-SSMParameter -Name "domainName" -Value "DOMAIN-NAME" -Type String
   Write-SSMParameter -Name "domainJoinUserName" -Value "DOMAIN\USERNAME" -Type String
   Write-SSMParameter -Name "domainJoinPassword" -Value "PASSWORD" -Type SecureString
   ```
**Important**  
Only the *value* of a `SecureString` parameter is encrypted. Parameter names, descriptions, and other properties aren't encrypted.

1. Attach the following AWS Identity and Access Management (IAM) policies to the IAM role for your node:
   + **AmazonSSMManagedInstanceCore** – Required. This AWS managed policy allows a managed node to use Systems Manager service core functionality.
   + **AmazonSSMDirectoryServiceAccess** – Required. This AWS managed policy allows SSM Agent to access AWS Directory Service on your behalf for requests to join the domain by the managed node.
   + **A custom policy for S3 bucket access** – Required. SSM Agent, which is on your node and performs Systems Manager tasks, requires access to specific Amazon-owned Amazon Simple Storage Service (Amazon S3) buckets. In the custom S3 bucket policy that you create, you also provide access to S3 buckets of your own that are necessary for Systems Manager operations. 

     Examples: You can write output for Run Command commands or Session Manager sessions to an S3 bucket, and then use this output later for auditing or troubleshooting. You can store access scripts or custom patch baseline lists in an S3 bucket, and then reference the script or list when you run a command or when a patch baseline is applied.

     For information about creating a custom policy for Amazon S3 bucket access, see [Create a custom S3 bucket policy for an instance profile](setup-instance-permissions.md#instance-profile-custom-s3-policy).
**Note**  
Saving output log data in an S3 bucket is optional, but we recommend setting it up at the beginning of your Systems Manager configuration process if you have decided to use it. For more information, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingABucket.html) in the *Amazon Simple Storage Service User Guide*.
   + **CloudWatchAgentServerPolicy** – Optional. This AWS managed policy allows you to run the CloudWatch agent on managed nodes. This policy makes it possible to read information on a node and write it to Amazon CloudWatch. Your instance profile needs this policy only if you use services such as Amazon EventBridge or CloudWatch Logs.
**Note**  
Using CloudWatch and EventBridge features is optional, but we recommend setting them up at the beginning of your Systems Manager configuration process if you have decided to use them. For more information, see the *[Amazon EventBridge User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/)* and the *[Amazon CloudWatch Logs User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/)*.

1. Edit the IAM role attached to the node and add the following policy. This policy gives the node permission to call the `kms:Decrypt` and `ssm:CreateDocument` API operations.

------
#### [ JSON ]

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "kms:Decrypt",
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/kms-key-id"
           },
           {
               "Effect": "Allow",
               "Action": "ssm:CreateDocument",
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Copy and paste the following JSON into a text editor and save the file as `c:\temp\JoinInstanceToDomain.json`.

   ```
   {
       "schemaVersion": "2.2",
       "description": "Run a PowerShell script to securely join a Windows Server instance to a domain",
       "mainSteps": [
           {
               "action": "aws:runPowerShellScript",
               "name": "runPowerShellWithSecureString",
               "precondition": {
                   "StringEquals": [
                       "platformType",
                       "Windows"
                   ]
               },
               "inputs": {
                   "runCommand": [
                       "$domain = (Get-SSMParameterValue -Name domainName).Parameters[0].Value",
                       "if ((gwmi Win32_ComputerSystem).domain -eq $domain){write-host \"Computer is part of $domain, exiting\"; exit 0}",
                       "$username = (Get-SSMParameterValue -Name domainJoinUserName).Parameters[0].Value",
                       "$password = (Get-SSMParameterValue -Name domainJoinPassword -WithDecryption $True).Parameters[0].Value | ConvertTo-SecureString -asPlainText -Force",
                       "$credential = New-Object System.Management.Automation.PSCredential($username,$password)",
                       "Add-Computer -DomainName $domain -Credential $credential -ErrorAction SilentlyContinue -ErrorVariable domainjoinerror",
                       "if($?){Write-Host \"Instance joined to domain successfully. Restarting\"; exit 3010}else{Write-Host \"Instance failed to join domain with error:\" $domainjoinerror; exit 1 }"
                   ]
               }
           }
       ]
   }
   ```

1. Run the following command in Tools for Windows PowerShell to create a new SSM document.

   ```
   $json = Get-Content C:\temp\JoinInstanceToDomain.json | Out-String
   New-SSMDocument -Name JoinInstanceToDomain -Content $json -DocumentType Command
   ```

1. Run the following command in Tools for Windows PowerShell to join the node to the domain.

   ```
   Send-SSMCommand -InstanceId instance-id -DocumentName JoinInstanceToDomain 
   ```

   If the command succeeds, the system returns information similar to the following.

   ```
   WARNING: The changes will take effect after you restart the computer EC2ABCD-EXAMPLE.
   Instance joined to domain successfully. Restarting
   ```

   If the node is already joined to the domain, the script exits with the message `Computer is part of example.local, exiting`.

   If the command fails, the system returns information similar to the following. 

   ```
   Instance failed to join domain with error:
   Computer 'EC2ABCD-EXAMPLE' failed to join domain 'example.local'
   from its current workgroup 'WORKGROUP' with following error message:
   The specified domain either does not exist or could not be contacted.
   ```

# Auditing and logging Parameter Store activity


AWS CloudTrail captures API calls made in the AWS Systems Manager console, the AWS Command Line Interface (AWS CLI), and the Systems Manager SDK. You can view the information in the CloudTrail console or in an Amazon Simple Storage Service (Amazon S3) bucket. All CloudTrail logs for your account use one bucket. For more information about viewing and using CloudTrail logs of Systems Manager activity, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md). For more information about auditing and logging options for Systems Manager, see [Logging and monitoring in AWS Systems Manager](monitoring.md).

# Troubleshooting Parameter Store


Use the following information to help you troubleshoot problems with Parameter Store, a tool in AWS Systems Manager.

## Troubleshooting `aws:ec2:image` parameter creation


Use the following information to help troubleshoot problems with creating `aws:ec2:image` data type parameters.

### No permission to create an instance


**Problem**: You try to create an instance using an `aws:ec2:image` parameter but receive an error message such as "You are not authorized to perform this operation."
+ **Solution**: You do not have all the permissions needed to create an EC2 instance using a parameter value, such as permissions for `ec2:RunInstances`, `ec2:DescribeImages`, and `ssm:GetParameter`, among others. Contact a user with administrator permissions in your organization to request the necessary permissions.

### EventBridge reports the failure message "Unable to Describe Resource"


**Problem**: You ran a command to create an `aws:ec2:image` parameter, but parameter creation failed. You receive a notification from Amazon EventBridge that reports the exception "Unable to Describe Resource". 

**Solution**: This message can indicate the following: 
+ You do not have all the permissions needed for the `ec2:DescribeImages` API operation, or you lack permission to access the specific image referenced in the parameter. Contact a user with administrator permissions in your organization to request the necessary permissions.
+ The Amazon Machine Image (AMI) ID you entered as a parameter value isn't valid. Make sure you're entering the ID of an AMI that is available in the current AWS Region and account you're working in.

### New `aws:ec2:image` parameter isn't available


**Problem**: You just ran a command to create an `aws:ec2:image` parameter and a version number was reported, but the parameter isn't available.
+ **Solution**: When you run the command to create a parameter that uses the `aws:ec2:image` data type, a version number is generated for the parameter right away, but the parameter format must be validated before the parameter is available. This process can take up to a few minutes. To monitor the parameter creation and validation process, you can do the following:
  + Use EventBridge to send you notifications about your `create` and `update` parameter operations. These notifications report whether a parameter operation was successful or not. For information about subscribing to Parameter Store events in EventBridge, see [Setting up notifications or triggering actions based on Parameter Store events](sysman-paramstore-cwe.md).
  + In the Parameter Store section of the Systems Manager console, refresh the list of parameters periodically to search for the new or updated parameter details.
  + Use the **GetParameter** command to check for the new or updated parameter. For example, using the AWS Command Line Interface (AWS CLI):

    ```
    aws ssm get-parameter --name MyParameter
    ```

    For a new parameter, a `ParameterNotFound` message is returned until the parameter is validated. For an existing parameter that you're updating, information about the new version isn't included until the parameter is validated.

  If you attempt to create or update the parameter again before the validation process is complete, the system reports that validation is still in progress. If the parameter isn't created or updated, you can try again 5 minutes after the original attempt.
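
The wait-and-retry pattern described above can be sketched as follows. This is an illustration only: `fetch_parameter` and `ParameterNotFound` are hypothetical stand-ins for a `GetParameter` call and its not-found error, not real AWS SDK names, and the simulated validation delay is invented for the example.

```python
import time

class ParameterNotFound(Exception):
    """Stand-in for the error returned while validation is still in progress."""

def poll_for_parameter(fetch_parameter, max_attempts=10, delay_seconds=30):
    """Call fetch_parameter until it succeeds or max_attempts is reached."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch_parameter()
        except ParameterNotFound:
            if attempt == max_attempts:
                raise
            time.sleep(delay_seconds)

# Simulate a parameter that becomes available on the third check.
calls = {"count": 0}
def fake_fetch():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ParameterNotFound()
    return {"Name": "MyParameter", "Value": "ami-EXAMPLE"}

result = poll_for_parameter(fake_fetch, max_attempts=5, delay_seconds=0)
print(result["Name"])  # MyParameter
```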

# AWS Systems Manager operations tools

Operations tools are a suite of capabilities that help you manage your AWS resources.
+ [Amazon CloudWatch dashboards hosted by Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-cloudwatch-dashboards.html) are customizable home pages in the CloudWatch console that you can use to monitor your resources across AWS Regions in a single view. 
+ [https://docs.aws.amazon.com/systems-manager/latest/userguide/Explorer.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/Explorer.html) is a customizable operations dashboard that reports information about your AWS resources. Explorer displays an aggregated view of operations data (OpsData) for your AWS accounts and across AWS Regions. 
+ [Incident Manager](https://docs.aws.amazon.com/incident-manager/latest/userguide/what-is-incident-manager.html) helps you mitigate and recover from incidents affecting your applications hosted on AWS.
+ [https://docs.aws.amazon.com/systems-manager/latest/userguide/OpsCenter.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/OpsCenter.html) provides a central location where operations engineers and IT professionals can view, investigate, and resolve operational work items (OpsItems) related to AWS resources.

**Topics**
+ [AWS Systems Manager Incident Manager](incident-manager.md)
+ [AWS Systems Manager Explorer](Explorer.md)
+ [AWS Systems Manager OpsCenter](OpsCenter.md)
+ [Using Amazon CloudWatch dashboards hosted by Systems Manager](systems-manager-cloudwatch-dashboards.md)

# AWS Systems Manager Incident Manager

**Incident Manager availability change**  
AWS Systems Manager Incident Manager will no longer be open to new customers starting November 7, 2025. If you would like to use Incident Manager, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [AWS Systems Manager Incident Manager availability change](https://docs.aws.amazon.com/incident-manager/latest/userguide/incident-manager-availability-change.html). 

Use Incident Manager, a tool in AWS Systems Manager, to manage incidents occurring in your AWS hosted applications. Incident Manager combines user engagements, escalation, runbooks, response plans, chat channels, and post-incident analysis to help your team triage incidents faster and return your applications to normal. To learn more about Incident Manager, see the [Incident Manager User Guide](https://docs.aws.amazon.com/incident-manager/latest/userguide/what-is-incident-manager.html). 

# AWS Systems Manager Explorer

AWS Systems Manager Explorer is a customizable operations dashboard that reports information about your AWS resources. Explorer displays an aggregated view of operations data (OpsData) for your AWS accounts and across AWS Regions. In Explorer, OpsData includes metadata about the managed nodes in your [hybrid and multicloud](operating-systems-and-machine-types.md#supported-machine-types) environment. OpsData also includes information provided by other Systems Manager tools, including Patch Manager patch compliance and State Manager association compliance details. To further simplify how you access OpsData, Explorer displays information from supporting AWS services like AWS Config, AWS Trusted Advisor, AWS Compute Optimizer, and AWS Support (support cases).

To raise operational awareness, Explorer also displays operational work items (OpsItems). Explorer provides context about how OpsItems are distributed across your business units or applications, how they trend over time, and how they vary by category. You can group and filter information in Explorer to focus on items that are relevant to you and that require action. When you identify high priority issues, you can use Systems Manager OpsCenter to run Automation runbooks and quickly resolve those issues. To get started with Explorer, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/explorer). In the navigation pane, choose **Explorer**.

The following image shows some of the individual report boxes, called *widgets*, which are available in Explorer.

![\[Explorer dashboard in AWS Systems Manager\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/Explorer-1-overview.png)


## What are the features of Explorer?


Explorer includes the following features:
+ **Customizable display of actionable information**: Explorer includes drag-and-drop widgets that automatically display actionable information about your AWS resources. Explorer displays information in two types of widgets.
  + **Informational widgets**: These widgets summarize data from Amazon EC2, Patch Manager, State Manager, and supporting AWS services like AWS Trusted Advisor, AWS Compute Optimizer, and Support. These widgets provide important context to help you understand the state and operational risks of your AWS resources. Examples of informational widgets include **Instance count**, **Instance by AMI**, **Total noncompliant nodes** (patching), **Noncompliant associations**, and **Support Center cases**. 
  + **OpsItem widgets**: A Systems Manager *OpsItem* is an operational work item that is related to one or more AWS resources. OpsItems are a feature of Systems Manager OpsCenter. OpsItems might require DevOps engineers to investigate and potentially remediate an issue. Examples of possible OpsItems include high EC2 instance CPU utilization, detached Amazon Elastic Block Store (Amazon EBS) volumes, AWS CodeDeploy deployment failure, or Systems Manager Automation execution failure. Examples of OpsItem widgets include **Open OpsItem summary**, **OpsItem by status**, and **OpsItems over time**.
+ **Filters**: Each widget offers the ability to filter information based on AWS account, AWS Region, and tag. Filters help you quickly refine the information displayed in Explorer.
+ **Direct links to service screens**: To help you investigate issues with AWS resources, Explorer widgets contain direct links to related service screens. Filters applied to a widget remain in effect if you navigate to a related service screen.
+ **Groups**: To help you understand the types of operational issues across your organization, some widgets allow you to group data based on account, Region, and tag.
+ **Reporting tag keys**: When you set up Explorer, you can specify up to five tag keys. These keys help you group and filter data in Explorer. If a specified key matches a key on a resource that generates an OpsItem, then the key and value are included in the OpsItems. 
+ **Three modes of AWS account and AWS Region display**: Explorer includes the following display modes for OpsData and OpsItems in AWS accounts and AWS Regions:
  + *Single-account/single-Region*: This is the default view. This mode allows users to view data and OpsItems from their own account and the current Region.
  + *Single-account/multiple-Region*: This mode requires you to create one or more resource data syncs by using the Explorer **Settings** page. A resource data sync aggregates OpsData from one or more Regions. After you create a resource data sync, you can toggle which sync to use on the Explorer dashboard. You can then filter and group data based on Region.
  + *Multiple-account/multiple-Region*: This mode requires that your organization or company use [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/) with **All features** turned on. After you configure AWS Organizations in your computing environment, you can aggregate all account data in a management account. You can then create resource data syncs so that you can filter and group data based on Region. For more information about Organizations **All features** mode, see [Enabling All Features in Your Organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org_support-all-features.html).
+ **Reporting**: You can export Explorer reports as comma-separated values (.csv) files to an Amazon Simple Storage Service (Amazon S3) bucket. You receive an alert from Amazon Simple Notification Service (Amazon SNS) when an export is complete. 

## How does Explorer relate to OpsCenter?


[Systems Manager OpsCenter](OpsCenter.md) provides a central location where operations engineers and IT professionals view, investigate, and resolve OpsItems related to AWS resources. Explorer is a report hub where DevOps managers view aggregated summaries of their operations data, including OpsItems, across AWS Regions and accounts. Explorer helps users discover trends and patterns and, if necessary, quickly resolve issues using Systems Manager Automation runbooks.

OpsCenter setup is now integrated with Explorer Setup. If you already set up OpsCenter, then Explorer automatically displays operations data, including aggregated information about OpsItems. If you haven't set up OpsCenter, then you can use Explorer Setup to get started with both tools. For more information, see [Getting started with Systems Manager Explorer and OpsCenter](Explorer-setup.md).

## What is OpsData?


OpsData is any operations data that is displayed in the Systems Manager Explorer dashboard. Explorer retrieves OpsData from the following sources:
+ **Amazon Elastic Compute Cloud (Amazon EC2)**

  Data displayed in Explorer includes: total number of nodes, total number of managed and unmanaged nodes, and a count of nodes using a specific Amazon Machine Image (AMI). 
+ **Systems Manager OpsCenter**

  Data displayed in Explorer includes: a count of OpsItems by status, a count of OpsItems by severity, a count of open OpsItems across groups and across 30-day time periods, and historical data of OpsItems over time.
+ **Systems Manager Patch Manager**

  Data displayed in Explorer includes a count of noncompliant and critical noncompliant nodes.
+ **AWS Trusted Advisor**

  Data displayed in Explorer includes: status of best practice checks for EC2 Reserved Instances in the areas of cost optimization, security, fault tolerance, performance, and service limits.
+ **AWS Compute Optimizer**

  Data displayed in Explorer includes: a count of **Under provisioned** and **Over provisioned** EC2 instances, optimization findings, on-demand pricing details, and recommendations for instance type and price.
+ **Support Center cases**

  Data displayed in Explorer includes: case ID, severity, status, created time, subject, service, and category. 
+ **AWS Config**

  Data displayed in Explorer includes: overall summary of compliant and non-compliant AWS Config rules, the number of compliant and non-compliant resources, and specific details about each (when you drill down into a non-compliant rule or resource). 
+ **AWS Security Hub CSPM**

  Data displayed in Explorer includes: overall summary of Security Hub CSPM findings, the number of each finding grouped by severity, and specific details about each finding.

**Note**  
To view AWS Trusted Advisor and Support Center cases in Explorer, you must have a Business or Enterprise Support plan with AWS Support.

You can view and manage OpsData sources from the Explorer **Settings** page. For information about setting up and configuring services that populate Explorer widgets with OpsData, see [Setting up related services for Explorer](Explorer-setup-related-services.md).

## Is there a charge to use Explorer?


Yes. When you turn on the default rules for creating OpsItems during Integrated Setup, you initiate a process that automatically creates OpsItems. Your account is charged based on the number of OpsItems created per month. Your account is also charged based on the number of `GetOpsItem`, `DescribeOpsItems`, `UpdateOpsItem`, and `GetOpsSummary` API calls made per month. Additionally, you can be charged for public API calls to other services that expose relevant diagnostic information. For more information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).

**Topics**
+ [What are the features of Explorer?](#Explorer-learn-more-features)
+ [How does Explorer relate to OpsCenter?](#Explorer-learn-more-OpsCenter)
+ [What is OpsData?](#Explorer-learn-more-OpsData)
+ [Is there a charge to use Explorer?](#Explorer-learn-more-cost)
+ [Getting started with Systems Manager Explorer and OpsCenter](Explorer-setup.md)
+ [Using Explorer](Explorer-using.md)
+ [Exporting OpsData from Systems Manager Explorer](Explorer-exporting-OpsData.md)
+ [Troubleshooting Systems Manager Explorer](Explorer-troubleshooting.md)

# Getting started with Systems Manager Explorer and OpsCenter

AWS Systems Manager uses an integrated setup experience to help you get started with Systems Manager Explorer and Systems Manager OpsCenter. In this documentation, Explorer and OpsCenter Setup is called *Integrated Setup*. If you already set up OpsCenter, you still need to complete Integrated Setup to verify settings and options. If you haven't set up OpsCenter, then you can use Integrated Setup to get started with both tools.

**Note**  
Integrated Setup is only available in the Systems Manager console. You can't set up Explorer or OpsCenter programmatically.

Integrated Setup performs the following tasks:
+ [Configures roles and permissions](Explorer-setup-permissions.md): Integrated Setup creates an AWS Identity and Access Management (IAM) role that allows Amazon EventBridge to automatically create OpsItems based on default rules. After setting up, you must configure user, group, or role permissions for OpsCenter, as described in this section. 
+ [Allows default rules for OpsItem creation](Explorer-setup-default-rules.md): Integrated Setup creates default rules in EventBridge. These rules automatically create OpsItems in response to events. Examples of these events are: state change for an AWS resource, a change in security settings, or a service becoming unavailable.
+ Activates OpsData sources: Integrated Setup activates the following data sources that populate Explorer widgets.
  + Support Center (You must have either a Business or Enterprise Support plan to activate this source.)
  + AWS Compute Optimizer
  + Systems Manager State Manager association compliance
  + AWS Config Compliance
  + Systems Manager OpsCenter
  + Systems Manager Patch Manager patch compliance
  + Amazon Elastic Compute Cloud (Amazon EC2)
  + Systems Manager Inventory
  + AWS Trusted Advisor (You must have either a Business or Enterprise Support plan to activate this source.)
  + AWS Security Hub CSPM

**Note**  
You can change setup configurations at any time on the **Settings** page.

After you complete Integrated Setup, we recommend that you [Set up Explorer to display data from multiple Regions and accounts](Explorer-resource-data-sync.md). Explorer and OpsCenter automatically synchronize OpsData and OpsItems for the AWS account and AWS Region you used when you completed Integrated Setup. You can aggregate OpsData and OpsItems from other accounts and Regions by creating a resource data sync.

# Setting up related services for Explorer


AWS Systems Manager Explorer and AWS Systems Manager OpsCenter collect information from, or interact with, other AWS services and Systems Manager tools. We recommend that you set up and configure these other services or tools before you use Integrated Setup.

The following table includes tasks that allow Explorer and OpsCenter to collect information from, or interact with, other AWS services and Systems Manager tools. 



| Task | Information | 
| --- | --- | 
|  Verify permissions in Systems Manager Automation  |  Explorer and OpsCenter allow you to remediate issues with AWS resources by using Systems Manager Automation runbooks. To use this remediation tool, you must have permission to run Systems Manager Automation runbooks. For more information, see [Setting up Automation](automation-setup.md).  | 
|  Set up and configure Systems Manager Patch Manager  |  Explorer includes a widget that provides information about patch compliance. To view this data in Explorer, you must configure patching. For more information, see [AWS Systems Manager Patch Manager](patch-manager.md).  | 
|  Set up and configure Systems Manager State Manager  |  Explorer includes a widget that provides information about Systems Manager State Manager association compliance. To view this data in Explorer, you must configure State Manager. For more information, see [AWS Systems Manager State Manager](systems-manager-state.md).  | 
|  Turn on the AWS Config configuration recorder  |  Explorer uses data provided by the AWS Config configuration recorder to populate widgets with information about your EC2 instances. To view this data in Explorer, turn on the AWS Config configuration recorder. For more information, see [Managing the Configuration Recorder](https://docs.aws.amazon.com/config/latest/developerguide/stop-start-recorder.html).  After you turn on the configuration recorder, Systems Manager can take up to six hours to display data in Explorer widgets that display information about your EC2 instances.   | 
|  Turn on AWS Trusted Advisor  |  Explorer uses data provided by Trusted Advisor to display a status of best practice checks for Amazon EC2 Reserved Instances in the areas of cost optimization, security, fault tolerance, performance, and service limits. To view this data in Explorer, you must have a business or enterprise support plan. For more information, see [Support](https://aws.amazon.com/premiumsupport/).  | 
|  Turn on AWS Compute Optimizer  |  Explorer uses data provided by Compute Optimizer to display a count of **Under provisioned** and **Over provisioned** EC2 instances, optimization findings, on-demand pricing details, and recommendations for instance type and price. To view this data in Explorer, turn on Compute Optimizer. For more information, see [Getting started with AWS Compute Optimizer](https://docs.aws.amazon.com/compute-optimizer/latest/ug/getting-started.html).  | 
|  Turn on AWS Security Hub CSPM  |  Explorer uses data provided by Security Hub CSPM to populate widgets with information about your security findings. To view this data in Explorer, turn on Security Hub CSPM integration. For more information, see [What is AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html).  | 

# Configuring roles and permissions for Systems Manager Explorer

Integrated Setup automatically creates and configures AWS Identity and Access Management (IAM) roles for AWS Systems Manager Explorer and AWS Systems Manager OpsCenter. If you completed Integrated Setup, then you don't need to perform any additional tasks to configure roles and permissions for Explorer. However, you must configure permissions for OpsCenter, as described later in this topic.

Integrated Setup creates and configures the following roles for working with Explorer and OpsCenter.
+ `AWSServiceRoleForAmazonSSM`: Provides access to AWS resources managed or used by Systems Manager.
+ `OpsItem-CWE-Role`: Allows CloudWatch Events and EventBridge to create OpsItems in response to common events.
+ `AWSServiceRoleForAmazonSSM_AccountDiscovery`: Allows Systems Manager to call other AWS services to discover AWS account information when synchronizing data. For more information about this role, see [Using roles to collect AWS account information for OpsCenter and Explorer](using-service-linked-roles-service-action-2.md).
+ `AmazonSSMExplorerExport`: Allows Explorer to export OpsData to a comma-separated value (CSV) file.

If you configure Explorer to display data from multiple accounts and Regions by using AWS Organizations and a resource data sync, then Systems Manager creates the `AWSServiceRoleForAmazonSSM_AccountDiscovery` service-linked role. Systems Manager uses this role to get information about your AWS accounts in AWS Organizations. The role uses the following permissions policy.

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "organizations:DescribeAccount",
            "organizations:DescribeOrganization",
            "organizations:ListAccounts",
            "organizations:ListAWSServiceAccessForOrganization",
            "organizations:ListChildren",
            "organizations:ListParents"
         ],
         "Resource":"*"
      }
   ]
}
```

------

For more information about the `AWSServiceRoleForAmazonSSM_AccountDiscovery` role, see [Using roles to collect AWS account information for OpsCenter and Explorer](using-service-linked-roles-service-action-2.md).

## Configuring permissions for Systems Manager OpsCenter


After you complete Integrated Setup, you must configure user, group, or role permissions so that users can perform actions in OpsCenter.

**Before you begin**  
You can configure OpsCenter to create and manage OpsItems for a single account or across multiple accounts. If you configure OpsCenter to create and manage OpsItems across multiple accounts, you can use either the Systems Manager delegated administrator account or the AWS Organizations management account to manually create, view, or edit OpsItems in other accounts. For more information about the Systems Manager delegated administrator account, see [Configuring a delegated administrator for Explorer](Explorer-setup-delegated-administrator.md).

If you configure OpsCenter for a single account, you can only view or edit OpsItems in the account where OpsItems were created. You can't share or transfer OpsItems across AWS accounts. For this reason, we recommend that you configure permissions for OpsCenter in the AWS account that is used to run your AWS workloads. You can then create users or groups in that account. In this way, multiple operations engineers or IT professionals can create, view, and edit OpsItems in the same AWS account.

Explorer and OpsCenter use the following API operations. You can use all features of Explorer and OpsCenter if your user, group, or role has access to these actions. You can also create more restrictive access, as described later in this section.
+  [CreateOpsItem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateOpsItem.html) 
+  [CreateResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateResourceDataSync.html) 
+  [DescribeOpsItems](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeOpsItems.html) 
+  [DeleteResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DeleteResourceDataSync.html) 
+  [GetOpsItem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetOpsItem.html) 
+  [GetOpsSummary](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetOpsSummary.html) 
+  [ListResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ListResourceDataSync.html) 
+  [UpdateOpsItem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateOpsItem.html) 
+  [UpdateResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateResourceDataSync.html) 

If you prefer, you can specify read-only permission by adding the following inline policy to your user, group, or role.

------
#### [ JSON ]


```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetOpsItem",
        "ssm:GetOpsSummary",
        "ssm:DescribeOpsItems",
        "ssm:GetServiceSetting",
        "ssm:ListResourceDataSync"
      ],
      "Resource": "*"
    }
  ]
}
```

------

For more information about creating and editing IAM policies, see [Creating IAM Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*. For information about how to assign this policy to an IAM group, see [Attaching a Policy to an IAM Group](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_attach-policy.html). 

To grant full access, create a policy from the following JSON and add it to your users, groups, or roles: 

------
#### [ JSON ]


```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetOpsItem",
        "ssm:UpdateOpsItem",
        "ssm:DescribeOpsItems",
        "ssm:CreateOpsItem",
        "ssm:CreateResourceDataSync",
        "ssm:DeleteResourceDataSync",
        "ssm:ListResourceDataSync",
        "ssm:UpdateResourceDataSync"
      ],
      "Resource": "*"
    }
  ]
}
```

------
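Before attaching either policy, it can help to confirm that the document parses as valid JSON and grants only the intended `ssm:` actions. The following sketch uses plain Python (no AWS SDK required); the validation logic is illustrative, not an AWS tool:

```python
import json

# The full-access policy from this section, as a string
# (in practice you might read it from a file before attaching it).
policy_json = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetOpsItem",
        "ssm:UpdateOpsItem",
        "ssm:DescribeOpsItems",
        "ssm:CreateOpsItem",
        "ssm:CreateResourceDataSync",
        "ssm:DeleteResourceDataSync",
        "ssm:ListResourceDataSync",
        "ssm:UpdateResourceDataSync"
      ],
      "Resource": "*"
    }
  ]
}
"""

policy = json.loads(policy_json)  # raises ValueError if the JSON is malformed
assert policy["Version"] == "2012-10-17"
for statement in policy["Statement"]:
    for action in statement["Action"]:
        # Every action in this policy should target Systems Manager
        assert action.startswith("ssm:"), f"Unexpected action: {action}"
print("policy OK:", len(policy["Statement"][0]["Action"]), "actions")
```

A check like this catches copy-paste errors (such as a stray comma or a misspelled action prefix) before IAM rejects or, worse, silently ignores part of the policy.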

Depending on the identity application that you are using in your organization, you can select any of the following options to configure user access.

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

### Restricting access to OpsItems by using tags


You can also restrict access to OpsItems by using an inline IAM policy that specifies tags. Here is an example that specifies a tag key of *Department* and a tag value of *Finance*. With this policy, the user can call the `GetOpsItem` API operation only for OpsItems that were previously tagged with Key=Department and Value=Finance. Users can't view any other OpsItems.

------
#### [ JSON ]


```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetOpsItem"
      ],
      "Resource": "*",
      "Condition": { "StringEquals": { "ssm:resourceTag/Department": "Finance" } }
    }
  ]
}
```

------

Here is an example that specifies API operations for viewing and updating OpsItems. This policy also specifies two sets of tag key-value pairs: Department-Finance and Project-Unity.

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "ssm:GetOpsItem",
            "ssm:UpdateOpsItem"
         ],
         "Resource":"*",
         "Condition":{
            "StringEquals":{
               "ssm:resourceTag/Department":"Finance",
               "ssm:resourceTag/Project":"Unity"
            }
         }
      }
   ]
}
```

------

For information about adding tags to an OpsItem, see [Create OpsItems manually](OpsCenter-manually-create-OpsItems.md).
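The effect of the tag conditions above can be illustrated with a toy check. This is an illustration only, not IAM's real policy-evaluation engine, and the tag dictionaries are hypothetical OpsItem tags:

```python
# Toy illustration of the StringEquals tag conditions in the policy above.
# Real access control is enforced by IAM; code like this never runs in AWS.
REQUIRED_TAGS = {"Department": "Finance", "Project": "Unity"}

def allowed(ops_item_tags: dict) -> bool:
    """Return True only if the OpsItem carries every required tag key-value pair."""
    return all(ops_item_tags.get(key) == value for key, value in REQUIRED_TAGS.items())

print(allowed({"Department": "Finance", "Project": "Unity"}))  # matches both tags
print(allowed({"Department": "Finance"}))                      # missing the Project tag
```

As with `StringEquals`, every listed key-value pair must match; an OpsItem missing any one of the tags is denied.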

# Understanding default EventBridge rules created by Integrated Setup


During the integrated setup process for Explorer and OpsCenter, you can choose to enable a number of default rules that are based on events detected by Amazon EventBridge. When these events are detected, the system automatically creates OpsItems in AWS Systems Manager OpsCenter. 

For example, the rule `SSMOpsItems-Autoscaling-instance-termination-failure` results in an OpsItem being created when the termination of an Amazon EC2 Auto Scaling instance fails.

The rule `SSMOpsItems-SSM-maintenance-window-execution-failed` results in an OpsItem being created when a Systems Manager maintenance window fails to complete successfully.

For setup instructions and descriptions of all the EventBridge rules you can enable during the setup process, see [Set up OpsCenter](OpsCenter-setup.md).

If you don't want EventBridge to create OpsItems for these events, you can choose not to enable this option in Integrated Setup. If you prefer, you can specify OpsCenter as the target of specific EventBridge events. For more information, see [Configure EventBridge rules to create OpsItems](OpsCenter-automatically-create-OpsItems-2.md). 
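As a sketch, the event pattern for a rule that routes EC2 instance state changes to OpsCenter might look like the following. The pattern is illustrative (the `detail-type` and field names follow the standard EC2 state-change event); consult the EventBridge documentation for the exact events each service emits:

```python
import json

# Illustrative EventBridge event pattern: match EC2 instances entering the
# "stopped" or "terminated" state, so a rule with an OpsItem target could
# open an OpsItem for each occurrence.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["stopped", "terminated"]},
}

# This JSON is what you would paste into the rule's event pattern editor.
print(json.dumps(event_pattern, indent=2))
```

Narrowing the `detail.state` list is how you keep the rule from creating OpsItems for routine state transitions such as `pending` or `running`.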

You can disable a default rule or change its category and severity level on the **Settings** page by choosing **OpsCenter**, **Settings**, and then choosing **Edit** in the **OpsItem rules** area. 

You can also edit the category or severity assigned to an individual OpsItem created from these rules in the Systems Manager console. For information, see [Editing an OpsItem](OpsCenter-working-with-OpsItems-editing-details.md). 

![\[Default rules for creating OpsItems in Systems Manager Explorer\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/explorer-default-rules.png)


# Configuring a delegated administrator for Explorer


If you aggregate AWS Systems Manager Explorer data from multiple AWS Regions and accounts by using resource data sync with AWS Organizations, then we recommend that you configure a delegated administrator for Explorer. A delegated administrator improves Explorer security in the following ways.
+ You limit the ability to create or delete multi-account and multi-Region resource data syncs to a single AWS account.
+ You no longer need to be logged into the AWS Organizations management account to administer resource data syncs in Explorer.

A delegated administrator can call the following Explorer resource data sync API operations from the console, the SDKs, the AWS Command Line Interface (AWS CLI), or AWS Tools for Windows PowerShell: 
+ [CreateResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateResourceDataSync.html)
+ [DeleteResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DeleteResourceDataSync.html)
+ [ListResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ListResourceDataSync.html)
+ [UpdateResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateResourceDataSync.html)

A delegated administrator can search, filter, and aggregate Explorer data from the console or by using programmatic tools such as the SDK, the AWS CLI, or AWS Tools for Windows PowerShell. Search, filter, and data aggregation use the [GetOpsSummary](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetOpsSummary.html) API operation.

A delegated administrator can create a maximum of five resource data syncs for either an entire organization or a subset of organizational units. Resource data syncs created by a delegated administrator are only available in the delegated administrator account. You can't view the syncs or the aggregated data in the AWS Organizations management account.

**Note**  
You can't use a delegated administrator account to create a resource data sync in [opt-in AWS Regions](https://docs.aws.amazon.com/global-infrastructure/latest/regions/aws-regions.html#regions-opt-in-status). You must use an AWS Organizations management account.

For more information about resource data sync, see [Setting up Systems Manager Explorer to display data from multiple accounts and Regions](Explorer-resource-data-sync.md). For more information about AWS Organizations, see [What is AWS Organizations?](https://docs.aws.amazon.com/organizations/latest/userguide/) in the *AWS Organizations User Guide*.

**Topics**
+ [Before you begin](#Explorer-setup-delegated-administrator-before-you-begin)
+ [Configure an Explorer delegated administrator](Explorer-setup-delegated-administrator-configure.md)
+ [Deregister an Explorer delegated administrator](Explorer-setup-delegated-administrator-deregister.md)

## Before you begin


The following list includes important information about Explorer delegated administration.
+ You can delegate only one account for Explorer administration.
+ The account ID that you specify as an Explorer delegated administrator must be listed as a member account in AWS Organizations. For more information, see [Creating an AWS account in your organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_create.html) in the *AWS Organizations User Guide*.
+ A delegated administrator can use all Explorer resource data sync API operations in the console or by using programmatic tools such as the SDK, the AWS Command Line Interface (AWS CLI), or AWS Tools for Windows PowerShell. Resource data sync API operations include the following: [CreateResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateResourceDataSync.html), [DeleteResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DeleteResourceDataSync.html), [ListResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ListResourceDataSync.html), and [UpdateResourceDataSync](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateResourceDataSync.html).
+ A delegated administrator can search, filter, and aggregate Explorer data in the console or by using programmatic tools such as the SDK, the AWS CLI, or AWS Tools for Windows PowerShell. Search, filter, and data aggregation use the [GetOpsSummary](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetOpsSummary.html) API operation.
+ Resource data syncs created by a delegated administrator are only available in the delegated administrator account. You can't view the syncs or the aggregated data in the AWS Organizations management account.
+ A delegated administrator can create a maximum of five resource data syncs.
+ A delegated administrator can create a resource data sync for either an entire organization in AWS Organizations or a subset of organizational units.

# Configure an Explorer delegated administrator


Use the following procedure to register an Explorer delegated administrator.

**To register an Explorer delegated administrator**

1. Sign in to your AWS Organizations management account.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Choose **Settings**.

1. In the **Delegated administrator for Explorer** section, verify that you have configured the required service-linked role and service access options. If necessary, choose the **Create role** and **Enable access** buttons to configure these options.

1. For **Account ID**, enter the AWS account ID. This account must be a member account in AWS Organizations.

1. Choose **Register delegated administrator**.

The delegated administrator now has access to the **Include all accounts from my AWS Organizations configuration** and **Select organization units in AWS Organizations** options on the **Create resource data sync** page. 

# Deregister an Explorer delegated administrator


Use the following procedure to deregister an Explorer delegated administrator. A delegated administrator account can only be deregistered by the AWS Organizations management account. When a delegated administrator account is deregistered, the system deletes all AWS Organizations resource data syncs created by the delegated administrator.

**To deregister an Explorer delegated administrator**

1. Sign in to your AWS Organizations management account.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Choose **Settings**.

1. In the **Delegated administrator for Explorer** section, choose **Deregister**. The system displays a warning.

1. Enter the account ID and choose **Remove**.

The account no longer has access to the AWS Organizations resource data sync API operations. The system deletes all AWS Organizations resource data syncs created by the account.

# Setting up Systems Manager Explorer to display data from multiple accounts and Regions


AWS Systems Manager uses an integrated setup experience to help you get started with AWS Systems Manager Explorer *and* AWS Systems Manager OpsCenter. After completing Integrated Setup, Explorer and OpsCenter automatically synchronize data. More specifically, these tools synchronize OpsData and OpsItems for the AWS account and AWS Region you used when you completed Integrated Setup. If you want to aggregate OpsData and OpsItems from other accounts and Regions, you must create a resource data sync, as described in this topic.

**Note**  
For more information about Integrated Setup, see [Getting started with Systems Manager Explorer and OpsCenter](Explorer-setup.md).

**Topics**
+ [Understanding resource data syncs for Explorer](Explorer-resource-data-sync-understanding.md)
+ [Understanding multiple account and Region resource data syncs](Explorer-resource-data-sync-multiple-accounts-and-regions.md)
+ [Creating a resource data sync](Explorer-resource-data-sync-configuring-multi.md)

# Understanding resource data syncs for Explorer


Resource data sync for Explorer offers two aggregation options:
+ *Single-account/multiple-Region*: You can configure Explorer to aggregate OpsItems and OpsData from multiple AWS Regions, but the data set is limited to the current AWS account.
+ *Multiple-account/multiple-Region*: You can configure Explorer to aggregate data from multiple AWS Regions and accounts. This option requires that you set up and configure AWS Organizations. After you do so, you can aggregate data in Explorer by organizational unit (OU) or for an entire organization. Systems Manager aggregates the data into the AWS Organizations management account before displaying it in Explorer. For more information, see [What is AWS Organizations?](https://docs.aws.amazon.com/organizations/latest/userguide/) in the *AWS Organizations User Guide*.

**Warning**  
If you configure Explorer to aggregate data from an organization in AWS Organizations, the system enables OpsData in all member accounts in the organization. Enabling OpsData sources in all member accounts increases the number of calls to OpsCenter APIs like [CreateOpsItem](https://docs.aws.amazon.com//systems-manager/latest/APIReference/API_CreateOpsItem.html) and [GetOpsSummary](https://docs.aws.amazon.com//systems-manager/latest/APIReference/API_GetOpsSummary.html). You are charged for calls to these API actions.

The following diagram shows a resource data sync configured to work with AWS Organizations. In this scenario, the user has two accounts defined in AWS Organizations. Resource data sync aggregates data from both accounts and multiple AWS Regions into the AWS Organizations management account where it's then displayed in Explorer.

![\[Resource data sync for Systems Manager Explorer\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/ExplorerSyncFromSource.png)


# Understanding multiple account and Region resource data syncs


This section describes important details about multiple account and multiple Region resource data syncs that use AWS Organizations. Specifically, the information in this section applies if you choose one of the following options in the **Create resource data sync** page:
+ Include all accounts from my AWS Organizations configuration
+ Select organization units in AWS Organizations

If you don't plan to use one of these options, you can skip this section.

When you create a resource data sync in the Systems Manager console, if you choose one of the AWS Organizations options, then Systems Manager automatically allows all OpsData sources in the selected Regions for all AWS accounts in your organization (or in the selected organizational units). For example, even if you haven't turned on Explorer in a Region, if you select an AWS Organizations option for your resource data sync, then Systems Manager automatically collects OpsData from that Region. To create a resource data sync without allowing OpsData sources, specify **EnableAllOpsDataSources** as false when creating the data sync. For more information, see the `EnableAllOpsDataSources` parameter details for the [ResourceDataSyncSource](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_ResourceDataSyncSource.html) data type in the *AWS Systems Manager API Reference*.
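As a sketch, you can also create this kind of sync from the AWS CLI with the `create-resource-data-sync` command, passing `EnableAllOpsDataSources` as false in the sync source. The sync name and Region list below are placeholder values; run the command while signed in to the AWS Organizations management account or a delegated administrator account.

```
# Create an Explorer resource data sync for an entire organization
# without automatically allowing all OpsData sources.
# MyOrgExplorerSync and the Regions are placeholders.
aws ssm create-resource-data-sync \
    --sync-name MyOrgExplorerSync \
    --sync-type SyncFromSource \
    --sync-source '{
        "SourceType": "AwsOrganizations",
        "AwsOrganizationsSource": { "OrganizationSourceType": "EntireOrganization" },
        "SourceRegions": ["us-east-1", "us-east-2"],
        "IncludeFutureRegions": false,
        "EnableAllOpsDataSources": false
    }'
```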

If you don't choose one of the AWS Organizations options for a resource data sync, then you must complete Integrated Setup in each account and Region where you want Explorer to access data. If you don't, Explorer won't display OpsData and OpsItems for those accounts and Regions in which you didn't complete Integrated Setup.

If you add a child account to your organization, Explorer automatically allows all OpsData sources for the account. If, at a later time, you remove the child account from your organization, Explorer continues to collect OpsData from the account. 

If you update an existing resource data sync that uses one of the AWS Organizations options, the system prompts you to approve collection of all OpsData sources for all accounts and Regions affected by the change.

If you add a new service to your AWS account, and if Explorer collects OpsData for that service, Systems Manager automatically configures Explorer to collect that OpsData. For example, if your organization didn't use AWS Trusted Advisor when you previously created a resource data sync, but your organization signs up for this service, Explorer automatically updates your resource data syncs to collect this OpsData.

**Important**  
Note the following important information about multiple account and Region resource data syncs:  
Deleting a resource data sync doesn't turn off an OpsData source in Explorer. 
To view OpsData and OpsItems from multiple accounts, you must have the AWS Organizations **All features** mode turned on and you must be signed into the AWS Organizations management account.
Most AWS Regions are active by default for your AWS account, but certain Regions are activated only when you manually select them. These Regions are known as [opt-in Regions](https://docs.aws.amazon.com/global-infrastructure/latest/regions/aws-regions.html#regions-opt-in-status). By default, Explorer cross-account/cross-Region resource data syncs don't support data aggregation in opt-in Regions. Support was added for the following opt-in Regions on June 30, 2025.   
+ Europe (Milan)
+ Africa (Cape Town)
+ Middle East (Bahrain)
+ Asia Pacific (Hong Kong)
Note that you can't use a delegated administrator account to create a resource data sync in opt-in Regions. You must use an AWS Organizations management account.

# Creating a resource data sync


Before you configure a resource data sync for Explorer, note the following details.
+ Explorer supports a maximum of five resource data syncs.
+ After you create a resource data sync for a Region, you can't change the *account options* for that sync. For example, if you create a sync in the us-east-2 (Ohio) Region and you choose the **Include only the current account** option, you can't edit that sync later and choose the **Include all accounts from my AWS Organizations configuration** option. Instead, you must delete the first resource data sync, and create a new one.
+ OpsData viewed in Explorer is read-only.

Use the following procedure to create a resource data sync for Explorer.

**To create a resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Choose **Settings**.

1. In the **Configure resource data sync** section, choose **Create resource data sync**.

1. For **Resource data sync name**, enter a name.

1. In the **Add accounts** section, choose an option.
**Note**  
To use either of the AWS Organizations options, you must be logged into the AWS Organizations management account or you must be logged into an Explorer delegated administrator account. For more information about the delegated administrator account, see [Configuring a delegated administrator for Explorer](Explorer-setup-delegated-administrator.md).

1. In the **Regions to include** section, choose one of the following options.
   + Choose **All current and future regions** to automatically sync data from all current AWS Regions and any new Regions that come online in the future.
   + Choose **All regions** to automatically sync data from all current AWS Regions.
   + Individually choose Regions that you want to include.

1. Choose **Create resource data sync**.

The system can take several minutes to populate Explorer with data after you create a resource data sync. You can view the sync by choosing it from the **Select a resource data sync** list in Explorer.
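To confirm that a sync was created, you can list Explorer syncs from the AWS CLI; the `LastSyncStatusMessage` field in the response reports the status of the most recent sync attempt.

```
# List resource data syncs of type SyncFromSource (the type Explorer uses)
aws ssm list-resource-data-sync --sync-type SyncFromSource
```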

# Using Explorer

This section includes information about how to customize AWS Systems Manager Explorer by changing the widget layout and by changing the data that displays in the dashboard.

**Topics**
+ [Editing EventBridge rules created for Explorer](Explorer-using-editing-default-rules.md)
+ [Editing Systems Manager Explorer data sources](Explorer-using-editing-data-sources.md)
+ [Customizing the Explorer display](Explorer-using-filters.md)
+ [Receiving findings from AWS Security Hub CSPM in Explorer](explorer-securityhub-integration.md)
+ [Deleting a Systems Manager Explorer resource data sync](Explorer-using-resource-data-sync-delete.md)

# Editing EventBridge rules created for Explorer


When you complete Integrated Setup, the system allows more than a dozen rules in Amazon EventBridge. These rules automatically create OpsItems in AWS Systems Manager OpsCenter. AWS Systems Manager Explorer then displays aggregated information about the OpsItems.

Each rule includes a preset **Category** and **Severity** value. When the system creates OpsItems from an event, it automatically assigns the preset **Category** and **Severity**.

**Important**  
You can't edit the **Category** and **Severity** values for default rules, but you can edit these values on OpsItems created from the default rules. 

![\[Default rules for creating OpsItems in Systems Manager Explorer\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/explorer-default-rules.png)


**To edit default rules for creating OpsItems**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Choose **Settings**.

1. In the **OpsItems rules** section, choose **Edit**.

1. Expand **CWE rules**.

1. Clear the check box beside those rules that you don't want to use.

1. Use the **Category** and **Severity** lists to change this information for a rule. 

1. Choose **Save**.

Your changes take effect the next time the system creates an OpsItem.

# Editing Systems Manager Explorer data sources

For AWS Regions that are available [by default](https://docs.aws.amazon.com/global-infrastructure/latest/regions/aws-regions.html), AWS Systems Manager Explorer displays data from the following sources. You can edit Explorer settings to add or remove data sources:
+ Amazon Elastic Compute Cloud (Amazon EC2)
+ AWS Systems Manager Inventory
+ AWS Systems Manager OpsCenter OpsItems
+ AWS Systems Manager Patch Manager patch compliance
+ AWS Systems Manager State Manager association compliance
+ AWS Trusted Advisor
+ AWS Compute Optimizer
+ AWS Support Center cases
+ AWS Config rule and resource compliance
+ AWS Security Hub CSPM findings

**Note**  
For the Asia Pacific (Osaka) Region, Explorer doesn't display data from AWS Compute Optimizer and AWS Security Hub CSPM findings.

For the [AWS opt-in Regions](https://docs.aws.amazon.com/global-infrastructure/latest/regions/aws-regions.html#regions-opt-in-status), Explorer displays data from the following sources:
+ Amazon Elastic Compute Cloud (Amazon EC2)
+ AWS Systems Manager Inventory
+ AWS Systems Manager OpsCenter OpsItems
+ AWS Systems Manager Patch Manager patch compliance
+ AWS Systems Manager State Manager association compliance
+ AWS Trusted Advisor
+ AWS Support Center cases
+ AWS Config rule and resource compliance

**Note**  
To view Support Center cases in Explorer, you must have either an Enterprise or Business Support plan with AWS Support.
You can't configure Explorer to stop displaying OpsCenter OpsItem data.

**Before you begin**  
Verify that you set up and configured services that populate Explorer widgets with data. For more information, see [Setting up related services for Explorer](Explorer-setup-related-services.md).

**To edit data sources**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Choose **Settings**, and then choose the **Configure Dashboard** tab.

1. In the **OpsData sources** section, in the **Status** column, turn on or turn off sources according to the data you want to view.
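OpsData sources are backed by Systems Manager service settings. For example, this guide uses the setting ID `/ssm/opsdata/SecurityHub` for the Security Hub CSPM source; other sources use different setting IDs. A sketch for checking a source's current state from the AWS CLI:

```
# Inspect the current state of the Security Hub CSPM OpsData source.
# The setting ID comes from this guide; substitute your Region.
aws ssm get-service-setting \
    --setting-id /ssm/opsdata/SecurityHub \
    --region us-east-1
```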

# Customizing the Explorer display


You can customize widget layout in AWS Systems Manager Explorer by using a drag-and-drop capability. You can also customize the OpsData and OpsItems displayed in Explorer by using filters, as described in this topic. 

Before you customize widget layout, verify that the widgets you want to view are currently displayed in Explorer. To view some widgets in Explorer (such as the AWS Config compliance widget), you must enable them on the **Configure dashboard** page. 

**To enable widgets to display in Explorer**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Choose **Dashboard actions**, **Configure dashboard**.

1. Choose the **Configure Dashboard** tab.

1. Either choose **Enable all** or turn on an individual widget or data source.

1. Choose **Explorer** to view your changes.

To customize widget layout in Explorer, choose a widget that you want to move. Click and hold the name of the widget and then drag it to its new location.

![\[Moving a widget in Systems Manager Explorer\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/explorer-customize.png)


Repeat this process for each widget that you want to reposition.

If you decide that you don't like the new layout, choose **Reset layout** to move all widgets back to their original location.

## Using filters to change the data displayed in Explorer


By default, Explorer displays data for the current AWS account and the current Region. If you create one or more resource data syncs, you can use filters to change which sync is active. You can then choose to display data for a specific Region or all Regions. You can also use the Search bar to filter on different OpsItem and tag key criteria.

**To change the data displayed in Explorer by using filters**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. In the **Filter** section, use the **Select a resource data sync** list to choose a sync.

1. Use the **Regions** list to choose either a specific AWS Region or choose **All Regions**.

1. Choose the Search bar, and then choose the criteria on which to filter the data.  
![\[Using the filters Search bar in Systems Manager Explorer\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/explorer-filters.png)

1. Press Enter.

Explorer retains the filter options you selected if you close and reopen the page.

# Receiving findings from AWS Security Hub CSPM in Explorer

[AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides a comprehensive view of your security state in AWS. The service collects security data, called *findings*, from across AWS accounts, services, and supported third-party products. Security Hub CSPM findings can help you check your environment against security industry standards and best practices, analyze your security trends, and identify the highest priority security issues.

Security Hub CSPM sends findings to Amazon EventBridge, which uses an event rule to send the findings to Explorer. After you enable integration, as described here, you can view Security Hub CSPM findings in an Explorer widget and view finding details in OpsCenter OpsItems. The widget provides a summary of all Security Hub CSPM findings based on severity. New findings in Security Hub CSPM are usually visible in Explorer within seconds of being created.

**Warning**  
Note the following important information:  
Explorer is integrated with OpsCenter, a tool in Systems Manager. After you enable Explorer integration with Security Hub CSPM, OpsCenter automatically creates OpsItems for Security Hub CSPM findings. Depending on your AWS environment, enabling integration can result in large numbers of OpsItems, at a cost.   
Before you continue, read about OpsCenter integration with Security Hub CSPM. The topic includes specific details about how changes and updates to findings and OpsItems are charged to your account. For more information, see [Understanding OpsCenter integration with AWS Security Hub CSPM](OpsCenter-applications-that-integrate.md#OpsCenter-integrate-with-security-hub). For OpsCenter pricing information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).
If you create a resource data sync in Explorer while logged into the administrator account, Security Hub CSPM integration is automatically enabled for the administrator and all member accounts in the sync. Once enabled, OpsCenter automatically creates OpsItems for Security Hub CSPM findings, at a cost. For more information about creating a resource data sync, see [Setting up Systems Manager Explorer to display data from multiple accounts and Regions](Explorer-resource-data-sync.md).

## Types of findings that Explorer receives


Explorer receives [all findings](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cwe-integration-types.html#securityhub-cwe-integration-types-all-findings) from Security Hub CSPM. You can see all findings based on severity in the Explorer widget when you turn on the Security Hub CSPM default settings. By default, Explorer creates OpsItems for critical and high severity findings. You can manually configure Explorer to create OpsItems for medium and low severity findings.

Though Explorer doesn't create OpsItems for informational findings, you can view informational operations data (OpsData) in the Security Hub CSPM findings summary widget. Explorer creates OpsData for all findings regardless of severity. For more information about Security Hub CSPM severity levels, see [Severity](https://docs.aws.amazon.com//securityhub/1.0/APIReference/API_Severity.html) in the *AWS Security Hub API Reference*.

## Enabling integration


This section describes how to enable and configure Explorer to start receiving Security Hub CSPM findings.

**Before you begin**  
Complete the following tasks before you configure Explorer to start receiving Security Hub CSPM findings.
+ Enable and configure Security Hub CSPM. For more information, see [Setting up Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-settingup.html) in the *AWS Security Hub User Guide*.
+ Log in to the AWS Organizations management account. Systems Manager requires access to AWS Organizations to create OpsItems from Security Hub CSPM findings. After you log in to the management account, you're prompted to select the **Enable access** button on the Explorer **Configure dashboard** tab, as described in the following procedure. If you don't log in to the AWS Organizations management account, you can't allow access and Explorer can't create OpsItems from Security Hub CSPM findings.

**To start receiving Security Hub CSPM findings**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Select **Settings**.

1. Select the **Configure dashboard** tab.

1. Select **AWS Security Hub CSPM**.

1. Select the **Disabled** slider to turn on **AWS Security Hub CSPM**.

   Critical and high severity findings are displayed by default. To display medium and low severity findings, select the **Disabled** slider next to **Medium,Low**.

1. In the **OpsItems created by Security Hub CSPM findings** section, choose **Enable access**. If you don't see this button, log in to the AWS Organizations management account and return to this page to select the button.

## How to view findings from Security Hub CSPM


The following procedure describes how to view Security Hub CSPM findings.

**To view Security Hub CSPM findings**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Find the **AWS Security Hub CSPM findings summary** widget. This displays your Security Hub CSPM findings. You can select a severity level to view a detailed description of the corresponding OpsItem.

## How to stop receiving findings


The following procedure describes how to stop receiving Security Hub CSPM findings.

**To stop receiving Security Hub CSPM findings**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Select **Settings**.

1. Select the **Configure dashboard** tab.

1. Select the **Enabled** slider to turn off **AWS Security Hub CSPM**.

**Important**  
If the option to disable Security Hub CSPM findings is grayed out in the console, you can disable this setting by running the following command in the AWS CLI. You must run the command while logged into either the AWS Organizations management account or the Systems Manager delegated administrator account. For the `region` parameter, specify the AWS Region where you want to stop receiving Security Hub CSPM findings in Explorer.   

```
aws ssm update-service-setting --setting-id /ssm/opsdata/SecurityHub --setting-value Disabled --region AWS Region
```
Here's an example.  

```
aws ssm update-service-setting --setting-id /ssm/opsdata/SecurityHub --setting-value Disabled --region us-east-1
```

# Deleting a Systems Manager Explorer resource data sync

In AWS Systems Manager Explorer, you can aggregate OpsData and OpsItems from other accounts and Regions by creating a resource data sync.

You can't change the account options for a resource data sync. For example, if you created a sync in the us-east-2 (Ohio) Region and you chose the **Include only the current account** option, you can't edit that sync later and choose the **Include all accounts from my AWS Organizations configuration** option. Instead, you must delete the resource data sync, and create a new one, as described in the following procedure.

**To delete a resource data sync**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Choose **Settings**.

1. In the **Configure resource data sync** section, choose the resource data sync that you want to delete.

1. Choose **Delete**.
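You can also delete a sync from the AWS CLI with the `delete-resource-data-sync` command; the sync name below is a placeholder.

```
# Delete an Explorer resource data sync by name
aws ssm delete-resource-data-sync \
    --sync-name MyOrgExplorerSync \
    --sync-type SyncFromSource
```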

# Exporting OpsData from Systems Manager Explorer

You can export up to 5,000 OpsData items as a comma-separated values (.csv) file to an Amazon Simple Storage Service (Amazon S3) bucket from AWS Systems Manager Explorer. Explorer uses the [AWS-ExportOpsDataToS3](https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-aws-exportopsdatatos3.html) automation runbook to export OpsData. When you export OpsData, the system displays the automation runbook page where you can specify details, such as the assume role, the Amazon S3 bucket name, the SNS topic ARN, and the fields to export. 

**Topics**
+ [Step 1: Specifying an SNS topic](#Explorer-specify-SNS-topic)
+ [Step 2: (Optional) Configuring data export](#Explorer-configure-data-export)
+ [Step 3: Exporting OpsData](#Explorer-export-OpsData)

## Step 1: Specifying an SNS topic


When you configure data export, you must specify an Amazon Simple Notification Service (Amazon SNS) topic that exists in the same AWS Region where you want to export the data. Systems Manager sends a notification to the Amazon SNS topic when an export is complete. For information about creating an Amazon SNS topic, see [Creating an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-tutorial-create-topic.html).

## Step 2: (Optional) Configuring data export


You can configure data export settings from the **Settings** or **Export Ops Data to S3 Bucket** page. 

**To configure data export from Explorer**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Choose **Settings**.

1. In the **Configure data export** section, choose **Edit**.

1. To upload the data export file to an existing Amazon S3 bucket, choose **Select an existing S3 bucket** and choose the bucket from the list.

   To upload the data export file to a new Amazon S3 bucket, choose **Create a new S3 bucket** and enter the name that you want to use for the new bucket.
**Note**  
You can only edit the Amazon S3 bucket name and Amazon SNS topic ARN from the page where you configured those settings for the first time in Explorer. If you set up the Amazon S3 bucket and the Amazon SNS topic ARN from the **Settings** page, then you can only modify those settings from the **Settings** page. 

1. For **Select an Amazon SNS topic ARN**, choose the topic that you want to notify when the export is complete.

1. Choose **Create**.

## Step 3: Exporting OpsData


When you export Explorer data, Systems Manager creates an AWS Identity and Access Management (IAM) role named `AmazonSSMExplorerExportRole`. This role uses the following IAM policy.

------
#### [ JSON ]


```
{
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "OpsSummaryExportAutomationServiceRoleStatement1",
              "Effect": "Allow",
              "Action": [
                  "s3:PutObject"
              ],
              "Resource": [
                 "arn:aws:s3:::amzn-s3-demo-bucket/*"
              ]
          },
          {
              "Sid": "OpsSummaryExportAutomationServiceRoleStatement2",
              "Effect": "Allow",
              "Action": [
                  "s3:GetBucketAcl",
                  "s3:GetBucketLocation"
              ],
              "Resource": [
                 "arn:aws:s3:::amzn-s3-demo-bucket"
              ]
          },
          {
              "Sid": "OpsSummaryExportAutomationServiceRoleStatement3",
              "Effect": "Allow",
              "Action": [
                  "sns:Publish"
              ],
              "Resource": [
                  "arn:aws:sns:us-east-1:111122223333:SNSTopicName"
              ]
          },
          {
              "Sid": "OpsSummaryExportAutomationServiceRoleStatement4",
              "Effect": "Allow",
              "Action": [
                  "logs:DescribeLogGroups",
                  "logs:DescribeLogStreams"
              ],
              "Resource": [
                  "arn:aws:logs:us-east-1:111122223333:log-group:MyLogGroup"
              ]
          },
          {
              "Sid": "OpsSummaryExportAutomationServiceRoleStatement5",
              "Effect": "Allow",
              "Action": [
                  "logs:CreateLogGroup",
                  "logs:PutLogEvents",
                  "logs:CreateLogStream"
              ],
              "Resource": [
                  "*"
              ]
          },
          {
              "Sid": "OpsSummaryExportAutomationServiceRoleStatement6",
              "Effect": "Allow",
              "Action": [
                  "ssm:GetOpsSummary"
              ],
              "Resource": [
                  "*"
              ]
          }
      ]
  }
```

------

The role includes the following trust entity.

------
#### [ JSON ]


```
{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "OpsSummaryExportAutomationServiceRoleTrustPolicy",
          "Effect": "Allow",
          "Principal": {
            "Service": "ssm.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
```

------

**To export OpsData from Explorer**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Explorer**.

1. Choose a data link in an Explorer widget. For example, choose **Unresolved** or **Open issues** in the **OpsItems by status** widget. Or choose the **Critical** or **High** data link in the **OpsItems by severity** widget. Explorer opens the **OpsData** page for the selected data set.

1. Choose **Export Table**.
**Note**  
When you export OpsData for the first time, the system creates an assume role for the export. You can't modify the default assume role. 

1. For **S3 Bucket Name**, choose an existing bucket. You can choose **Create** to create an Amazon S3 bucket if needed. 

   If you can't change the S3 bucket name, it means that you configured the bucket name from the **Settings** page. You can only change the bucket name from the **Settings** page.
**Note**  
You can only edit the Amazon S3 bucket name and Amazon SNS topic ARN from the page where you configured those settings for the first time in Explorer. 

1. For **SNS Topic Arn**, choose an existing Amazon SNS topic ARN to notify when the download completes. 

   If you can't change the Amazon SNS topic ARN, it means that you configured the Amazon SNS topic ARN from the **Settings** page. You can only change the topic ARN from the **Settings** page.

1. (Optional) For **SNS Success Message**, specify a success message that you want to display when the export is successfully completed.

1. Choose **Submit**. The system navigates to the previous page and displays the message **Click to view status of export process. View details**. 

   You can choose **View details** to view the status of the runbook and progress in Systems Manager Automation.

You can now export OpsData from Explorer to the specified Amazon S3 bucket.

If you can't export data by using this procedure, verify that your user, group, or role includes the `iam:CreatePolicyVersion` and `iam:DeletePolicyVersion` actions. For information about adding these actions to your user, group, or role, see [Editing IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-edit.html) in the *IAM User Guide*.
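As an illustrative sketch, an identity-based policy statement granting those two actions might look like the following. The `Sid` and account ID are placeholders, and the broad `Resource` value is an assumption; scope it to the specific managed policies used in your environment.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowExplorerExportPolicyVersioning",
            "Effect": "Allow",
            "Action": [
                "iam:CreatePolicyVersion",
                "iam:DeletePolicyVersion"
            ],
            "Resource": "arn:aws:iam::111122223333:policy/*"
        }
    ]
}
```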

# Troubleshooting Systems Manager Explorer

This topic includes information about how to troubleshoot common problems with AWS Systems Manager Explorer.

 **Resource data sync stopped working** 

If your resource data sync has stopped working for an *opt-in AWS Region*, verify that the Region was opted into for both the AWS Organizations management account and the member account for which the sync is configured.

Most AWS Regions are active by default for your AWS account, but certain Regions are activated only when you manually select them. These Regions are known as [opt-in Regions](https://docs.aws.amazon.com/global-infrastructure/latest/regions/aws-regions.html#regions-opt-in-status). By default, Explorer cross-account/cross-Region resource data syncs don't support data aggregation in opt-in Regions. Support was added for the following opt-in Regions on June 30, 2025. 
+ Europe (Milan)
+ Africa (Cape Town)
+ Middle East (Bahrain)
+ Asia Pacific (Hong Kong)

Note that you can't use a delegated administrator account to create a resource data sync in opt-in Regions. You must use an AWS Organizations management account.

 **Not able to filter AWS resources in Explorer after updating tags on the Settings page** 

If you update tag keys or other data settings in Explorer, the system can take up to six hours to synchronize data based on your changes.

 **The AWS Organizations options on the *Create resource data sync* page are grayed out** 

The **Include all accounts from my AWS Organizations configuration** and **Select organization units in AWS Organizations** options on the **Create resource data sync** page are only available if you set up and configured AWS Organizations. If you set up and configured AWS Organizations, then either the AWS Organizations management account or an Explorer delegated administrator can create resource data syncs that use these options. 

For more information, see [Setting up Systems Manager Explorer to display data from multiple accounts and Regions](Explorer-resource-data-sync.md) and [Configuring a delegated administrator for Explorer](Explorer-setup-delegated-administrator.md).

 **Explorer doesn't display any data at all** 
+ Verify that you completed Integrated Setup in each account and Region where you want Explorer to access and display data. Otherwise, Explorer doesn't display OpsData and OpsItems for those accounts and Regions. For more information, see [Getting started with Systems Manager Explorer and OpsCenter](Explorer-setup.md).
+ When using Explorer to view data from multiple accounts and Regions, verify that you're logged into the AWS Organizations management account or the Explorer delegated administrator account. To view OpsData and OpsItems from multiple accounts and Regions, you must be signed in to this account.

 **Widgets about Amazon EC2 instances don't display data** 

If widgets about Amazon Elastic Compute Cloud (Amazon EC2) instances, such as the **Instance count**, **Managed instances**, and **Instance by AMI** widgets, don't display data, verify the following:
+ Wait several minutes. OpsData can take several minutes to display in Explorer after you complete Integrated Setup.
+ Verify that you configured the AWS Config configuration recorder. Explorer uses data provided by the configuration recorder to populate widgets with information about your EC2 instances. For more information, see [Managing the Configuration Recorder](https://docs.aws.amazon.com/config/latest/developerguide/stop-start-recorder.html).
+ Verify that the **Amazon EC2** OpsData source is active on the **Settings** page, and that more than six hours have passed since you activated the configuration recorder or made changes to your instances. Systems Manager can take up to six hours to display AWS Config data in Explorer EC2 widgets after either of those events.
+ Be aware that if an instance is stopped or terminated, Explorer stops showing it after 24 hours.
+ Verify that you're in the correct AWS Region where you configured your Amazon EC2 instances. Explorer doesn't display data about on-premises instances.
+ If you configured a resource data sync for multiple accounts and Regions, verify that you're signed in to the Organizations management account or the Explorer delegated administrator account.

 **Patch widget doesn't display data** 

The **Non-compliant instances for patching** widget only displays data about instances that aren't patch compliant. This widget displays no data if all of your instances are compliant. If you suspect that you have non-compliant instances, verify that you set up and configured Systems Manager patching, and use AWS Systems Manager Patch Manager to check your patch compliance. For more information, see [AWS Systems Manager Patch Manager](patch-manager.md).

 **Miscellaneous issues** 

**Explorer doesn't let you edit or remediate OpsItems**: OpsItems viewed across accounts or Regions are read-only. They can only be updated and remediated from their home account or Region.

# AWS Systems Manager OpsCenter

OpsCenter, a tool in AWS Systems Manager, provides a central location where operations engineers and IT professionals can view, investigate, and resolve operational work items (OpsItems) related to AWS resources. OpsCenter is designed to reduce mean time to resolution for issues impacting AWS resources. OpsCenter aggregates and standardizes OpsItems across services while providing contextual investigation data about each OpsItem, related OpsItems, and related resources. OpsCenter also provides Systems Manager Automation runbooks that you can use to quickly resolve issues. You can specify searchable, custom data for each OpsItem. You can also view automatically-generated summary reports about OpsItems by status and source. To get started with OpsCenter, open the [Systems Manager console](https://console.aws.amazon.com//systems-manager/opsitems). In the navigation pane, choose **OpsCenter**.

OpsCenter is integrated with Amazon EventBridge and Amazon CloudWatch. This means you can configure these services to automatically create an OpsItem in OpsCenter when a CloudWatch alarm enters the `ALARM` state or when EventBridge processes an event from any AWS service that publishes events. Configuring CloudWatch alarms and EventBridge events to automatically create OpsItems allows you to quickly diagnose and remediate issues with AWS resources from a single console.
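As a sketch of the alarm side of that integration, the request below builds `PutMetricAlarm` parameters whose alarm action is an OpsItem ARN, so entering `ALARM` state creates an OpsItem (the account ID, instance ID, and alarm name are placeholders; severity `2` and category `Performance` are example values):

```python
region, account = "us-east-1", "123456789012"  # placeholder account ID
severity, category = "2", "Performance"        # example severity/category

# OpsItem alarm-action ARN: arn:aws:ssm:<region>:<account>:opsitem:<severity>#CATEGORY=<category>
opsitem_action = f"arn:aws:ssm:{region}:{account}:opsitem:{severity}#CATEGORY={category}"

alarm_params = {
    "AlarmName": "HighCpuToOpsItem",  # hypothetical alarm name
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0abcd1234example"}],
    "Statistic": "Average",
    "Period": 300,
    "EvaluationPeriods": 2,
    "Threshold": 80.0,
    "ComparisonOperator": "GreaterThanThreshold",
    # When the alarm enters ALARM state, CloudWatch creates an OpsItem
    "AlarmActions": [opsitem_action],
}
# With boto3:  boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```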

To help you diagnose issues, each OpsItem includes contextually relevant information such as the name and ID of the AWS resource that generated the OpsItem, alarm or event details, alarm history, and an alarm timeline graph.

For the AWS resource, OpsCenter aggregates information from AWS Config, AWS CloudTrail logs, and Amazon CloudWatch Events, so you don't have to navigate across multiple console pages during your investigation.

The following list includes types of AWS resources and metrics for which customers configure CloudWatch alarms that create OpsItems.
+ Amazon DynamoDB: database read and write actions reach a threshold
+ Amazon EC2: CPU utilization reaches a threshold
+ AWS billing: estimated charges reach a threshold
+ Amazon EC2: an instance fails a status check
+ Amazon Elastic Block Store (EBS): disk space utilization reaches a threshold

The following list includes types of EventBridge rules customers configure to create OpsItems.
+ AWS Security Hub CSPM: security alert issued
+ DynamoDB: a throttling event
+ Amazon EC2 Auto Scaling: failure to launch an instance
+ Systems Manager: failure to run an automation
+ AWS Health: an alert for scheduled maintenance
+ EC2: instance state change from `Running` to `Stopped`

OpsCenter is also integrated with Amazon CloudWatch Application Insights for .NET and SQL Server. This means you can automatically create OpsItems for problems detected in your applications. You can also integrate OpsCenter with AWS Security Hub CSPM to aggregate and take action on your security, performance, and operational issues in Systems Manager. 

Operations engineers and IT professionals can create, view, and edit OpsItems by using the OpsCenter page in the AWS Systems Manager console, public API operations, the AWS Command Line Interface (AWS CLI), AWS Tools for Windows PowerShell, or the AWS SDKs. The OpsCenter public API operations also allow you to integrate OpsCenter with your case management systems and health dashboards.
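For example, the parameters for a programmatic `CreateOpsItem` call might look like the following boto3-style sketch (the title, operational data key, and SNS topic ARN are hypothetical):

```python
# Sketch of CreateOpsItem request parameters (all values are examples).
ops_item_params = {
    "Title": "High CPU on web tier",  # hypothetical title
    "Description": "CPUUtilization exceeded 80% for 10 minutes.",
    "Source": "EC2",
    "Priority": 2,
    "OperationalData": {
        # Type "SearchableString" makes the value searchable via
        # DescribeOpsItems; plain "String" keeps it private to GetOpsItem.
        "/runbook/hint": {"Value": "Scale out the web tier", "Type": "SearchableString"},
    },
    "Notifications": [
        # The SNS topic must be in the same Region as the OpsItem (placeholder ARN)
        {"Arn": "arn:aws:sns:us-east-1:123456789012:opsitem-updates"}
    ],
}
# With boto3:  boto3.client("ssm").create_ops_item(**ops_item_params)
```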

## How can OpsCenter benefit my organization?


OpsCenter provides a standard, unified experience for viewing, working on, and remediating issues related to AWS resources. This reduces the time it takes to remediate issues, investigate related issues, and train new operations engineers and IT professionals. It also reduces the number of manual errors introduced while managing and remediating issues. 

More specifically, OpsCenter offers the following benefits for operations engineers and organizations:
+ You no longer need to navigate across multiple console pages to view, investigate, and resolve OpsItems related to AWS resources. OpsItems are aggregated, across services, in a central location.
+ You can view service-specific and contextually relevant data for OpsItems that are automatically generated by CloudWatch alarms, EventBridge events, and CloudWatch Application Insights for .NET and SQL Server.
+ You can specify the Amazon Resource Name (ARN) of a resource related to an OpsItem. By specifying related resources, OpsCenter uses built-in logic to help you avoid creating duplicate OpsItems.
+ You can view details and resolution information about similar OpsItems.
+ You can quickly view information about and run Systems Manager Automation runbooks to resolve issues.

## What are the features of OpsCenter?

+ **Automated and manual OpsItem creation**

  OpsCenter is integrated with Amazon CloudWatch. This means you can configure CloudWatch to automatically create an OpsItem in OpsCenter when an alarm enters the `ALARM` state or when Amazon EventBridge processes an event from any AWS service that publishes events. You can also manually create OpsItems.

  OpsCenter is also integrated with Amazon CloudWatch Application Insights for .NET and SQL Server. This means you can automatically create OpsItems for problems detected in your applications.
+ **Detailed and searchable OpsItems**

  Each OpsItem includes multiple fields of information, including a title, ID, priority, description, the source of the OpsItem, and the date/time it was last updated. Each OpsItem also includes the following configurable features:
  + **Status**: Open, In progress, Resolved, or Open and In progress.
  + **Related resources**: A related resource is the impacted resource or the resource that initiated the EventBridge event that created the OpsItem. Each OpsItem includes a **Related resources** section where OpsCenter automatically lists the Amazon Resource Name (ARN) of the related resource. You can also manually specify ARNs of related resources. For some ARN types, OpsCenter automatically creates a deep link that displays details about the resource without having to visit other console pages to view that information. For example, if you specify the ARN of an EC2 instance, you can view all of the EC2-provided details about that instance in OpsCenter. You can manually add the ARNs of additional related resources. Each OpsItem can list a maximum of 100 related resource ARNs. For more information, see [Adding related resources to an OpsItem](OpsCenter-working-with-OpsItems-adding-related-resources.md).
  + **Related and Similar OpsItems**: With the **Related OpsItems** feature, you can specify the IDs of OpsItems that are in some way related to the current OpsItem. The **Similar OpsItem** feature automatically reviews OpsItem titles and descriptions and then lists other OpsItems that might be related or of interest to you.
  + **Searchable and private operational data**: Operational data is custom data that provides useful reference details about the OpsItem. For example, you can specify log files, error strings, license keys, troubleshooting tips, or other relevant data. You enter operational data as key-value pairs. The key has a maximum length of 128 characters. The value has a maximum size of 20 KB.

    This custom data is searchable, but with restrictions. For the **Searchable operational data** feature, all users with access to the OpsItem Overview page (as provided by the [DescribeOpsItems](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeOpsItems.html) API operation) can view and search on the specified data. For the **Private operational data** feature, the data is only viewable by users who have access to the OpsItem (as provided by the [GetOpsItem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetOpsItem.html) API operation).
  + **Deduplication**: By specifying related resources, OpsCenter uses built-in logic to help you avoid creating duplicate OpsItems. OpsCenter also includes a feature called **Operational insights**, which displays information about duplicate OpsItems. To further limit the number of duplicate OpsItems in your account, you can manually specify a deduplication string for an EventBridge event rule. For more information, see [Managing duplicate OpsItems](OpsCenter-working-deduplication.md). 
+ **Bulk edit OpsItems**: You can select multiple OpsItems in OpsCenter and edit one of the following fields: **Status**, **Priority**, **Severity**, **Category**. 
+ **Easy remediation using runbooks**

  Each OpsItem includes a **Runbooks** section with a list of Systems Manager Automation runbooks that you can use to automatically remediate common issues with AWS resources. If you open an OpsItem, choose an AWS resource for that OpsItem, and then choose the **Run automation** button in the console, then OpsCenter provides a list of Automation runbooks that you can run on the AWS resource that generated the OpsItem. After you run an Automation runbook from an OpsItem, the runbook is automatically associated with the related resource of the OpsItem for future reference. Additionally, if you automatically set up OpsItem rules in EventBridge by using OpsCenter, then EventBridge automatically associates runbooks for common events. OpsCenter keeps a 30-day record of Automation runbooks run for a specific OpsItem. For more information, see [Remediate OpsItem issues](OpsCenter-remediating.md).
+ **Change notification**: You can specify the ARN of an Amazon Simple Notification Service (SNS) topic and publish notifications anytime an OpsItem is changed or edited. The SNS topic must exist in the same AWS Region as the OpsItem.
+ **Comprehensive OpsItem search capabilities**: OpsCenter provides multiple search options to help you quickly locate OpsItems. Here are several examples of how you can search: OpsItem ID, Title, Last modified time, Operational data value, Source, and Automation ID of a runbook execution, to name a few. You can further limit search results by using status filters. 
+ **OpsItem summary reports**

  OpsCenter includes a summary report page that automatically displays the following sections:
  + **Status summary**: a summary of OpsItems by status (Open, In progress, Resolved, Open and In progress).
  + **Sources with most open OpsItems**: a breakdown of the top AWS services with open OpsItems.
  + **OpsItems by source and age**: a count of OpsItems grouped by source and days since creation.

  For more information about viewing OpsCenter summary reports, see [Viewing OpsCenter summary reports](OpsCenter-reports.md).
+ **Logging and auditing capability support**

  You can audit and log OpsCenter user actions in your AWS account through integration with other AWS services. For more information, see [Viewing OpsCenter logs and reports](OpsCenter-logging-auditing.md).
+ **Console, CLI, PowerShell, and SDK access to OpsCenter tool**

  You can work with OpsCenter by using the AWS Systems Manager console, AWS Command Line Interface (AWS CLI), AWS Tools for PowerShell, or the AWS SDK of your choice.

## Does OpsCenter integrate with my existing case management system?


OpsCenter is designed to complement your existing case management systems. You can integrate OpsItems into your existing case management system by using public API operations. You can also maintain manual lifecycle workflows in your current systems and use OpsCenter as an investigation and remediation hub. 

For information about OpsCenter public API operations, see the following API operations in the *AWS Systems Manager API Reference*.
+ [CreateOpsItem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateOpsItem.html)
+ [DescribeOpsItems](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeOpsItems.html)
+ [GetOpsItem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetOpsItem.html)
+ [GetOpsSummary](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetOpsSummary.html)
+ [UpdateOpsItem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_UpdateOpsItem.html)
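As a sketch of such an integration, assuming boto3, the filter payload below pulls open OpsItems from a given source so an external system can mirror them (the filter values and the `sync_to_case_system` helper are hypothetical):

```python
# Filters for ssm.describe_ops_items -- pull open, EC2-sourced OpsItems so an
# external case-management system can mirror them (filter values are examples).
describe_params = {
    "OpsItemFilters": [
        {"Key": "Status", "Values": ["Open", "InProgress"], "Operator": "Equal"},
        {"Key": "Source", "Values": ["EC2"], "Operator": "Equal"},
    ],
    "MaxResults": 50,
}
# With boto3:
# paginator = boto3.client("ssm").get_paginator("describe_ops_items")
# for page in paginator.paginate(**describe_params):
#     for item in page["OpsItemSummaries"]:
#         sync_to_case_system(item)   # hypothetical helper in your system
```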

## Is there a charge to use OpsCenter?


Yes. For more information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).

## Does OpsCenter work with my on-premises and hybrid managed nodes?


Yes. You can use OpsCenter to investigate and remediate issues with your on-premises managed nodes that are configured for Systems Manager. For more information about setting up and configuring on-premises servers and virtual machines for Systems Manager, see [Managing nodes in hybrid and multicloud environments with Systems Manager](systems-manager-hybrid-multicloud.md).

## What are the quotas for OpsCenter?


You can view quotas for all Systems Manager tools in the [Systems Manager service quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html#limits_ssm) in the *Amazon Web Services General Reference*. Unless otherwise noted, each quota is Region-specific.

# Set up OpsCenter

AWS Systems Manager uses an integrated setup experience to help you get started with OpsCenter and Explorer, which are tools in Systems Manager. Explorer is a customizable operations dashboard that reports information about your AWS resources. In this documentation, Explorer and OpsCenter setup is called *Integrated Setup*.

You must use Integrated Setup to set up OpsCenter with Explorer. Integrated Setup is only available in the AWS Systems Manager console. You can't set up Explorer and OpsCenter programmatically. For more information, see [Getting started with Systems Manager Explorer and OpsCenter](Explorer-setup.md). 

**Before you begin**  
When you set up OpsCenter, you enable default rules in Amazon EventBridge that automatically create OpsItems. The following table describes the default EventBridge rules that automatically create OpsItems. You can disable EventBridge rules in the OpsCenter **Settings** page under **OpsItem rules**. 

**Important**  
Your account is charged for OpsItems created by default rules. For more information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).


****  

| Rule name | Description | 
| --- | --- | 
|  SSMOpsItems-Autoscaling-instance-launch-failure  |  This rule creates OpsItems when the launch of an Amazon EC2 Auto Scaling instance fails.   | 
|  SSMOpsItems-Autoscaling-instance-termination-failure  |  This rule creates OpsItems when the termination of an Amazon EC2 Auto Scaling instance fails.  | 
|  SSMOpsItems-EBS-snapshot-copy-failed  |  This rule creates OpsItems when the system failed to copy an Amazon Elastic Block Store (Amazon EBS) snapshot.  | 
|  SSMOpsItems-EBS-snapshot-creation-failed  |  This rule creates OpsItems when the system failed to create an Amazon EBS snapshot.  | 
|  SSMOpsItems-EBS-volume-performance-issue  |  This rule corresponds to an AWS Health tracking rule. The rule creates OpsItems whenever there is a performance issue with an Amazon EBS volume (health event = `AWS_EBS_DEGRADED_EBS_VOLUME_PERFORMANCE`).  | 
|  SSMOpsItems-EC2-issue  |  This rule corresponds to an AWS Health tracking rule for unexpected events that affect AWS services or resources. The rule creates OpsItems when, for example, a service sends communications about operational issues that are causing service degradation or to raise awareness about localized resource-level issues. For example, this rule creates an OpsItem for the following event: `AWS_EC2_OPERATIONAL_ISSUE`.  | 
|  SSMOpsItems-EC2-scheduled-change  |  This rule corresponds to an AWS Health tracking rule. AWS can schedule events for your instances, such as rebooting, stopping, or starting instances. The rule creates OpsItems for EC2 scheduled events. For more information about scheduled events, see [Scheduled events for your instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html) in the *Amazon EC2 User Guide*.  | 
|  SSMOpsItems-RDS-issue  |  This rule corresponds to an AWS Health tracking rule for unexpected events that affect AWS services or resources. The rule creates OpsItems when, for example, a service sends communications about operational issues that are causing service degradation or to raise awareness about localized resource-level issues. For example, this rule creates an OpsItem for the following events: `AWS_RDS_MYSQL_DATABASE_CRASHING_REPEATEDLY`, `AWS_RDS_EXPORT_TASK_FAILED`, and `AWS_RDS_CONNECTIVITY_ISSUE`.   | 
|  SSMOpsItems-RDS-scheduled-change  |  This rule corresponds to an AWS Health tracking rule. The rule creates OpsItems for Amazon RDS scheduled events. Scheduled events provide information about upcoming changes to your Amazon RDS resources. Some events might recommend that you take action to avoid service disruptions. Other events occur automatically without any action on your part. Your resource might be temporarily unavailable during the scheduled change activity. For example, this rule creates an OpsItem for the following events: `AWS_RDS_SYSTEM_UPGRADE_SCHEDULED` and `AWS_RDS_MAINTENANCE_SCHEDULED`. For more information about scheduled events, see [Event type categories](https://docs.aws.amazon.com/health/latest/ug/aws-health-concepts-and-terms.html#event-type-categories) in the *AWS Health User Guide*.   | 
|  SSMOpsItems-SSM-maintenance-window-execution-failed  |  This rule creates OpsItems when a Systems Manager maintenance window execution fails.   | 
|  SSMOpsItems-SSM-maintenance-window-execution-timedout  |  This rule creates OpsItems when a Systems Manager maintenance window execution times out.   | 
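If you prefer to disable one of these default rules programmatically rather than from the OpsCenter **Settings** page, the EventBridge `DisableRule` API accepts the rule names shown in the table above (sketch, assuming boto3):

```python
# Names of some of the default OpsItem rules from the table above; disabling
# a rule stops automatic OpsItem creation for that event type.
default_rules = [
    "SSMOpsItems-Autoscaling-instance-launch-failure",
    "SSMOpsItems-EC2-issue",
    "SSMOpsItems-RDS-issue",
]
# With boto3:  boto3.client("events").disable_rule(Name=default_rules[1])
```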

Use the following procedure to set up OpsCenter.

**To set up OpsCenter**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. On the OpsCenter home page, choose **Get started**.

1. On the OpsCenter setup page, choose **Enable this option to have Explorer configure AWS Config and Amazon CloudWatch events to automatically create OpsItems based on commonly-used rules and events**. If you don't choose this option, OpsCenter remains disabled.
**Note**  
Amazon EventBridge (formerly Amazon CloudWatch Events) provides all functionality of CloudWatch Events and some new features, such as custom event buses, third-party event sources and schema registry.

1. Choose **Enable OpsCenter**.

After you enable OpsCenter, you can do the following from **Settings**:
+ Create CloudWatch alarms using the **Open CloudWatch console** button. For more information, see [Configure CloudWatch alarms to create OpsItems](OpsCenter-create-OpsItems-from-CloudWatch-Alarms.md).
+ Enable operational insights. For more information, see [Analyzing operational insights to reduce OpsItems](OpsCenter-working-operational-insights.md).
+ Enable AWS Security Hub CSPM findings alarms. For more information, see [Understanding OpsCenter integration with AWS Security Hub CSPM](OpsCenter-applications-that-integrate.md#OpsCenter-integrate-with-security-hub).

**Topics**
+ [(Optional) Setting up OpsCenter to centrally manage OpsItems across accounts](OpsCenter-setting-up-cross-account.md)
+ [(Optional) Set up Amazon SNS to receive notifications about OpsItems](OpsCenter-getting-started-sns.md)

# (Optional) Setting up OpsCenter to centrally manage OpsItems across accounts

You can use Systems Manager OpsCenter to centrally manage OpsItems across multiple AWS accounts in a selected AWS Region. This feature is available after you set up your organization in AWS Organizations. AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business. For more information, see [What is AWS Organizations?](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) in the *AWS Organizations User Guide*.

Users who belong to the AWS Organizations management account can set up a delegated administrator account for Systems Manager. In the context of OpsCenter, delegated administrators can create, edit, and view OpsItems in member accounts. The delegated administrator can also use Systems Manager Automation runbooks to bulk resolve OpsItems or remediate issues with AWS resources that are generating OpsItems. 

**Note**  
You can assign only one account as the delegated administrator for Systems Manager. For more information, see [Creating an AWS Organizations delegated administrator for Systems Manager](setting_up_delegated_admin.md).

Systems Manager offers the following methods for setting up OpsCenter to centrally manage OpsItems across multiple AWS accounts.
+ **Quick Setup**: Quick Setup, a tool in Systems Manager, simplifies setup and configuration tasks for Systems Manager tools. For more information, see [AWS Systems Manager Quick Setup](systems-manager-quick-setup.md).

  Quick Setup for OpsCenter helps you complete the following tasks for managing OpsItems across accounts:
  + Registering an account as the delegated administrator (if the delegated administrator hasn't already been designated)
  + Creating required AWS Identity and Access Management (IAM) policies and roles
  + Specifying an AWS Organizations organization or organizational units (OUs) where a delegated administrator can manage OpsItems across accounts

  For more information, see [(Optional) Configure OpsCenter to manage OpsItems across accounts by using Quick Setup](OpsCenter-quick-setup-cross-account.md).
**Note**  
Quick Setup isn't available in all AWS Regions where Systems Manager is currently available. If Quick Setup isn't available in a Region where you want to use it to configure OpsCenter to centrally manage OpsItems across multiple accounts, then you must use the manual method. To view a list of AWS Regions where Quick Setup is available, see [Availability of Quick Setup in AWS Regions](systems-manager-quick-setup.md#quick-setup-getting-started-regions).
+ **Manual set up**: If Quick Setup isn't available in the Region where you want to configure OpsCenter to centrally manage OpsItems across accounts, then you can use the manual procedure to do so. For more information, see [(Optional) Manually set up OpsCenter to centrally manage OpsItems across accounts](OpsCenter-getting-started-multiple-accounts.md).

# (Optional) Configure OpsCenter to manage OpsItems across accounts by using Quick Setup

Quick Setup, a tool in AWS Systems Manager, simplifies setup and configuration tasks for Systems Manager tools. Quick Setup for OpsCenter helps you complete the following tasks for managing OpsItems across accounts:
+ Specifying the delegated administrator account
+ Creating required AWS Identity and Access Management (IAM) policies and roles
+ Specifying an AWS Organizations organization, or a subset of member accounts, where a delegated administrator can manage OpsItems across accounts

When you configure OpsCenter to manage OpsItems across accounts by using Quick Setup, Quick Setup creates the following resources in the specified accounts. These resources give the specified accounts permission to work with OpsItems and use Automation runbooks to fix issues with AWS resources generating OpsItems.


****  

| Resources | Accounts | 
| --- | --- | 
|  `AWSServiceRoleForAmazonSSM_AccountDiscovery` AWS Identity and Access Management (IAM) service-linked role For more information about this role, see [Using roles to collect AWS account information for OpsCenter and Explorer](using-service-linked-roles-service-action-2.md).  |  AWS Organizations management account and delegated administrator account  | 
|  `OpsItem-CrossAccountManagementRole` IAM role  `AWS-SystemsManager-AutomationAdministrationRole` IAM role  |  Delegated administrator account  | 
|  `OpsItem-CrossAccountExecutionRole` IAM role  `AWS-SystemsManager-AutomationExecutionRole` IAM role  `AWS::SSM::ResourcePolicy` Systems Manager resource policy for the default OpsItem group (`OpsItemGroup`)  |  All AWS Organizations member accounts  | 

**Note**  
If you previously configured OpsCenter to manage OpsItems across accounts using the [manual method](https://docs.aws.amazon.com/systems-manager/latest/userguide/OpsCenter-getting-started-multiple-accounts.html), you must delete the AWS CloudFormation stacks or stack sets created during Steps 4 and 5 of that process. If those resources exist in your account when you complete the following procedure, Quick Setup fails to configure cross-account OpsItem management properly.

**To configure OpsCenter to manage OpsItems across accounts by using Quick Setup**

1. Sign in to the AWS Management Console using the AWS Organizations management account.

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. Choose the **Library** tab.

1. Scroll to the bottom and locate the **OpsCenter** configuration tile. Choose **Create**.

1. On the Quick Setup OpsCenter page, in the **Delegated administrator** section, enter an account ID. If you are unable to edit this field, then a delegated administrator account has already been specified for Systems Manager.

1. In the **Targets** section, choose an option. If you choose **Custom**, select the organizational units (OUs) where you want to manage OpsItems across accounts.

1. Choose **Create**.

Quick Setup creates the OpsCenter configuration and deploys the required AWS resources to the designated OUs. 

**Note**  
If you don't want to manage OpsItems across multiple accounts, you can delete the configuration from Quick Setup. When you delete the configuration, Quick Setup deletes the following IAM policies and roles created when the configuration was originally deployed:  
`OpsItem-CrossAccountManagementRole` from the delegated administrator account
`OpsItem-CrossAccountExecutionRole` and `AWS::SSM::ResourcePolicy` from all Organizations member accounts
Quick Setup removes the configuration from all organizational units and AWS Regions where the configuration was originally deployed.

## Troubleshooting issues with a Quick Setup configuration for OpsCenter


This section includes information to help you troubleshoot issues when configuring cross-account OpsItem management using Quick Setup.

**Topics**
+ [Deployment to these StackSets failed: delegatedAdmin](#OpsCenter-quick-setup-cross-account-troubleshooting-stack-set-failed)
+ [Quick Setup configuration status shows Failed](#OpsCenter-quick-setup-cross-account-troubleshooting-configuration-failed)

### Deployment to these StackSets failed: delegatedAdmin


When creating an OpsCenter configuration, Quick Setup deploys two AWS CloudFormation stack sets in the Organizations management account. The stack sets use the following prefix: `AWS-QuickSetup-SSMOpsCenter`. If Quick Setup displays the error `Deployment to these StackSets failed: delegatedAdmin`, use the following procedure to fix the issue.

**To troubleshoot a StackSets failed: delegatedAdmin error**

1. If you received the `Deployment to these StackSets failed: delegatedAdmin` error in a red banner in the Quick Setup console, sign in to the delegated administrator account and the AWS Region designated as the Quick Setup home Region.

1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. Choose the stack created by your Quick Setup configuration. The stack name includes the following: **AWS-QuickSetup-SSMOpsCenter**.
**Note**  
Sometimes CloudFormation deletes failed stack deployments. If the stack isn't available in the **Stacks** table, choose **Deleted** from the filter list.

1. View the **Status** and **Status reason**. For more information about stack statuses, see [Stack status codes](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-view-stack-data-resources.html#cfn-console-view-stack-data-resources-status-codes) in the *AWS CloudFormation User Guide*. 

1. To understand the exact step that failed, view the **Events** tab and review each event's **Status**. For more information, see [Troubleshooting](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html) in the *AWS CloudFormation User Guide*.

**Note**  
If you are unable to resolve the deployment failure using the CloudFormation troubleshooting steps, delete the configuration and try again.

### Quick Setup configuration status shows Failed


If the **Configuration details** table on the **Configuration details** page shows a configuration status of `Failed`, use the following procedure to troubleshoot the failure.

**To troubleshoot a Quick Setup failure to create an OpsCenter configuration**

1. Sign in to the AWS account and the AWS Region where the failure occurred.

1. Open the CloudFormation console at [https://console.aws.amazon.com/cloudformation](https://console.aws.amazon.com/cloudformation/).

1. Choose the stack created by your Quick Setup configuration. The stack name includes the following: **AWS-QuickSetup-SSMOpsCenter**.
**Note**  
Sometimes CloudFormation deletes failed stack deployments. If the stack isn't available in the **Stacks** table, choose **Deleted** from the filter list.

1. View the **Status** and **Status reason**. For more information about stack statuses, see [Stack status codes](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-view-stack-data-resources.html#cfn-console-view-stack-data-resources-status-codes) in the *AWS CloudFormation User Guide*. 

1. To understand the exact step that failed, view the **Events** tab and review each event's **Status**. For more information, see [Troubleshooting](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html) in the *AWS CloudFormation User Guide*.

#### Member account configuration shows ResourcePolicyLimitExceededException


If a stack status shows `ResourcePolicyLimitExceededException`, the account has previously onboarded to OpsCenter cross-account management by using the [manual method](https://docs.aws.amazon.com/systems-manager/latest/userguide/OpsCenter-getting-started-multiple-accounts.html). To resolve this issue, you must delete the AWS CloudFormation stacks or stack sets created during Steps 4 and 5 of the manual onboarding process. For more information, see [Delete a stack set](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-delete.html) and [Deleting a stack on the CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html) in the *AWS CloudFormation User Guide*.

# (Optional) Manually set up OpsCenter to centrally manage OpsItems across accounts

This section describes how to manually configure OpsCenter for cross-account OpsItem management. While this process is still supported, it has been replaced by a newer process that uses Systems Manager Quick Setup. For more information, see [(Optional) Configure OpsCenter to manage OpsItems across accounts by using Quick Setup](OpsCenter-quick-setup-cross-account.md). 

You can set up a central account to manually create OpsItems for member accounts, and to manage and remediate those OpsItems. The central account can be the AWS Organizations management account, or both the management account and the Systems Manager delegated administrator account. We recommend using the Systems Manager delegated administrator account as the central account. You can use this feature only after you configure AWS Organizations. 

With AWS Organizations, you can consolidate multiple AWS accounts into an organization that you create and manage centrally. The central account user can create OpsItems for all selected member accounts simultaneously, and manage those OpsItems.

Use the process in this section to enable the Systems Manager service principal in Organizations and configure AWS Identity and Access Management (IAM) permissions for working with OpsItems across accounts. 

**Topics**
+ [Before you begin](#OpsCenter-before-you-begin)
+ [Step 1: Creating a resource data sync](#OpsCenter-getting-started-multiple-accounts-onboarding-rds)
+ [Step 2: Enabling the Systems Manager service principal in AWS Organizations](#OpsCenter-getting-started-multiple-accounts-onboarding-service-principal)
+ [Step 3: Creating the `AWSServiceRoleForAmazonSSM_AccountDiscovery` service-linked role](#OpsCenter-getting-started-multiple-accounts-onboarding-SLR)
+ [Step 4: Configuring permissions to work with OpsItems across accounts](#OpsCenter-getting-started-multiple-accounts-onboarding-resource-policy)
+ [Step 5: Configuring permissions to work with related resources across accounts](#OpsCenter-getting-started-multiple-accounts-onboarding-related-resources-permissions)

**Note**  
Only OpsItems of type `/aws/issue` are supported when working in OpsCenter across accounts.

## Before you begin


Before you set up OpsCenter to work with OpsItems across accounts, ensure that you have set up the following:
+ A Systems Manager delegated administrator account. For more information, see [Configuring a delegated administrator for Explorer](Explorer-setup-delegated-administrator.md).
+ One organization set up and configured in Organizations. For more information, see [Creating and managing an organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_org.html) in the *AWS Organizations User Guide*.
+ Systems Manager Automation configured to run automation runbooks across multiple AWS Regions and AWS accounts. For more information, see [Running automations in multiple AWS Regions and accounts](running-automations-multiple-accounts-regions.md).

## Step 1: Creating a resource data sync


After you set up and configure AWS Organizations, you can aggregate OpsItems in OpsCenter for an entire organization by creating a resource data sync. For more information, see [Creating a resource data sync](Explorer-resource-data-sync-configuring-multi.md). When you create the sync, in the **Add accounts** section, be sure to choose the **Include all accounts from my AWS Organizations configuration** option.

## Step 2: Enabling the Systems Manager service principal in AWS Organizations


To enable a user to work with OpsItems across accounts, the Systems Manager service principal must be enabled in AWS Organizations. If you previously configured Systems Manager for multi-account scenarios using other tools, the Systems Manager service principal might already be configured in Organizations. Run the following commands from the AWS Command Line Interface (AWS CLI) to verify. If you *haven't* configured Systems Manager for other multi-account scenarios, skip to the next procedure, *To enable the Systems Manager service principal in AWS Organizations*.

**To verify the Systems Manager service principal is enabled in AWS Organizations**

1. [Download](https://aws.amazon.com/cli/) the latest version of the AWS CLI to your local machine.

1. Open the AWS CLI, and run the following command to specify your credentials and an AWS Region.

   ```
   aws configure
   ```

   The system prompts you to specify the following. In the following example, replace each *user input placeholder* with your own information.

   ```
   AWS Access Key ID [None]: access_key_ID
   AWS Secret Access Key [None]: secret_access_key
   Default region name [None]: region
   Default output format [None]: ENTER
   ```

1. Run the following command to verify that the Systems Manager service principal is enabled for AWS Organizations.

   ```
   aws organizations list-aws-service-access-for-organization
   ```

   The command returns information similar to that shown in the following example.

   ```
   {
       "EnabledServicePrincipals": [
           {
               "ServicePrincipal": "member.org.stacksets.cloudformation.amazonaws.com",
               "DateEnabled": "2020-12-11T16:32:27.732000-08:00"
           },
           {
               "ServicePrincipal": "opsdatasync.ssm.amazonaws.com",
               "DateEnabled": "2022-01-19T12:30:48.352000-08:00"
           },
           {
               "ServicePrincipal": "ssm.amazonaws.com",
               "DateEnabled": "2020-12-11T16:32:26.599000-08:00"
           }
       ]
   }
   ```
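
If you prefer to check programmatically, you can parse the output of `list-aws-service-access-for-organization` directly. The following Python sketch is illustrative (the `ssm_principal_enabled` helper is not part of Systems Manager); it reads the JSON the CLI returns and reports whether `ssm.amazonaws.com` is enabled.

```python
import json

def ssm_principal_enabled(response_json: str) -> bool:
    """Return True if the Systems Manager service principal appears in a
    list-aws-service-access-for-organization response."""
    response = json.loads(response_json)
    return any(
        entry.get("ServicePrincipal") == "ssm.amazonaws.com"
        for entry in response.get("EnabledServicePrincipals", [])
    )

# The sample response from the procedure above, abbreviated:
sample = """
{
    "EnabledServicePrincipals": [
        {"ServicePrincipal": "opsdatasync.ssm.amazonaws.com",
         "DateEnabled": "2022-01-19T12:30:48.352000-08:00"},
        {"ServicePrincipal": "ssm.amazonaws.com",
         "DateEnabled": "2020-12-11T16:32:26.599000-08:00"}
    ]
}
"""
print(ssm_principal_enabled(sample))  # → True
```

You can pipe the CLI output into a script built around this helper instead of scanning the JSON by eye.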

**To enable the Systems Manager service principal in AWS Organizations**

If you haven't previously configured the Systems Manager service principal for Organizations, use the following procedure to do so. For more information about this command, see [enable-aws-service-access](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/organizations/enable-aws-service-access.html) in the *AWS CLI Command Reference*.

1. Install and configure the AWS Command Line Interface (AWS CLI) on your local machine, if you haven't already. For information, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).

1. Open the AWS CLI, and run the following command to specify your credentials and an AWS Region.

   ```
   aws configure
   ```

   The system prompts you to specify the following. In the following example, replace each *user input placeholder* with your own information.

   ```
   AWS Access Key ID [None]: access_key_ID
   AWS Secret Access Key [None]: secret_access_key
   Default region name [None]: region
   Default output format [None]: ENTER
   ```

1. Run the following command to enable the Systems Manager service principal for AWS Organizations.

   ```
   aws organizations enable-aws-service-access --service-principal "ssm.amazonaws.com"
   ```

## Step 3: Creating the `AWSServiceRoleForAmazonSSM_AccountDiscovery` service-linked role


A service-linked role such as the `AWSServiceRoleForAmazonSSM_AccountDiscovery` role is a unique type of IAM role that is linked directly to an AWS service, such as Systems Manager. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf. For more information about the `AWSServiceRoleForAmazonSSM_AccountDiscovery` service-linked role, see [Service-linked role permissions for Systems Manager account discovery](using-service-linked-roles-service-action-2.md#service-linked-role-permissions-service-action-2).

Use the following procedure to create the `AWSServiceRoleForAmazonSSM_AccountDiscovery` service-linked role by using the AWS CLI. For more information about the command used in this procedure, see [create-service-linked-role](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-service-linked-role.html) in the *AWS CLI Command Reference.*

**To create the `AWSServiceRoleForAmazonSSM_AccountDiscovery` service-linked role**

1. Sign in to the AWS Organizations management account.

1. While signed in to the Organizations management account, run the following command.

   ```
   aws iam create-service-linked-role \
       --aws-service-name accountdiscovery.ssm.amazonaws.com \
       --description "Systems Manager account discovery for AWS Organizations service-linked role"
   ```

## Step 4: Configuring permissions to work with OpsItems across accounts


Use AWS CloudFormation stack sets to create an `OpsItemGroup` resource policy and an IAM execution role that give users permission to work with OpsItems across accounts. To get started, download and unzip the [https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/OpsCenterCrossAccountMembers.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/OpsCenterCrossAccountMembers.zip) file. This file contains the `OpsCenterCrossAccountMembers.yaml` CloudFormation template file. When you create a stack set by using this template, CloudFormation automatically creates the `OpsItemCrossAccountResourcePolicy` resource policy and the `OpsItemCrossAccountExecutionRole` execution role in the account. For more information about creating a stack set, see [Create a stack set](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-getting-started-create.html) in the *AWS CloudFormation User Guide*.

**Important**  
Note the following important information about this task:  
You must deploy the stack set while signed in to the AWS Organizations management account.
You must repeat this procedure while signed in to *every* account that you want to *target* for working with OpsItems across accounts, including the delegated administrator account.
If you want to enable cross-account OpsItems administration in different AWS Regions, choose **Add all regions** in the **Specify regions** section of the template. Cross-account OpsItem administration isn't supported for opt-in Regions.

## Step 5: Configuring permissions to work with related resources across accounts


An OpsItem can include detailed information about impacted resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances or Amazon Simple Storage Service (Amazon S3) buckets. The `OpsItemCrossAccountExecutionRole` execution role, which you created in Step 4, provides OpsCenter with read-only permissions for member accounts to view related resources. In this task, you create an IAM role that gives the management account permission to view and interact with related resources. 

To get started, download and unzip the [https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/OpsCenterCrossAccountManagementRole.zip](https://docs.aws.amazon.com/systems-manager/latest/userguide/samples/OpsCenterCrossAccountManagementRole.zip) file. This file contains the `OpsCenterCrossAccountManagementRole.yaml` CloudFormation template file. When you create a stack by using this template, CloudFormation automatically creates the `OpsCenterCrossAccountManagementRole` IAM role in the account. For more information about creating a stack, see [Creating a stack on the AWS CloudFormation console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stack.html) in the *AWS CloudFormation User Guide*.

**Important**  
Note the following important information about this task:  
If you plan to specify an account as a delegated administrator for OpsCenter, be sure to specify that AWS account when you create the stack. 
You must perform this procedure while signed in to the AWS Organizations management account and again while signed in to the delegated administrator account.

# (Optional) Set up Amazon SNS to receive notifications about OpsItems

You can configure OpsCenter to send notifications to an Amazon Simple Notification Service (Amazon SNS) topic when the system creates an OpsItem or updates an existing OpsItem. 

Complete the following steps to receive notifications for OpsItems.
+ [Step 1: Creating and subscribing to an Amazon SNS topic](#OpsCenter-getting-started-sns-create-topic)
+ [Step 2: Updating the Amazon SNS access policy](#OpsCenter-getting-started-sns-encryption-policy)
+ [Step 3: Updating the AWS KMS access policy](#OpsCenter-getting-started-sns-KMS-policy)
**Note**  
If you turn on AWS Key Management Service (AWS KMS) server-side encryption in Step 2, then you must complete Step 3. Otherwise, you can skip Step 3. 
+ [Step 4: Turning on default OpsItems rules to send notifications for new OpsItems](#OpsCenter-getting-started-sns-default-rules)

## Step 1: Creating and subscribing to an Amazon SNS topic


To receive notifications, you must create and subscribe to an Amazon SNS topic. For more information, see [Creating an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/CreateTopic.html) and [Subscribing to an Amazon SNS topic](https://docs.aws.amazon.com/sns/latest/dg/sns-tutorial-create-subscribe-endpoint-to-topic.html) in the *Amazon Simple Notification Service Developer Guide*.

**Note**  
If you're using OpsCenter in multiple AWS Regions or accounts, you must create and subscribe to an Amazon SNS topic in *each* Region or account where you want to receive OpsItem notifications. 

## Step 2: Updating the Amazon SNS access policy


Systems Manager must have permission to publish OpsItem notifications to the Amazon SNS topic that you created in Step 1. Use the following procedure to update the topic's access policy to grant that permission.

1. Sign in to the AWS Management Console and open the Amazon SNS console at [https://console.aws.amazon.com/sns/v3/home](https://console.aws.amazon.com/sns/v3/home).

1. In the navigation pane, choose **Topics**.

1. Choose the topic that you created in Step 1, and then choose **Edit**.

1. Expand **Access policy**.

1. Add the following `Sid` block to the existing policy. Replace each *example resource placeholder* with your own information.

   ```
   {
      "Sid": "Allow OpsCenter to publish to this topic",
      "Effect": "Allow",
      "Principal": {
         "Service": "ssm.amazonaws.com"
      },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:region:account ID:topic name", // Account ID of the SNS topic owner
      "Condition": {
         "StringEquals": {
            "AWS:SourceAccount": "account ID" // Account ID of the OpsItem owner
         }
      }
   }
   ```
**Note**  
The `aws:SourceAccount` global condition key protects against the confused deputy scenario. To use this condition key, set the value to the account ID of the OpsItem owner. For more information, see [Confused Deputy](https://docs.aws.amazon.com//IAM/latest/UserGuide/confused-deputy.html) in the *IAM User Guide*. 

1. Choose **Save changes**.

The system now sends notifications to the Amazon SNS topic when OpsItems are created or updated.
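
If you manage topic policies in code rather than in the console, the same `Sid` block can be merged into an existing policy programmatically. The following Python sketch is illustrative (the function name and example ARN are hypothetical); it appends the statement to an existing policy document:

```python
import json

def add_opscenter_publish_statement(policy_json, topic_arn, opsitem_account_id):
    """Append the OpsCenter publish statement to an SNS topic access policy.
    opsitem_account_id is the account that owns the OpsItems; it becomes the
    value of the aws:SourceAccount condition."""
    policy = json.loads(policy_json)
    policy.setdefault("Statement", []).append({
        "Sid": "Allow OpsCenter to publish to this topic",
        "Effect": "Allow",
        "Principal": {"Service": "ssm.amazonaws.com"},
        "Action": "SNS:Publish",
        "Resource": topic_arn,
        "Condition": {"StringEquals": {"AWS:SourceAccount": opsitem_account_id}},
    })
    return json.dumps(policy, indent=2)

# Minimal starting policy; the ARN and account ID are placeholders.
existing = '{"Version": "2012-10-17", "Statement": []}'
print(add_opscenter_publish_statement(
    existing, "arn:aws:sns:us-west-2:111122223333:MyOpsItemTopic", "111122223333"))
```

You could feed the result to `aws sns set-topic-attributes` or keep it in version control alongside your other policies.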

**Important**  
If you configured the Amazon SNS topic with an AWS Key Management Service (AWS KMS) server-side encryption key in Step 2, complete Step 3. Otherwise, you can skip Step 3. 

## Step 3: Updating the AWS KMS access policy


If you turned on AWS KMS server-side encryption for your Amazon SNS topic, you must also update the access policy of the AWS KMS key that you chose when you configured the topic. Use the following procedure to update the access policy so that Systems Manager can publish OpsItem notifications to the Amazon SNS topic you created in Step 1.

**Note**  
OpsCenter doesn't support publishing OpsItems to an Amazon SNS topic that is configured with an AWS managed key.

1. Open the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

1. To change the AWS Region, use the Region selector in the upper-right corner of the page.

1. In the navigation pane, choose **Customer managed keys**.

1. Choose the ID of the KMS key that you chose when you created the topic.

1. In the **Key policy** section, choose **Switch to policy view**.

1. Choose **Edit**.

1. Add the following `Sid` block to the existing policy. Replace each *example resource placeholder* with your own information.

   ```
   {
      "Sid": "Allow OpsItems to decrypt the key",
      "Effect": "Allow",
      "Principal": {
         "Service": "ssm.amazonaws.com"
      },
      "Action": ["kms:Decrypt", "kms:GenerateDataKey*"],
      "Resource": "arn:aws:kms:region:account ID:key/key ID"
   }
   ```

    In the following example, the new block is entered at line 14.  
![\[Editing the AWS KMS access policy of an Amazon SNS topic.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/OpsItems_SNS_KMS_access_policy.png)

1. Choose **Save changes**.
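
If you manage key policies in code, you can sanity-check the result offline. The following Python sketch is illustrative (the helper name and sample key ARN are hypothetical); it verifies that a key policy contains an `Allow` statement granting `ssm.amazonaws.com` the actions OpsCenter needs:

```python
import json

REQUIRED_ACTIONS = {"kms:Decrypt", "kms:GenerateDataKey*"}

def key_policy_allows_opscenter(policy_json: str) -> bool:
    """Return True if any Allow statement grants ssm.amazonaws.com both
    kms:Decrypt and kms:GenerateDataKey*."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        services = stmt.get("Principal", {}).get("Service", [])
        if isinstance(services, str):
            services = [services]
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if "ssm.amazonaws.com" in services and REQUIRED_ACTIONS.issubset(actions):
            return True
    return False

# Sample policy matching the Sid block from this step (the key ARN is a placeholder).
sample_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Allow OpsItems to decrypt the key",
        "Effect": "Allow",
        "Principal": {"Service": "ssm.amazonaws.com"},
        "Action": ["kms:Decrypt", "kms:GenerateDataKey*"],
        "Resource": "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID"
    }]
})
print(key_policy_allows_opscenter(sample_policy))  # → True
```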

## Step 4: Turning on default OpsItems rules to send notifications for new OpsItems


Default OpsItems rules in Amazon EventBridge aren't configured with an Amazon Resource Name (ARN) for Amazon SNS notifications. Use the following procedure to edit a rule in EventBridge and enter a `notifications` block. 

**To add a notifications block to a default OpsItem rule**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose the **OpsItems** tab, and then choose **Configure sources**.

1. Choose the name of the source rule that you want to configure with a `notifications` block, as shown in the following example.  
![\[Choosing an Amazon EventBridge rule to add an Amazon SNS notifications block.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/OpsItems_SNS_Setup_2.png)

   The rule opens in Amazon EventBridge.

1. On the rule details page, on the **Targets** tab, choose **Edit**.

1. In the **Additional settings** section, choose **Configure input transformer**.

1. In the **Template** box, add a `notifications` block in the following format.

   ```
   "notifications":[{"arn":"arn:aws:sns:region:account ID:topic name"}],
   ```

   Here's an example.

   ```
   "notifications":[{"arn":"arn:aws:sns:us-west-2:1234567890:MySNSTopic"}],
   ```

   Enter the `notifications` block before the `resources` block, as shown in the following example for the US West (Oregon) Region (us-west-2).

   ```
   {
       "title": "EBS snapshot copy failed",
       "description": "CloudWatch Event Rule SSMOpsItems-EBS-snapshot-copy-failed was triggered. Your EBS snapshot copy has failed. See below for more details.",
       "category": "Availability",
       "severity": "2",
       "source": "EC2",
       "notifications": [{
           "arn": "arn:aws:sns:us-west-2:1234567890:MySNSTopic"
       }],
       "resources": <resources>,
       "operationalData": {
           "/aws/dedup": {
               "type": "SearchableString",
               "value": "{\"dedupString\":\"SSMOpsItems-EBS-snapshot-copy-failed\"}"
           },
           "/aws/automations": {
               "value": "[ { \"automationType\": \"AWS:SSM:Automation\", \"automationId\": \"AWS-CopySnapshot\" } ]"
           },
           "failure-cause": {
               "value": <failure - cause>
           },
           "source": {
               "value": <source>
           },
           "start-time": {
               "value": <start - time>
           },
           "end-time": {
               "value": <end - time>
           }
       }
   }
   ```

1. Choose **Confirm**.

1. Choose **Next**.

1. Choose **Next**.

1. Choose **Update rule**.

The next time that the system creates an OpsItem for the default rule, it publishes a notification to the Amazon SNS topic.
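
If you maintain the input transformer templates outside the console, the insertion can be scripted. The following Python sketch is illustrative (it assumes you've copied the rule's template into a dict, with placeholder values such as `<resources>` kept verbatim); it inserts the `notifications` entry immediately before the `resources` key:

```python
import json

def add_notifications(template: dict, topic_arn: str) -> dict:
    """Return a copy of the template with a notifications entry inserted
    immediately before the resources key."""
    updated = {}
    for key, value in template.items():
        if key == "resources":
            updated["notifications"] = [{"arn": topic_arn}]
        updated[key] = value
    return updated

# Abbreviated template; <resources> is the rule's placeholder, kept verbatim.
template = {
    "title": "EBS snapshot copy failed",
    "severity": "2",
    "source": "EC2",
    "resources": "<resources>",
    "operationalData": {},
}
updated = add_notifications(template, "arn:aws:sns:us-west-2:1234567890:MySNSTopic")
print(json.dumps(updated, indent=4))
```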

# Integrate OpsCenter with other AWS services


OpsCenter, a tool in AWS Systems Manager, integrates with multiple AWS services to diagnose and remediate issues with AWS resources. You must set up the AWS service before you integrate it with OpsCenter.

By default, the following AWS services are integrated with OpsCenter and can create OpsItems automatically: 
+ [Amazon CloudWatch](#OpsCenter-about-cloudwatch)
+ [Amazon CloudWatch Application Insights](#OpsCenter-about-cloudwatch-insights)
+ [Amazon EventBridge](#OpsCenter-about-eventbridge)
+ [AWS Config](#OpsCenter-about-AWS-config)
+ [AWS Systems Manager Incident Manager](#OpsCenter-about-incident-manager)

You must integrate the following services with OpsCenter before they can create OpsItems automatically:
+ [Amazon DevOps Guru](#OpsCenter-integrate-with-devops-guru)
+ [AWS Security Hub CSPM](#OpsCenter-integrate-with-security-hub)

When any of these services create an OpsItem, you can manage and remediate the OpsItem from OpsCenter. For more information, see [Manage OpsItems](OpsCenter-working-with-OpsItems.md) and [Remediate OpsItem issues](OpsCenter-remediating.md). 

For more information about each AWS service and how it integrates with OpsCenter, see the following topics.

**Topics**
+ [Understanding OpsCenter integration with Amazon CloudWatch](#OpsCenter-about-cloudwatch)
+ [Understanding OpsCenter integration with Amazon CloudWatch Application Insights](#OpsCenter-about-cloudwatch-insights)
+ [Understanding OpsCenter integration with Amazon DevOps Guru](#OpsCenter-integrate-with-devops-guru)
+ [Understanding OpsCenter integration with Amazon EventBridge](#OpsCenter-about-eventbridge)
+ [Understanding OpsCenter integration with AWS Config](#OpsCenter-about-AWS-config)
+ [Understanding OpsCenter integration with AWS Security Hub CSPM](#OpsCenter-integrate-with-security-hub)
+ [Understanding OpsCenter integration with Incident Manager](#OpsCenter-about-incident-manager)

## Understanding OpsCenter integration with Amazon CloudWatch


Amazon CloudWatch monitors your AWS resources and services, and displays metrics on every AWS service that you use. CloudWatch creates an OpsItem when an alarm enters the alarm state. For example, you can configure an alarm to automatically create an OpsItem if there is a spike in HTTP errors generated by your Application Load Balancer.

Some alarms that you can configure in CloudWatch to create OpsItems are shown in the following list: 
+ Amazon DynamoDB: database read and write actions reach a threshold
+ Amazon EC2: CPU utilization reaches a threshold
+ AWS billing: estimated charges reach a threshold
+ Amazon EC2: an instance fails a status check
+ Amazon Elastic Block Store (EBS): disk space utilization reaches a threshold

You can either create an alarm or edit an existing alarm to create an OpsItem. For more information, see [Configure CloudWatch alarms to create OpsItems](OpsCenter-create-OpsItems-from-CloudWatch-Alarms.md).
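
Under the hood, the alarm action that creates an OpsItem is an ARN that encodes the OpsItem severity and, optionally, a category. The general form is shown below as a sketch; see the linked topic for the exact syntax:

```
arn:aws:ssm:region:account-id:opsitem:severity
```

For example, an action ARN such as `arn:aws:ssm:us-west-2:123456789012:opsitem:3#CATEGORY=Availability` would create a severity 3 OpsItem in the Availability category.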

When you enable OpsCenter using Integrated Setup, it integrates CloudWatch with OpsCenter. 

## Understanding OpsCenter integration with Amazon CloudWatch Application Insights


Using Amazon CloudWatch Application Insights, you can set up monitors for your application resources that continuously analyze data for signs of problems with your applications. When you configure application resources in CloudWatch Application Insights, you can choose to have the system create OpsItems in OpsCenter. An OpsItem is created in the OpsCenter console for every problem detected with the application. For information, see [Set up, configure, and manage your application for monitoring](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/appinsights-setting-up.html) in the *Amazon CloudWatch User Guide*. 

**Note**  
Starting October 16, 2023, the title and description for OpsItems created by CloudWatch Application Insights use the following improved format:

```
OpsItem title: [<APPLICATION NAME>: <RESOURCE ID>] <PROBLEM SUMMARY>

OpsItem description:       

CloudWatch Application Insights has detected a problem in application <APPLICATION NAME>.
Problem summary: <PROBLEM SUMMARY>
Problem ID: <PROBLEM ID> (hyperlinks to the Application Insights problem summary page)
Problem Status: <PROBLEM STATUS>
Insight: <INSIGHT>
```

Here is an example:

![\[Screen shot showing the new format of an OpsItem created from a CloudWatch Insight.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/OpsItem-CWinsight.png)


## Understanding OpsCenter integration with Amazon DevOps Guru


Amazon DevOps Guru applies machine learning to analyze your operational data, application metrics, and application events to identify behaviors that deviate from normal operating patterns. If you enable DevOps Guru to generate an OpsItem in OpsCenter, each insight generates a new OpsItem. You can use OpsCenter to manage your OpsItems. 

You can enable Amazon DevOps Guru to create OpsItems automatically by using Quick Setup, a tool in Systems Manager. The system creates OpsItems by using the [AWSServiceRoleForDevOpsGuru](https://docs.aws.amazon.com/devops-guru/latest/userguide/using-service-linked-roles.html) AWS Identity and Access Management (IAM) service-linked role.

**To integrate OpsCenter with DevOps Guru**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **Quick Setup**.

1. Choose the **Library** tab. 

1. In the **DevOps Guru** pane, choose **Create**.

1. On the **Customize DevOps Guru configuration options** page, for **Configuration options**, select **Enable AWS Systems Manager OpsItems**.

1. After you complete the remaining setup options, choose **Create**.

## Understanding OpsCenter integration with Amazon EventBridge


Amazon EventBridge delivers a stream of events that describe changes in AWS resources. When you enable OpsCenter using Integrated Setup, it integrates EventBridge with OpsCenter, and enables default EventBridge rules. Based on these rules, EventBridge creates OpsItems. Using rules, you can filter and route events to OpsCenter for investigation and remediation. 

**Note**  
Amazon EventBridge (formerly Amazon CloudWatch Events) provides all the functionality of CloudWatch Events, plus new features such as custom event buses, third-party event sources, and a schema registry.

Following are some rules that you can configure in EventBridge to create an OpsItem: 
+ Security Hub CSPM: a security alert is issued
+ Amazon DynamoDB: a throttling event occurs
+ Amazon EC2 Auto Scaling: an instance fails to launch
+ Systems Manager: an automation fails to run
+ AWS Health: an alert for scheduled maintenance
+ Amazon EC2: an instance's state changes from running to stopped

Based on your requirements, you can either create a rule or edit an existing rule to create an OpsItem. For instructions on how to edit a rule to create an OpsItem, see [Configure EventBridge rules to create OpsItems](OpsCenter-automatically-create-OpsItems-2.md).

## Understanding OpsCenter integration with AWS Config


AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. 

AWS Config doesn't integrate *directly* with OpsCenter. Instead, you create an AWS Config rule that sends an event to Amazon EventBridge, such as when AWS Config detects a noncompliant instance. EventBridge then evaluates that event against an EventBridge rule that you've created. If the event matches the rule, EventBridge transforms the event into an OpsItem and sends it to OpsCenter as the target. 

Using this OpsItem, you can track details of the noncompliant resource, record investigative actions, and provide access to consistent remediation actions.
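
As a sketch, an EventBridge rule for this flow matches AWS Config compliance-change events. An event pattern similar to the following (scoped here to noncompliant evaluations; adjust it for your rules and resources) selects the events to transform into OpsItems:

```
{
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "newEvaluationResult": {
            "complianceType": ["NON_COMPLIANT"]
        }
    }
}
```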

**Related info**

[Configure EventBridge rules to create OpsItems](OpsCenter-automatically-create-OpsItems-2.md)

[Using AWS Systems Manager OpsCenter and AWS Config for compliance monitoring](https://aws.amazon.com/blogs/mt/using-aws-systems-manager-opscenter-and-aws-config-for-compliance-monitoring/)

## Understanding OpsCenter integration with AWS Security Hub CSPM


AWS Security Hub CSPM collects security data, called *findings*, from across AWS accounts and services. Using a set of rules to detect and generate findings, Security Hub CSPM helps you identify, prioritize, and remediate security issues for the resources you manage. After you configure integration, as described in this topic, Systems Manager creates OpsItems for Security Hub CSPM findings in OpsCenter. 

**Note**  
OpsCenter has bidirectional integration with Security Hub CSPM. This means that if you update the **Status** or **Severity** field for an OpsItem related to a security finding, the system synchronizes the changes with Security Hub CSPM. Likewise, any changes to a finding are automatically updated in the corresponding OpsItems in OpsCenter.  
When an OpsItem is created from a Security Hub CSPM finding, Security Hub CSPM metadata is automatically added to the operational data field of the OpsItem. If this metadata is deleted, the bidirectional updates no longer function.

By default, Systems Manager creates OpsItems for critical and high severity findings. You can manually configure OpsCenter to create OpsItems for medium and low severity findings. OpsCenter doesn’t create OpsItems for informational findings as they don't require remediation. For more information about Security Hub CSPM severity levels, see [Severity](https://docs.aws.amazon.com//securityhub/1.0/APIReference/API_Severity.html) in the *AWS Security Hub API Reference*.
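
The default severity behavior described above can be summarized in a short sketch (illustrative only; the function name and flags are ours, not an AWS API):

```python
def creates_opsitem(severity_label, include_medium=False, include_low=False):
    """Sketch of the default mapping: critical and high findings always
    create OpsItems; medium and low only when explicitly enabled;
    informational findings never do."""
    if severity_label in ("CRITICAL", "HIGH"):
        return True
    if severity_label == "MEDIUM":
        return include_medium
    if severity_label == "LOW":
        return include_low
    return False  # INFORMATIONAL and anything else
```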

**Before you begin**  
Before you configure OpsCenter to create OpsItems based on Security Hub CSPM findings, verify that you have completed the Security Hub CSPM setup tasks. For more information, see [Setting up Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-settingup.html) in the *AWS Security Hub User Guide*.

When you integrate Security Hub CSPM with OpsCenter, the system creates OpsItems by using the `AWSServiceRoleForSystemsManagerOpsDataSync` IAM service-linked role. For more information about this role, see [Using roles to create OpsData and OpsItems for Explorer](using-service-linked-roles-service-action-3.md).

**Warning**  
Note the following important information about pricing for OpsCenter integration with Security Hub CSPM:  
If you are logged into the Security Hub CSPM administrator account when you configure OpsCenter and Security Hub CSPM integration, the system creates OpsItems for findings in the administrator *and* all member accounts. The OpsItems are all created *in the administrator account*. Depending on a variety of factors, this can lead to an unexpectedly large bill from AWS.  
If you are logged into a member account when you configure integration, the system only creates OpsItems for findings in that individual account. For more information about the Security Hub CSPM administrator account, member accounts, and their relation to the EventBridge event feed for findings, see [Types of Security Hub CSPM integration with EventBridge](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cwe-integration-types.html) in the *AWS Security Hub User Guide*.
For each finding that creates an OpsItem, you are charged the regular price for creating the OpsItem. You are also charged if you edit the OpsItem or if the corresponding finding is updated in Security Hub CSPM (which triggers an OpsItem update).
OpsItems that are created by an integration with AWS Security Hub CSPM are *not* currently limited by the maximum quota of 500,000 OpsItems per account in a Region. It is therefore possible for Security Hub CSPM findings to create more than 500,000 chargeable OpsItems per Region in an account.  
For production environments that generate a high volume of findings, we therefore recommend limiting the scope of the integration to high severity findings only.

**To configure OpsCenter to create OpsItems for Security Hub CSPM findings**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose **Settings**.

1. In the **Security Hub CSPM findings** section, choose **Edit**.

1. Choose the slider to change **Disabled** to **Enabled**.

1. If you want the system to create OpsItems for medium or low severity findings, toggle these options.

1. Choose **Save** to save your configuration.

Use the following procedure if you no longer want the system to create OpsItems for Security Hub CSPM findings.

**To stop receiving OpsItems for Security Hub CSPM findings**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose **Settings**.

1. In the **Security Hub CSPM findings** section, choose **Edit**.

1. Choose the slider to change **Enabled** to **Disabled**. If you aren't able to toggle the slider, Security Hub CSPM hasn't been enabled for your AWS account.

1. Choose **Save** to save your configuration. OpsCenter no longer creates OpsItems based on Security Hub CSPM findings.

**Important**  
A Systems Manager delegated administrator or the AWS Organizations management account can enable Security Hub CSPM findings in OpsCenter for multiple accounts and AWS Regions by creating a resource data sync in Explorer. If the **Security Hub CSPM** source is enabled in Explorer and a resource data sync exists that targets the member account where you disabled Security Hub CSPM integration, then the settings selected by your administrator take precedence. OpsCenter continues to create OpsItems for Security Hub CSPM findings. To stop creating OpsItems for Security Hub CSPM findings in a member account targeted by a resource data sync, contact your administrator and ask them to remove your account from the resource data sync or turn off the **Security Hub CSPM** source in Explorer. For information about changing settings in Explorer, see [Editing Systems Manager Explorer data sources](Explorer-using-editing-data-sources.md).

## Understanding OpsCenter integration with Incident Manager


Incident Manager, a tool in AWS Systems Manager, provides an incident management console that helps you mitigate and recover from incidents affecting your AWS hosted applications. An *incident* is any unplanned interruption or reduction in quality of services. After you set up and configure [Incident Manager](https://docs.aws.amazon.com/incident-manager/latest/userguide/what-is-incident-manager.html), the system automatically creates OpsItems in OpsCenter. 

When the system creates an incident in Incident Manager, it also creates an OpsItem in OpsCenter, and displays the incident as a related item. If the OpsItem already exists, Incident Manager doesn't create an OpsItem. The first OpsItem is known as the parent OpsItem. If an incident grows in scale and scope, you can add incidents to an existing OpsItem. If required, you can manually create an incident for an OpsItem. After an incident is closed, you can create an analysis in Incident Manager to review and improve the remediation process for similar issues. 

By default, OpsCenter integrates with Incident Manager. If Incident Manager isn't set up, the OpsCenter page displays a message prompting you to set it up. When Incident Manager creates an OpsItem, you can manage and remediate the OpsItem from OpsCenter. For instructions on creating an incident for an OpsItem, see [Creating an incident for an OpsItem](OpsCenter-working-with-OpsItems-create-an-incident.md). 

# Create OpsItems


After you set up OpsCenter, a tool in AWS Systems Manager, and integrate it with your AWS services, your AWS services automatically create OpsItems based on default rules, events, or alarms. 

You can view the statuses and severity levels of default Amazon EventBridge rules. If required, you can create or edit these rules from Amazon EventBridge. You can also view alarms from Amazon CloudWatch, and create or edit alarms. Using rules and alarms, you can configure events for which you want to generate OpsItems automatically.

When the system creates an OpsItem, it's in the **Open** status. You can change the status to **In progress** when you start investigation of the OpsItem and to **Resolved** after you remediate the OpsItem. For more information about how to configure alarms and rules in AWS services to create OpsItems and how to create OpsItems manually, see the following topics. 
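
The status flow described above can be sketched as follows (the helper and status names are illustrative; OpsCenter also lets you move an OpsItem back to an earlier status):

```python
# Illustrative sketch of the OpsItem status flow: Open -> In progress -> Resolved.
STATUS_FLOW = ["Open", "InProgress", "Resolved"]

def next_status(current):
    """Advance an OpsItem to the next status in the simple forward flow."""
    i = STATUS_FLOW.index(current)
    if i == len(STATUS_FLOW) - 1:
        raise ValueError("Resolved is the final status in this sketch")
    return STATUS_FLOW[i + 1]
```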

**Topics**
+ [Configure EventBridge rules to create OpsItems](OpsCenter-automatically-create-OpsItems-2.md)
+ [Configure CloudWatch alarms to create OpsItems](OpsCenter-create-OpsItems-from-CloudWatch-Alarms.md)
+ [Create OpsItems manually](OpsCenter-manually-create-OpsItems.md)

# Configure EventBridge rules to create OpsItems

When Amazon EventBridge receives an event, it creates a new OpsItem based on default rules. You can create a rule or edit an existing rule to set OpsCenter as the target of an EventBridge event. For information about how to create an event rule, see [Creating a rule for an AWS service](https://docs.aws.amazon.com/eventbridge/latest/userguide/create-eventbridge-rule.html) in the *Amazon EventBridge User Guide*.

**To configure an EventBridge rule to create OpsItems in OpsCenter**

1. Open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**.

1. On the **Rules** page, for **Event bus**, choose **default**.

1. For **Rules**, choose a rule by selecting the check box next to its name.

1. Select the name of the rule to open its details page. In **Rule details**, verify that **Status** is set to **Enabled**.
**Note**  
If required, you can update the status using **Edit** in the upper-right corner of the page.

1. Choose the **Targets** tab. 

1. On the **Targets** tab, choose **Edit**.

1. For **Target types**, select **AWS service**.

1. For **Select a target**, choose **Systems Manager OpsItem**.

1. For many target types, EventBridge needs permission to send events to the target. In these cases, EventBridge can create the AWS Identity and Access Management (IAM) role needed for your rule to run: 
   + To create an IAM role automatically, choose **Create a new role for this specific resource**.
   + To use an IAM role that you created to give EventBridge permission to create OpsItems in OpsCenter, choose **Use existing role**.

1. In **Additional settings**, for **Configure target input**, choose **Input Transformer**.

   You can use the **Input transformer** option to specify a deduplication string and other important information for OpsItems, such as title and severity.

1. Choose **Configure input transformer**.

1. In **Target input transformer**, for **Input path**, specify the values to parse from the triggering event. For example, to parse the start time, end time, and other details from the event that triggers the rule, use the following JSON.

   ```
   {
       "end-time": "$.detail.EndTime",
       "failure-cause": "$.detail.cause",
       "resources": "$.resources[0]",
       "source": "$.detail.source",
       "start-time": "$.detail.StartTime"
   }
   ```

1. For **Template**, specify the information to send to the target. For example, use the following JSON to pass information to OpsCenter. The information is used to create an OpsItem.
**Note**  
If the input template is in JSON format, the placeholder values in the template can't be enclosed in quotation marks. For example, the values for `resources`, `failure-cause`, `source`, `start-time`, and `end-time` in the following template aren't quoted. 

   ```
   {
       "title": "EBS snapshot copy failed",
       "description": "CloudWatch Event Rule SSMOpsItems-EBS-snapshot-copy-failed was triggered. Your EBS snapshot copy has failed. See below for more details.",
       "category": "Availability",
       "severity": "2",
       "source": "EC2",
       "operationalData": {
           "/aws/dedup": {
               "type": "SearchableString",
               "value": "{\"dedupString\":\"SSMOpsItems-EBS-snapshot-copy-failed\"}"
           },
           "/aws/automations": {
               "value": "[ { \"automationType\": \"AWS:SSM:Automation\", \"automationId\": \"AWS-CopySnapshot\" } ]"
           },
           "failure-cause": {
               "value": <failure-cause>
           },
           "source": {
               "value": <source>
           },
           "start-time": {
               "value": <start-time>
           },
           "end-time": {
               "value": <end-time>
           },
           "resources": {
               "value": <resources>
           }
       }
   }
   ```

   For more information about these fields, see [Transforming target input](https://docs.aws.amazon.com/eventbridge/latest/userguide/transform-input.html) in the *Amazon EventBridge User Guide*.

1. Choose **Confirm**.

1. Choose **Next**.

1. Choose **Next**.

1. Choose **Update rule**.

After an OpsItem is created from an event, you can view the event details by opening the OpsItem and scrolling down to the **Private operational data** section. For information about how to configure the options in an OpsItem, see [Manage OpsItems](OpsCenter-working-with-OpsItems.md).
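
The **Input path** mapping used in the procedure above can be illustrated with a minimal Python sketch. The event shown is hypothetical, and this resolver handles only the simple `$.a.b` and `$.a[0]` forms (EventBridge's input transformer supports a richer JSONPath syntax):

```python
def resolve(path, event):
    """Resolve a simple '$.a.b' or '$.a[0]' style path against an event dict."""
    node = event
    for part in path.lstrip("$.").split("."):
        if part.endswith("]"):            # list index, e.g. "resources[0]"
            key, index = part[:-1].split("[")
            node = node[key][int(index)]
        else:
            node = node[part]
    return node

# A hypothetical triggering event and the input paths from the procedure above.
event = {
    "resources": ["arn:aws:ec2:us-east-1:123456789012:instance/i-12345678"],
    "detail": {
        "StartTime": "2025-06-01T00:00:00Z",
        "EndTime": "2025-06-01T00:10:00Z",
        "cause": "snapshot copy timed out",
        "source": "EC2",
    },
}
input_path = {
    "end-time": "$.detail.EndTime",
    "failure-cause": "$.detail.cause",
    "resources": "$.resources[0]",
    "source": "$.detail.source",
    "start-time": "$.detail.StartTime",
}
extracted = {name: resolve(path, event) for name, path in input_path.items()}
```

Each extracted value is then substituted for the matching `<placeholder>` in the template before the OpsItem is created.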

# Configure CloudWatch alarms to create OpsItems

During the integrated setup of OpsCenter, a tool in AWS Systems Manager, you enable Amazon CloudWatch to automatically create OpsItems based on common alarms. You can create an alarm or edit an existing alarm to create OpsItems in OpsCenter. 

CloudWatch creates a new service-linked role in AWS Identity and Access Management (IAM) when you configure an alarm to create OpsItems. The new role is named `AWSServiceRoleForCloudWatchAlarms_ActionSSM`. For more information about CloudWatch service-linked roles, see [Using service-linked roles for CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-service-linked-roles.html) in the *Amazon CloudWatch User Guide*. 

When a CloudWatch alarm generates an OpsItem, the OpsItem title displays **CloudWatch alarm - '*alarm\_name*' is in ALARM state**. 

To view details about a specific OpsItem, choose the OpsItem and then choose the **Related resource details** tab. You can manually edit OpsItems to change details, such as the severity or category. However, when you edit the severity or category of an alarm, Systems Manager can't update the severity or category of OpsItems already created from that alarm. If you specified a deduplication string and the alarm has already created an OpsItem, the alarm won't create additional OpsItems, even if you edit the alarm in CloudWatch. However, if that OpsItem is resolved in OpsCenter, CloudWatch creates a new OpsItem.

For more information about configuring CloudWatch alarms, see the following topics.

**Topics**
+ [Configuring a CloudWatch alarm to create OpsItems (console)](OpsCenter-creating-or-editing-existing-alarm-console.md)
+ [Configuring an existing CloudWatch alarm to create OpsItems (programmatically)](OpsCenter-configuring-an-existing-alarm-programmatically.md)

# Configuring a CloudWatch alarm to create OpsItems (console)

You can manually create an alarm or update an existing alarm to create OpsItems from Amazon CloudWatch.

**To create a CloudWatch alarm and configure Systems Manager as a target of that alarm**

1. Complete steps 1–9 as specified in [Create a CloudWatch alarm based on a static threshold](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ConsoleAlarms.html) in the *Amazon CloudWatch User Guide*.

1. In the **Systems Manager action** section, choose **Add Systems Manager OpsCenter action**.

1. Choose **OpsItems**.

1. For **Severity**, choose a severity level from 1 to 4. 

1. (Optional) For **Category**, choose a category for the OpsItem.

1. Complete steps 11–13 as specified in [Create a CloudWatch alarm based on a static threshold](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ConsoleAlarms.html) in the *Amazon CloudWatch User Guide*.

1. Choose **Next** and complete the wizard.

**To edit an existing alarm and configure Systems Manager as a target of that alarm**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Alarms**.

1. Select the alarm, and then choose **Actions**, **Edit**.

1. (Optional) Change settings in the **Metrics** and **Conditions** sections, and then choose **Next**.

1. In the **Systems Manager** section, choose **Add Systems Manager OpsCenter action**. 

1. For **Severity**, choose a number. 
**Note**  
Severity is a user-defined value. You or your organization determine what each severity value means and any service-level agreement associated with each severity.

1. (Optional) For **Category**, choose an option. 

1. Choose **Next** and complete the wizard.

# Configuring an existing CloudWatch alarm to create OpsItems (programmatically)

You can configure Amazon CloudWatch alarms to create OpsItems programmatically by using the AWS Command Line Interface (AWS CLI), AWS CloudFormation templates, or Java code snippets.

**Topics**
+ [Before you begin](#OpsCenter-create-OpsItems-from-CloudWatch-Alarms-before-you-begin)
+ [Configuring CloudWatch alarms to create OpsItems (AWS CLI)](#OpsCenter-create-OpsItems-from-CloudWatch-Alarms-manually-configure-cli)
+ [Configuring CloudWatch alarms to create or update OpsItems (CloudFormation)](#OpsCenter-create-OpsItems-from-CloudWatch-Alarms-programmatically-configure-CloudFormation)
+ [Configuring CloudWatch alarms to create or update OpsItems (Java)](#OpsCenter-create-OpsItems-from-CloudWatch-Alarms-programmatically-configure-java)

## Before you begin


If you edit an existing alarm programmatically or create an alarm that creates OpsItems, you must specify an Amazon Resource Name (ARN). This ARN identifies Systems Manager OpsCenter as the target for OpsItems created from the alarm. You can customize the ARN so that OpsItems created from the alarm include specific information such as severity or category. Each ARN includes the information described in the following table.



| Parameter | Details | 
| --- | --- | 
|  `Region` (required)  |  The AWS Region where the alarm exists. For example: `us-west-2`. For information about AWS Regions where you can use OpsCenter, see [AWS Systems Manager endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/ssm.html).  | 
|  `account_ID` (required)  |  The same AWS account ID used to create the alarm. For example: `123456789012`. The account ID must be followed by a colon (`:`) and the parameter `opsitem` as shown in the following examples.  | 
|  `severity` (required)  |  A user-defined severity level for OpsItems created from the alarm. Valid values: `1`, `2`, `3`, `4`  | 
|  `Category` (optional)  |  A category for OpsItems created from the alarm. Valid values: `Availability`, `Cost`, `Performance`, `Recovery`, and `Security`.  | 

Create the ARN by using the following syntax. This ARN doesn't include the optional `Category` parameter.

```
arn:aws:ssm:Region:account_ID:opsitem:severity
```

Following is an example.

```
arn:aws:ssm:us-west-2:123456789012:opsitem:3
```

To create an ARN that uses the optional `Category` parameter, use the following syntax.

```
arn:aws:ssm:Region:account_ID:opsitem:severity#CATEGORY=category_name
```

Following is an example.

```
arn:aws:ssm:us-west-2:123456789012:opsitem:3#CATEGORY=Security
```
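
The two ARN forms can also be composed programmatically. Here is a minimal Python sketch (the helper name is ours, not an AWS API):

```python
def opsitem_target_arn(region, account_id, severity, category=None):
    """Build the alarm-action ARN that identifies OpsCenter as the target.

    severity must be 1-4; category, if given, should be one of the valid
    category names (Availability, Cost, Performance, Recovery, Security).
    """
    if severity not in (1, 2, 3, 4):
        raise ValueError("severity must be 1, 2, 3, or 4")
    arn = f"arn:aws:ssm:{region}:{account_id}:opsitem:{severity}"
    if category is not None:
        arn += f"#CATEGORY={category}"
    return arn
```

For example, `opsitem_target_arn("us-west-2", "123456789012", 3, "Security")` produces the second example ARN shown above.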

## Configuring CloudWatch alarms to create OpsItems (AWS CLI)


This command requires that you specify an ARN for the `alarm-actions` parameter. For information about how to create the ARN, see [Before you begin](#OpsCenter-create-OpsItems-from-CloudWatch-Alarms-before-you-begin).

**To configure a CloudWatch alarm to create OpsItems (AWS CLI)**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command to collect information about the alarm that you want to configure.

   ```
   aws cloudwatch describe-alarms --alarm-names "alarm name"
   ```

1. Run the following command to update an alarm. Replace each *example resource placeholder* with your own information.

   ```
   aws cloudwatch put-metric-alarm --alarm-name name \
   --alarm-description "description" \
   --metric-name name --namespace namespace \
   --statistic statistic --period value --threshold value \
   --comparison-operator value \
   --dimensions "dimensions" --evaluation-periods value \
       --alarm-actions arn:aws:ssm:Region:account_ID:opsitem:severity#CATEGORY=category_name \
   --unit unit
   ```

   Here's an example.

------
#### [ Linux & macOS ]

   ```
   aws cloudwatch put-metric-alarm --alarm-name cpu-mon \
   --alarm-description "Alarm when CPU exceeds 70 percent" \
   --metric-name CPUUtilization --namespace AWS/EC2 \
   --statistic Average --period 300 --threshold 70 \
   --comparison-operator GreaterThanThreshold \
   --dimensions "Name=InstanceId,Value=i-12345678" --evaluation-periods 2 \
   --alarm-actions arn:aws:ssm:us-east-1:123456789012:opsitem:3#CATEGORY=Security \
   --unit Percent
   ```

------
#### [ Windows ]

   ```
   aws cloudwatch put-metric-alarm --alarm-name cpu-mon ^
   --alarm-description "Alarm when CPU exceeds 70 percent" ^
   --metric-name CPUUtilization --namespace AWS/EC2 ^
   --statistic Average --period 300 --threshold 70 ^
   --comparison-operator GreaterThanThreshold ^
   --dimensions "Name=InstanceId,Value=i-12345678" --evaluation-periods 2 ^
   --alarm-actions arn:aws:ssm:us-east-1:123456789012:opsitem:3#CATEGORY=Security ^
   --unit Percent
   ```

------

## Configuring CloudWatch alarms to create or update OpsItems (CloudFormation)


This section includes AWS CloudFormation templates that you can use to configure CloudWatch alarms to automatically create or update OpsItems. Each template requires that you specify an ARN for the `AlarmActions` parameter. For information about how to create the ARN, see [Before you begin](#OpsCenter-create-OpsItems-from-CloudWatch-Alarms-before-you-begin).

**Metric alarm** – Use the following CloudFormation template to create or update a CloudWatch metric alarm. The alarm specified in this template monitors Amazon Elastic Compute Cloud (Amazon EC2) instance status checks. If the alarm enters the `ALARM` state, it creates an OpsItem in OpsCenter. 

```
    {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Parameters" : {
        "RecoveryInstance" : {
          "Description" : "The EC2 instance ID to associate this alarm with.",
          "Type" : "AWS::EC2::Instance::Id"
        }
      },
      "Resources": {
        "RecoveryTestAlarm": {
          "Type": "AWS::CloudWatch::Alarm",
          "Properties": {
            "AlarmDescription": "Run a recovery action when instance status check fails for 15 consecutive minutes.",
            "Namespace": "AWS/EC2" ,
            "MetricName": "StatusCheckFailed_System",
            "Statistic": "Minimum",
            "Period": "60",
            "EvaluationPeriods": "15",
            "ComparisonOperator": "GreaterThanThreshold",
            "Threshold": "0",
            "AlarmActions": [ {"Fn::Join" : ["", ["arn:", { "Ref" : "AWS::Partition" }, ":ssm:", { "Ref" : "AWS::Region" }, ":", { "Ref" : "AWS::AccountId" }, ":opsitem:3" ]]} ],
            "Dimensions": [{"Name": "InstanceId","Value": {"Ref": "RecoveryInstance"}}]
          }
        }
      }
    }
```

**Composite alarm** – Use the following CloudFormation template to create or update a composite alarm. A composite alarm consists of multiple metric alarms. If the alarm enters the `ALARM` state, it creates an OpsItem in OpsCenter.

```
"Resources":{
       "HighResourceUsage":{
          "Type":"AWS::CloudWatch::CompositeAlarm",
          "Properties":{
             "AlarmName":"HighResourceUsage",
             "AlarmRule":"(ALARM(HighCPUUsage) OR ALARM(HighMemoryUsage)) AND NOT ALARM(DeploymentInProgress)",
             "AlarmActions":[ "arn:aws:ssm:Region:account_ID:opsitem:severity#CATEGORY=category_name" ],
             "AlarmDescription":"Indicates that the system resource usage is high while no known deployment is in progress"
          },
          "DependsOn":[
             "DeploymentInProgress",
             "HighCPUUsage",
             "HighMemoryUsage"
          ]
       },
       "DeploymentInProgress":{
          "Type":"AWS::CloudWatch::CompositeAlarm",
          "Properties":{
             "AlarmName":"DeploymentInProgress",
             "AlarmRule":"FALSE",
             "AlarmDescription":"Manually updated to TRUE/FALSE to disable other alarms"
          }
       },
       "HighCPUUsage":{
          "Type":"AWS::CloudWatch::Alarm",
          "Properties":{
             "AlarmDescription":"CPU usage is high",
             "AlarmName":"HighCPUUsage",
             "ComparisonOperator":"GreaterThanThreshold",
             "EvaluationPeriods":1,
             "MetricName":"CPUUsage",
             "Namespace":"CustomNamespace",
             "Period":60,
             "Statistic":"Average",
             "Threshold":70,
             "TreatMissingData":"notBreaching"
          }
       },
       "HighMemoryUsage":{
          "Type":"AWS::CloudWatch::Alarm",
          "Properties":{
             "AlarmDescription":"Memory usage is high",
             "AlarmName":"HighMemoryUsage",
             "ComparisonOperator":"GreaterThanThreshold",
             "EvaluationPeriods":1,
             "MetricName":"MemoryUsage",
             "Namespace":"CustomNamespace",
             "Period":60,
             "Statistic":"Average",
             "Threshold":65,
             "TreatMissingData":"breaching"
          }
       }
    }
```

## Configuring CloudWatch alarms to create or update OpsItems (Java)


This section includes Java code snippets that you can use to configure CloudWatch alarms to automatically create or update OpsItems. Each snippet requires that you specify an ARN for the `validSsmActionStr` parameter. For information about how to create the ARN, see [Before you begin](#OpsCenter-create-OpsItems-from-CloudWatch-Alarms-before-you-begin).

**A specific alarm** – Use the following Java code snippet to create or update a CloudWatch alarm. The alarm specified in this snippet monitors Amazon EC2 CPU utilization. If the alarm enters the `ALARM` state, it creates an OpsItem in OpsCenter.

```
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.ComparisonOperator;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.PutMetricAlarmRequest;
import com.amazonaws.services.cloudwatch.model.PutMetricAlarmResult;
import com.amazonaws.services.cloudwatch.model.StandardUnit;
import com.amazonaws.services.cloudwatch.model.Statistic;

private void putMetricAlarmWithSsmAction(String alarmName, String instanceId) {
    final AmazonCloudWatch cw =
            AmazonCloudWatchClientBuilder.defaultClient();

    Dimension dimension = new Dimension()
            .withName("InstanceId")
            .withValue(instanceId);

    String validSsmActionStr = "arn:aws:ssm:Region:account_ID:opsitem:severity#CATEGORY=category_name";

    PutMetricAlarmRequest request = new PutMetricAlarmRequest()
            .withAlarmName(alarmName)
            .withComparisonOperator(
                    ComparisonOperator.GreaterThanThreshold)
            .withEvaluationPeriods(1)
            .withMetricName("CPUUtilization")
            .withNamespace("AWS/EC2")
            .withPeriod(60)
            .withStatistic(Statistic.Average)
            .withThreshold(70.0)
            // Alarm actions must be enabled for the OpsItem action to run.
            .withActionsEnabled(true)
            .withAlarmDescription(
                    "Alarm when server CPU utilization exceeds 70%")
            .withUnit(StandardUnit.Percent)
            .withDimensions(dimension)
            .withAlarmActions(validSsmActionStr);

    PutMetricAlarmResult response = cw.putMetricAlarm(request);
}
```

**Update all alarms** – Use the following Java code snippet to update all CloudWatch alarms in your AWS account to create OpsItems when an alarm enters the `ALARM` state. 

```
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.DescribeAlarmsRequest;
import com.amazonaws.services.cloudwatch.model.DescribeAlarmsResult;
import com.amazonaws.services.cloudwatch.model.MetricAlarm;

import java.util.Collections;

private void listMetricAlarmsAndAddSsmAction() {
    final AmazonCloudWatch cw = AmazonCloudWatchClientBuilder.defaultClient();

    boolean done = false;
    DescribeAlarmsRequest request = new DescribeAlarmsRequest();

    String validSsmActionStr = "arn:aws:ssm:Region:account_ID:opsitem:severity#CATEGORY=category_name";

    while(!done) {

        DescribeAlarmsResult response = cw.describeAlarms(request);

        for(MetricAlarm alarm : response.getMetricAlarms()) {
            // Assumes the metric alarm has no other alarm actions. Note that
            // setAlarmActions only updates the local object returned by
            // DescribeAlarms; to persist the change, resubmit the alarm
            // configuration with cw.putMetricAlarm(...).
            alarm.setAlarmActions(Collections.singletonList(validSsmActionStr));
        }

        request.setNextToken(response.getNextToken());

        if(response.getNextToken() == null) {
            done = true;
        }
    }
}
```

# Create OpsItems manually

When you find an operational issue, you can manually create an OpsItem from OpsCenter, a tool in AWS Systems Manager, to manage and resolve the issue. 

If you set up OpsCenter for cross-account administration, a Systems Manager delegated administrator or AWS Organizations management account can create OpsItems for member accounts. For more information, see [(Optional) Manually set up OpsCenter to centrally manage OpsItems across accounts](OpsCenter-getting-started-multiple-accounts.md).

You can create OpsItems by using the AWS Systems Manager console, the AWS Command Line Interface (AWS CLI), or AWS Tools for Windows PowerShell.

**Topics**
+ [Creating OpsItems manually (console)](OpsCenter-creating-OpsItems-console.md)
+ [Creating OpsItems manually (AWS CLI)](OpsCenter-creating-OpsItems-CLI.md)
+ [Creating OpsItems manually (PowerShell)](OpsCenter-creating-OpsItems-Powershell.md)

# Creating OpsItems manually (console)

You can manually create OpsItems using the AWS Systems Manager console. When you create an OpsItem, it's displayed in OpsCenter for your account. If you set up OpsCenter for cross-account administration, OpsCenter provides the delegated administrator or management account with the option to create OpsItems for selected member accounts. For more information, see [(Optional) Manually set up OpsCenter to centrally manage OpsItems across accounts](OpsCenter-getting-started-multiple-accounts.md).

**To create an OpsItem using the AWS Systems Manager console**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose **Create OpsItem**. If you don't see this button, choose the **OpsItems** tab, and then choose **Create OpsItem**.

1.  (Optional) Choose **Other account**, and then choose the account where you want to create the OpsItem. 
**Note**  
This step is required if you're creating OpsItems for a member account. 

1. For **Title**, enter a descriptive name to help you understand the purpose of the OpsItem.

1. For **Source**, enter the type of impacted AWS resource or other source information to help users understand the origin of the OpsItem.
**Note**  
You can't edit the **Source** field after you create the OpsItem.

1. (Optional) For **Priority**, choose the priority level.

1. (Optional) For **Severity**, choose the severity level.

1. (Optional) For **Category**, choose a category.

1. For **Description**, enter information about this OpsItem including (if applicable) steps for reproducing the issue. 
**Note**  
The console supports most markdown formatting in the OpsItem description field. For more information, see [Using Markdown in the Console](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/aws-markdown.html) in the *AWS Management Console Getting Started Guide*.

1. For **Deduplication string**, enter words that the system can use to check for duplicate OpsItems. For more information about deduplication strings, see [Managing duplicate OpsItems](OpsCenter-working-deduplication.md). 

1. (Optional) For **Notifications**, specify the Amazon Resource Name (ARN) of the Amazon SNS topic where you want notifications sent when this OpsItem is updated. You must specify an Amazon SNS ARN that is in the same AWS Region as the OpsItem.

1. (Optional) For **Related resources**, choose **Add** to specify the ID or ARN of the impacted resource and any related resources.

1. Choose **Create OpsItem**.

If successful, the page displays the OpsItem. When a delegated administrator or management account creates an OpsItem for selected member accounts, the new OpsItems are displayed in the OpsCenter of the administrator and members accounts. For information about how to configure the options in an OpsItem, see [Manage OpsItems](OpsCenter-working-with-OpsItems.md).

# Creating OpsItems manually (AWS CLI)

The following procedure describes how to create an OpsItem by using the AWS Command Line Interface (AWS CLI).

**To create an OpsItem using the AWS CLI**

1. Install and configure the AWS Command Line Interface (AWS CLI), if you haven't already.

   For information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Open the AWS CLI and run the following command to create an OpsItem. Replace each *example resource placeholder* with your own information.

   ```
   aws ssm create-ops-item \
       --title "Descriptive_title" \
       --description "Information_about_the_issue" \
       --priority Number_between_1_and_5 \
       --source Source_of_the_issue \
       --operational-data Up_to_20_KB_of_data_or_path_to_JSON_file \
       --notifications Arn="SNS_ARN_in_same_Region" \
       --tags "Key=key_name,Value=a_value"
   ```

   **Specify operational data from a file**

   When you create an OpsItem, you can specify operational data from a file. The file must be a JSON file, and the contents of the file must use the following format.

   ```
   {
     "key_name": {
       "Type": "SearchableString",
       "Value": "Up to 20 KB of data"
     }
   }
   ```

   Here is an example.

   ```
   aws ssm create-ops-item \
       --title "EC2 instance disk full" \
       --description "Log clean up may have failed which caused the disk to be full" \
       --priority 2 \
       --source ec2 \
       --operational-data file:///Users/TestUser1/Desktop/OpsItems/opsData.json \
       --notifications Arn="arn:aws:sns:us-west-1:12345678:TestUser1" \
       --tags "Key=EC2,Value=Production"
   ```
**Note**  
For information about how to enter JSON-formatted parameters on the command line on different local operating systems, see [Using quotation marks with strings in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-quoting-strings.html) in the *AWS Command Line Interface User Guide*.

   The system returns information like the following.

   ```
   {
       "OpsItemId": "oi-1a2b3c4d5e6f"
   }
   ```

1. Run the following command to view details about the OpsItem that you created.

   ```
   aws ssm get-ops-item --ops-item-id ID
   ```

   The system returns information like the following.

   ```
   {
       "OpsItem": {
           "CreatedBy": "arn:aws:iam::12345678:user/TestUser",
           "CreatedTime": 1558386334.995,
           "Description": "Log clean up may have failed which caused the disk to be full",
           "LastModifiedBy": "arn:aws:iam::12345678:user/TestUser",
           "LastModifiedTime": 1558386334.995,
           "Notifications": [
               {
                   "Arn": "arn:aws:sns:us-west-1:12345678:TestUser"
               }
           ],
           "Priority": 2,
           "RelatedOpsItems": [],
           "Status": "Open",
           "OpsItemId": "oi-1a2b3c4d5e6f",
           "Title": "EC2 instance disk full",
           "Source": "ec2",
           "OperationalData": {
               "EC2": {
                   "Value": "12345",
                   "Type": "SearchableString"
               }
           }
       }
   }
   ```

1. Run the following command to update the OpsItem. This command changes the status from `Open` (the default) to `InProgress`.

   ```
   aws ssm update-ops-item --ops-item-id ID --status InProgress
   ```

   The command has no output.

1. Run the following command again to verify that the status changed to `InProgress`.

   ```
   aws ssm get-ops-item --ops-item-id ID
   ```
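As the quoting note above suggests, generating the JSON file programmatically avoids per-shell escaping entirely. A minimal sketch, with an illustrative file name and key:

```python
import json

# Build the operational data in code instead of escaping it on the command line.
# The key name "failureLogs" is illustrative.
ops_data = {
    "failureLogs": {
        "Type": "SearchableString",
        "Value": "Log clean up may have failed which caused the disk to be full",
    }
}

with open("opsData.json", "w") as f:
    json.dump(ops_data, f, indent=2)

# The file can then be passed to the CLI without any shell quoting:
#   aws ssm create-ops-item ... --operational-data file://opsData.json
```

The same file works unchanged on Linux, macOS, and Windows, which sidesteps the per-shell quoting differences described in the note.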

## Examples of creating an OpsItem


The following code examples show you how to create an OpsItem by using the AWS CLI on Linux, macOS, or Windows Server. 

**Linux or macOS**

The following command creates an OpsItem when an Amazon Elastic Compute Cloud (Amazon EC2) instance disk is full. 

```
aws ssm create-ops-item \
    --title "EC2 instance disk full" \
    --description "Log clean up may have failed which caused the disk to be full" \
    --priority 2 \
    --source ec2 \
    --operational-data '{"EC2":{"Value":"12345","Type":"SearchableString"}}' \
    --notifications Arn="arn:aws:sns:us-west-1:12345678:TestUser1" \
    --tags "Key=EC2,Value=ProductionServers"
```

The following command uses the `/aws/resources` key in `OperationalData` to create an OpsItem with an Amazon DynamoDB related resource.

```
aws ssm create-ops-item \
    --title "EC2 instance disk full" \
    --description "Log clean up may have failed which caused the disk to be full" \
    --priority 2 \
    --source ec2 \
    --operational-data '{"/aws/resources":{"Value":"[{\"arn\": \"arn:aws:dynamodb:us-west-2:12345678:table/OpsItems\"}]","Type":"SearchableString"}}' \
    --notifications Arn="arn:aws:sns:us-west-2:12345678:TestUser"
```

The following command uses the `/aws/automations` key in `OperationalData` to create an OpsItem that specifies the `AWS-ASGEnterStandby` document as an associated Automation runbook.

```
aws ssm create-ops-item \
    --title "EC2 instance disk full" \
    --description "Log clean up may have failed which caused the disk to be full" \
    --priority 2 \
    --source ec2 \
    --operational-data '{"/aws/automations":{"Value":"[{\"automationId\": \"AWS-ASGEnterStandby\", \"automationType\": \"AWS::SSM::Automation\"}]","Type":"SearchableString"}}' \
    --notifications Arn="arn:aws:sns:us-west-2:12345678:TestUser"
```

**Windows**

The following command creates an OpsItem when an Amazon Relational Database Service (Amazon RDS) instance is not responding. 

```
aws ssm create-ops-item ^
    --title "RDS instance not responding" ^
    --description "RDS instance not responding to ping" ^
    --priority 1 ^
    --source RDS ^
    --operational-data={\"RDS\":{\"Value\":\"abcd\",\"Type\":\"SearchableString\"}} ^
    --notifications Arn="arn:aws:sns:us-west-1:12345678:TestUser1" ^
    --tags "Key=RDS,Value=ProductionServers"
```

The following command uses the `/aws/resources` key in `OperationalData` to create an OpsItem with an Amazon EC2 instance related resource.

```
aws ssm create-ops-item ^
    --title "EC2 instance disk full" ^
    --description "Log clean up may have failed which caused the disk to be full" ^
    --priority 2 ^
    --source ec2 ^
    --operational-data={\"/aws/resources\":{\"Value\":\"[{\\\"arn\\\":\\\"arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0\\\"}]\",\"Type\":\"SearchableString\"}}
```

The following command uses the `/aws/automations` key in `OperationalData` to create an OpsItem that specifies the `AWS-RestartEC2Instance` runbook as an associated Automation runbook.

```
aws ssm create-ops-item ^
    --title "EC2 instance disk full" ^
    --description "Log clean up may have failed which caused the disk to be full" ^
    --priority 2 ^
    --source ec2 ^
    --operational-data={\"/aws/automations\":{\"Value\":\"[{\\\"automationId\\\":\\\"AWS-RestartEC2Instance\\\",\\\"automationType\\\":\\\"AWS::SSM::Automation\\\"}]\",\"Type\":\"SearchableString\"}}
```
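The cmd.exe escaping in these examples encodes an ordinary JSON document whose `Value` field is itself a JSON-encoded list, which is why the inner quotes need double escaping. Generating that document with a JSON library and passing it as a file sidesteps the escaping; a sketch, with an illustrative file name:

```python
import json

# The escaped cmd.exe payload above is just this JSON document. The Value
# field is itself a JSON-encoded string, built here with a nested dumps call.
payload = {
    "/aws/resources": {
        "Value": json.dumps(
            [{"arn": "arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0"}]
        ),
        "Type": "SearchableString",
    }
}

with open("resources.json", "w") as f:
    json.dump(payload, f, indent=2)

# Pass the file instead of the inline string:
#   aws ssm create-ops-item ... --operational-data file://resources.json
```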

# Creating OpsItems manually (PowerShell)

The following procedure describes how to create an OpsItem by using AWS Tools for Windows PowerShell. 

**To create an OpsItem using AWS Tools for Windows PowerShell**

1. Open AWS Tools for Windows PowerShell and run the following command to specify your credentials. 

   ```
   Set-AWSCredentials -AccessKey key-name -SecretKey key-name
   ```

1. Run the following command to set the AWS Region for your PowerShell session.

   ```
   Set-DefaultAWSRegion -Region Region
   ```

1. Run the following command to create a new OpsItem. Replace each *example resource placeholder* with your own information. This command specifies a Systems Manager Automation runbook for remediating this OpsItem. 

   ```
   $opsItem = New-Object Amazon.SimpleSystemsManagement.Model.OpsItemDataValue
   $opsItem.Type = [Amazon.SimpleSystemsManagement.OpsItemDataType]::SearchableString 
   $opsItem.Value = '[{\"automationId\":\"runbook_name\",\"automationType\":\"AWS::SSM::Automation\"}]'
   $newHash = @{"/aws/automations"=[Amazon.SimpleSystemsManagement.Model.OpsItemDataValue]$opsItem}
   
   New-SSMOpsItem `
       -Title "title" `
       -Description "description" `
       -Priority priority_number `
       -Source AWS_service `
       -OperationalData $newHash
   ```

   If successful, the command outputs the ID of the new OpsItem.

The following example specifies the Amazon Resource Name (ARN) of an impaired Amazon Elastic Compute Cloud (Amazon EC2) instance.

```
$opsItem = New-Object Amazon.SimpleSystemsManagement.Model.OpsItemDataValue
$opsItem.Type = [Amazon.SimpleSystemsManagement.OpsItemDataType]::SearchableString 
$opsItem.Value = '[{\"arn\":\"arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0\"}]'
$newHash = @{"/aws/resources"=[Amazon.SimpleSystemsManagement.Model.OpsItemDataValue]$opsItem}
New-SSMOpsItem -Title "EC2 instance disk full" -Description "Log clean up may have failed which caused the disk to be full" -Priority 2 -Source ec2 -OperationalData $newHash
```

# Manage OpsItems


OpsCenter, a tool in AWS Systems Manager, tracks OpsItems from their creation to resolution. If you set up OpsCenter for cross-account administration, a delegated administrator or management account can manage OpsItems from their account. For more information, see [(Optional) Manually set up OpsCenter to centrally manage OpsItems across accounts](OpsCenter-getting-started-multiple-accounts.md). 

You can view and manage OpsItems by using the following pages in the Systems Manager console: 
+ **Summary** – Displays counts of open and in-progress OpsItems, counts of OpsItems by source and age, and operational insights. You can filter OpsItems by source and status. 
+ **OpsItems** – Displays a list of OpsItems with multiple fields of information, such as title, ID, priority, description, source, and the date and time of the last update. Using this page, you can manually create OpsItems, configure sources, change the status of an OpsItem, and filter OpsItems by new incidents. You can choose an OpsItem to display its **OpsItem details** page. 
+ **OpsItem details** – Provides detailed insights and tools that you can use to manage an OpsItem. The **OpsItem details** page has the following tabs: 
  + **Overview** – Displays related resources, runbooks that ran in the last 30 days, and a list of available runbooks that you can run. You can also view similar OpsItems, add operational data, and add related OpsItems.
  + **Related resource details** – Displays information about the resource from several AWS services. Expand the **Resource details** section to view information about this resource as provided by the AWS service that hosts it. You can also toggle through other related resources associated with this OpsItem by using the **Related resources** list. 

For more information about how to manage OpsItems, see the following topics.

**Topics**
+ [Viewing details of an OpsItem](OpsCenter-working-with-OpsItems-viewing-details.md)
+ [Editing an OpsItem](OpsCenter-working-with-OpsItems-editing-details.md)
+ [Adding related resources to an OpsItem](OpsCenter-working-with-OpsItems-adding-related-resources.md)
+ [Adding related OpsItems to an OpsItem](OpsCenter-working-with-OpsItems-adding-related-OpsItems.md)
+ [Adding operational data to an OpsItem](OpsCenter-working-with-OpsItems-adding-operational-data.md)
+ [Creating an incident for an OpsItem](OpsCenter-working-with-OpsItems-create-an-incident.md)
+ [Managing duplicate OpsItems](OpsCenter-working-deduplication.md)
+ [Analyzing operational insights to reduce OpsItems](OpsCenter-working-operational-insights.md)
+ [Viewing OpsCenter logs and reports](OpsCenter-logging-auditing.md)

# Viewing details of an OpsItem

To get a comprehensive view of an OpsItem, use the **OpsItem details** page in the OpsCenter console. The **Overview** page displays the following information: 
+ **OpsItem details** – Displays general information for the selected OpsItem.
+ **Related Resources** – A related resource is the impacted resource or the resource that initiated the event that created the OpsItem. 
+ **Automation executions in the last 30 days** – A list of runbooks that ran in the last 30 days. 
+ **Runbooks** – You can choose a runbook from a list of available runbooks. 
+ **Similar OpsItems** – This is a system-generated list of OpsItems that might be related or of interest to you. To generate the list, the system scans the titles and descriptions of all OpsItems and returns OpsItems that use similar words. 
+ **Operational data** – Operational data is custom data that provides useful reference details about the OpsItem. For example, you can specify log files, error strings, license keys, troubleshooting tips, or other relevant data.
+ **Related OpsItems** – You can specify the IDs of OpsItems that are in some way related to the current OpsItem.
+ **Related Resource Details** – Displays data providers, including Amazon CloudWatch metrics and alarms, AWS CloudTrail logs, and details from AWS Config.

**To view details of an OpsItem**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose an OpsItem to view its details.

# Editing an OpsItem

The **OpsItem details** section includes information about an OpsItem, including the description, title, source, OpsItem ID, and status. You can edit a single OpsItem, or you can select multiple OpsItems and edit the following fields: **Status**, **Priority**, **Severity**, and **Category**. 

When Amazon EventBridge creates an OpsItem, it populates the **Title**, **Source**, and **Description** fields. You can edit the **Title** and **Description** fields, but you can't edit the **Source** field.

**Note**  
The console supports most markdown formatting in the OpsItem description field. For more information, see [Using Markdown in the Console](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/aws-markdown.html) in the *AWS Management Console Getting Started Guide*.

Generally, you can edit the following configurable data for an OpsItem:
+ **Title** – Name of the OpsItem. The source creates the title of the OpsItem. 
+ **Description** – Information about this OpsItem including (if applicable) steps for reproducing the issue.
+ **Status** – Status of an OpsItem. For a list of valid status values, see [OpsItem Status](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_OpsItem.html#systemsmanager-Type-OpsItem-Status) in the *AWS Systems Manager API Reference*.
+ **Priority** – Priority of an OpsItem can be between 1 and 5. We recommend that your organization determine what each priority level means and a corresponding service level agreement for each level. 
+ **Severity** – Severity of an OpsItem can be between 1 to 4, where 1 is critical, 2 is high, 3 is medium, and 4 is low. 
+ **Category** – Category of an OpsItem can be availability, cost, performance, recovery, or security. 
+ **Notifications** – When you edit an OpsItem, you can specify the Amazon Resource Name (ARN) of an Amazon Simple Notification Service (Amazon SNS) topic in the **Notifications** field. By specifying an ARN, you ensure that all stakeholders receive a notification when the OpsItem is edited, including a status change. For more information, see the [Amazon Simple Notification Service Developer Guide](https://docs.aws.amazon.com/sns/latest/dg/).
**Important**  
The Amazon SNS topic must exist in the same AWS Region as the OpsItem. If the topic and the OpsItem are in different Regions, the system returns an error.

OpsCenter has bidirectional integration with AWS Security Hub CSPM. When you update an OpsItem status and severity related to a security finding, those changes are automatically sent to Security Hub CSPM to ensure you always see the latest and correct information.

When an OpsItem is created from a Security Hub CSPM finding, Security Hub CSPM metadata is automatically added to the operational data field of the OpsItem. If this metadata is deleted, the bidirectional updates no longer function.

**To edit OpsItem details**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose an OpsItem ID to open the details page or choose multiple OpsItems. If you choose multiple OpsItems, you can only edit the status, priority, severity, or category. If you edit multiple OpsItems, OpsCenter updates and saves your changes as soon as you choose the new status, priority, severity, or category.

1. In the **OpsItem details** section, choose **Edit**.

1. Edit the details of the OpsItem according to the requirements and guidelines specified by your organization.

1. When you're finished, choose **Save**.

# Adding related resources to an OpsItem

Each OpsItem includes a **Related resources** section that lists the Amazon Resource Name (ARN) of the related resource. A *related resource* is the impacted AWS resource that needs to be investigated. 

If Amazon EventBridge creates the OpsItem, the system automatically populates the OpsItem with the ARN of the resource. You can manually specify ARNs of related resources. For certain ARN types, OpsCenter automatically creates a deep link that displays details about the resource directly in the OpsCenter console. For example, if you specify the ARN of an Amazon Elastic Compute Cloud (Amazon EC2) instance as a related resource, then OpsCenter pulls in details about that EC2 instance. This allows you to view detailed information about your impacted AWS resources without having to leave OpsCenter. 

**To view and add related resources to an OpsItem**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose the **OpsItems** tab.

1. Choose an OpsItem ID.  
![\[A new OpsItem on the OpsCenter Overview page.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/OpsItems_working_scenario_1.png)

1. To view information about the impacted resource, choose the **Related resources details** tab.  
![\[Viewing the Related resource details tab for an OpsItem.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/OpsItems_working_scenario_1_5.png)

   This tab displays information about the resource from several AWS services. Expand the **Resource details** section to view information about this resource as provided by the AWS service that hosts it. You can also toggle through other related resources associated with this OpsItem by using the **Related resources** list.

1. To add additional related resources, choose the **Overview** tab.

1. In the **Related resources** section, choose **Add**.

1. For **Resource type**, choose a resource from the list.

1. For **Resource ID**, enter either the ID or the Amazon Resource Name (ARN). The type of information you choose depends on the resource that you chose in the previous step.

**Note**  
You can manually add the ARNs of additional related resources. Each OpsItem can list a maximum of 100 related resource ARNs.

The following table lists the resource types that automatically create deep links to related resources.


**Supported resource types**  

| Resource name | ARN format | 
| --- | --- | 
|  AWS Certificate Manager certificate  |  <pre>arn:aws:acm:region:account-id:certificate/certificate-id</pre>  | 
|  Amazon EC2 Auto Scaling group  |  <pre>arn:aws:autoscaling:region:account-id:autoScalingGroup:groupid:autoScalingGroupName/groupfriendlyname</pre>  | 
|  Amazon CloudFront distribution  |  <pre>arn:aws:cloudfront::account-id:*</pre>  | 
|  AWS CloudFormation stack  |  <pre>arn:aws:cloudformation:region:account-id:stack/stackname/additionalidentifier</pre>  | 
|  Amazon CloudWatch alarm  |  <pre>arn:aws:cloudwatch:region:account-id:alarm:alarm-name</pre>  | 
|  AWS CloudTrail trail  |  <pre>arn:aws:cloudtrail:region:account-id:trail/trailname</pre>  | 
|  AWS CodeBuild project  |  <pre>arn:aws:codebuild:region:account-id:resourcetype/resource</pre>  | 
|  AWS CodePipeline  |  <pre>arn:aws:codepipeline:region:account-id:resource-specifier</pre>  | 
|  Amazon DevOps Guru insight  |  <pre>arn:aws:devops-guru:region:account-id:insight/proactive or reactive/resource-id</pre>  | 
|  Amazon DynamoDB table  |  <pre>arn:aws:dynamodb:region:account-id:table/tablename</pre>  | 
|  Amazon Elastic Compute Cloud (Amazon EC2) customer gateway  |  <pre>arn:aws:ec2:region:account-id:customer-gateway/cgw-id</pre>  | 
|  Amazon EC2 elastic IP  |  <pre>arn:aws:ec2:region:account-id:eip/eipalloc-id</pre>  | 
|  Amazon EC2 Dedicated Host  |  <pre>arn:aws:ec2:region:account-id:dedicated-host/host-id</pre>  | 
|  Amazon EC2 instance  |  <pre>arn:aws:ec2:region:account-id:instance/instance-id</pre>  | 
|  Amazon EC2 internet gateway  |  <pre>arn:aws:ec2:region:account-id:internet-gateway/igw-id</pre>  | 
|  Amazon EC2 network access control list (network ACL)  |  <pre>arn:aws:ec2:region:account-id:network-acl/nacl-id</pre>  | 
|  Amazon EC2 network interface  |  <pre>arn:aws:ec2:region:account-id:network-interface/eni-id</pre>  | 
|  Amazon EC2 route table  |  <pre>arn:aws:ec2:region:account-id:route-table/route-table-id</pre>  | 
|  Amazon EC2 security group  |  <pre>arn:aws:ec2:region:account-id:security-group/security-group-id</pre>  | 
|  Amazon EC2 subnet  |  <pre>arn:aws:ec2:region:account-id:subnet/subnet-id</pre>  | 
|  Amazon EC2 volume  |  <pre>arn:aws:ec2:region:account-id:volume/volume-id</pre>  | 
|  Amazon EC2 VPC  |  <pre>arn:aws:ec2:region:account-id:vpc/vpc-id</pre>  | 
|  Amazon EC2 VPN connection  |  <pre>arn:aws:ec2:region:account-id:vpn-connection/vpn-id</pre>  | 
|  Amazon EC2 VPN gateway  |  <pre>arn:aws:ec2:region:account-id:vpn-gateway/vgw-id</pre>  | 
|  AWS Elastic Beanstalk application  |  <pre>arn:aws:elasticbeanstalk:region:account-id:application/applicationname</pre>  | 
|  Elastic Load Balancing (Classic Load Balancer)  |  <pre>arn:aws:elasticloadbalancing:region:account-id:loadbalancer/name</pre>  | 
|  Elastic Load Balancing (Application Load Balancer)  |  <pre>arn:aws:elasticloadbalancing:region:account-id:loadbalancer/app/load-balancer-name/load-balancer-id</pre>  | 
|  Elastic Load Balancing (Network Load Balancer)  |  <pre>arn:aws:elasticloadbalancing:region:account-id:loadbalancer/net/load-balancer-name/load-balancer-id</pre>  | 
|  AWS Identity and Access Management (IAM) group  |  <pre>arn:aws:iam::account-id:group/group-name</pre>  | 
|  IAM policy  |  <pre>arn:aws:iam::account-id:policy/policy-name</pre>  | 
|  IAM role  |  <pre>arn:aws:iam::account-id:role/role-name</pre>  | 
|  IAM user  |  <pre>arn:aws:iam::account-id:user/user-name</pre>  | 
|  AWS Lambda function  |  <pre>arn:aws:lambda:region:account-id:function:function-name</pre>  | 
|  Amazon Relational Database Service (Amazon RDS) cluster  |  <pre>arn:aws:rds:region:account-id:cluster:db-cluster-name</pre>  | 
|  Amazon RDS database instance  |  <pre>arn:aws:rds:region:account-id:db:db-instance-name</pre>  | 
|  Amazon RDS subscription  |  <pre>arn:aws:rds:region:account-id:es:subscription-name</pre>  | 
|  Amazon RDS security group  |  <pre>arn:aws:rds:region:account-id:secgrp:security-group-name</pre>  | 
|  Amazon RDS cluster snapshot  |  <pre>arn:aws:rds:region:account-id:cluster-snapshot:cluster-snapshot-name</pre>  | 
|  Amazon RDS subnet group  |  <pre>arn:aws:rds:region:account-id:subgrp:subnet-group-name</pre>  | 
|  Amazon Redshift cluster  |  <pre>arn:aws:redshift:region:account-id:cluster:cluster-name</pre>  | 
|  Amazon Redshift parameter group  |  <pre>arn:aws:redshift:region:account-id:parametergroup:parameter-group-name</pre>  | 
|  Amazon Redshift security group  |  <pre>arn:aws:redshift:region:account-id:securitygroup:security-group-name</pre>  | 
|  Amazon Redshift cluster snapshot  |  <pre>arn:aws:redshift:region:account-id:snapshot:cluster-name/snapshot-name</pre>  | 
|  Amazon Redshift subnet group  |  <pre>arn:aws:redshift:region:account-id:subnetgroup:subnet-group-name</pre>  | 
|  Amazon Simple Storage Service (Amazon S3) bucket  |  <pre>arn:aws:s3:::bucket_name</pre>  | 
|  AWS Config recording of AWS Systems Manager managed node inventory  |  <pre>arn:aws:ssm:region:account-id:managed-instance-inventory/node_id</pre>  | 
|  Systems Manager State Manager association  |  <pre>arn:aws:ssm:region:account-id:association/association_ID</pre>  | 
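Whether a deep link can be created comes down to the service and resource-type segments of ARNs like those in the table. A minimal sketch of splitting an ARN into its standard segments (illustrative only, not the console's actual logic):

```python
def parse_arn(arn: str) -> dict:
    """Split an ARN into its standard segments:
    arn:partition:service:region:account-id:resource."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError(f"not an ARN: {arn}")
    return {
        "partition": parts[1],
        "service": parts[2],
        "region": parts[3],       # empty for global services such as IAM and S3
        "account_id": parts[4],
        "resource": parts[5],     # e.g. "instance/i-1234567890abcdef0"
    }

parsed = parse_arn("arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0")
print(parsed["service"])   # ec2
print(parsed["resource"])  # instance/i-1234567890abcdef0
```

Note that some formats in the table (for example, S3 buckets and IAM resources) leave the region segment, and sometimes the account segment, empty.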

# Adding related OpsItems to an OpsItem

By using the **Related OpsItems** section of the **OpsItem details** page, you can investigate operational issues and provide context for an issue. OpsItems can be related in different ways, including a parent-child relationship between OpsItems, a root cause, or a duplicate. You can associate one OpsItem with another to display it in the **Related OpsItems** section. You can specify a maximum of 10 IDs for other OpsItems that are related to the current OpsItem. 

![\[Viewing related OpsItems.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/OpsItems_working_scenario_4.png)


**To add a related OpsItem**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose an OpsItem ID to open the details page.

1. In the **Related OpsItem** section, choose **Add**.

1. For **OpsItem ID**, specify an ID.

1. Choose **Add**.

# Adding operational data to an OpsItem

Operational data is custom data that provides useful reference details about an OpsItem. You can enter multiple key-value pairs of operational data. For example, you can specify log files, error strings, license keys, troubleshooting tips, or other relevant data. The maximum length of the key can be 128 characters, and the maximum size of the value can be 20 KB. 

![\[Viewing operational data for an OpsItem.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/OpsItems_working_scenario_7.png)


You can make the data searchable by other users in the account, or you can restrict search access. Searchable data means that all users with access to the OpsItem **Overview** page (as provided by the [DescribeOpsItems](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeOpsItems.html) API operation) can view and search on the specified data. Operational data that isn't searchable is only viewable by users who have access to the OpsItem (as provided by the [GetOpsItem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetOpsItem.html) API operation).
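The searchable/restricted distinction corresponds to the `Type` of each operational data value in the API (`SearchableString` or `String`). A small sketch of building such a map, with illustrative key names:

```python
import json

# "SearchableString" values are visible to callers of DescribeOpsItems and can
# be searched on; "String" values are returned only by GetOpsItem.
operational_data = {
    "failureLogs": {"Type": "SearchableString", "Value": "Disk usage at 97%"},
    "licenseKey": {"Type": "String", "Value": "EXAMPLE-KEY-123"},  # restricted
}

print(json.dumps(operational_data, indent=2))
```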

**To add operational data to an OpsItem**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose an OpsItem ID to open its details page.

1. Expand **Operational data**.

1. If no operational data exists for the OpsItem, choose **Add**. If operational data already exists for the OpsItem, choose **Manage**.

   After you create operational data, you can edit the key and the value, remove the operational data, or add additional key-value pairs by choosing **Manage**. 

1. For **Key**, specify a word or words to help users understand the purpose of the data.
**Important**  
Operational data keys *can't* begin with the following: `amazon`, `aws`, `amzn`, `ssm`, `/amazon`, `/aws`, `/amzn`, `/ssm`.

1. For **Value**, specify the data.

1. Choose **Save**.
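The reserved-prefix and 128-character rules above can be checked client-side before saving. A minimal sketch with an illustrative function name (system keys such as `/aws/resources` are populated through the API and are exempt from this restriction on custom keys):

```python
# Client-side check mirroring the reserved-prefix and length rules for
# custom operational data keys.
RESERVED_PREFIXES = ("amazon", "aws", "amzn", "ssm", "/amazon", "/aws", "/amzn", "/ssm")

def is_valid_ops_data_key(key: str) -> bool:
    """Reject keys that begin with a reserved prefix or exceed 128 characters."""
    lowered = key.lower()
    if any(lowered.startswith(prefix) for prefix in RESERVED_PREFIXES):
        return False
    return 0 < len(key) <= 128

print(is_valid_ops_data_key("failureLogs"))  # True
print(is_valid_ops_data_key("aws:source"))   # False: reserved prefix
```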

**Note**  
You can filter OpsItems by using the **Operational data** operator on the **OpsItems** page. In the **Search** box, choose **Operational data**, and then enter a key-value pair in JSON. You must enter the key-value pair by using the following format: `{"key":"key_name","value":"a_value"}`

# Creating an incident for an OpsItem

Use the following procedure to manually create an incident for an OpsItem to track and manage it in AWS Systems Manager Incident Manager, which is a tool in AWS Systems Manager. An *incident* is any unplanned interruption or reduction in quality of services. For more information about Incident Manager, see [Integrate OpsCenter with other AWS services](OpsCenter-applications-that-integrate.md).

**To manually create an incident for an OpsItem**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. If Incident Manager created an OpsItem for you, choose it and go to step 5. If not, choose **Create OpsItem** and complete the form. If you don't see this button, choose the **OpsItems** tab, and then choose **Create OpsItem**.

1. If you created an OpsItem, open it.

1. Choose **Start Incident**.

1. For **Response plan**, choose the Incident Manager response plan that you want to assign to this incident.

1. (Optional) For **Title**, enter a descriptive name to help other team members understand the nature of the incident. If you don't enter a new title, OpsCenter creates the OpsItem and the corresponding incident in Incident Manager using the title in the response plan.

1. (Optional) For **Incident impact**, choose an impact level for this incident. If you don't choose an impact level, OpsCenter creates the OpsItem and the corresponding incident in Incident Manager using the impact level in the response plan.

1. Choose **Start**.
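If you work from the command line, an incident can also be started directly with the Incident Manager CLI. The response plan ARN and the other values below are placeholders.

```shell
# Start an incident from a response plan (placeholder ARN and values).
# --title and --impact are optional; if omitted, the values come from
# the response plan, as described in the console procedure above.
aws ssm-incidents start-incident \
    --response-plan-arn "arn:aws:ssm-incidents::123456789012:response-plan/example-plan" \
    --title "EC2 instance disk full" \
    --impact 3
```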

# Managing duplicate OpsItems


OpsCenter can receive multiple duplicate OpsItems for a single source from multiple AWS services. OpsCenter uses a combination of built-in logic and configurable deduplication strings to avoid creating duplicate OpsItems. AWS Systems Manager applies the built-in deduplication logic when the [CreateOpsItem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateOpsItem.html) API operation is called.

AWS Systems Manager uses the following deduplication logic:

1. When creating the OpsItem, Systems Manager creates and stores a hash based on the deduplication string and the resource that initiated the OpsItem. 

1. When another request is made to create an OpsItem, the system checks the deduplication string of the new request.

1. If a matching hash exists for this deduplication string, Systems Manager checks the status of the existing OpsItem. If the status of an existing OpsItem is open or in progress, the OpsItem is not created. If the existing OpsItem is resolved, Systems Manager creates a new OpsItem.

After you create an OpsItem, you *can't* edit or change the deduplication strings in that OpsItem.

To manage duplicate OpsItems, you can do the following:
+ Edit the deduplication string for an Amazon EventBridge rule that targets OpsCenter. For more information, see [Editing a deduplication string in a default EventBridge rule](#OpsCenter-working-deduplication-editing-cwe). 
+ Specify a deduplication string when you manually create an OpsItem. For more information, see [Specifying a deduplication string using AWS CLI](#OpsCenter-working-deduplication-configuring-manual-cli).
+ Review and resolve duplicate OpsItems using operational insights. You can use runbooks to resolve duplicate OpsItems.

  To help you resolve duplicate OpsItems and reduce the number of OpsItems created by a source, Systems Manager provides automation runbooks. For information, see [Resolving duplicate OpsItems based on insights](OpsCenter-working-operational-insights.md#OpsCenter-working-operational-insights-resolve).

## Editing a deduplication string in a default EventBridge rule


Use the following procedure to specify a deduplication string for an EventBridge rule that targets OpsCenter.

**To edit a deduplication string for an EventBridge rule**

1. Sign in to the AWS Management Console and open the Amazon EventBridge console at [https://console.aws.amazon.com/events/](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**.

1. Choose a rule, and then choose **Edit**.

1. Go to the **Select target(s)** page.

1. In the **Additional settings** section, choose **Configure input transformer**.

1. In the **Template** box, locate the `"operationalData": { "/aws/dedup"` JSON entry and the deduplication strings that you want to edit.

   The deduplication string entry in EventBridge rules uses the following JSON format.

   ```
   "operationalData": { "/aws/dedup": {"type": "SearchableString","value": "{\"dedupString\":\"Words the system should use to check for duplicate OpsItems\"}"}}
   ```

   Here is an example.

   ```
   "operationalData": { "/aws/dedup": {"type": "SearchableString","value": "{\"dedupString\":\"SSMOpsCenter-EBS-volume-performance-issue\"}"}}
   ```

1. Edit the deduplication strings, and then choose **Confirm**.

1. Choose **Next**.

1. Choose **Next**.

1. Choose **Update rule**.
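You can also inspect a rule's targets, including the input transformer template that contains the `/aws/dedup` entry, from the AWS CLI before editing it in the console. The rule name below is a placeholder.

```shell
# List the targets for an EventBridge rule, including any input
# transformer template. The rule name shown is a placeholder.
aws events list-targets-by-rule --rule "ExampleOpsItemRule"
```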

## Specifying a deduplication string using AWS CLI


You can specify a deduplication string when you manually create a new OpsItem by using either the AWS Systems Manager console or the AWS CLI. For information about entering deduplication strings when you manually create an OpsItem in the console, see [Create OpsItems manually](OpsCenter-manually-create-OpsItems.md). If you're using the AWS CLI, you can enter the deduplication string for the `OperationalData` parameter. The parameter syntax uses JSON, as shown in the following example.

```
--operational-data '{"/aws/dedup":{"Value":"{\"dedupString\": \"Words the system should use to check for duplicate OpsItems\"}","Type":"SearchableString"}}'
```

Here is an example command that specifies a deduplication string of `disk full`.

------
#### [ Linux & macOS ]

```
aws ssm create-ops-item \
    --title "EC2 instance disk full" \
    --description "Log clean up may have failed which caused the disk to be full" \
    --priority 1 \
    --source ec2 \
    --operational-data '{"/aws/dedup":{"Value":"{\"dedupString\": \"disk full\"}","Type":"SearchableString"}}' \
    --tags "Key=EC2,Value=ProductionServers" \
    --notifications Arn="arn:aws:sns:us-west-1:12345678:TestUser"
```

------
#### [ Windows ]

```
aws ssm create-ops-item ^
    --title "EC2 instance disk full" ^
    --description "Log clean up may have failed which caused the disk to be full" ^
    --priority 1 ^
    --source EC2 ^
    --operational-data={\"/aws/dedup\":{\"Value\":\"{\\\"dedupString\\\": \\\"disk full\\\"}\",\"Type\":\"SearchableString\"}} ^
    --tags "Key=EC2,Value=ProductionServers" --notifications Arn="arn:aws:sns:us-west-1:12345678:TestUser"
```

------

# Analyzing operational insights to reduce OpsItems

OpsCenter *operational insights* display information about duplicate and frequently created OpsItems. OpsCenter automatically analyzes OpsItems in your account and generates three types of *insights*. You can view this information in the **Operational insights** section of the OpsCenter **Summary** tab. 
+ **Duplicate OpsItems** – An insight is generated when eight or more OpsItems have the same title for the same resource.
+ **Most common titles** – An insight is generated when more than 50 OpsItems have the same title.
+ **Resources generating the most OpsItems** – An insight is generated when an AWS resource has more than 10 open OpsItems. These insights and their corresponding resources are displayed in the **Resources generating the most OpsItems** table on the OpsCenter **Summary** tab. Resources are listed in decreasing order of OpsItem count.

**Note**  
OpsCenter creates **Resources generating the most OpsItems** insights for the following resource types:  
Amazon Elastic Compute Cloud (Amazon EC2) instances
Amazon EC2 security groups
Amazon EC2 Auto Scaling groups
Amazon Relational Database Service (Amazon RDS) databases
Amazon RDS clusters
AWS Lambda functions
Amazon DynamoDB tables
Elastic Load Balancing load balancers
Amazon Redshift clusters
AWS Certificate Manager certificates
Amazon Elastic Block Store volumes

OpsCenter enforces a limit of 15 insights per type. If a type reaches this limit, OpsCenter stops displaying more insights for that type. To view additional insights, you must resolve all OpsItems associated with an OpsInsight of that type. If a pending insight is prevented from being displayed in the console because of the 15-insight limit, that insight becomes visible after another insight is closed. 

When you choose an insight, OpsCenter displays information about the affected OpsItems and resources. The following screenshot shows an example with the details of a duplicate OpsItem insight. 

![\[Detailed view of an OpsCenter insight with information about OpsItems.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/OpsCenter-insights-detailed.png)


Operational insights are turned off by default. For more information about working with operational insights, see the following topics.

**Topics**
+ [Enabling operational insights](#OpsCenter-working-operational-insights-viewing)
+ [Resolving duplicate OpsItems based on insights](#OpsCenter-working-operational-insights-resolve)
+ [Disabling operational insights](#OpsCenter-working-operational-insights-disable)

## Enabling operational insights


You can enable operational insights on the **OpsCenter** page in the Systems Manager console. When you enable operational insights, Systems Manager creates an AWS Identity and Access Management (IAM) service-linked role called `AWSServiceRoleForAmazonSSM_OpsInsights`. A service-linked role is a unique type of IAM role that is linked directly to Systems Manager. Service-linked roles are predefined and include all the permissions that the service requires to call other AWS services on your behalf. For more information about the `AWSServiceRoleForAmazonSSM_OpsInsights` service-linked role, see [Using roles to create operational insight OpsItems in Systems Manager OpsCenter](using-service-linked-roles-service-action-4.md).

**Note**  
Note the following important information:  
Your AWS account is charged for operational insights. For more information, see [AWS Systems Manager Pricing](https://aws.amazon.com/systems-manager/pricing/).
OpsCenter periodically refreshes insights using a batch process. This means the list of insights displayed in OpsCenter might be temporarily out of sync with the current state of your OpsItems.

Use the following procedure to enable and view operational insights in OpsCenter.

**To enable and view operational insights**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. In the **Operational insight is available** message box, choose **Enable**. If you don't see this message, scroll down to the **Operational insights** section and choose **Enable**.

1. After you enable this feature, on the **Summary** tab, scroll down to the **Operational insights** section. 

1. To view a filtered list of insights, choose the link beside **Duplicate OpsItems**, **Most common titles**, or **Resources generating the most OpsItems**. To view all insights, choose **View all operational insights**.

1. Choose an insight ID to view more information.

## Resolving duplicate OpsItems based on insights


To resolve insights, you must first resolve all OpsItems associated with an insight. You can use the `AWS-BulkResolveOpsItemsForInsight` runbook to resolve OpsItems associated with an insight. 

To help you resolve duplicate OpsItems and reduce the number of OpsItems created by a source, Systems Manager provides the following automation runbooks:
+ The `AWS-BulkResolveOpsItems` runbook resolves OpsItems that match a specified filter.
+ The `AWS-AddOpsItemDedupStringToEventBridgeRule` runbook adds a deduplication string for all OpsItem targets that are associated with a specific Amazon EventBridge rule. This runbook doesn't add a deduplication string if a rule already has one.
+ The `AWS-DisableEventBridgeRule` runbook turns off a rule in EventBridge if the rule is generating dozens or hundreds of OpsItems.
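These runbooks can also be started from the AWS CLI with `start-automation-execution`. The parameter names and values in the following sketch are illustrative only; check the runbook's entry in the Systems Manager Automation Runbook Reference, or its **Parameters** section in the console, for the exact schema.

```shell
# Run the AWS-BulkResolveOpsItems runbook from the CLI.
# The --parameters payload is illustrative; confirm the runbook's actual
# parameter names before running it in your account.
aws ssm start-automation-execution \
    --document-name "AWS-BulkResolveOpsItems" \
    --parameters '{"Filters":["{\"Key\":\"Title\",\"Values\":[\"disk full\"],\"Operator\":\"Equal\"}"]}'
```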

**To resolve an operational insight**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. On the **Overview** tab, scroll down to **Operational insights**.

1. Choose **View all operational insights**.

1. Choose an insight ID to view more information.

1. Choose a runbook and choose **Execute**.

## Disabling operational insights


When you turn off operational insights, the system stops creating new insights and stops displaying insights in the console. Any active insights remain unchanged in the system, although you won't see them displayed in the console. If you enable this feature again, the system displays previously unresolved insights and starts creating new insights. Use the following procedure to turn off operational insights.

**To turn off operational insights**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose **Settings**.

1. In the **Operational insights** section, choose **Edit** and then toggle the **Disable** option.

1. Choose **Save**.

# Viewing OpsCenter logs and reports


AWS CloudTrail logs AWS Systems Manager OpsCenter API calls made from the console, the AWS Command Line Interface (AWS CLI), and the SDKs. You can view the information in the CloudTrail console or in an Amazon Simple Storage Service (Amazon S3) bucket. Amazon S3 uses one bucket to store all CloudTrail logs for your account.

Logs of OpsCenter actions show create, update, get, and describe OpsItem activities. For more information about viewing and using CloudTrail logs of Systems Manager activity, see [Logging AWS Systems Manager API calls with AWS CloudTrail](monitoring-cloudtrail-logs.md).

AWS Systems Manager OpsCenter provides you with the following information about OpsItems:
+ **OpsItem status summary** – Provides a summary of OpsItems by status (Open and in progress, Open, or In progress).
+ **Sources with most open OpsItems** – Provides a breakdown of the top AWS services with open OpsItems.
+ **OpsItems by source and age** – Provides a count of OpsItems, grouped by source and days since creation.

**To view the OpsCenter summary report**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. On the OpsItems **Overview** page, choose **Summary**.

1. Under **OpsItems by source and age**, choose the Search bar to filter OpsItems according to **Source**. Use the list to filter according to **Status**.

# Delete OpsItems


You can delete an individual OpsItem by calling the [DeleteOpsItem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DeleteOpsItem.html) API operation using the AWS Command Line Interface or an AWS SDK. You can't delete an OpsItem in the AWS Management Console. To delete an OpsItem, your AWS Identity and Access Management (IAM) user, group, or role must either have administrator permissions or be granted permission to call the `DeleteOpsItem` API operation. 

**Important**  
Note the following important information about this operation.  
Deleting an OpsItem is irreversible. You can't restore a deleted OpsItem.
This operation uses an *eventual consistency model*, which means the system can take a few minutes to complete this operation. If you delete an OpsItem and immediately call, for example, [GetOpsItem](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetOpsItem.html), the deleted OpsItem might still appear in the response. 
This operation is idempotent. The system doesn't throw an exception if you repeatedly call this operation for the same OpsItem. If the first call is successful, all additional calls return the same successful response as the first call.
This operation doesn't support cross-account calls. A delegated administrator or management account can't delete OpsItems in other accounts, even if OpsCenter has been set up for cross-account administration. For more information about cross-account administration, see [(Optional) Setting up OpsCenter to centrally manage OpsItems across accounts](OpsCenter-setting-up-cross-account.md).
If you receive the `OpsItemLimitExceededException`, you can delete one or more OpsItems to reduce your total number of OpsItems below the quota limits. For more information about this exception, see [Troubleshooting issues with OpsCenter](OpsCenter-troubleshooting.md).

## Deleting an OpsItem


Use the following procedure to delete an OpsItem.

**To delete an OpsItem**

1. Install and configure the AWS CLI, if you haven't already. For more information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

1. Run the following command. Replace *ID* with the ID of the OpsItem you want to delete.

   ```
   aws ssm delete-ops-item --ops-item-id ID
   ```

If successful, the command returns no data.

# Remediate OpsItem issues

Using AWS Systems Manager Automation runbooks, you can remediate issues with AWS resources that are identified in an OpsItem. Automation uses predefined runbooks to remediate common issues with AWS resources.

Each OpsItem includes a **Runbooks** section that provides a list of runbooks that you can use for remediation. When you choose an Automation runbook from the list, OpsCenter automatically displays some of the fields required to run the runbook. When you run an Automation runbook, the system associates the runbook with the related resource of the OpsItem. If Amazon EventBridge creates an OpsItem, it associates a runbook with the OpsItem. OpsCenter keeps a 30-day record of Automation runbooks for an OpsItem. 

 You can choose a status to view important details about the runbook, such as the reason why an automation failed and which step of the Automation runbook was running when the failure occurred, as shown in the following example. 

![\[Status information for the last time an Automation runbook was run.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/OpsItems_automation_results.png)


The **Related resource details** page for a selected OpsItem includes the **Run automation** list. You can choose recent or resource-specific Automation runbooks and run them to remediate issues. This page also includes data providers, including Amazon CloudWatch metrics and alarms, AWS CloudTrail logs, and details from AWS Config.

![\[Metrics available on the Related Resources tab.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/OpsItems_automation_related_resource_details.png)


You can view information about an Automation runbook either by choosing its name in the console or by using the [Systems Manager Automation Runbook Reference](automation-documents-reference.md).

## Remediating an OpsItem using a runbook


Before you use an Automation runbook to remediate an OpsItem issue, do the following:
+ Verify that you have permission to run Systems Manager Automation runbooks. For more information, see [Setting up Automation](automation-setup.md).
+ Collect resource-specific ID information for the automation that you want to run. For example, if you want to run an automation that restarts an EC2 instance, then you must specify the ID of the EC2 instance to restart.

**To run an Automation runbook to remediate an OpsItem issue**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose the OpsItem ID to open the details page.  
![\[A new OpsItem on the OpsCenter Overview page.\]](http://docs.aws.amazon.com/systems-manager/latest/userguide/images/OpsItems_working_scenario_1.png)

1. Scroll to the **Runbooks** section.

1. Use the search bar or the numbers in the upper right to find the Automation runbook that you want to run.

1. Choose a runbook, and then choose **Execute**.

1. Enter the required information for the runbook, and then choose **Submit**.

   Once you start the runbook, the system returns to the previous screen and displays the status. 

1. In the **Automation executions in the last 30 days** section, choose the **Execution ID** link to view steps and the status of the execution.

## Remediating an OpsItem using an associated runbook


After you run an Automation runbook from an OpsItem, OpsCenter associates the runbook with the OpsItem. An *associated* runbook is ranked higher than other runbooks in the **Runbooks** list.

Use the following procedure to run an Automation runbook that has already been associated with a related resource in an OpsItem. For information about adding related resources, see [Manage OpsItems](OpsCenter-working-with-OpsItems.md).

**To run a resource-associated runbook to remediate an OpsItem issue**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Open the OpsItem.

1. In the **Related resources** section, choose the resource on which you want to run the Automation runbook.

1. Choose **Run automation**, and then choose the associated Automation runbook that you want to run.

1. Enter the required information for the runbook, and then choose **Execute**.

   Once you start the runbook, the system returns to the previous screen and displays the status. 

1. In the **Automation executions in the last 30 days** section, choose the **Execution ID** link to view steps and the status of the execution.

# Viewing OpsCenter summary reports


AWS Systems Manager OpsCenter includes a summary page that automatically displays the following information:
+ **OpsItem status summary** – A summary of OpsItems by status, such as `Open` and `In progress`.
+ **Sources with most open OpsItems** – A breakdown of the top AWS services that have open OpsItems.
+ **OpsItems by source and age** – A count of OpsItems, grouped by source and number of days since creation.

**To view OpsCenter summary reports**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**, and then choose the **Summary** tab.

1. In the **OpsItems by source and age** section, do the following:

   1. (Optional) In the filter field, choose **Source**, select `Equal`, `Begin With`, or `Not Equal`, and then enter a search parameter.

   1. In the adjacent list, select one of the following status values:
      + `Open`
      + `In progress`
      + `Resolved`
      + `Open and in progress`
      + `All`

# Troubleshooting issues with OpsCenter


This topic includes information to help you troubleshoot common errors and issues with OpsCenter.

## You receive the OpsItemLimitExceededException


If your AWS account has reached the maximum number of OpsItems allowed when you call the `CreateOpsItem` API operation, you receive an `OpsItemLimitExceededException`. OpsCenter returns the exception if your call would exceed either of the following quotas:
+ Total number of OpsItems per AWS account per Region (including `Open` and `Resolved` OpsItems): 500,000 
+ Maximum number of OpsItems per AWS account per month: 10,000

These quotas apply to OpsItems created from any source except the following:
+ OpsItems created by AWS Security Hub CSPM findings
+ OpsItems that are auto-generated when an Incident Manager incident is opened

OpsItems created from these sources don't count against your OpsItem quotas, but you are charged for each OpsItem.

If you receive an `OpsItemLimitExceededException`, you can manually delete OpsItems until you are below the quota preventing you from creating a new OpsItem. Again, deleting OpsItems created for Security Hub CSPM findings or Incident Manager incidents won't reduce your total number of OpsItems enforced by the quotas. You must delete OpsItems from other sources. For information about how to delete an OpsItem, see [Delete OpsItems](OpsCenter-delete-OpsItems.md).
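One way to find deletion candidates, assuming you want to remove already-resolved OpsItems first, is to list them with a status filter from the AWS CLI:

```shell
# List the IDs of resolved OpsItems; each ID can then be passed to
# `aws ssm delete-ops-item --ops-item-id ID` as shown earlier.
aws ssm describe-ops-items \
    --ops-item-filters '[{"Key":"Status","Values":["Resolved"],"Operator":"Equal"}]' \
    --query "OpsItemSummaries[].OpsItemId"
```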

## You receive a large bill from AWS for large numbers of auto-generated OpsItems


If you configured integration with AWS Security Hub CSPM, OpsCenter creates OpsItems for Security Hub CSPM findings. Depending on the number of findings Security Hub CSPM generates and the account you were logged into when you configured integration, OpsCenter can generate large numbers of OpsItems, at a cost. Here are more specific details related to OpsItems generated by Security Hub CSPM findings:
+ If you are logged into the Security Hub CSPM administrator account when you configure OpsCenter and Security Hub CSPM integration, the system creates OpsItems for findings in the administrator *and* all member accounts. The OpsItems are all created *in the administrator account*. Depending on a variety of factors, this can lead to an unexpectedly large bill from AWS.

  If you are logged into a member account when you configure integration, the system only creates OpsItems for findings in that individual account. For more information about the Security Hub CSPM administrator account, member accounts, and their relation to the EventBridge event feed for findings, see [Types of Security Hub CSPM integration with EventBridge](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cwe-integration-types.html) in the *AWS Security Hub User Guide*.
+ For each finding that creates an OpsItem, you are charged the regular price for creating the OpsItem. You are also charged if you edit the OpsItem or if the corresponding finding is updated in Security Hub CSPM (which triggers an OpsItem update).
+ OpsItems that are created by an integration with AWS Security Hub CSPM are *not* currently limited by the maximum quota of 500,000 OpsItems per account in a Region. It is therefore possible for Security Hub CSPM alerts to create more than 500,000 chargeable OpsItems in each Region in an account.

  For high-production environments, we therefore recommend limiting the scope of Security Hub CSPM findings to high severity issues only.

**Important**  
If you believe a large number of OpsItems were created in error and your AWS bill is unwarranted, contact Support.

Use the following procedure if you no longer want the system to create OpsItems for Security Hub CSPM findings.

**To stop receiving OpsItems for Security Hub CSPM findings**

1. Open the AWS Systems Manager console at [https://console.aws.amazon.com/systems-manager/](https://console.aws.amazon.com/systems-manager/).

1. In the navigation pane, choose **OpsCenter**.

1. Choose **Settings**.

1. In the **Security Hub CSPM findings** section, choose **Edit.**

1. Choose the slider to change **Enabled** to **Disabled**. If you aren't able to toggle the slider, Security Hub CSPM hasn't been enabled for your AWS account.

1. Choose **Save** to save your configuration. OpsCenter no longer creates OpsItems based on Security Hub CSPM findings.

**Important**  
If OpsCenter toggles the setting back to **Enabled** and continues to create OpsItems for findings, log into the Systems Manager delegated administrator account or the AWS Organizations management account and repeat this procedure. If you don't have permission to log into either of those accounts, contact your administrator and ask them to repeat this procedure to disable integration for your account.

# Using Amazon CloudWatch dashboards hosted by Systems Manager

Amazon CloudWatch dashboards are customizable home pages in the CloudWatch console that you can use to monitor your resources in a single view, even those resources that are spread across different AWS Regions. You can use CloudWatch dashboards to create customized views of the metrics and alarms for your AWS resources. With dashboards, you can create the following:
+ A single view for selected metrics and alarms to help you assess the health of your resources and applications across one or more AWS Regions. You can select the color used for each metric on each graph, so that you can track the same metric across multiple graphs.
+ An operational playbook that provides guidance for team members during operational events about how to respond to specific incidents.
+ A common view of critical resource and application measurements that can be shared by team members for faster communication flow during operational events.

You can create dashboards by using the console, the AWS Command Line Interface (AWS CLI), or by using the CloudWatch `PutDashboard` API. For more information, see [Using Amazon CloudWatch dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html) in the *Amazon CloudWatch User Guide*.
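As a sketch of the CLI route, the following creates a minimal dashboard with a single metric widget. The dashboard name, instance ID, and Region are placeholders; see the dashboard body structure reference in the *Amazon CloudWatch User Guide* for the full widget schema.

```shell
# Create (or overwrite) a dashboard containing one EC2 CPU metric widget.
# Dashboard name, instance ID, and Region are placeholders.
aws cloudwatch put-dashboard \
    --dashboard-name "ExampleDashboard" \
    --dashboard-body '{
      "widgets": [
        {
          "type": "metric",
          "x": 0, "y": 0, "width": 12, "height": 6,
          "properties": {
            "metrics": [["AWS/EC2", "CPUUtilization", "InstanceId", "i-0123456789abcdef0"]],
            "region": "us-east-1",
            "title": "Instance CPU"
          }
        }
      ]
    }'
```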