

 AWS Cloud9 is no longer available to new customers. Existing customers of AWS Cloud9 can continue to use the service as normal. [Learn more](https://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/)

# Working with AWS Toolkit
<a name="toolkit-welcome"></a>

You can navigate and interact with AWS services using the AWS Toolkit through the AWS Explorer window.

## Why use the AWS Toolkit?
<a name="toolkit-why"></a>

The AWS Toolkit is an extension for the AWS Cloud9 integrated development environment (IDE). You can access and work with a wide range of AWS services through this extension. The AWS Toolkit replaces the functionality that's provided by the Lambda plugin for AWS Cloud9. For more information, see [Disabling AWS Toolkit](#disable-toolkit).

**Important**  
AWS Toolkit support is an integrated feature of AWS Cloud9. Currently, you can't customize the AWS Cloud9 IDE with third-party extensions.

**Warning**  
If you use Mozilla Firefox as your preferred browser with the AWS Cloud9 IDE, a third-party cookie setting prevents the AWS Cloud9 webview and the AWS Toolkit from working correctly in the browser. To work around this issue, make sure that you haven't blocked *Cookies* in the *Privacy & Security* section of your browser settings, as shown in the following image.  

![\[Displaying the cookie settings for Firefox\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/firefox-workaround.png)


At present, the following AWS services and resources can be accessed through the AWS Toolkit extension:
+ [AWS App Runner](using-apprunner.md)
+ [API Gateway](api-gateway-toolkit.md)
+ [CloudFormation stacks](cloudformation-toolkit.md)
+ [CloudWatch Logs](cloudwatch-logs-toolkit.md)
+ [AWS Lambda](lambda-toolkit.md)
+ [Resources](more-resources.md)
+ [Amazon S3 buckets and objects](s3-toolkit.md)
+ [AWS Serverless Application Model applications](serverless-apps-toolkit.md)
+ [Step Functions and state machines](bulding-stepfunctions.md)
+ [Systems Manager automation documents](systems-manager-automation-docs.md)
+ [Working with Amazon ECR in AWS Cloud9 IDE](ecr.md)
+ [AWS IoT](iot-start.md)
+ [Working with Amazon Elastic Container Service](ecs.md)
+ [Amazon EventBridge](eventbridge.md)
+ [Working with AWS Cloud Development Kit (AWS CDK)](cdk-explorer.md)

## Enabling AWS Toolkit
<a name="access-toolkit"></a>

If the AWS Toolkit isn't available in your environment, you can enable it in the **Preferences** tab.<a name="enabling-toolkit"></a>

**To enable the AWS Toolkit**

1. Choose **AWS Cloud9**, **Preferences** on the menu bar. 

1. On the **Preferences** tab, in the side navigation pane, choose **AWS Settings**. 

1. In the **AWS Resources** pane, enable **AWS Toolkit** so that it displays a check mark on a green background. 

   When you enable the AWS Toolkit, the integrated development environment (IDE) refreshes to show the updated **Enable AWS Toolkit** setting. An **AWS Toolkit** option also appears at the side of the IDE, below the **Environment** option.

**Important**  
If your AWS Cloud9 environment's EC2 instance doesn't have access to the internet (that is, no outbound traffic allowed), a message might display after you enable AWS Toolkit and relaunch the IDE. This message states that the dependencies that are required by AWS Toolkit couldn't be downloaded. If this is the case, you also can't use the AWS Toolkit.   
To fix this issue, create a VPC endpoint for Amazon S3. This grants access to an Amazon S3 bucket in your AWS Region that contains the dependencies that are required to keep your IDE up to date.  
For more information, see [Configuring VPC endpoints for Amazon S3 to download dependencies](ec2-ssm.md#configure-s3-endpoint).



## Managing access credentials for AWS Toolkit
<a name="credentials-for-toolkit"></a>

AWS Toolkit interacts with a wide range of AWS services. To manage access control, make sure that the IAM entity that AWS Toolkit uses has the necessary permissions for this range of services. As a quick start, use [AWS managed temporary credentials](security-iam.md#auth-and-access-control-temporary-managed-credentials) to obtain the necessary permissions. These managed credentials work by granting your EC2 environment access to AWS services on behalf of an AWS entity, such as an IAM user.

However, if you launched your development environment's EC2 instance into a **private subnet**, AWS managed temporary credentials aren't available to you. As an alternative, you can allow AWS Toolkit to access your AWS services by manually creating your own set of credentials. This set is called a *profile*. Profiles contain long-term credentials called access keys. You can get these access keys from the IAM console.<a name="manual-credentials"></a>

**Create a profile to provide access credentials for AWS Toolkit**

1. To get your access keys (consisting of an *access key ID* and *secret access key*), go to the IAM console at [https://console.aws.amazon.com/iam](https://console.aws.amazon.com/iam).

1. Choose **Users** from the navigation bar and then choose your AWS user name (not the check box).

1. Choose the **Security credentials** tab, and then choose **Create access key**.
**Note**  
If you already have an access key but you can't access your secret key, make the old key inactive and create a new one.

1. In the dialog box that shows your access key ID and secret access key, choose **Download .csv file** to store this information in a secure location.

1. After you download your access keys, launch an AWS Cloud9 environment and start a terminal session by choosing **Window**, **New Terminal**. 

1. In the terminal window, run the following command.

   ```
   aws configure --profile toolkituser
   ```

   In this case, `toolkituser` is the profile name being used, but you can choose your own.

1. At the command line, enter the `AWS Access Key ID` and `AWS Secret Access Key` that you previously downloaded from the IAM console.
   + For `Default region name`, specify an AWS Region (for example, `us-east-1`). 
   + For `Default output format`, specify a file format (for example, `json`). 
**Note**  
For information about the options for configuring a profile, see [Configuration basics](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) in the *AWS Command Line Interface User Guide*.

1. After you create your profile, launch the AWS Toolkit, go to the [**AWS Toolkit menu**](toolkit-navigation.md#toolkit-menu), and choose **Connect to AWS**.

1. For the **Select an AWS credential profile** field, choose the profile that you just created in the terminal (for example, `profile:toolkituser`).

If the selected profile contains valid access credentials, the **AWS Explorer** pane refreshes to display the AWS services that you can now access.
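The `aws configure --profile` command in the preceding procedure stores the profile in two files in your home directory. The following sketch shows what those files contain after the command completes, using the example access keys from the AWS documentation (your values will differ):

```
# ~/.aws/credentials
[toolkituser]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFIEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[profile toolkituser]
region = us-east-1
output = json
```

Note that the section header uses the bare profile name in the credentials file but the `profile` prefix in the config file. You can also edit these files directly instead of running `aws configure`.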

### Using IAM roles to grant permissions to applications on EC2 instances
<a name="ec2-instance-credentials"></a>

You can also use an IAM role to manage temporary credentials for applications that run on an EC2 instance. The role supplies temporary permissions that applications can use when they make calls to other AWS resources. When you launch an EC2 instance, you specify an IAM role to associate with the instance. Applications that run on the instance can then use the role-supplied temporary credentials when making API requests against AWS services.

After you created the role, assign this role and its associated permission to the instance by creating an *instance profile*. The instance profile is attached to the instance and can provide the role's temporary credentials to an application that runs on the instance.

For more information, see [Using an IAM role to grant permissions to applications running on Amazon EC2 instances](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html#roles-usingrole-ec2instance-get-started) in the *IAM User Guide*.
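For a role to be assumable by an EC2 instance, its trust policy must name the EC2 service principal. A minimal sketch of that trust policy document follows; the permissions the role grants to your application are attached separately:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```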

## Identifying AWS Toolkit components
<a name="ui-components"></a>

The following screenshot shows three key UI components of the AWS Toolkit.

![\[Labelled screenshot showing key UI components of the AWS Toolkit\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/toolkit-UI-overview-labelled.png)


1. **AWS Explorer** window: Used to interact with the AWS services that are accessible through the Toolkit. You can toggle between showing and hiding the **AWS Explorer** using the AWS option at the left side of the integrated development environment (IDE). For more about using this interface component and accessing AWS services for different AWS Regions, see [Using AWS Explorer to work with services and resources in multiple Regions](toolkit-navigation.md#working-with-aws-explorer).

1. **Toolkit** menu: Used to manage connections to AWS, customize the display of the **AWS Explorer** window, create and deploy serverless applications, work with GitHub repositories, and access documentation. For more information, see [Accessing and using the AWS Toolkit menu](toolkit-navigation.md#toolkit-menu).

1. **AWS Configuration** pane: Used to customize the behavior of AWS services that you interact with using the Toolkit. For more information, see [Modifying AWS Toolkit settings using the AWS Configuration pane](toolkit-navigation.md#configuration-options). 

## Disabling AWS Toolkit
<a name="disable-toolkit"></a>

You can disable the AWS Toolkit in the **Preferences** tab.<a name="disabling-toolkit"></a>

**To disable the AWS Toolkit**

1. Choose **AWS Cloud9**, **Preferences** on the menu bar. 

1. On the **Preferences** tab, in the side navigation pane, choose **AWS Settings**. 

1. In the **AWS Resources** pane, turn off **AWS Toolkit**. 

   When you disable the AWS Toolkit, the integrated development environment (IDE) refreshes to remove the AWS Toolkit option at the side of the IDE below the **Environment** option.



## AWS Toolkit topics
<a name="toolkit-resources-info"></a>
+ [Navigating and configuring the AWS Toolkit](toolkit-navigation.md)
+ [Using AWS App Runner with AWS Toolkit](using-apprunner.md)
+ [Working with API Gateway using the AWS Toolkit](api-gateway-toolkit.md)
+ [Working with AWS CloudFormation stacks using AWS Toolkit](cloudformation-toolkit.md)
+ [Working with AWS Lambda functions using the AWS Toolkit](lambda-toolkit.md)
+ [Working with resources](more-resources.md)
+ [Working with Amazon S3 using AWS Toolkit](s3-toolkit.md)
+ [Working with AWS SAM using the AWS Toolkit](serverless-apps-toolkit.md)
+ [Working with Amazon CodeCatalyst](ide-toolkits-cloud9.md)
+ [Working with Amazon ECR in AWS Cloud9 IDE](ecr.md)

# Navigating and configuring the AWS Toolkit
<a name="toolkit-navigation"></a>

You can access resources and modify settings through the following AWS Toolkit interface elements:
+ [**AWS Explorer** window](#working-with-aws-explorer): Access AWS services from different AWS Regions.
+ [**AWS Toolkit** menu](#toolkit-menu): Create and deploy serverless applications, show or hide AWS Regions, access user assistance, and interact with Git repositories. 
+ [**AWS Configuration** pane](#configuration-options): Modify settings that affect how you can interact with AWS services in AWS Toolkit.

## Using AWS Explorer to work with services and resources in multiple Regions
<a name="working-with-aws-explorer"></a>

With the **AWS Explorer** window, you can select AWS services and work with specific resources that are associated with that service. In **AWS Explorer**, choose a service name node (for example, API Gateway or Lambda). Then, choose a specific resource associated with that service (for example, a REST API or a Lambda function). When you choose a specific resource, a menu displays available interaction options such as upload or download, invoke, or copy.

Consider the following example. If your AWS account credentials can access Lambda functions, expand the Lambda node listed for an AWS Region, and then select a specific Lambda function to be invoked or uploaded as code to the AWS Cloud9 IDE. You can also open the context (right-click) menu for the node's title to start creating an application that uses the AWS Serverless Application Model. 

**Note**  
If you can't see the option to view the **AWS Explorer** window in the integrated development environment (IDE), verify that you enabled the AWS Toolkit. Then, after you verify it's enabled, try again. For more information, see [Enabling AWS Toolkit](toolkit-welcome.md#access-toolkit).

The **AWS Explorer** window can also display services hosted in multiple AWS Regions.

## To access AWS services from a selected Region


1. In the **AWS Explorer** window, choose the **Toolkit** menu, **Show region in the Explorer**.

1. From the **Select a region to show in the AWS Explorer** list, choose an AWS Region.

   The selected Region is added to the **AWS Explorer** window. To access available services and resources, choose the arrow (>) in front of the Region's name. 

**Note**  
You can also hide selected AWS Regions in the **AWS Explorer** window using the following options:  
Open the context (right-click) menu for the Region and choose **Hide region from the Explorer**.
In the AWS Toolkit menu, choose **Hide region from the Explorer** and select a Region to hide.

## Accessing and using the AWS Toolkit menu
<a name="toolkit-menu"></a>

The **Toolkit** menu provides options to create and deploy [serverless applications](serverless-apps-toolkit.md). You can also use this menu to manage connections, update the **AWS Explorer** window, access documentation, and interact with GitHub repositories.

To access the **Toolkit** menu, choose the scroll icon opposite the **AWS: Explorer** title in the **AWS Explorer** window.

![\[Labelled screenshot showing the location of the Toolkit menu for the AWS Toolkit\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/toolkit-UI-menu-location.png)


The following table provides an overview of the options available on the **Toolkit** menu.


**Toolkit menu options**  

| Menu option | Description | 
| --- | --- | 
|  **Refresh AWS Explorer**  |  Choose this option to refresh **AWS Explorer** to show any AWS services that were modified since you last opened the window.  | 
|  **Connect to AWS**  |  Connects AWS Toolkit to an AWS account using credentials that are stored in a *profile*. For more information, see [Managing access credentials for AWS Toolkit](toolkit-welcome.md#credentials-for-toolkit).  | 
|  **Show region in the Explorer**  |  Displays an AWS Region in the **AWS Explorer** window. For more information, see [Using AWS Explorer to work with services and resources in multiple Regions](#working-with-aws-explorer).  | 
|  **Hide region from the Explorer**  |  Hides an AWS Region in the **AWS Explorer** window. For more information, see [Using AWS Explorer to work with services and resources in multiple Regions](#working-with-aws-explorer).  | 
|  **Create new SAM Application**  |  Generates a set of code files for a new AWS serverless application. For more information about how to create and deploy SAM applications, see [Working with AWS SAM using the AWS Toolkit](serverless-apps-toolkit.md).  | 
|  **Deploy SAM Application**  |  Deploys a serverless application to AWS. For more information about how to create and deploy SAM applications, see [Working with AWS SAM using the AWS Toolkit](serverless-apps-toolkit.md).  | 
|  **View Quick Start**  |  Opens the Quick Start guide.  | 
|  **View Toolkit Documentation**  |  Opens the user guide for AWS Toolkit.  | 
|  **View Source on GitHub**  |  Opens the GitHub repository for the AWS Toolkit.  | 
|  **Create a New Issue on GitHub**  |  Opens the AWS Toolkit's New Issue page on GitHub.  | 
|  **Submit Quick Feedback**  |  Submits private, one-way feedback to the AWS Toolkit development team. For issues that require conversations or bug fixes, submit an issue in GitHub by selecting the **Create a New Issue on GitHub** menu option.  | 
|  **About AWS Toolkit**  |  Displays information about the version of the Toolkit running and the Amazon operating system that it's configured for.  | 

## Modifying AWS Toolkit settings using the AWS Configuration pane
<a name="configuration-options"></a>

To access the **AWS Configuration** pane, choose **AWS Cloud9**, **Preferences**. Next, in the **Preferences** window, under **Project Settings**, choose **AWS Configuration**. 

![\[Labelled screenshot showing the location of the AWS Configuration menu for the AWS Toolkit\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/toolkit-UI-aws-config-location.png)


The following table provides an overview of the options available on the **AWS Configuration** pane.



| Menu option | Description | 
| --- | --- | 
|  **AWS: Profile**  |  Sets the name of the credentials profile to obtain credentials from.  | 
|  **AWS: On Default Region Missing**  |  Indicates the action to take if the default AWS Region for the selected credentials profile isn't available in the **AWS Explorer** window. You can select from three options: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/toolkit-navigation.html)  | 
|  **AWS > S3: Max Items Per Page**  |  Specifies how many Amazon S3 objects or folders are displayed at one time in the **AWS Explorer** window. When the maximum number is displayed, you can choose **Load More** to display the next batch.  The range of accepted values for this field is between 3 and 1000. This setting applies only to the number of objects or folders displayed at one time. All the buckets you've created are displayed at once. By default, you can create up to 100 buckets in each of your AWS accounts.   | 
|  **AWS > Samcli: Location**  |  Indicates the location of the SAM CLI that's used to create, build, package, and deploy [serverless applications](serverless-apps-toolkit.md).  | 
|  **AWS > Samcli > Debug > Attach> Retry: Maximum:**  |  Specifies how many times the Toolkit tries to attach the SAM CLI debugger before giving up. The default quota is 30 tries. When you locally invoke a Lambda function in debug mode with the AWS SAM CLI, you can then attach a debugger to it.  | 
|  **AWS > Samcli > Debug > Attach> Timeout: Millis:**  |  Specifies how long the Toolkit tries to attach the SAM CLI debugger before giving up. The default timeout is 30,000 milliseconds (30 seconds). When you locally invoke a Lambda function in debug mode with the AWS SAM CLI, you can then attach a debugger to it.  | 
|  **AWS : Log Level:**  |  Sets the category of workflow events that are logged. The following are the available levels: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/toolkit-navigation.html)  | 
|  **AWS : Telemetry**  |  Enables or disables the sending of usage data to AWS. Enabled by default.  | 

# Working with API Gateway using the AWS Toolkit
<a name="api-gateway-toolkit"></a>

You can use API Gateway to create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. For more information about how to create and manage APIs with API Gateway, see the [API Gateway Developer Guide](https://docs.aws.amazon.com/apigateway/latest/developerguide/).

With the AWS Toolkit, you can configure a call to a REST API by specifying the REST resource, method type, and data that's passed in as input.

## Invoking REST APIs in API Gateway
<a name="api-gateway-toolkit-invoke"></a>

**Important**  
Calling API methods using the AWS Toolkit might result in changes to resources that can't be undone. For example, if you call a `POST` method, the API's resources are updated if the call is successful. 

You can invoke an API Gateway REST API on AWS from the AWS Toolkit.

## To invoke a REST API


1. In the **AWS Explorer** window, choose the API Gateway node to view the list of REST APIs available in the current AWS Region.

1. Right-click a REST API, and then choose **Invoke on AWS**.
**Note**  
You can use the context menu to copy the REST API's URL, name, and Amazon Resource Name (ARN). 

   The **Invoke methods** window displays, where you can configure the call to the API.

1. For **Select a resource**, choose the REST resource that you want to interact with.

1. For **Select a method**, choose one of the following method types:
   + **GET**: Gets a resource from the backend service that's accessed through the API.
   + **OPTIONS**: Requests information about the methods and operations that are supported by the API Gateway.
   + **POST**: Creates a new resource on the backend service that's accessed through the API.

1. To supply input to your API method call, you can use a query string or JSON-formatted payload:
   + **Query string**: Enter a query string using the format: `parameter1=value1&parameter2=value2`. (Before you use query strings, create a [mapping template](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html) to transform incoming web requests before they're sent to the integration back end.)
   + **JSON format**: You can define a JSON-formatted payload in the large text field in the **Invoke methods** window.

     For example, you can add a new resource with a `POST` method that contains the following payload:

     ```
     {"type": "soda", "price" : 3.99}       
     ```

1. Choose the **Invoke** button to call the REST API resource.

   The REST API response is displayed in the **AWS Remote Invocations** tab. The response body contains the JSON-formatted resource data.
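The mapping template mentioned in the query-string step is a Velocity template that reshapes the incoming request before API Gateway passes it to the integration backend. A minimal sketch that maps two query string parameters into a JSON payload follows; the parameter names are illustrative and must match the query string you send:

```
{
  "type": "$input.params('parameter1')",
  "price": "$input.params('parameter2')"
}
```

With this template, a query string such as `parameter1=soda&parameter2=3.99` reaches the backend as a JSON body similar to the `POST` payload example above.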

# Using AWS App Runner with AWS Toolkit
<a name="using-apprunner"></a>

[AWS App Runner](https://docs.aws.amazon.com/apprunner/latest/dg/what-is-apprunner.html) provides a quick and cost-effective way to deploy from source code or a container image directly to a scalable and secure web application in the AWS Cloud. Using it, you don't need to learn new technologies, decide which compute service to use, or know how to provision and configure AWS resources.

You can use AWS App Runner to create and manage services based on a *source image* or *source code*. If you use a source image, you can choose a public or private container image that's stored in an image repository. App Runner supports the following image repository providers:
+ Amazon Elastic Container Registry (Amazon ECR): Stores private images in your AWS account.
+ Amazon Elastic Container Registry Public (Amazon ECR Public): Stores publicly readable images.

If you choose the source code option, you can deploy from a source code repository that's maintained by a supported repository provider. Currently, App Runner supports [GitHub](https://github.com/) as a source code repository provider.

## Prerequisites
<a name="apprunner-prereqs"></a>

To interact with App Runner using the AWS Toolkit, you need the following:
+ An AWS account
+ A version of AWS Toolkit that features AWS App Runner

In addition to those core requirements, make sure that all relevant IAM users have permissions to interact with the App Runner service. Make sure also to obtain specific information about your service source such as the container image URI and the connection to the GitHub repository. You need this information when creating your App Runner service.

### Configuring IAM permissions for App Runner
<a name="app-runner-permissions"></a>

To grant the permissions that are required for App Runner quickly, attach an existing AWS managed policy to the relevant AWS Identity and Access Management (IAM) entity. In particular, you can attach a policy to either a user or group. App Runner provides two managed policies that you can attach to your IAM users:
+ `AWSAppRunnerFullAccess`: Allows users to perform all App Runner actions.
+ `AWSAppRunnerReadOnlyAccess`: Allows users to list and view details about App Runner resources. 

If you choose a private repository from the Amazon Elastic Container Registry (Amazon ECR) as the service source, you must also create the following access role for your App Runner service:
+ `AWSAppRunnerServicePolicyForECRAccess`: Allows App Runner to access Amazon Elastic Container Registry (Amazon ECR) images in your account.

You can create this role automatically when configuring your service instance with the AWS Toolkit command pane.
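If you create the access role manually instead, its trust policy must allow App Runner's build service to assume the role. A sketch of that trust policy, assuming the `build.apprunner.amazonaws.com` service principal that App Runner's documentation describes for ECR access roles:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "build.apprunner.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```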

**Note**  
The **AWSServiceRoleForAppRunner** service-linked role allows AWS App Runner to complete the following tasks:  
Push logs to Amazon CloudWatch Logs log groups.
Create Amazon CloudWatch Events rules to subscribe to Amazon Elastic Container Registry (Amazon ECR) image push.
You don't need to manually create the service-linked role. When you create an AWS App Runner service in the AWS Management Console or by using API operations that are called by AWS Toolkit, AWS App Runner creates this service-linked role for you. 

For more information, see [Identity and access management for App Runner](https://docs.aws.amazon.com/apprunner/latest/dg/security-iam.html) in the *AWS App Runner Developer Guide*.

### Obtaining service sources for App Runner
<a name="app-runner-sources"></a>

You can use AWS App Runner to deploy services from a source image or source code. 

------
#### [ Source image ]

If you're deploying from a source image, obtain a link to the repository for that image from a private or public AWS image registry. 
+ Amazon ECR private registry: Copy the URI for a private repository by using the Amazon ECR console at [https://console.aws.amazon.com/ecr/repositories](https://console.aws.amazon.com/ecr/repositories). 
+ Amazon ECR public registry: Copy the URI for a public repository by using the Amazon ECR Public Gallery at [https://gallery.ecr.aws/](https://gallery.ecr.aws).

**Note**  
You can also obtain the URI for a private Amazon ECR repository directly from **AWS Explorer** in the AWS Toolkit:  
Open **AWS Explorer** and expand the **ECR** node to view the list of repositories for that AWS Region.
Open the context (right-click) menu for a repository and choose **Copy Repository URI** to copy the link to your clipboard.

You specify the URI for the image repository when configuring your service instance with the AWS Toolkit command pane.

For more information, see [App Runner service based on a source image](https://docs.aws.amazon.com/apprunner/latest/dg/service-source-image.html) in the *AWS App Runner Developer Guide*.

------
#### [ Source code ]

For your source code to be deployed to an AWS App Runner service, that code must be stored in a Git repository. This Git repository must be maintained by a supported repository provider. App Runner supports one source code repository provider: [GitHub](https://github.com/).

For information about setting up a GitHub repository, see the [Getting started documentation](https://docs.github.com/en/github/getting-started-with-github) on GitHub.

To deploy your source code to an App Runner service from a GitHub repository, App Runner establishes a connection to GitHub. If your repository is private (that is, it isn't publicly accessible on GitHub), you must provide App Runner with connection details. 

**Important**  
To create GitHub connections, you must use the App Runner console ([https://console.aws.amazon.com/apprunner](https://console.aws.amazon.com/apprunner)) to create a connection that links GitHub to AWS. You can select the connections that are available on the **GitHub connections** page when configuring your service instance with the AWS Toolkit's command pane.  
For more information, see [Managing App Runner connections](https://docs.aws.amazon.com/apprunner/latest/dg/manage-connections.html) in the *AWS App Runner Developer Guide*.

The App Runner service instance provides a managed runtime that allows your code to build and run. AWS App Runner currently supports the following runtimes:
+ Python managed runtime 
+ Node.js managed runtime

As part of your service configuration, you provide information about how the App Runner service builds and starts your service. You can enter this information using the **Command Palette** or specify a YAML-formatted [App Runner configuration file](https://docs.aws.amazon.com/apprunner/latest/dg/config-file.html). Values in this file instruct App Runner how to build and start your service, and provide runtime context, including relevant network settings and environment variables. The configuration file is named `apprunner.yaml`, and it's added to the root directory of your application’s repository.
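A minimal sketch of an `apprunner.yaml` for a service that uses the Python managed runtime follows. The build and run commands, port, and environment variable are illustrative; see the App Runner configuration file reference linked above for the full schema:

```
version: 1.0
runtime: python3
build:
  commands:
    build:
      - pip install -r requirements.txt
run:
  command: python app.py
  network:
    port: 8000
  env:
    - name: ENVIRONMENT
      value: production
```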

 

------

## Pricing
<a name="app-runner-pricing"></a>

You're charged for the compute and memory resources that your application uses. In addition, if you automate your deployments, you also pay a set monthly fee for each application that covers all automated deployments for that month. If you opt to deploy from source code, you pay a build fee for the time that it takes App Runner to build a container from your source code.

For more information, see [AWS App Runner Pricing](https://aws.amazon.com/apprunner/pricing/).

**Topics**
+ [Prerequisites](#apprunner-prereqs)
+ [Pricing](#app-runner-pricing)
+ [Creating App Runner services](creating-service-apprunner.md)
+ [Managing App Runner services](managing-service-apprunner.md)

# Creating App Runner services
<a name="creating-service-apprunner"></a>

You can create an App Runner service in AWS Toolkit by using the **AWS Explorer**. After you choose to create a service in a specific AWS Region, the AWS Toolkit's command pane describes how to configure the service instance where your application runs. 

Before you create an App Runner service, make sure that you completed the [prerequisites](using-apprunner.md#apprunner-prereqs). This includes providing the relevant IAM permissions and confirming the specific source repository that you want to deploy.<a name="create-service"></a>

## To create an App Runner service
<a name="create-service"></a>

1. Open AWS Explorer, if it isn't already open.

1. Right-click the **App Runner** node and choose **Create Service**.

   The AWS Toolkit command pane displays.

1. For **Select a source code location type**, choose **ECR** or **Repository**. 

   If you choose **ECR**, you specify a container image in a repository maintained by Amazon Elastic Container Registry. If you choose **Repository**, you specify a source code repository that's maintained by a supported repository provider. Currently, App Runner supports [GitHub](https://github.com/) as a source code repository provider. 

## Deploying from ECR
<a name="deploying-from-ECR"></a>

1. For **Select or enter an image repository**, choose or enter the URL of the image repository that's maintained by your Amazon ECR private registry or the Amazon ECR Public Gallery.
**Note**  
If you specify a repository from the Amazon ECR Public Gallery, make sure that automatic deployments are turned off. App Runner doesn't support automatic deployments for an image in an ECR Public repository.  
Automatic deployments are switched off by default. This is indicated when the icon on the command pane header features a diagonal line through it. If you choose to switch on automatic deployments, a message informs you that this option can incur additional costs. 

1. If the step in the command pane reports that **No tags found**, go back a step to select a repository that contains a tagged container image.

1. For **Port**, enter the IP port that's used by the service (for example, port `8000`).

1. (Optional) For **Configure environment variables**, specify a file that contains the environment variables that are used to customize behavior in your service instance.

1. If you're using an Amazon ECR private registry, you need the **AppRunnerECRAccessRole** ECR access role. This role allows App Runner to access Amazon Elastic Container Registry (Amazon ECR) images in your account. Use the option on the command pane header to create this role. If your image is stored in Amazon ECR Public, where images are publicly available, an access role isn't required.

1. For **Name your service**, enter a unique name and press **Enter**. The name cannot contain spaces.

1. For **Select instance configuration**, choose a combination of CPU units and memory (in GB) for your service instance.

   While your service is being created, its status changes from **Creating** to **Running**.

1.  After your service starts running, open a context (right-click) menu for it and choose **Copy Service URL**. 

1. To access your deployed application, paste the copied URL into the address bar of your web browser. 
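For reference, the console steps above correspond to a single App Runner `CreateService` API call. The sketch below only assembles the request (field names follow the App Runner API; the image URI, role ARN, and instance sizes are placeholder values) and leaves the actual boto3 call commented out:

```python
def build_ecr_service_request(name, image_uri, port, access_role_arn):
    """Assemble a CreateService request for an image in a private ECR registry."""
    return {
        "ServiceName": name,  # must be unique and contain no spaces
        "SourceConfiguration": {
            "ImageRepository": {
                "ImageIdentifier": image_uri,
                "ImageRepositoryType": "ECR",
                "ImageConfiguration": {"Port": str(port)},
            },
            # Off by default, matching the Toolkit; keep off for ECR Public.
            "AutoDeploymentsEnabled": False,
            # Access role so App Runner can pull from a private ECR registry.
            "AuthenticationConfiguration": {"AccessRoleArn": access_role_arn},
        },
        "InstanceConfiguration": {"Cpu": "1 vCPU", "Memory": "2 GB"},
    }

request = build_ecr_service_request(
    "my-service",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    8000,
    "arn:aws:iam::123456789012:role/AppRunnerECRAccessRole",
)
# With credentials configured, the request could then be submitted:
# boto3.client("apprunner").create_service(**request)
print(request["ServiceName"])
```

Submitting the request returns a service description whose status then moves from **Creating** (`OPERATION_IN_PROGRESS`) to **Running**, as in the console flow above.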

## Deploying from a remote repository
<a name="deploying-from-repository"></a>

1.  For **Select a connection**, choose a connection that links GitHub to AWS. The connections that are available for selection are listed on the **GitHub connections** page on the App Runner console. 

1.  For **Select a remote GitHub repository**, choose or enter a URL for the remote repository.

    Remote repositories that are already configured with AWS Cloud9 source control management are available for selection. If the repository isn't listed, you can also paste a link to the repository.

1. For **Select a branch**, choose the Git branch of your source code that you want to deploy.

1. For **Choose configuration source**, specify how you want to define your runtime configuration.

   If you choose **Use configuration file**, your service instance is configured by settings that are defined by the `apprunner.yaml` configuration file. This file is in the root directory of your application’s repository.

   If you choose **Configure all settings here**, use the command pane to specify the following:
   + **Runtime**: Choose **Python 3** or **Nodejs 12**.
   + **Build command**: Enter the command to build your application in the runtime environment of your service instance.
   + **Start command**: Enter the command to start your application in the runtime environment of your service instance.

1. For **Port**, enter the IP port that the service uses (for example, port `8000`).

1. (Optional) For **Configure environment variables**, specify a file that contains environment variables to customize behavior in your service instance.

1. For **Name your service**, enter a unique name and press **Enter**. The name cannot contain spaces.

1. For **Select instance configuration**, choose a combination of CPU units and memory in GB for your service instance.

   While your service is being created, its status changes from **Creating** to **Running**.

1. After your service starts running, open the context (right-click) menu for it and choose **Copy Service URL**.

1. To access your deployed application, paste the copied URL into the address bar of your web browser.

**Note**  
If your attempt to create an App Runner service fails, the service shows a status of **Create failed** in **AWS Explorer**. For troubleshooting information, see [When service creation fails](https://docs.aws.amazon.com/apprunner/latest/dg/manage-create.html#manage-create.failure) in the *App Runner Developer Guide*.
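If you chose **Use configuration file** in the steps above, a minimal `apprunner.yaml` in the root of your repository might look like the following. This is a sketch that assumes a Python managed runtime; the build command, start command, and port are illustrative, and the full set of keys is defined in the App Runner configuration file reference.

```yaml
version: 1.0
runtime: python3            # analogous to choosing Python 3 in the command pane
build:
  commands:
    build:
      - pip install -r requirements.txt   # build command
run:
  command: python app.py    # start command
  network:
    port: 8000              # the port your application listens on
```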

# Managing App Runner services
<a name="managing-service-apprunner"></a>

After creating an App Runner service, you can manage it by using the AWS Explorer pane to carry out the following activities:
+ [Pausing and resuming App Runner services](#pause-resume-apprunner)
+ [Deploying App Runner services](#deploying-apprunner)
+ [Viewing logs streams for App Runner](#viewing-logs-apprunner)
+ [Deleting App Runner services](#deleting-apprunner)

## Pausing and resuming App Runner services
<a name="pause-resume-apprunner"></a>

If you need to disable your web application temporarily and stop the code from running, you can pause your AWS App Runner service. App Runner reduces the compute capacity for the service to zero. When you're ready to run your application again, resume your App Runner service. App Runner provisions new compute capacity, deploys your application to it, and runs the application.

**Important**  
You're billed for App Runner only when it's running. Therefore, you can pause and resume your application as needed to manage costs. This is particularly helpful in development and testing scenarios.

## To pause your App Runner service
<a name="pause-app-runner"></a>

1. Open AWS Explorer, if it isn't already open.

1. Expand **App Runner** to view the list of services.

1. Right-click your service and choose **Pause**.

1. In the dialog box that displays, choose **Confirm**.

   While the service is pausing, the service status changes from **Running** to **Pausing** and then to **Paused**.

## To resume your App Runner service
<a name="pause-app-runner"></a>

1. Open AWS Explorer, if it isn't already open.

1. Expand **App Runner** to view the list of services.

1. Right-click your service and choose **Resume**.

   While the service is resuming, the service status changes from **Resuming** to **Running**.
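Pausing and resuming map to the App Runner `PauseService` and `ResumeService` API operations, with `DescribeService` reporting the status transition. A sketch of polling for the transition, written against any client object that exposes those operations (a stub stands in here for a real boto3 App Runner client):

```python
import time

def pause_and_wait(client, service_arn, poll_seconds=0, max_polls=10):
    """Pause a service, then poll DescribeService until it reports PAUSED."""
    client.pause_service(ServiceArn=service_arn)
    for _ in range(max_polls):
        status = client.describe_service(ServiceArn=service_arn)["Service"]["Status"]
        if status == "PAUSED":
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("service did not reach PAUSED")

# Stub demonstrating the transition; boto3.client("apprunner") exposes
# the same operations and response shape.
class StubClient:
    def __init__(self):
        self._statuses = iter(["OPERATION_IN_PROGRESS", "PAUSED"])
    def pause_service(self, ServiceArn):
        return {"OperationId": "op-1"}
    def describe_service(self, ServiceArn):
        return {"Service": {"Status": next(self._statuses)}}

print(pause_and_wait(StubClient(), "arn:aws:apprunner:us-east-1:123456789012:service/my-service"))
```

Resuming works the same way with `resume_service`, waiting for the status to return to `RUNNING`.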

## Deploying App Runner services
<a name="deploying-apprunner"></a>

If you choose the manual deployment option for your service, you need to explicitly initiate each deployment to your service. <a name="deploy-app-runner"></a>

1. Open AWS Explorer, if it isn't already open.

1. Expand **App Runner** to view the list of services.

1. Right-click your service and choose **Start Deployment**.

1. While your application is being deployed, the service status changes from **Deploying** to **Running**.

1. To confirm that your application is successfully deployed, right-click the same service and choose **Copy Service URL**.

1. To access your deployed web application, paste the copied URL into the address bar of your web browser.

## Viewing logs streams for App Runner
<a name="viewing-logs-apprunner"></a>

Use CloudWatch Logs to monitor, store, and access your log streams for services such as App Runner. A log stream is a sequence of log events that share the same source. <a name="view-logs-apprunner"></a>

1. Expand **App Runner** to view the list of service instances.

1. Expand a specific service instance to view the list of log groups. (A log group is a group of log streams that share the same retention, monitoring, and access control settings.) 

1. Right-click a log group and choose **View Log Streams**.

1. From the command pane, choose a log stream from the group.

   The AWS Cloud9 IDE displays the list of log events that make up the stream. You can choose to load older or newer events into the editor. 

## Deleting App Runner services
<a name="deleting-apprunner"></a>

**Important**  
If you delete your App Runner service, it's permanently removed and your stored data is deleted. If you need to recreate the service, App Runner needs to fetch your source again and build it if it's a code repository. Your web application gets a new App Runner domain. <a name="delete-app-runner"></a>

1. Open AWS Explorer, if it isn't already open.

1. Expand **App Runner** to view the list of services.

1. Right-click a service and choose **Delete Service**.

1. In the AWS Toolkit command pane, enter *delete* and then press **Enter** to confirm.

   The deleted service displays the **Deleting** status, and then the service disappears from the list.

# Working with AWS CloudFormation stacks using AWS Toolkit
<a name="cloudformation-toolkit"></a>

The AWS Toolkit provides support for [AWS CloudFormation](https://aws.amazon.com/cloudformation/) stacks. Using the AWS Toolkit, you can delete a CloudFormation stack.

## Deleting CloudFormation stacks
<a name="cloudformation-delete"></a>

You can use the AWS Toolkit to view and delete CloudFormation stacks.

### Prerequisites
<a name="cloudformation-delete-prereq"></a>
+ Ensure that the credentials you're using in the AWS Cloud9 environment include appropriate read/write access to the CloudFormation service. If in the **AWS Explorer**, under **CloudFormation**, you see a message similar to "Error loading CloudFormation resources," check the permissions attached to those credentials. Changes that you make to permissions take a few minutes to affect the **AWS Explorer**.

## To delete a CloudFormation stack


1. In the **AWS Explorer**, open the context (right-click) menu of the CloudFormation stack you want to delete.

1. Choose **Delete CloudFormation Stack**.

1. In the message that appears, choose **Yes** to confirm the delete.

After the stack is deleted, it's no longer listed in the **AWS Explorer**.
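The delete action corresponds to the CloudFormation `DeleteStack` operation, which removes the stack asynchronously. A minimal sketch, with a stub standing in for a real boto3 CloudFormation client:

```python
def delete_stack(client, stack_name):
    """Request deletion; CloudFormation then removes the stack asynchronously."""
    client.delete_stack(StackName=stack_name)
    # With boto3 you could block until deletion finishes:
    # client.get_waiter("stack_delete_complete").wait(StackName=stack_name)

# Stub that records the request; boto3.client("cloudformation") accepts
# the same StackName keyword.
class StubClient:
    def __init__(self):
        self.deleted = []
    def delete_stack(self, StackName):
        self.deleted.append(StackName)

client = StubClient()
delete_stack(client, "my-stack")
print(client.deleted)
```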

# Working with CloudWatch Logs using the AWS Toolkit
<a name="cloudwatch-logs-toolkit"></a>

You can use Amazon CloudWatch Logs to centralize the logs from all of your systems and applications and the AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis. For more information, see [What Is Amazon CloudWatch Logs?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatchLogs.html) in the *Amazon CloudWatch User Guide*.

The following topics describe how to use the AWS Toolkit to work with CloudWatch Logs in an AWS account:

**Topics**
+ [Viewing CloudWatch log groups and log streams](viewing-CloudWatch-logs.md)
+ [Working with CloudWatch log events](working-CloudWatch-log-events.md)

# Viewing CloudWatch log groups and log streams using the AWS Toolkit
<a name="viewing-CloudWatch-logs"></a>

A *log stream* is a sequence of log events that share the same source. Each separate source of logs into CloudWatch Logs makes up a separate log stream.

A *log group* is a group of log streams that share the same retention, monitoring, and access control settings. You can define log groups and specify which streams to put into each group. There's no limit on the number of log streams that can belong to one log group. 

For more information, see [Working with Log Groups and Log Streams ](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Working-with-log-groups-and-streams.html) in the *Amazon CloudWatch User Guide*.

**Topics**
+ [Viewing log groups and log streams with the **CloudWatch Logs** node](#viewing-log-groups)

## Viewing log groups and log streams with the **CloudWatch Logs** node
<a name="viewing-log-groups"></a>

1. Open AWS Explorer, if it isn't already open.

1. Click the **CloudWatch Logs** node to expand the list of log groups.

   The log groups for the current AWS Region are displayed under the **CloudWatch Logs** node.

1. To view the log streams in a specific log group, open the context (right-click) menu for the name of the log group, and then choose **View Log Streams**.

1. The log group's contents are displayed under the **Select a log stream** heading. 

   You can choose a specific stream from the list or filter the streams by entering text in the field.

   After you choose a stream, the events in that stream are displayed in the IDE's **Log Streams** window. For information about interacting with the log events in each stream, see [Working with CloudWatch log events](working-CloudWatch-log-events.md).

# Working with CloudWatch log events in log streams
<a name="working-CloudWatch-log-events"></a>

After you open the **Log Stream** window, you can access the log events in each stream. Log events are records of activity recorded by the application or resource being monitored.

**Topics**
+ [Viewing and copying log stream information](#viewing-log-events)
+ [Save the contents of the log stream editor to a local file](#saving-CW-logs)

## Viewing and copying log stream information
<a name="viewing-log-events"></a>

When you open a log stream, the **Log Stream** window displays that stream's sequence of log events. 

1. To find a log stream to view, open the **Log Stream** window. For more information, see [Viewing CloudWatch log groups and log streams](viewing-CloudWatch-logs.md).

   Each line listing an event is timestamped to show when it was logged. 

1. You can view and copy information about the stream's events using the following options:
   + **View events by time:** Display the latest and older log events by choosing **Load newer events** or **Load older events**.
**Note**  
The **Log Stream** editor initially loads a batch of the most recent 10,000 lines of log events or 1 MB of log data, whichever is smaller. If you choose **Load newer events**, the editor displays events that were logged after the last batch was loaded. If you choose **Load older events**, the editor displays a batch of events that occurred before those currently displayed. 
   + **Copy log events:** Select the events to copy, then open the context (right-click) menu and select **Copy** from the menu.
   + **Copy the log stream's name:** Open the context (right-click) menu for the tab of the **Log Stream** window and choose **Copy Log Stream Name**.
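The **Load older events**/**Load newer events** behavior corresponds to paging the CloudWatch Logs `GetLogEvents` operation: each response carries a backward token (toward older events) and a forward token (toward newer events). A sketch, with a stub standing in for a real boto3 CloudWatch Logs client:

```python
def load_events(client, group, stream, token=None):
    """Fetch one batch of log events plus the paging tokens.

    nextBackwardToken pages toward older events (Load older events);
    nextForwardToken pages toward newer events (Load newer events).
    """
    kwargs = {"logGroupName": group, "logStreamName": stream}
    if token is not None:
        kwargs["nextToken"] = token
    resp = client.get_log_events(**kwargs)
    return resp["events"], resp["nextBackwardToken"], resp["nextForwardToken"]

# Stub returning one event; boto3.client("logs") responds in the same shape.
class StubLogsClient:
    def get_log_events(self, **kwargs):
        return {
            "events": [{"timestamp": 1700000000000, "message": "started"}],
            "nextBackwardToken": "b/OLDER",
            "nextForwardToken": "f/NEWER",
        }

events, older, newer = load_events(StubLogsClient(), "/aws/apprunner/my-svc", "stream-1")
print(events[0]["message"], older, newer)
```

Passing a returned token back into the next call loads the adjacent batch, which is what the editor does when you choose either load option.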

## Save the contents of the log stream editor to a local file
<a name="saving-CW-logs"></a>

You can download the contents of the CloudWatch log stream editor to a `log` file on your local machine.

**Note**  
You can use this option to save to file only those log events that are currently displayed in the log stream editor. For example, suppose that the total size of a log stream is 5 MB and only 2 MB is loaded in the editor. Your saved file also contains only 2 MB of log data. To display more data to be saved, choose **Load newer events** or **Load older events** in the editor. 

1. To find a log stream to copy, open the **Log Streams** window (see [Viewing CloudWatch log groups and log streams](viewing-CloudWatch-logs.md)).

1. Open the context (right-click) menu for the tab of the **Log Stream** window and choose **Save Current Log Content to File**.

1. Use the dialog box to select or create a download folder for the log file, and choose **Save**.
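Equivalently, only the events currently loaded can be written out. A small sketch of that save step (the tab-separated line format is illustrative, not the Toolkit's exact output format):

```python
import tempfile
from pathlib import Path

def save_events(events, path):
    """Write the currently loaded log events to a local file,
    one timestamped line per event."""
    lines = [f"{e['timestamp']}\t{e['message']}" for e in events]
    Path(path).write_text("\n".join(lines) + "\n")

# Only events already loaded are saved; load more batches first if needed.
target = Path(tempfile.mkdtemp()) / "stream-1.log"
save_events([{"timestamp": 1700000000000, "message": "started"}], target)
print(target.read_text())
```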

# Working with AWS Lambda functions using the AWS Toolkit
<a name="lambda-toolkit"></a>

The AWS Toolkit supports [AWS Lambda](https://aws.amazon.com/lambda/) functions. The AWS Toolkit replaces the functionality formerly provided by the Lambda plug-in in AWS Cloud9. Using the AWS Toolkit, you can author code for Lambda functions that are part of [serverless applications](https://aws.amazon.com/serverless/). In addition, you can invoke Lambda functions either locally or on AWS.

Lambda is a fully managed compute service that runs your code in response to events generated by custom code or from various AWS services. They include Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Kinesis, Amazon Simple Notification Service (Amazon SNS), and Amazon Cognito.

**Important**  
If you want to build a Lambda application that uses the resources that are provided by the Serverless Application Model (SAM), see [Working with AWS SAM using the AWS Toolkit](serverless-apps-toolkit.md).

**Topics**
+ [Invoking remote Lambda functions](#remote-lambda)
+ [Downloading, uploading, and deleting Lambda functions](#import-upload-delete-lambda)

## Invoking remote Lambda functions
<a name="remote-lambda"></a>

Using the AWS Toolkit you can interact with [AWS Lambda](https://aws.amazon.com/lambda/) functions in various ways.

For more information about Lambda, see the [AWS Lambda Developer Guide](https://docs.aws.amazon.com/lambda/latest/dg/). 

**Note**  
If you have already created Lambda functions by using the AWS Management Console or in some other way, you can invoke them from the AWS Toolkit. To create a new function with the AWS Toolkit that you can deploy to AWS Lambda, you must first [create a serverless application](serverless-apps-toolkit.md#sam-create).

### Prerequisites
<a name="remote-lambda-prereq"></a>
+ Make sure that the credentials that you configured include appropriate read/write access to the AWS Lambda service. If in the **AWS Explorer**, under **Lambda**, you see a message similar to "Error loading Lambda resources," check the permissions attached to those credentials. Changes that you make to permissions take a few minutes to affect the **AWS Explorer** in AWS Toolkit.

### Invoking a Lambda function
<a name="invoke-lam-func"></a>

**Important**  
Calling API methods using the AWS Toolkit might result in changes to resources that can't be undone. For example, if you call a `POST` method, the API's resources are updated if the call is successful. 

You can invoke a Lambda function on AWS using the AWS Toolkit.

****

1. In the **AWS Explorer**, choose the name of the Lambda function you want to invoke, and then open its context menu.

1. Choose **Invoke on AWS**.

1. In the **Invoke function** window that opens, choose an option for the payload that your Lambda function needs. (The payload is the JSON that you want to provide to your Lambda function as input.) You can choose **Browse** to select a file to use as the payload, or use the dropdown field to pick a template for the payload. In this case, the Lambda function might take a string as input, as shown in the text box.

1. Choose **Invoke** to call the Lambda function and pass in the payload.

   The output of the Lambda function is displayed in the **AWS Lambda** tab.
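Invoking on AWS corresponds to the Lambda `Invoke` API: the payload is a JSON document, and the function's result comes back in the response `Payload` stream. A sketch, with a stub standing in for a real boto3 Lambda client:

```python
import io
import json

def invoke(client, function_name, payload):
    """Invoke a Lambda function synchronously with a JSON payload."""
    resp = client.invoke(
        FunctionName=function_name,
        Payload=json.dumps(payload).encode("utf-8"),
    )
    return json.loads(resp["Payload"].read())

# Stub that echoes the payload back; boto3.client("lambda") returns the
# real function's result in the same response shape.
class StubLambdaClient:
    def invoke(self, FunctionName, Payload):
        return {"StatusCode": 200, "Payload": io.BytesIO(Payload)}

result = invoke(StubLambdaClient(), "my-function", {"key1": "value1"})
print(result)
```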

## Downloading, uploading, and deleting Lambda functions
<a name="import-upload-delete-lambda"></a>

The AWS Toolkit provides options for downloading, uploading, and deleting Lambda functions in the AWS Cloud9 IDE. 

### Downloading a Lambda function
<a name="w2aac28c32c13b5"></a>

When you download a Lambda function, you also download the project files that describe the function from the AWS Cloud. You can then work with these files in the AWS Cloud9 IDE.

### To download a Lambda function


1. In the **AWS Explorer**, under the Lambda node, open the context (right-click) menu for the function, and choose **Download**.

1. When asked to **Select a workspace folder for your new project**, you can do one of the following:
   + Choose the folder that's suggested to create a subfolder with the same name as your Lambda project.
   + Choose **Select a different folder** to open a dialog box to browse for and select a different parent folder for your project subfolder. 

   The IDE opens a new editor window.

### Configuring a downloaded Lambda function for running and debugging
<a name="w2aac28c32c13b7"></a>

To run and debug your downloaded Lambda function as a serverless application, you need a launch configuration to be defined in your `launch.json` file. A Lambda function that was created in the AWS Management Console might not be included in a launch configuration. So, you might need to add it manually.

### To add your Lambda function to launch configuration


1. After you've downloaded the Lambda function, open the **Environment** window to view its folders and files.

1. Next, check that your Lambda function is included in a `/home/ec2-user/.c9/launch.json` file. If it isn't present, do the following to add a CodeLens link to your function's code:

   1. Open the source code file that defines the Lambda function (for example, a `.js` or `.py` file). Then, check if there's a CodeLens link that you can use to add your Lambda function to a `launch.json` file. A CodeLens appears above the function and includes the `Add Debug Config` link.

   1. Choose **Go** (the magnifying glass icon) on the left of the IDE, and enter "sam hint" to display the `AWS: Toggle SAM hints in source files` command. Choose the command to run it. 

   1. Close your Lambda source code file and then reopen it.

   1. If the CodeLens is available in the source code after you reopen the file, choose `Add Debug Config` to add the launch configuration.

1. If you can't add a CodeLens even after toggling the SAM hint option, do the following to add the launch configuration:

   1. Choose **Go** (the magnifying glass icon) on the left of the IDE, and type "config" to display the `AWS: SAM Debug Configuration Editor` command. Choose the command to run it.

   1. The **SAM Debug Configuration Editor** displays. You can use this editor to define launch configuration properties. For information, see the step for [configuring launch properties](serverless-apps-toolkit.md#properties) in [Using SAM templates to run and debug serverless applications](serverless-apps-toolkit.md#sam-run-debug-template). 
**Note**  
If your Lambda function doesn't have a `template.yaml` for SAM applications, you must add one. For more information, see [Create your AWS SAM template](https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorial-lambda-sam-template.html).

   1. After you finish entering the required configuration information in the editor, your launch configuration is added to the `launch.json` file.

After you define a launch configuration for your Lambda function, you can run it by doing the following:

1. At the top of the IDE, choose the arrow beside **Auto** and select the relevant launch configuration.

1. Next, choose **Run**.
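For reference, a SAM direct-invoke launch configuration in `launch.json` typically looks like the following sketch. The exact schema can vary by Toolkit version, and the name, template path, and logical ID here are placeholders for your own function:

```json
{
  "configurations": [
    {
      "type": "aws-sam",
      "request": "direct-invoke",
      "name": "my-function:template",
      "invokeTarget": {
        "target": "template",
        "templatePath": "template.yaml",
        "logicalId": "MyFunction"
      },
      "lambda": {
        "payload": {
          "json": {}
        }
      }
    }
  ]
}
```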

### Uploading a Lambda function
<a name="w2aac28c32c13b9"></a>

You can update existing Lambda functions with local code. Updating code in this way doesn't use the AWS Serverless Application Model CLI for deployment and doesn't create an AWS CloudFormation stack. This way, you can upload a Lambda function with any runtime supported by Lambda. 

There are several interface options for uploading Lambda functions using the AWS Toolkit. 

#### Upload from **Environment** window or **Command pane**
<a name="upload-lambda-from-environment"></a>

1. In the **Environment window** for your project files, choose the context (right-click) menu for the `template.yaml` for the Lambda application that you want to upload and choose **Upload Lambda**.

   Alternatively, press **Ctrl+P** to open the **Go to Anything** pane and enter "lambda" to access the **AWS Upload Lambda** command. Then, choose it to start the upload process.

1. Next, select an AWS Region that you want to upload to.

1. Now choose an option for uploading your Lambda function:

   **Upload a .zip archive**

   1. Choose **ZIP Archive** from the menu.

   1. Choose a .zip file from your AWS Cloud9 file system and choose **Open**.

   **Upload a directory as is**

   1. Choose **Directory** from the menu.

   1. Choose a directory from your AWS Cloud9 file system and choose **Open**.

1. Specify the Lambda function handler that processes events. When your function is invoked, Lambda runs this handler method.
**Note**  
When selecting your Lambda function, you can select from the list that's displayed. If you don't know which function to choose, you can enter the Amazon Resource Name (ARN) of a Lambda function that's available in the Toolkit. 

   A dialog displays asking whether you want this code to be published as the latest version of the Lambda function. Choose **Yes** to confirm publication.
**Note**  
You can also upload Lambda applications by opening the context (right-click) menu for the parent folder and selecting **Upload Lambda**. The parent folder is automatically selected for upload.
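The handler you specify is the entry point that Lambda runs for each event. As an illustration, for a Python function whose handler setting is `app.handler`, a minimal `app.py` might look like this (the file name and event shape are assumptions, not a required layout):

```python
import json

def handler(event, context):
    """Entry point that Lambda invokes with each event.

    `event` is the deserialized JSON input; `context` carries runtime
    information such as the request ID and remaining execution time."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test; on AWS, Lambda supplies event and context itself.
print(handler({"name": "App Runner"}, None))
```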

#### Upload from **AWS Explorer**
<a name="upload-lambda-from-explorer"></a>

1. In the **AWS Explorer**, open the context (right-click) menu for the name of the Lambda function that you want to import.

1. Choose **Upload Lambda**.

1. Choose from the three options for uploading your Lambda function.

   **Upload a premade .zip archive**

   1. Choose **ZIP Archive** from the menu.

   1. Choose a .zip file from your AWS Cloud9 file system and choose **Open**.

   1. Confirm the upload with the modal dialog. This uploads the .zip file and immediately updates the Lambda function after deployment.

   **Upload a directory as is**

   1. Choose **Directory** from the menu.

   1. Choose a directory from your AWS Cloud9 file system and choose **Open**.

   1. Choose **No** when prompted to build the directory.

   1. Confirm the upload with the modal dialog. This uploads the directory as is and immediately updates the Lambda function after deployment.

   **Build and upload a directory**

   1. Choose **Directory** from the menu.

   1. Choose a directory from your AWS Cloud9 file system and choose **Open**.

   1. Choose **Yes** when prompted to build the directory.

   1. Confirm the upload with the modal dialog. This builds the code in the directory using the AWS SAM CLI `sam build` command and immediately updates the Lambda function after deployment.

### Deploying a Lambda function for remote access
<a name="w2aac28c32c13c11"></a>

You can make your local functions available remotely by deploying them as serverless SAM applications.

### To deploy a Lambda function as a SAM application


1. In **AWS Explorer**, open the context (right-click) menu for the **Lambda** node, and choose **Deploy SAM Application**.

1. In the command pane, select the [YAML template](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification-template-anatomy.html) that defines your function as a serverless application.

1. Next, select an Amazon S3 bucket for the Lambda deployment. You can also choose to create a bucket for the deployment.

1. Now enter the name of a CloudFormation stack that you're deploying to. If you specify an existing stack, the command updates the stack. If you specify a new stack, the command creates it.

   After you enter the name of the stack, your Lambda function starts to deploy as a SAM application. After a successful deployment, the SAM Lambda application is available remotely. That way, you can download or invoke it from other AWS Cloud9 development environments. 

If you want to create a Lambda function from scratch, we recommend following the steps to [Create a serverless application with the AWS Toolkit](serverless-apps-toolkit.md#create-serverless-app).

### Deleting a Lambda function
<a name="delete-lambda"></a>

You can also delete a Lambda function using the same context (right-click) menu.

**Warning**  
Do not use this procedure to delete Lambda functions that are associated with [CloudFormation](https://docs.aws.amazon.com/cloudformation/). For example, do not delete the Lambda function that was created when [creating a serverless application](serverless-apps-toolkit.md#sam-create) earlier in this guide. These functions must be deleted through the CloudFormation stack.

****

1. In the **AWS Explorer**, choose the name of the Lambda function that you want to delete, and then open its context (right-click) menu.

1. Choose **Delete**.

1. In the message that appears, choose **Yes** to confirm the delete.

After the function is deleted, it's no longer listed in the **AWS Explorer** view.

# Working with resources
<a name="more-resources"></a>

In addition to accessing AWS services that are listed by default in the AWS Explorer, you can go to **Resources** and choose from hundreds of resources to add to the interface. In AWS, a *resource* is an entity that you can work with. Some of the resource types that you can add include Amazon AppFlow, Amazon Kinesis Data Streams, AWS IAM roles, Amazon VPC, and Amazon CloudFront distributions.

To view available resources, go to **Resources** and expand the resource type to list the available resources for that type. For example, if you select the `AWS::Lambda::Function` resource type, you can access the resources that define different functions, their properties, and their attributes.

After adding a resource type to **Resources**, you can interact with it and its resources in the following ways:
+ View a list of existing resources that are available in the current AWS Region for this resource type.
+ View a read-only version of the JSON file that describes a resource.
+ Copy the resource identifier for the resource.
+ View the AWS documentation that explains the purpose of the resource type and the schema (in JSON and YAML formats) for modeling a resource. 

## IAM permissions for accessing resources
<a name="cloud-api-permissions"></a>

You require specific AWS Identity and Access Management permissions to access the resources associated with AWS services. For example, an IAM entity, such as a user or a role, requires Lambda permissions to access `AWS::Lambda::Function` resources. 

In addition to permissions for service resources, an IAM entity requires permissions to permit the AWS Toolkit to call AWS Cloud Control API operations. Cloud Control API operations allow the IAM user or role to access and update the remote resources.

You can quickly grant permissions by attaching the AWS managed policy, **PowerUserAccess**, to the IAM entity that's calling these API operations using the Toolkit interface. This managed policy grants a range of permissions for performing application development tasks, including calling API operations. 

For specific permissions that define allowable API operations on remote resources, see the [AWS Cloud Control API User Guide](https://docs.aws.amazon.com//cloudcontrolapi/latest/userguide/security.html).

## Interacting with existing resources
<a name="configure-resources"></a>

1. In the **AWS Explorer**, choose **Resources**.

   A list of resource types is displayed under the **Resources** node.

1. There's documentation describing the syntax that defines the template for a resource type. To access this documentation, open the context (right-click) menu for that resource type and choose **View Documentation**. 
**Note**  
You might be asked to switch off your browser's popup blocker so you can access the documentation page.

1. To view the resources that already exist for a resource type, expand the entry for that type.

   A list of available resources is displayed under their resource type.

1. To interact with a specific resource, open the context (right-click) menu for its name and choose one of the following options:
   + **Copy Identifier**: Copy the identifier for the specific resource to the clipboard. For example, the `AWS::DynamoDB::Table` resource can be identified using the `TableName` property.
   + **Preview**: View a read-only version of the JSON-formatted template that describes the resource.
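Behind the **Resources** view, listing and previewing use Cloud Control API operations such as `ListResources` and `GetResource`. A sketch of pulling resource identifiers, with a stub standing in for a real boto3 Cloud Control client (the response field names follow the Cloud Control API):

```python
def list_identifiers(client, type_name):
    """Return the identifier of every resource of the given type."""
    resp = client.list_resources(TypeName=type_name)
    return [desc["Identifier"] for desc in resp["ResourceDescriptions"]]

# Stub returning one DynamoDB table; boto3.client("cloudcontrol")
# responds in the same shape, with Properties as a JSON string.
class StubCloudControlClient:
    def list_resources(self, TypeName):
        return {"ResourceDescriptions": [
            {"Identifier": "my-table", "Properties": '{"TableName": "my-table"}'},
        ]}

print(list_identifiers(StubCloudControlClient(), "AWS::DynamoDB::Table"))
```

For `AWS::DynamoDB::Table`, the identifier is the `TableName` property, matching what **Copy Identifier** places on the clipboard.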

# Working with Amazon S3 using AWS Toolkit
<a name="s3-toolkit"></a>

The following topics describe how to use the AWS Toolkit to work with [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/) buckets and objects in an AWS account.

**Topics**
+ [Working with Amazon S3 buckets](work-with-S3-buckets.md)
+ [Working with Amazon S3 objects](work-with-S3-objects.md)

# Working with Amazon S3 buckets
<a name="work-with-S3-buckets"></a>

Every object you store in Amazon S3 resides in a bucket. You can use buckets to group related objects in the same way that you use a directory to group files in a file system.

**Topics**
+ [Creating an Amazon S3 bucket](#creating-s3-bucket)
+ [Adding a folder to an Amazon S3 bucket](#adding-folders)
+ [Deleting an Amazon S3 bucket](#deleting-s3-buckets)
+ [Configuring the display of Amazon S3 items](#configuring-items-display)

## Creating an Amazon S3 bucket
<a name="creating-s3-bucket"></a>

1. In the **AWS Explorer**, open the context (right-click) menu for the **S3** node, and then choose **Create Bucket**. 

1. In the **Bucket Name** field, enter a valid name for the bucket. Press **Enter** to confirm.

   The new bucket is displayed under the **S3** node.
**Note**  
Because your S3 bucket can be used as a URL that's accessed publicly, the bucket name that you choose must be globally unique. If some other account has already created a bucket with the name that you chose, you must use another name.  
If you can't create a bucket, you can check the **AWS Toolkit Logs** in the **Output** tab. For example, if you use a bucket name already in use, a `BucketAlreadyExists` error occurs. For more information, see [Bucket restrictions and limitations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/BucketRestrictions.html) in the *Amazon Simple Storage Service User Guide*.

   After a bucket is created, you can copy its name and Amazon Resource Name (ARN) to the clipboard. Open the context (right-click) menu for the bucket entry and select the relevant option from the menu.
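Bucket-name rules can be checked locally before you try to create a bucket. The following sketch covers only the common rules (length, allowed characters, and the IP-address restriction), not the full restriction list linked above:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check the common S3 bucket-naming rules locally.

    Simplified sketch: 3-63 characters; lowercase letters, digits,
    hyphens, and dots; must start and end with a letter or digit;
    must not be formatted like an IP address.
    """
    if not 3 <= len(name) <= 63:
        return False
    if not re.fullmatch(r"[a-z0-9][a-z0-9.\-]*[a-z0-9]", name):
        return False
    # Bucket names must not look like an IP address (for example, 192.168.0.1).
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    return True

print(is_valid_bucket_name("my-sam-app-bucket"))   # True
print(is_valid_bucket_name("Invalid_Bucket"))      # False: uppercase and underscore
print(is_valid_bucket_name("192.168.0.1"))         # False: IP-formatted
```

A check like this catches naming errors early, but only the service itself can tell you whether a name is already taken by another account.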

## Adding a folder to an Amazon S3 bucket
<a name="adding-folders"></a>

You organize a bucket's contents by grouping objects in folders. You can also create folders within other folders.

1. In the **AWS Explorer**, choose the **S3** node to view the list of buckets.

1. Open the context (right-click) menu for a bucket or a folder, and then choose **Create Folder**. 

1. Enter a **Folder Name**, and then press **Enter**.

   The new folder is now displayed below the selected bucket and folder in the **AWS Explorer** window.
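Under the hood, Amazon S3 has no real directory tree: a "folder" is a shared key prefix that ends with `/`. The following stand-alone sketch (key names are illustrative) shows how a flat key list maps to the folder view that the **AWS Explorer** displays:

```python
def top_level_entries(keys, prefix=""):
    """Group a flat list of S3 object keys the way a folder view does.

    Returns (folders, objects) directly under `prefix`, mimicking a
    delimiter-based listing with delimiter "/".
    """
    folders, objects = set(), []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if "/" in rest:
            # Everything up to the first "/" is a top-level "folder".
            folders.add(rest.split("/", 1)[0] + "/")
        else:
            objects.append(key)
    return sorted(folders), objects

keys = ["logs/2024/app.log", "logs/2024/db.log", "images/cat.png", "readme.txt"]
print(top_level_entries(keys))            # (['images/', 'logs/'], ['readme.txt'])
print(top_level_entries(keys, "logs/"))   # (['2024/'], [])
```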

## Deleting an Amazon S3 bucket
<a name="deleting-s3-buckets"></a>

When you delete a bucket, you also delete the folders and objects that it contains. Before the bucket is deleted, you're asked to confirm that you want to do this.

**Note**  
[To delete only a folder](https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-folders.html), not the entire bucket, use the AWS Management Console. 

1. In the **AWS Explorer**, choose the **S3** node to expand the list of buckets.

1. Open the context menu for the bucket to delete, and then choose **Delete**.

1. Enter the bucket's name to confirm that you want to delete it, and then press **Enter**.
**Note**  
If the bucket contains objects, the bucket is emptied before you delete it. This can take some time if it's necessary to delete every version of thousands of objects. A notification is displayed after the delete process is complete.

## Configuring the display of Amazon S3 items
<a name="configuring-items-display"></a>

If you're working with a large number of Amazon S3 objects or folders, it's helpful to specify how many are displayed at one time. When the maximum number is displayed, you can choose **Load More** to display the next batch. 

1. On the menu bar, choose **AWS Cloud9**, **Preferences**.

1. In the **Preferences** window, expand **Project Settings**, and go to the **EXTENSIONS** section to choose **AWS Configuration**.

1. In the **AWS Configuration** pane, go to the **AWS > S3: Max Items Per Page** setting.

1. Before choosing to load more, change the default value to the number of S3 items that you want displayed.
**Note**  
The range of accepted values is between 3 and 1000. This setting applies only to the number of objects or folders displayed at one time. All the buckets that you created are displayed at once. By default, you can create up to 100 buckets in each of your AWS accounts.
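The **Load More** behavior is plain batching: display up to the page-size limit, then fetch the next batch on request. A stand-alone sketch of the idea (the 3–1000 bound mirrors the setting described above):

```python
def paginate(items, max_per_page):
    """Yield successive batches of items, the way the S3 view loads them."""
    if not 3 <= max_per_page <= 1000:
        raise ValueError("Max Items Per Page must be between 3 and 1000")
    for start in range(0, len(items), max_per_page):
        yield items[start:start + max_per_page]

objects = [f"object-{i:03d}" for i in range(7)]
pages = list(paginate(objects, 3))
print(len(pages))   # 3 batches: 3 + 3 + 1 objects
print(pages[-1])    # ['object-006']
```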

# Working with Amazon S3 objects
<a name="work-with-S3-objects"></a>

Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata.

**Topics**
+ [Uploading a file to an Amazon S3 bucket](#uploading-s3-object-to-bucket)
+ [Downloading an Amazon S3 object](#downloading-s3-object)
+ [Deleting an Amazon S3 object](#deleting-s3-object)
+ [Generating a presigned URL for an Amazon S3 object](#presigned-s3-object)

## Uploading a file to an Amazon S3 bucket
<a name="uploading-s3-object-to-bucket"></a>

You can use the Toolkit interface or a command to upload a file to a bucket.

Both methods allow you to upload a file from a user's environment and store it as an S3 object in the AWS Cloud. You can upload a file to a bucket or to a folder that organizes that bucket's contents.

## Upload a file to an S3 bucket using the interface


1. In the **AWS Explorer**, choose the **S3** node to view the list of buckets.

1. Open the context menu (right-click) for a bucket or a folder in that bucket, and then choose **Upload File**. 
**Note**  
If you open the context menu (right-click) for an S3 object, you can choose **Upload to Parent**. This enables you to add a file to the folder or bucket that contains the selected file.

1. Using your environment's file manager, select a file, and then choose **Upload**.

   The selected file is uploaded as an S3 object to the bucket or folder. Each object's entry describes the size of the stored object and how long ago it was uploaded. You can pause over the object's listing to view the path, size, and time when it was last modified.

## Upload the current file to an S3 bucket using a command


1. To select a file for upload, choose the file's tab.

1. Press **Ctrl+P** to display the **Commands** pane.

1. For **Go To Anything**, start to enter the phrase `upload file` to display the `AWS: Upload File` command. Choose the command when it appears.

1. For **Step 1: Select a file to upload**, you can choose the file you've selected or browse for another file.

1. For **Step 2: Select an S3 bucket to upload to**, choose a bucket from the list.

   The selected file is uploaded as an S3 object to the bucket or folder. Each object's entry describes the size of the stored object and how long ago it was uploaded. You can pause over the object's listing to view the path, size, and time when it was last modified.

## Downloading an Amazon S3 object
<a name="downloading-s3-object"></a>

You can download objects in an Amazon S3 bucket from the AWS Cloud to a folder in your AWS Cloud9 environment.

1. In the **AWS Explorer**, choose the **S3** node to view the list of buckets.

1. In a bucket or in a folder in a bucket, open the context menu (right-click) for an object, and then choose **Download As**.

1. Using your environment's file manager, select a destination folder, enter a file name, and then choose **Download**.

After a file is downloaded, you can open it in AWS Cloud9.

## Deleting an Amazon S3 object
<a name="deleting-s3-object"></a>

You can permanently delete an object if it's in a non-versioned bucket. But for versioning-enabled buckets, a delete request does not permanently delete that object. Instead, Amazon S3 inserts a delete marker in the bucket. For more information, see [Deleting object versions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeletingObjectVersions.html) in the *Amazon Simple Storage Service User Guide*.
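The difference can be sketched with a toy model: in a versioned bucket, a delete adds a marker on top of the version stack instead of removing data. This is an illustration of the semantics only, not an S3 client:

```python
class Bucket:
    """Toy model of S3 delete semantics; not a real client."""

    def __init__(self, versioned):
        self.versioned = versioned
        self.versions = {}  # key -> list of versions, newest last

    def put(self, key, body):
        self.versions.setdefault(key, []).append(body)

    def delete(self, key):
        if self.versioned:
            # Versioned bucket: hide the object behind a delete marker.
            self.versions.setdefault(key, []).append("DELETE_MARKER")
        else:
            # Non-versioned bucket: the object data is gone for good.
            self.versions.pop(key, None)

    def get(self, key):
        stack = self.versions.get(key)
        if not stack or stack[-1] == "DELETE_MARKER":
            return None
        return stack[-1]

plain = Bucket(versioned=False)
plain.put("report.csv", "v1")
plain.delete("report.csv")
print(plain.get("report.csv"))           # None -- data is gone

versioned = Bucket(versioned=True)
versioned.put("report.csv", "v1")
versioned.delete("report.csv")
print(versioned.get("report.csv"))       # None -- hidden by the marker
print(versioned.versions["report.csv"])  # ['v1', 'DELETE_MARKER'] -- v1 survives
```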

1. In the **AWS Explorer**, choose the **S3** node to view the list of buckets.

1. In a bucket or a folder in a bucket, open the context menu (right-click) for an object, and then choose **Delete**.

1. Choose **Delete** to confirm the deletion.

## Generating a presigned URL for an Amazon S3 object
<a name="presigned-s3-object"></a>

With presigned URLs, an object owner can share private Amazon S3 objects with others by granting time-limited permission to download the objects. For more information, see [Sharing an object with a presigned URL](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html) in the *Amazon S3 User Guide*.

1. In the **AWS Explorer**, choose the **S3** node to view the list of buckets.

1. In a bucket or a folder in a bucket, right-click an object, and then choose **Generate Presigned URL**.

1. In the AWS Toolkit command pane, enter the number of minutes that the URL can be used to access the object. Press **Enter** to confirm.

   The status at the bottom of the IDE confirms that the presigned URL for the object was copied to your clipboard.
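Conceptually, a presigned URL embeds an expiry time plus an HMAC signature that the service later verifies. The following stdlib sketch shows that idea only; real presigned URLs use the full AWS Signature Version 4 algorithm (generated by an SDK or the Toolkit), not this toy scheme, and the names here are illustrative:

```python
import hashlib
import hmac
import time

SECRET = b"demo-signing-key"  # stands in for real AWS credentials

def presign(bucket, key, expires_in, now=None):
    """Build a toy time-limited URL: object path + expiry + HMAC signature."""
    expires_at = int(now if now is not None else time.time()) + expires_in
    payload = f"{bucket}/{key}:{expires_at}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"https://{bucket}.s3.amazonaws.com/{key}?Expires={expires_at}&Signature={sig}"

def verify(url, now):
    """Check the signature and expiry the way the service side would."""
    path, query = url.split("?", 1)
    bucket = path.removeprefix("https://").split(".s3", 1)[0]
    key = path.rsplit("/", 1)[1]
    params = dict(p.split("=", 1) for p in query.split("&"))
    expires_at = int(params["Expires"])
    payload = f"{bucket}/{key}:{expires_at}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["Signature"]) and now <= expires_at

url = presign("my-bucket", "report.csv", expires_in=900, now=1_700_000_000)
print(verify(url, now=1_700_000_500))   # True: within the 15-minute window
print(verify(url, now=1_700_001_000))   # False: expired
```

Because the signature covers the object path and the expiry time, changing either one invalidates the URL; that is also why anyone holding the unmodified URL can download the object until it expires.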

# Working with AWS SAM using the AWS Toolkit
<a name="serverless-apps-toolkit"></a>

The AWS Toolkit provides support for [serverless applications](https://aws.amazon.com/serverless/). Using the AWS Toolkit, you can create serverless applications that contain [AWS Lambda](https://aws.amazon.com/lambda/) functions, and then deploy the applications to an AWS CloudFormation stack.

## Creating a serverless application
<a name="sam-create"></a>

This example shows how to use the AWS Toolkit to create a serverless application. For information about how to run and debug serverless applications, see [Running and debugging serverless applications](#sam-run-debug).

The necessary prerequisites for creating a serverless application are the **AWS SAM CLI** and the **AWS CLI**. Both are included with AWS Cloud9. If the AWS SAM CLI isn't installed or is outdated, you might need to install or upgrade it. For instructions, see [Installing the AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html#install-sam-cli-instructions) and [Upgrading the AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/manage-sam-cli-versions.html#manage-sam-cli-versions-upgrade).

### Create a serverless application with the AWS Toolkit
<a name="create-serverless-app"></a>

This example shows how to create a serverless application with the AWS Toolkit by using the [AWS Serverless Application Model (AWS SAM)](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html).

1. In the **AWS Explorer**, open the context (right-click) menu for the **Lambda** node, and then choose **Create Lambda SAM Application**. 
**Note**  
Alternatively, you can select the menu icon across from the **AWS: Explorer** heading, and choose **Create Lambda SAM Application**.

1. Choose the runtime for your SAM application. For this example, choose **nodejs12.x**.
**Note**  
If you select one of the runtimes with "(Image)," your application is package type `Image`. If you select one of the runtimes without "(Image)," your application is the `Zip` type. For more information about the difference between `Image` and `Zip` package types, see [Lambda deployment packages](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) in the *AWS Lambda Developer Guide*.

1. Choose one of the following templates for your serverless app:
   + **AWS SAM Hello World**: A basic template with a Lambda function that returns the classic "Hello World" message.
   + **AWS Step Functions Sample App**: A sample application that runs a stock-trading workflow. Step Functions orchestrates the interactions of the Lambda functions involved.

1. Choose a location for your new project. If one is available, you can select an existing workspace folder. Otherwise, browse for a different folder. If you choose **Select a different folder**, a dialog box displays where you can select a folder location.

1. Enter a name for your new application. For this example, use `my-sam-app-nodejs`. After you press **Enter**, the AWS Toolkit takes a few moments to create the project.

When the project is created, you can view your application's files in the **Environment** window.

![\[Screenshot showing the available runtimes for SAM applications.\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/sam-create-app-explorer.png)


## Running and debugging serverless applications
<a name="sam-run-debug"></a>

You can use the AWS Toolkit to configure how to debug serverless applications and run them locally in your development environment. You can debug a serverless application that's defined by an AWS Serverless Application Model (AWS SAM) template. This template uses simple YAML syntax to describe resources such as functions, APIs, databases, and event-source mappings that make up a serverless application. 

For a closer look at the AWS SAM template, see the [AWS SAM template anatomy](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification-template-anatomy.html) in the *AWS Serverless Application Model Developer Guide.* 

Alternatively, you can rapidly debug serverless applications that aren't yet defined in a SAM template.

You start to configure debug behavior by using inline actions to identify an eligible AWS Lambda function. To use the infrastructure defined by the SAM template, use the inline action in the relevant YAML-formatted file. To test the function directly without the template, use the context-aware link for the Lambda handler in the application file.

**Note**  
In this example, we're debugging an application that uses JavaScript. But you can use debugging features available in the AWS Toolkit with the following languages and runtimes:  
JavaScript – Node.js 10.*x*, 12.*x*, 14.*x*
Python – 3.7, 3.8, 3.9, 3.10 (Python 2.7 and 3.6 serverless applications can be run but not debugged by the AWS Toolkit.)
Your language choice also affects how context-aware links indicate eligible Lambda handlers. For more information, see [Running and debugging serverless functions directly from code](#run-debug-no-template).

### Using SAM templates to run and debug serverless applications
<a name="sam-run-debug-template"></a>

For applications that are run and debugged using a SAM template, a YAML-formatted file describes the application's behavior and the resources it uses. If you create a serverless application using the AWS Toolkit, a file named `template.yaml` is automatically generated for your project.
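For reference, the generated `template.yaml` for the Hello World sample looks roughly like the following. This is a trimmed sketch; property values vary by project and runtime:

```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: my-sam-app-nodejs

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```

The `HelloWorldFunction` logical ID in the **Resources** section is the name that the debug configuration's `"logicalId"` property must match.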

In this procedure, use the example application that was created in [Creating a serverless application](#sam-create).

### To use a SAM template to run and debug a serverless application


1. To view the files that make up your serverless application, go to the **Environment** window.

1. From the application folder (for example, *my-sample-app*), open the `template.yaml` file.

1. For `template.yaml`, select **Edit Launch Configuration**.

   A new editor displays the `launch.json` file that provides a debugging configuration with default attributes.

1. <a name="properties"></a>Edit or confirm values for the following configuration properties:
   + `"name"` – Enter a reader-friendly name to appear in the **Configuration** dropdown field in the **Run** view.
   + `"target"` – Ensure that the value is `"template"`. That way, the SAM template is the entry point for the debug session. 
   + `"templatePath"` – Enter a relative or absolute path for the `template.yaml` file.
   + `"logicalId"` – Ensure that the name matches the one that's specified in the **Resources** section of the SAM template. In this case, it's the `HelloWorldFunction` of type `AWS::Serverless::Function`.

   For more information about these and other entries in the `launch.json` file, see [Configuration options for debugging serverless applications](sam-debug-config-ref.md).

1. If you're satisfied with your debug configuration, save `launch.json`. Then, choose the green "play" button next to **RUN** to start debugging.
**Note**  
If your SAM application fails to run, check the **Output** window to see if the error is caused by a Docker image not building. You might need to free up disk space in your environment.   
For more information, see [Error running SAM applications locally in AWS Toolkit because the AWS Cloud9 environment doesn't have enough disk space](troubleshooting.md#troubleshooting-dockerimage-toolkit). 

   When the debugging session starts, the **DEBUG CONSOLE** panel shows debugging output and displays any values that are returned by the Lambda function. When debugging SAM applications, the **AWS Toolkit** is selected as the **Output** channel in the **Output** panel.<a name="docker-problem"></a>
**Note**  
For Windows users, if you see a Docker mounting error during this process, you might need to refresh the credentials for your shared drives in **Docker Settings**. A Docker mounting error looks similar to the following.   

   ```
   Fetching lambci/lambda:nodejs10.x Docker container image......
   2019-07-12 13:36:58 Mounting C:\Users\<username>\AppData\Local\Temp\ ... as /var/task:ro,delegated inside runtime container
   Traceback (most recent call last):
   ...requests.exceptions.HTTPError: 500 Server Error: Internal Server Error ...
   ```

### Running and debugging serverless functions directly from code
<a name="run-debug-no-template"></a>

When testing the AWS SAM application, you can choose to run and debug only the Lambda function. Exclude other resources that are defined by the SAM template. This approach involves using an inline action to identify Lambda function handlers in the source code that can be directly invoked. 

The Lambda handlers that are detected by context-aware links depend on the language and runtime you're using for your application.


|  Language/runtime  | Conditions for Lambda functions to be identified by context-aware links | 
| --- | --- | 
|  JavaScript (Node.js 10.x, 12.x, and 14.x)  |  The function has the following features: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/serverless-apps-toolkit.html)  | 
|  Python (3.7, 3.8, 3.9, and 3.10)  |  The function has the following features: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/serverless-apps-toolkit.html)  | 

### To run and debug a serverless application directly from the application code




1. To view your serverless application files, navigate to the application folder by choosing the folder icon next to the editor.

1. From the application folder (for example, *my-sample-app*), expand the function folder (in this example, *hello-world*) and open the `app.js` file.

1. In the inline action that identifies an eligible Lambda handler function, choose **Add Debug Configuration**. If the **Add Debug Configuration** option doesn't appear, you must enable code lenses. To enable code lenses, see [Enabling AWS Toolkit code lenses](enable-code-lenses.md).  
![\[Access the Add Debug Configuration option in the inline action for a Lambda function handler.\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/direct_invoke_config.png)

1. Select the runtime where your SAM application runs.

1. In the editor for the `launch.json` file, edit or confirm values for the following configuration properties:
   + `"name"` – Enter a reader-friendly name.
   + `"target"` – Ensure that the value is `"code"` so that a Lambda function handler is directly invoked.
   + `"lambdaHandler"` – Enter the name of the method within your code that Lambda calls to invoke your function. For example, for applications in JavaScript, the default is `app.lambdaHandler`.
   + `"projectRoot"` – Enter the path to the application file that contains the Lambda function.
   + `"runtime"` – Enter or confirm a valid runtime for the Lambda execution environment (for example, `"nodejs12.x"`).
   + `"payload"` – Choose one of the following options to define the event payload that you want to provide to your Lambda function as input:
     + `"json"`: JSON-formatted key-value pairs that define the event payload.
     + `"path"`: A path to the file that's used as the event payload.

1. If you're satisfied with the debug configuration, choose the green "play" button next to **RUN** to start debugging.

   When the debugging session starts, the **DEBUG CONSOLE** panel shows debugging output and displays any values that are returned by the Lambda function. When debugging SAM applications, **AWS Toolkit** is selected as the **Output** channel in the **Output** panel.
**Note**  
If you see Docker mentioned in error messages, see this [note](#docker-problem).
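Pulling the properties above together, a direct-invoke launch configuration might look like the following sketch. The name, handler, path, and payload are illustrative:

```
{
    "type": "aws-sam",
    "request": "direct-invoke",
    "name": "Invoke hello-world directly",
    "invokeTarget": {
        "target": "code",
        "lambdaHandler": "app.lambdaHandler",
        "projectRoot": "hello-world"
    },
    "lambda": {
        "runtime": "nodejs12.x",
        "payload": {
            "json": {}
        }
    }
}
```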

### Running and debugging local Amazon API Gateway resources
<a name="run-debug-api-gateway"></a>

You can run or debug AWS SAM API Gateway local resources that are specified in `template.yaml`. To do so, run an AWS Cloud9 launch configuration of `type=aws-sam` with `invokeTarget.target=api`.

**Note**  
API Gateway supports two types of APIs: REST APIs and HTTP APIs. However, the API Gateway feature in the AWS Toolkit supports only REST APIs. HTTP APIs are sometimes called "API Gateway V2 APIs."

**To run and debug local API Gateway resources**

1. Choose one of the following approaches to create a launch config for an AWS SAM API Gateway resource:
   + **Option 1**: Visit the handler source code (specifically, a `.js`, `.cs`, or `.py` file) in your AWS SAM project, hover over the Lambda handler, and choose **Add Debug Configuration**. If the **Add Debug Configuration** option doesn't appear, enable code lenses. To enable code lenses, see [Enabling AWS Toolkit code lenses](enable-code-lenses.md). Then, in the menu, choose the item marked **API Event**.
   + **Option 2**: Edit `launch.json` and create a new launch configuration using the following syntax.

     ```
     {
         "type": "aws-sam",
         "request": "direct-invoke",
         "name": "myConfig",
         "invokeTarget": {
             "target": "api",
             "templatePath": "n12/template.yaml",
             "logicalId": "HelloWorldFunction"
         },
         "api": {
             "path": "/hello",
             "httpMethod": "post",
             "payload": {
                 "json": {}
             }
         }, 
         "sam": {},
         "aws": {}
     }
     ```

1. In the dropdown menu next to the **Run** button, choose the launch configuration (named `myConfig` in the preceding example).

1. (Optional) Add breakpoints to your Lambda project code.

1. Choose the green "play" button next to **Run**.

1. In the output pane, view the results.

#### Configuration
<a name="run-debug-api-gateway-configuration"></a>

When you use the `invokeTarget.target` property value `api`, the Toolkit changes the launch configuration validation and behavior to support an `api` field.

```
{
    "type": "aws-sam",
    "request": "direct-invoke",
    "name": "myConfig",
    "invokeTarget": {
        "target": "api",
        "templatePath": "n12/template.yaml",
        "logicalId": "HelloWorldFunction"
    },
    "api": {
        "path": "/hello",
        "httpMethod": "post",
        "payload": {
            "json": {}
        },
        "querystring": "abc=def&qrs=tuv",
        "headers": {
            "cookie": "name=value; name2=value2; name3=value3"
        }
    },
    "sam": {},
    "aws": {}
}
```

Replace the values in the example as follows:

**invokeTarget.logicalId**  
An API resource.

**path**  
The API path that the launch config requests (for example, `"path": "/hello"`).  
Must be a valid API path resolved from the `template.yaml` that's specified by `invokeTarget.templatePath`.

**httpMethod**  
Use one of the following verbs: `delete`, `get`, `head`, `options`, `patch`, `post`, or `put`.

**payload**  
The JSON payload (HTTP body) to send in the request with the same structure and rules as the lambda.payload field.  
`payload.path` points to a file that contains the JSON payload.  
`payload.json` specifies a JSON payload inline.

**headers**  
Optional map of name-value pairs. Use it to specify HTTP headers to include in the request.  

```
"headers": {
     "accept-encoding": "deflate, gzip;q=1.0, *;q=0.5",
     "accept-language": "fr-CH, fr;q=0.9, en;q=0.8, de;q=0.7, *;q=0.5",
     "cookie": "name=value; name2=value2; name3=value3",
     "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
}
```

**querystring**  
(Optional) Use this string to set the `querystring` of the request (for example, `"querystring": "abc=def&ghi=jkl"`).

**aws**  
How AWS connection information is provided. For more information, see the **AWS connection (`aws`) properties** table in [Configuration options for debugging serverless applications](sam-debug-config-ref.md).

**sam**  
How the AWS SAM CLI builds the application. For more information, see the **AWS SAM CLI ("`sam`") properties** in [Configuration options for debugging serverless applications](sam-debug-config-ref.md).

## Syncing a serverless application
<a name="deploy-serverless-app"></a>

This example shows how to sync the serverless application that was created in the previous topic ([Creating a serverless application](#sam-create)) to AWS using the AWS Toolkit.

### Prerequisites
<a name="deploy-sam-prereq"></a>
+ Make sure to choose a globally unique Amazon S3 bucket name.
+ Ensure that the credentials that you configured include the appropriate read/write access to the following services: Amazon S3, CloudFormation, AWS Lambda, and Amazon API Gateway.
+ For applications with deployment type `Image`, make sure that you have both a globally unique Amazon S3 bucket name and an Amazon ECR repository URI to use for the deployment.

### Syncing a serverless application
<a name="deploy-sam-proc"></a>

1. In the **AWS Explorer** window, open the context (right-click) menu for the **Lambda** node and select **Sync SAM Application**.

1. Choose the AWS Region to deploy to. 

1. Choose the `template.yaml` file to use for the deployment.

1. Enter the name of an Amazon S3 bucket that this deployment can use. The bucket must be in the Region that you're deploying to.
**Warning**  
The Amazon S3 bucket name must be globally unique across all existing bucket names in Amazon S3. If the name you chose is already taken, add a unique identifier to it or choose another name.

1. If your serverless application includes a function with package type `Image`, enter the name of an Amazon ECR repository that this deployment can use. The repository must be in the Region that you're deploying to.

1. Enter a name for the deployed stack, either a new stack name or an existing stack name.

1. Verify the success of the deployment on the **AWS Toolkit** tab of the **Console**.

   If an error occurs, a message pops up in the lower right.

   If this happens, check the text in the **AWS Toolkit** tab for details. The following is an example of error details.

   ```
   Error with child process: Unable to upload artifact HelloWorldFunction referenced by CodeUri parameter of HelloWorldFunction resource.
   S3 Bucket does not exist. Execute the command to create a new bucket
   aws s3 mb s3://pbart-my-sam-app-bucket
   An error occurred while deploying a SAM Application. Check the logs for more information by running the "View AWS Toolkit Logs" command from the Command Palette.
   ```

   In this example, the error occurred because the Amazon S3 bucket didn't exist.

When the deployment is complete, you'll see your application listed in the **AWS Explorer**. To learn how to invoke the Lambda function that was created as part of the application, see [Invoking remote Lambda functions](lambda-toolkit.md#remote-lambda).

## Deleting a serverless application from the AWS Cloud
<a name="delete-serverless-app"></a>

Deleting a serverless application involves deleting the CloudFormation stack that you previously deployed to the AWS Cloud. Note that this procedure does not delete your application directory from your local host.

1. Open the **AWS Explorer**.

1. In the **AWS Explorer** window, expand the Region containing the deployed application that you want to delete, and then expand **CloudFormation**.

1. Open the context (right-click) menu for the name of the CloudFormation stack that corresponds to the serverless application that you want to delete. Then, choose **Delete CloudFormation Stack**.

1. To confirm that you want to delete the selected stack, choose **Delete**.

If the stack deletion succeeds, the AWS Toolkit removes the stack name from the CloudFormation list in **AWS Explorer**.

# Enabling AWS Toolkit code lenses
<a name="enable-code-lenses"></a>

This step shows how you can enable AWS Toolkit code lenses.

1. On the menu bar, choose **AWS Cloud9**, and then **Preferences**.

1. On the **Preferences** tab, in the sidebar, choose **AWS Toolkit**.

1. To enable code lenses, choose **Enable Code Lenses**.

# Configuration options for debugging serverless applications
<a name="sam-debug-config-ref"></a>

With inline actions, you can easily find and define properties for invoking Lambda functions directly or with the SAM template. You can also define properties for `"lambda"` (how the function runs), `"sam"` (how the AWS SAM CLI builds the application), and `"aws"` (how AWS connection information is provided). 


**AWS SAM: Direct Lambda handler invoke / Template-based Lambda invoke**  

| Property | Description | 
| --- | --- | 
|  `type`  |  Specifies which extension manages the launch configuration. Always set to `aws-sam` to use the AWS SAM CLI to build and debug locally.  | 
|  `name`  |  Specifies a reader-friendly name to appear in the **Debug launch configuration** list.  | 
| `request` |  Specifies the type of configuration to be performed by the designated extension (`aws-sam`). Always set to `direct-invoke` to start the Lambda function.  | 
|  `invokeTarget`  |  Specifies the entry point for invoking the resource. For invoking the Lambda function directly, set values for the following `invokeTarget` fields:  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/sam-debug-config-ref.html) For invoking the Lambda resources with the SAM template, set values for the following `invokeTarget` fields: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/sam-debug-config-ref.html)  | 


**Lambda (`"lambda"`) properties**  

|  Property | Description | 
| --- | --- | 
|  `environmentVariables`  |  Passes operational parameters to your function. For example, if you're writing to an Amazon S3 bucket, configure the bucket name as an environment variable. Do not hard code the bucket name that you're writing to.  | 
| `payload` |  Provides two options for the event payload that you provide to your Lambda function as input. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/sam-debug-config-ref.html)  | 
|  `memoryMB`  |  Specifies megabytes of memory provided for running an invoked Lambda function.  | 
| `runtime` |  Specifies the runtime used by the Lambda function. For more information, see [AWS Lambda runtimes](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html).  | 
|  `timeoutSec`  |  Sets the time allowed, in seconds, before the debug session times out.  | 

The AWS Toolkit extension uses the AWS SAM CLI to build and debug serverless applications locally. You can configure the behavior of AWS SAM CLI commands using properties of the `"sam"` configuration in the `launch.json` file.


**AWS SAM CLI (`"sam"`) properties**  

| Property |  Description  |  Default value  | 
| --- | --- | --- | 
|  `buildArguments`  | Configures how the `sam build` command builds your Lambda source code. To view build options, see [sam build](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-build.html) in the *AWS Serverless Application Model Developer Guide*. |  Empty string  | 
|  `containerBuild`  |  Indicates whether to build your function inside an AWS Lambda-like Docker container.   |  `false`  | 
|  `dockerNetwork`  |  Specifies the name or ID of an existing Docker network that the Lambda Docker containers should connect to, along with the default bridge network. If not specified, the Lambda containers only connect to the default bridge Docker network.   |  Empty string  | 
|  `localArguments`  |  Additional local invoke arguments.  |  Empty string  | 
|  `skipNewImageCheck`  |  Specifies whether the command should skip pulling down the latest Docker image for the Lambda runtime.   |  `false`  | 
|  `template`  |  Customizes your SAM template by using parameters to input customer values to it. For more information, see [Parameters](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html) in the *AWS CloudFormation User Guide*.  |  `"parameters":{}`  | 


**AWS connection (`"aws"`) properties**  

| Property | Description | Default value | 
| --- | --- | --- | 
| `credentials` |  Selects a specific profile (for example, `profile:default`) from your credential file to get AWS credentials.   | The AWS credentials provided by your existing shared AWS config file or shared AWS credentials file. | 
| `region` |  Sets the AWS Region of the service (for example, `us-east-1`).  | The default AWS Region associated with the active credentials profile.  | 
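
As a sketch of how these properties fit together, the following hypothetical `launch.json` fragment combines `sam` and `aws` settings for a template-based debug configuration. The template path, logical ID, profile name, and Region are placeholders; the field names follow the AWS Toolkit launch-configuration schema.

```json
{
    "configurations": [
        {
            "type": "aws-sam",
            "request": "direct-invoke",
            "name": "hello-world:app.lambdaHandler",
            "invokeTarget": {
                "target": "template",
                "templatePath": "template.yaml",
                "logicalId": "HelloWorldFunction"
            },
            "sam": {
                "containerBuild": false,
                "skipNewImageCheck": false,
                "template": {
                    "parameters": {}
                }
            },
            "aws": {
                "credentials": "profile:default",
                "region": "us-east-1"
            }
        }
    ]
}
```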

# Working with AWS Step Functions using the AWS Toolkit
<a name="bulding-stepfunctions"></a>

The AWS Toolkit provides support for [AWS Step Functions](https://aws.amazon.com/step-functions/). With Step Functions, you can create state machines that define workflows for AWS Lambda functions and other AWS services that support business-critical applications.

You can use the AWS Toolkit to do the following with Step Functions:
+ Create and publish a state machine, which is a workflow made up of individual steps.
+ Download a file that defines a state machine workflow.
+ Run a state machine workflow with input you've entered or selected. 

**Topics**
+ [Prerequisites](#bulding-stepfunctions-pre)
+ [Create and publish a state machine](#state-machine-create)
+ [Run a state machine in AWS Toolkit](#starting-stepfunctions)
+ [Download a state machine definition file and visualize its workflow](#sfn-download)

## Prerequisites
<a name="bulding-stepfunctions-pre"></a>

Step Functions can run code and access AWS resources (such as invoking a Lambda function). To maintain security, you must grant Step Functions access to those resources by using an IAM role. 

With AWS Toolkit, you can take advantage of automatically generated IAM roles that are valid for the AWS Region in which you create the state machine. To create your own IAM role for a state machine, see [How AWS Step Functions Works with IAM](https://docs.aws.amazon.com/step-functions/latest/dg/procedure-create-iam-role.html) in the *AWS Step Functions Developer Guide*. 
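
If you do create your own role, its trust policy must allow the Step Functions service to assume it. A minimal sketch of such a trust policy follows; attach the execution permissions your workflow needs (for example, `lambda:InvokeFunction`) in a separate permissions policy.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "states.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```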

## Create and publish a state machine
<a name="state-machine-create"></a>

When you create a state machine with AWS Toolkit, you choose a starter template that defines a workflow for a business case. You can then edit or replace that template to suit your specific needs. For more information on defining a state machine in a file that represents its structure, see [Amazon States Language](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html) in the *AWS Step Functions Developer Guide*.
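
For reference, a minimal ASL definition looks like the following: a single `Pass` state that returns a fixed result and ends the workflow.

```json
{
    "Comment": "A minimal Hello World state machine",
    "StartAt": "HelloWorld",
    "States": {
        "HelloWorld": {
            "Type": "Pass",
            "Result": "Hello World!",
            "End": true
        }
    }
}
```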

1. In the **AWS Explorer** pane, open the context (right-click) menu for **Step Functions**, and then choose **Create a new Step Function state machine**.

1. In the command panel, choose a starter template for your state machine's workflow. 

1. Next, choose a format for the Amazon States Language (ASL) file that defines your state machine.

   An editor opens to display the ASL file that defines the state machine's workflow.
**Note**  
For information on editing the ASL file to customize your workflow, see [State Machine Structure](https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-state-machine-structure.html). 

1. In the ASL file, choose **Publish to Step Functions** to add your state machine to the AWS Cloud. 
**Note**  
You can also choose **Render graph** in the ASL file to display a visual representation of the state machine's workflow.  
![\[Diagram that shows how to choose Publish to Step Functions\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/publish-stepfunction.png)

1. In the command panel, choose an AWS Region to host your step function.

1. Next, you can choose to create a new step function or update an existing one.

------
#### [ Quick Create  ]

   This option allows you to create a new step function from the ASL file using a [Standard or Express workflow](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-standard-vs-express.html). You're asked to specify the following:
   + An IAM role that allows your step function to run code and access AWS resources. (You can choose an automatically generated IAM role that's valid for the AWS Region in which you create the state machine.)
   + A name for your new function.

   You can check that your state machine was successfully created and obtain its ARN in the AWS Toolkit output tab.

------
#### [ Quick Update ]

   If a state machine already exists in the AWS Region, you can choose one to update with the current ASL file. 

   You can check that your state machine was successfully updated and obtain its ARN in the AWS Toolkit output tab.

------

   After you create a state machine, it appears under **Step Functions** in the **AWS Explorer** pane. If it doesn't immediately appear, choose **Refresh Explorer** from the **Toolkit** menu.

## Run a state machine in AWS Toolkit
<a name="starting-stepfunctions"></a>

You can use AWS Toolkit to run remote state machines. The running state machine receives JSON text as input and passes that input to the first state in the workflow. Individual states receive JSON as input and usually pass JSON as output to the next state. For more information, see [ Input and Output Processing in Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-input-output-filtering.html).

1. In the **AWS Explorer** pane, choose **Step Functions**. Then open the context (right-click) menu for a specific state machine and choose **Start Execution**.

1. In the **Start Execution** pane, add the JSON-formatted input for the state machine's workflow by either entering the text directly in the field or uploading a file from your local device.

1. Choose **Execute**.

   The AWS Toolkit output tab displays a confirmation that the workflow has started, along with the ARN of the execution. You can use that ARN in the AWS Step Functions console to check whether the workflow ran successfully. You can also see the timestamps for when your workflow started and ended. 
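
The IDE steps above also have an AWS CLI equivalent, which is convenient for scripting. The following is a sketch; the ARNs are placeholders, and the commands require valid AWS credentials:

```shell
# Start an execution with JSON input (the state machine ARN is a placeholder)
aws stepfunctions start-execution \
    --state-machine-arn arn:aws:states:us-east-1:111122223333:stateMachine:MyStateMachine \
    --input '{"key": "value"}'

# Check the status, start time, and stop time of that execution
aws stepfunctions describe-execution \
    --execution-arn arn:aws:states:us-east-1:111122223333:execution:MyStateMachine:my-run
```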

## Download a state machine definition file and visualize its workflow
<a name="sfn-download"></a>

Downloading a state machine means downloading a file that contains JSON text representing the structure of that state machine. You can then edit this file to create a new state machine or update an existing one. For more information, see [Amazon States Language](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html) in the *AWS Step Functions Developer Guide*.

1. In the **AWS Explorer** pane, choose **Step Functions**. Then open the context (right-click) menu for a specific state machine and choose **Download Definition**.
**Note**  
The context menu also offers the options to **Copy Name** and **Copy ARN**.

1. In the **Save** dialog box, select the folder in your environment where you want to store the downloaded state machine file, and then choose **Save**.

   The JSON-formatted file that defines your state machine's workflow is displayed in an editor.

1. To display a visual representation of the workflow, choose **Render graph**.

   A window displays a flowchart, which shows the sequence of states in your state machine's workflow.  
![\[Visual representation of the state machine's workflow\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/render-graph.png)
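
If you prefer the command line, the `describe-state-machine` command returns the same definition text. The state machine ARN and output file name below are placeholders:

```shell
# Save the Amazon States Language definition of a state machine to a local file
aws stepfunctions describe-state-machine \
    --state-machine-arn arn:aws:states:us-east-1:111122223333:stateMachine:MyStateMachine \
    --query definition \
    --output text > my-state-machine.asl.json
```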

# Working with Systems Manager automation documents
<a name="systems-manager-automation-docs"></a>

With AWS Systems Manager, you have visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface that you can use to view operational data from multiple AWS services and automate operational tasks across your AWS resources.

A [Systems Manager document](https://docs.aws.amazon.com//systems-manager/latest/userguide/sysman-systems-manager-docs.html) defines the actions that Systems Manager performs on your managed instances. An automation document is a type of Systems Manager document that's used to perform common maintenance and deployment tasks. This includes creating or updating an Amazon Machine Image (AMI). This topic outlines how to create, edit, publish, and delete automation documents with AWS Toolkit.

**Topics**
+ [Assumptions and prerequisites](#systems-manager-assumptions)
+ [IAM permissions for Systems Manager Automation documents](#systems-manager-permissions)
+ [Creating a new Systems Manager automation document](#systems-manager-create)
+ [Publishing a Systems Manager automation document](#systems-manager-publish)
+ [Editing an existing Systems Manager automation document](#systems-manager-open)
+ [Working with versions](#systems-manager-edit-default-version)
+ [Deleting a Systems Manager automation document](#systems-manager-delete)
+ [Running a Systems Manager automation document](#systems-manager-run)
+ [Troubleshooting Systems Manager automation documents in AWS Toolkit](systems-manager-troubleshoot.md)

## Assumptions and prerequisites
<a name="systems-manager-assumptions"></a>

Before you begin, make sure that you meet the following conditions:
+ You’re familiar with Systems Manager. For more information, see [What is AWS Systems Manager?](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) in the *AWS Systems Manager User Guide*.
+ You’re familiar with Systems Manager automation use cases. For more information, see [AWS Systems Manager Automation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html) in the *AWS Systems Manager User Guide*.

## IAM permissions for Systems Manager Automation documents
<a name="systems-manager-permissions"></a>

To create, edit, publish, and delete Systems Manager automation documents, you must have a credentials profile that contains the necessary AWS Identity and Access Management (IAM) permissions. The following policy document defines the necessary IAM permissions that can be used in a principal policy.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:ListDocuments",
                "ssm:ListDocumentVersions",
                "ssm:DescribeDocument",
                "ssm:GetDocument",
                "ssm:CreateDocument",
                "ssm:UpdateDocument",
                "ssm:UpdateDocumentDefaultVersion",
                "ssm:DeleteDocument"
            ],
            "Resource": "*"
        }
    ]
}
```

------

For information about how to update an IAM policy, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

## Creating a new Systems Manager automation document
<a name="systems-manager-create"></a>

You can create an automation document in `JSON` or `YAML` using AWS Toolkit. When you create an automation document, it's presented in an untitled file. You can name your file and save it. However, the file isn't uploaded to AWS until you publish it.

**To create a new automation document**

1. Choose the search icon on the left navigation pane or press **Ctrl+P** to open the Search pane.

1. In the Search pane, start to enter the term "systems manager" and choose the **AWS: Create a new Systems Manager Document Locally** command when it displays.

1. Choose one of the starter templates for a "Hello World" example.

1. Choose either `JSON` or `YAML` as the format for your document.

   The editor displays your new automation document.

**Note**  
When you first create a local automation document, it doesn't automatically appear in AWS. Before you can run it, you must publish it to AWS. 

## Publishing a Systems Manager automation document
<a name="systems-manager-publish"></a>

After you create or edit your automation document in AWS Toolkit, you can publish it to AWS.

**To publish your automation document**

1. Open the automation document that you want to publish using the procedure that's outlined in [Editing an existing Systems Manager automation document](#systems-manager-open).

1. Choose the search icon on the left navigation pane or press **Ctrl+P** to open the Search pane.

1. In the Search pane, start to enter the term "systems manager" and choose the **AWS: Publish a new Systems Manager Document** command when it displays.

1. For **Step 1 of 3**, choose the AWS Region where you want to publish the document.

1. For **Step 2 of 3**, choose **Quick Create** to create an automation document. Or, choose **Quick Update** to update an existing automation document in that Region.
**Note**  
You can update only automation documents that you own. If you choose **Quick Update** and you don't own any documents in that Region, a message informs you to publish a document before updating it.

1. For **Step 3 of 3**, depending on your choice in the previous step, enter the name of a new automation document or select an existing document to update.
**Note**  
When you publish an update to an existing automation document in AWS, a new version is added to the document. If a document has multiple versions, you can set the [default one](#systems-manager-edit-default-version).
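
Quick Create and Quick Update correspond to the `create-document` and `update-document` API operations. As a hedged sketch, assuming a local file named `my-automation.yaml` and valid AWS credentials:

```shell
# Quick Create: publish a new Automation document from a local YAML file
aws ssm create-document \
    --name "MyAutomationDocument" \
    --document-type "Automation" \
    --document-format YAML \
    --content file://my-automation.yaml

# Quick Update: add a new version to a document that you own
aws ssm update-document \
    --name "MyAutomationDocument" \
    --document-format YAML \
    --content file://my-automation.yaml \
    --document-version '$LATEST'
```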

## Editing an existing Systems Manager automation document
<a name="systems-manager-open"></a>

You use the AWS Explorer to find existing Systems Manager automation documents. When you open an existing document, it appears as an untitled file in an AWS Cloud9 editor. There are three types of automation documents that you can download:
+ **Owned by Amazon**: Pre-configured SSM documents that can be used by specifying parameters at runtime.
+ **Owned by me**: Documents that you've created and published to AWS. 
+ **Shared with me**: Documents that owners have shared with you, based on your AWS account ID. 

The only documents that you can update on AWS are those that are categorized as **Owned by me**. You can also download automation documents that are shared with you or owned by Amazon, and edit them in AWS Cloud9. However, when you publish to AWS, you must either create a new document or update an existing document that you own. You can't create new versions of documents that are owned by Amazon or by another owner.

For more information, see [AWS Systems Manager documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/documents.html) in the *AWS Systems Manager User Guide*.

1. In the AWS Explorer, for **Systems Manager**, choose the category of SSM document you want to download: **Owned by Amazon**, **Owned by me**, or **Shared with me**.

1. For a specific document, open the context (right-click) menu and choose **Download as YAML** or **Download as JSON**.

   The formatted SSM document displays in a new editor tab.

After you finish editing, you can use the **AWS: Publish a new Systems Manager Document** command to create a new document in the AWS Cloud or update an existing document that you own. 

## Working with versions
<a name="systems-manager-edit-default-version"></a>

Systems Manager automation documents use versions for change management. With AWS Toolkit, you can set the default version of the document, which is the version that's used when you run the document. 

**To set a default version**
+ In the AWS Explorer, navigate to the document that you want to set the default version on, open the context (right-click) menu for the document, and choose **Set default version**.
**Note**  
If the chosen document only has one version, you can't change the default.
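
The same operation is available from the AWS CLI. The document name and version number below are placeholders:

```shell
# Make version 2 the default version of an Automation document that you own
aws ssm update-document-default-version \
    --name "MyAutomationDocument" \
    --document-version "2"
```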

## Deleting a Systems Manager automation document
<a name="systems-manager-delete"></a>

You can delete the automation documents that you own in AWS Toolkit. Deleting an Automation document deletes the document and all versions of the document. 

**Important**  
Deleting is a destructive action that can't be undone.
Deleting an automation document that has already been started doesn't delete the AWS resources that were created or modified when it was run.
Deleting is permitted only if you own the document.

**To delete your automation document**

1. In the AWS Explorer pane, for **Systems Manager**, expand **Owned by Me** to list your documents.

1. Open the context (right-click) menu for the document you want to delete, and choose **Delete document**.

1. In the warning dialog box that displays, choose **Delete** to confirm.
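
The equivalent AWS CLI command is `delete-document`. The document name below is a placeholder:

```shell
# Delete an Automation document that you own, including all of its versions
aws ssm delete-document --name "MyAutomationDocument"
```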

## Running a Systems Manager automation document
<a name="systems-manager-run"></a>

After your automation document is published to AWS, you can run it to perform tasks on your behalf in your AWS account. To run your Automation document, you use the AWS Management Console, the Systems Manager APIs, the AWS CLI, or the AWS Tools for PowerShell. For instructions on how to run an automation document, see [Running a simple automation](https://docs.aws.amazon.com/systems-manager/latest/userguide/running-simple-automations.html) in the *AWS Systems Manager User Guide*.

Alternatively, if you want to use one of the AWS SDKs with the Systems Manager APIs to run your Automation document, see the [AWS SDK references](https://aws.amazon.com/developer/tools/).

**Important**  
Running an automation document can create new resources in AWS and can incur billing costs. We strongly recommend that you understand what your automation document will create in your account before you run it.
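
As a sketch of running an automation document from the AWS CLI: the document name, role ARN, and parameters below are placeholders, and the parameters you pass depend on your document's schema.

```shell
# Start an automation execution for a published Automation document
aws ssm start-automation-execution \
    --document-name "MyAutomationDocument" \
    --parameters "AutomationAssumeRole=arn:aws:iam::111122223333:role/MyAutomationRole"

# List recent executions of the document to check on progress
aws ssm describe-automation-executions \
    --filters Key=DocumentNamePrefix,Values=MyAutomationDocument
```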

# Troubleshooting Systems Manager automation documents in AWS Toolkit
<a name="systems-manager-troubleshoot"></a>

**I saved my automation document in AWS Toolkit, but I don’t see it in the AWS Management Console.**  
Saving an automation document in AWS Toolkit doesn't publish the automation document to AWS. For more information about publishing your Automation document, see [Publishing a Systems Manager automation document](systems-manager-automation-docs.md#systems-manager-publish).

**Publishing my automation document failed with a permissions error.**  
Make sure your AWS credentials profile has the necessary permissions to publish Automation documents. For an example permissions policy, see [IAM permissions for Systems Manager Automation documents](systems-manager-automation-docs.md#systems-manager-permissions).

**I published my automation document to AWS, but I don’t see it in the AWS Explorer pane.**  
Make sure that you’ve published the document to the same AWS Region you’re browsing in the AWS Explorer pane.

**I’ve deleted my automation document, but I’m still being billed for the resources it created.**  
Deleting an automation document doesn’t delete the resources it created or modified. You can identify the AWS resources that you’ve created from the [AWS Billing Management Console](https://console.aws.amazon.com/billing/home), explore your charges, and choose what resources to delete from there.

# Working with Amazon ECR in AWS Cloud9 IDE
<a name="ecr"></a>

Amazon Elastic Container Registry (Amazon ECR) is an AWS managed container-registry service that's secure and scalable. Several Amazon ECR service functions are accessible from the AWS Toolkit Explorer:
+ Creating a repository.
+ Creating an AWS App Runner service for your repository or tagged image.
+ Accessing image tag and repository URIs or ARNs.
+ Deleting image tags and repositories.

You can also access the full range of Amazon ECR functions from the AWS Cloud9 terminal by using the AWS CLI and other supported tools.

For more information about Amazon ECR, see [What is Amazon ECR?](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) in the Amazon Elastic Container Registry User Guide.

## Prerequisites
<a name="prereqs-awstoolkit-vscode-ecr"></a>

The following are pre-installed in the AWS Cloud9 IDE for AWS Cloud9 Amazon EC2 environments. They're required to access the Amazon ECR service from the AWS Cloud9 IDE. 

### IAM credentials
<a name="create-an-iam-user"></a>

The IAM role that you created and used for authentication in the AWS console. For more information about IAM, see the [AWS Identity and Access Management User Guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/).

### Docker configuration
<a name="create-an-iam-user"></a>

Docker is pre-installed in the AWS Cloud9 IDE for AWS Cloud9 Amazon EC2 environments. For more information about Docker, see [Install Docker Engine](https://docs.docker.com/engine/install/).

### AWS CLI version 2 configuration
<a name="create-an-iam-user"></a>

AWS CLI version 2 is pre-installed in the AWS Cloud9 IDE for AWS Cloud9 Amazon EC2 environments. For more information about AWS CLI version 2, see [Installing, updating, and uninstalling the AWS CLI version 2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

**Topics**
+ [Prerequisites](#prereqs-awstoolkit-vscode-ecr)
+ [Using Amazon ECR with AWS Cloud9 IDE](ecr-working.md)

# Working with Amazon ECR service in AWS Cloud9
<a name="ecr-working"></a>

You can access the Amazon Elastic Container Registry (Amazon ECR) service directly from the AWS Explorer in AWS Cloud9 IDE. You can use Amazon ECR to push a program image to an Amazon ECR repository. To get started, follow these steps:

1. Create a Dockerfile that contains the information necessary to build an image.

1. Build an image from that Dockerfile and tag the image for processing.

1. Create a repository in your Amazon ECR instance. 

1. Push the tagged image to your repository.

**Topics**
+ [Prerequisites](#prereqs-vscode-ecr)
+ [1. Creating a Dockerfile](#dockerfile-ecr-cloud9toolkit)
+ [2. Building your image from your Dockerfile](#build-docker-image)
+ [3. Creating a new repository](#create-repository)
+ [4. Pushing, pulling, and deleting images](#push-image)

## Prerequisites
<a name="prereqs-vscode-ecr"></a>

Before you can use the Amazon ECR feature of the AWS Toolkit for AWS Cloud9, make sure that you meet these [prerequisites](ecr.md#prereqs-awstoolkit-vscode-ecr) first. These prerequisites are pre-installed in the AWS Cloud9 IDE for AWS Cloud9 Amazon EC2 environments and are required to access Amazon ECR.

## 1. Creating a Dockerfile
<a name="dockerfile-ecr-cloud9toolkit"></a>

Docker uses a file that's called a Dockerfile to define an image that can be pushed and stored on a remote repository. Before you can upload an image to an ECR repository, create a Dockerfile and then build an image from that Dockerfile.

**Creating a Dockerfile**

1. To navigate to the directory where you want to store your Dockerfile, choose the **Toggle Tree** option in the left navigation bar within your AWS Cloud9 IDE.

1. Create a new file named **Dockerfile**.
**Note**  
AWS Cloud9 IDE might prompt you to select a file type or file extension. If this occurs, select **plaintext**. The AWS Cloud9 IDE has a "dockerfile" file extension, but we don't recommend using it because the extension might cause conflicts with certain versions of Docker or other associated applications.

**Editing your Dockerfile using AWS Cloud9 IDE**

If your Dockerfile has a file extension, open the context (right-click) menu for the file and remove the file extension. A Dockerfile with extensions might cause conflicts with certain versions of Docker or other associated applications.

After the file extension is removed from your Dockerfile:

1. Open the empty Dockerfile directly in AWS Cloud9 IDE.

1. Copy the contents of the following example into your Dockerfile.  
**Example Dockerfile image template**  

   ```
   FROM ubuntu:22.04
   
   # Install dependencies
   RUN apt-get update && \
    apt-get -y install apache2
   
   # Write the hello world message
   RUN echo 'Hello World!' > /var/www/html/index.html
   
   # Configure apache
   # Configure apache
   RUN echo '. /etc/apache2/envvars' > /root/run_apache.sh && \
    echo 'mkdir -p /var/run/apache2' >> /root/run_apache.sh && \
    echo 'mkdir -p /var/lock/apache2' >> /root/run_apache.sh && \
    echo '/usr/sbin/apache2 -D FOREGROUND' >> /root/run_apache.sh && \
    chmod 755 /root/run_apache.sh
   
   EXPOSE 80
   
   CMD /root/run_apache.sh
   ```

   This Dockerfile uses an Ubuntu 22.04 image. The **RUN** instructions update the package caches, install software packages for the web server, and then write the "Hello World!" content to the document root of the web server. The **EXPOSE** instruction exposes port 80 on the container, and the **CMD** instruction starts the web server.

1. Save your Dockerfile.

## 2. Building your image from your Dockerfile
<a name="build-docker-image"></a>

The Dockerfile that you created contains the necessary information to build an image for a program. Before you can push that image to your Amazon ECR instance, first build the image.

**Building an image from your Dockerfile**

1. To navigate into the directory that contains your Dockerfile, use the Docker CLI or a CLI that's integrated with your instance of Docker.

1. To build the image that's defined in your Dockerfile, run the **docker build** command from the same directory as the Dockerfile.

   ```
   docker build -t hello-world .
   ```

1. To verify that the image was created correctly, run the **docker images** command.

   ```
   docker images --filter reference=hello-world
   ```  
**Example**  

   The output is as follows.

   ```
   REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
   hello-world         latest              e9ffedc8c286        4 minutes ago       241MB
   ```

1. To see how an image runs, create a Dockerfile based on Ubuntu 22.04 that uses the **echo** command.
**Note**  
This step isn't necessary to create or push your image. However, you can see how the program image works when it's run.

   ```
   FROM ubuntu:22.04
   CMD ["echo", "Hello from Docker in Cloud9"]
   ```

   Then, build the Dockerfile and run the resulting image. You must run these commands from the same directory as the Dockerfile.

   ```
   docker build -t hello-world .
   docker run --rm hello-world
   ```  
**Example**  

   The output is as follows.

   ```
   Hello from Docker in Cloud9
   ```

   For more information about the **docker run** command, see [Docker run reference](https://docs.docker.com/engine/reference/run/) on the Docker website.

## 3. Creating a new repository
<a name="create-repository"></a>

To upload your image into your Amazon ECR instance, create a new repository to store it in.

**Creating a new Amazon ECR repository**

1. From the AWS Cloud9 IDE navigation bar, choose the **AWS Toolkit icon**.

1. Expand the AWS Explorer menu.

1. Locate the default AWS Region that's associated with your AWS account. Then, select it to see a list of the services that are available through the AWS Cloud9 IDE.

1. Open the context (right-click) menu for the **ECR** option to start the **Create new repository** process. Then, select **Create Repository**.

1. To complete the process, follow the prompts.

1. After the process is complete, you can access your new repository from the **ECR** section of the AWS Explorer menu.
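
You can also create the repository from the AWS Cloud9 terminal with the AWS CLI. The repository name below matches the example used later in this topic:

```shell
# Create an Amazon ECR repository named hello-world
aws ecr create-repository --repository-name hello-world

# Confirm that the repository exists
aws ecr describe-repositories --repository-names hello-world
```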

## 4. Pushing, pulling, and deleting images
<a name="push-image"></a>

After you've built an image from your Dockerfile and created a repository, you can push your image to your Amazon ECR repository. Additionally, using the AWS Explorer with Docker and the AWS CLI, you can do the following:
+ Pull an image from your repository.
+ Delete an image that's stored in your repository.
+ Delete your repository.

**Authenticating Docker with your default registry**

Authentication is required to exchange data between Amazon ECR and Docker instances. To authenticate Docker with your registry:

1. Open a terminal within your AWS Cloud9 IDE. 

1. Use the **get-login-password** command to authenticate to your private Amazon ECR registry, replacing the AWS Region and AWS account ID placeholders.

   ```
   aws ecr get-login-password \
       --region <region> \
   | docker login \
       --username AWS \
       --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
   ```
**Important**  
In the preceding command, replace **region** and **aws_account_id** with information that's specific to your AWS account. An example of a valid **region** value is *us-east-1*.

**Tagging and pushing an image to your repository**

After you've authenticated Docker with your Amazon ECR registry, push an image to your repository.

1. Use the **docker images** command to view the images that you stored locally and identify the one you want to tag.

   ```
   docker images
   ```  
**Example**  

   The output is as follows.

   ```
   REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
   hello-world         latest              e9ffedc8c286        4 minutes ago       241MB
   ```

1. Tag your image with the **docker tag** command.

   ```
   docker tag hello-world:latest AWS_account_id.dkr.ecr.region.amazonaws.com/hello-world:latest
   ```

1. Push the tagged image to your repository with the **docker push** command.
**Important**  
Make sure that the name of your local repository is the same as your Amazon ECR repository. In this example, both repositories must be named `hello-world`. For more information about pushing images with Docker, see [Pushing a Docker image](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html).

   ```
   docker push AWS_account_id.dkr.ecr.region.amazonaws.com/hello-world:latest
   ```  
**Example**  

   The output is as follows.

   ```
   The push refers to a repository [AWS_account_id.dkr.ecr.region.amazonaws.com/hello-world] (len: 1)
   e9ae3c220b23: Pushed
   a6785352b25c: Pushed
   0998bf8fb9e9: Pushed
   0a85502c06c9: Pushed
   latest: digest: sha256:215d7e4121b30157d8839e81c4e0912606fca105775bb0636b95aed25f52c89b size: 6774
   ```

After your tagged image is successfully uploaded to your repository, refresh the AWS Toolkit by choosing **Refresh Explorer** from the AWS Explorer tab. The image is then visible in the AWS Explorer menu in the AWS Cloud9 IDE.

**Pulling an image from Amazon ECR**
+ You can pull an image to your local instance of Docker with the **docker pull** command.

  ```
  docker pull AWS_account_id.dkr.ecr.region.amazonaws.com/hello-world:latest
  ```  
**Example**  

  The output is as follows.

  ```
  latest: Pulling from hello-world
  Digest: sha256:e02c521fd65eae4ef1acb746883df48de85d55fc85a4172a09a124b11b339f5e
  Status: Image is up to date for 922327013870.dkr.ecr.us-west-2.amazonaws.com/hello-world:latest
  ```

**Deleting an image from your Amazon ECR repository**

There are two methods for deleting an image from AWS Cloud9 IDE. The first method is to use the AWS Explorer.

1. From the AWS Explorer, expand the **ECR** menu.

1. Expand the repository that you want to delete an image from.

1. Open the context (right-click) menu for the image tag that's associated with the image that you want to delete.

1. To delete all the stored images that are associated with that tag, choose **Delete Tag...**.

**Deleting an image using the AWS CLI**
+ You can also delete an image from your repository with the **aws ecr batch-delete-image** command.

  ```
  aws ecr batch-delete-image \
        --repository-name hello-world \
        --image-ids imageTag=latest
  ```  
**Example**  

  The output is as follows.

  ```
  {
      "failures": [],
      "imageIds": [
          {
              "imageTag": "latest",
              "imageDigest": "sha256:215d7e4121b30157d8839e81c4e0912606fca105775bb0636b95aed25f52c89b"
          }
      ]
  }
  ```
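
The `batch-delete-image` command also accepts an image digest, which removes the image even when it carries several tags. The following sketch composes such a call (the digest is the example value from the output above; the command is printed rather than executed so you can review it first):

```shell
# Deleting by digest instead of by tag. The digest below is the example value
# from the output above; substitute the one from your own repository.
DIGEST="sha256:215d7e4121b30157d8839e81c4e0912606fca105775bb0636b95aed25f52c89b"
CMD="aws ecr batch-delete-image --repository-name hello-world --image-ids imageDigest=${DIGEST}"
echo "${CMD}"
```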

**Deleting a repository from your Amazon ECR instance**

There are two methods for deleting a repository from AWS Cloud9 IDE. The first method is to use the AWS Explorer:

1. From the AWS Explorer, expand the **ECR** menu.

1. Open the context (right-click) menu for the repository that you want to delete.

1. Choose **Delete Repository...**.

**Deleting an Amazon ECR repository from the AWS CLI**
+ You can delete a repository with the **aws ecr delete-repository** command.
**Note**  
You normally can't delete a repository without first deleting the images that are contained in it. However, if you add the **--force** flag, you can delete a repository and all of its images in one step.

  ```
  aws ecr delete-repository \
        --repository-name hello-world \
        --force
  ```  
**Example**  

  The output is as follows.

  ```
  {
      "repository": {
          "repositoryUri": "922327013870.dkr.ecr.us-west-2.amazonaws.com/hello-world",
          "registryId": "922327013870", 
          "imageTagMutability": "MUTABLE", 
          "repositoryArn": "arn:aws:ecr:us-west-2:922327013870:repository/hello-world", 
          "repositoryName": "hello-world", 
          "createdAt": 1664469874.0
      }
  }
  ```

# Working with AWS IoT in AWS Cloud9 IDE
<a name="iot-start"></a>

With AWS IoT in AWS Cloud9 IDE, you can interact with the AWS IoT service while minimizing interruptions to your workflow in AWS Cloud9. This guide covers how you can get started using the AWS IoT service features that are available in the AWS Cloud9 IDE. For more information, see [What is AWS IoT?](https://docs.aws.amazon.com/iot/latest/developerguide/what-is-aws-iot.html) in the *AWS IoT Developer Guide*.

## AWS IoT prerequisites
<a name="iot-cloud9-prereq"></a>

To get started using AWS IoT in AWS Cloud9 IDE, make sure your AWS account and AWS Cloud9 setup meet all the requirements. For information about the AWS account requirements and AWS user permissions specific to the AWS IoT service, see [Getting Started with AWS IoT Core](https://docs.aws.amazon.com/iot/latest/developerguide/setting-up.html) in the *AWS IoT Developer Guide*.

## AWS IoT Things
<a name="iot-cloud9-things"></a>

AWS IoT connects devices to AWS services and AWS resources. You can connect your devices to AWS IoT by using objects called **things**. A thing is a representation of a specific device or logical entity. It can be a physical device or sensor (for example, a light bulb or a switch on a wall). For more information about AWS IoT things, see [Managing devices with AWS IoT](https://docs.aws.amazon.com/iot/latest/developerguide/iot-thing-management.html) in the *AWS IoT Developer Guide*. 

### Managing AWS IoT things
<a name="iot-cloud9-things-actions"></a>

The AWS Cloud9 IDE has several features that make your thing management efficient. To manage your AWS IoT things, follow these steps: 
+ [Create a thing](#thing-create)
+ [Attach a certificate to a thing](#thing-certificate-attach)
+ [Detach a certificate from a thing](#thing-certificate-detach)
+ [Delete a thing](#thing-delete)<a name="thing-create"></a>

**To create a thing**

1. From the AWS Explorer, expand the **IoT** service section.

1. Open the context (right-click) menu for the **thing** and choose **Create Thing**.

1. Enter a name for the **thing** in the **Thing Name** field and follow the prompt.

1. When this step is complete, a **thing icon** followed by the name that you specified is visible in the **Thing** section.<a name="thing-certificate-attach"></a>

**To attach a certificate to a thing**

1. From the AWS Explorer, expand the **IoT** service section.

1. Under the **Things** subsection, locate the **thing** where you're attaching the certificate. 

1. Open the context (right-click) menu for the **thing** and choose **Attach Certificate** to open an input selector with a list of your certificates.

1. From the list, choose the **certificate ID** that corresponds to the certificate that you want to attach to your thing.

1. After this step is complete, your certificate is accessible in the AWS Explorer as an item of the thing that you attached it to.<a name="thing-certificate-detach"></a>

**To detach a certificate from a thing**

1. From the AWS Explorer, expand the **IoT** service section.

1. In the **Things** subsection, locate the **thing** that you want to detach a certificate from. 

1. Open the context (right-click) menu for the **thing** and choose **Detach Certificate**.

1. After this step is complete, the detached certificate is no longer displayed under the thing in the AWS Explorer. However, it's still accessible from the **Certificates** subsection.<a name="thing-delete"></a>

**To delete a thing**

1. From the AWS Explorer, expand the **IoT** service section.

1. In the **Things** subsection, locate the **thing** that you want to delete.

1. Open the context (right-click) menu for the **thing** and choose **Delete Thing**.

1. After this step is completed, the deleted **thing** is no longer available from the **Things** subsection.
**Note**  
You can only delete a thing that doesn't have a certificate attached to it.
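
The Explorer actions above have AWS CLI equivalents. The following sketch prints them rather than executing them (the thing name is a hypothetical example; `aws iot create-thing`, `describe-thing`, and `delete-thing` are the corresponding AWS CLI commands):

```shell
# CLI equivalents of the thing-management actions above, printed rather than
# executed. THING_NAME is a hypothetical example value.
THING_NAME=my-light-bulb
echo "aws iot create-thing --thing-name ${THING_NAME}"
echo "aws iot describe-thing --thing-name ${THING_NAME}"
echo "aws iot delete-thing --thing-name ${THING_NAME}"
```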

## AWS IoT certificates
<a name="iot-cloud9-cert"></a>

Certificates are a common way to create a secure connection between your AWS IoT services and devices. X.509 certificates are digital certificates that use the X.509 public key infrastructure standard to associate a public key with an identity contained in a certificate. For more information about AWS IoT certificates, see [Authentication (IoT)](https://docs.aws.amazon.com/iot/latest/developerguide/authentication.html) in the *AWS IoT Developer Guide*.

### Managing certificates
<a name="iot-cloud9-cert-actions"></a>

The AWS Toolkit offers a variety of ways for you to manage your AWS IoT certificates directly from the AWS Explorer. They're outlined in the following steps:
+ [Create a certificate](#cert-create)
+ [Change a certificate status](#cert-status)
+ [Attach a policy to a certificate](#cert-attach-policy)
+ [Delete a certificate](#cert-delete)<a name="cert-create"></a>

**To create an AWS IoT certificate**

An X.509 certificate is used to connect with your instance of AWS IoT. 

1. From the AWS Explorer, expand the **IoT** service section, and open the context (right-click) menu for **Certificates**.

1. To open a dialog box, choose **Create Certificate** from the context-menu.

1. To save your RSA key pair and X.509 certificate, select a directory in your local file system.
**Note**  
The default file names contain the certificate ID as a prefix.
Only the X.509 certificate is stored with your AWS account, through the AWS IoT service.
Your RSA key pair can only be issued once. Save it to a secure location in your file system when you're prompted.
If the certificate or the key pair can't be saved to your file system, then the AWS Toolkit deletes the certificate from your AWS account.<a name="cert-status"></a>

**To modify a certificate status**

The status of an individual certificate is displayed next to the certificate ID in the AWS Explorer and can be set to **active**, **inactive**, or **revoked**.
**Note**  
Your certificate needs an **active** status before you can use it to connect your device to your AWS IoT service.
An **inactive** certificate can be activated, whether it was deactivated previously or is inactive by default.
A certificate that has been **revoked** can't be reactivated.

1. From the AWS Explorer, expand the **IoT** service section.

1. In the **Certificates** subsection, locate the certificate that you want to modify.

1. Open the context (right-click) menu for the certificate that displays the status change options available for that certificate.
+ If a certificate has the status **inactive**, choose **activate** to change the status to **active**.
+ If a certificate has the status **active**, choose **deactivate** to change the status to **inactive**.
+ If a certificate has either an **active** or **inactive** status, choose **revoke** to change the status to **revoked**.

**Note**  
Each of these status-changing actions is also available when you select a certificate that's attached to a thing in the **Things** subsection.<a name="cert-attach-policy"></a>

**To attach an IoT policy to a certificate**

1. From the AWS Explorer, expand the **IoT** service section.

1. In the **Certificates** subsection, locate the certificate that you want to modify.

1. Open the context (right-click) menu for the certificate and choose **Attach Policy** to open an input selector with a list of your available policies.

1. Choose the policy that you want to attach to the certificate.

1. When this step is completed, the policy that you selected is added to the certificate as a sub-menu item.<a name="cert-detach-policy"></a>

**To detach an IoT policy from a certificate**

1. From the AWS Explorer, expand the **IoT** service section.

1. In the **Certificates** subsection, locate the certificate that you want to modify.

1. Expand the certificate and locate the policy that you want to detach.

1. Open the context (right-click) menu for the policy and choose **Detach** from the context menu.

1. When this step is completed, the policy is no longer accessible from your certificate. However, it's still available from the **Policies** subsection.<a name="cert-delete"></a>

**To delete a certificate**

1. From the AWS Explorer, expand the **IoT** service heading.

1. In the **Certificates** subsection, locate the certificate that you want to delete.

1. Open the context (right-click) menu for the certificate and choose **Delete Certificate** from the context menu.
**Note**  
You can't delete a certificate if it's attached to a thing or has an active status. You can delete a certificate that has attached policies.

## AWS IoT policies
<a name="iot-vsctoolkit-policy"></a>

AWS IoT Core policies are defined through JSON documents. Each contains at least one policy statement. Policies define how AWS IoT, AWS, and your device can interact with each other. For more information about how to create a policy document, see [IoT Policies](https://docs.aws.amazon.com/iot/latest/developerguide/iot-policies.html) in the *AWS IoT Developer Guide*.

**Note**  
Named policies are versioned so that you can roll them back. In the AWS Explorer, your IoT policies are listed under the **Policies** subsection in the AWS IoT service. You can view policy versions by expanding a policy. The default version is denoted by an asterisk (\*).

### Managing policies
<a name="iot-vsctoolkit-policy-actions"></a>

The AWS Cloud9 IDE offers several ways for you to manage your AWS IoT service policies. These are ways that you can manage or modify your policies directly from the AWS Explorer:
+ [Create a policy](#policy-create)
+ [Upload a new policy version](#policy-version-upload)
+ [Edit a policy version](#policy-version-edit)
+ [Change the policy version default](#policy-version-default)
+ [Delete a policy](#policy-delete)<a name="policy-create"></a>

**To create an AWS IoT policy**
**Note**  
You can create a new policy from the AWS Explorer. However, the JSON document that defines the policy must already exist in your file system.
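
For illustration, a minimal sketch of the kind of policy document you might save to your file system first (a hypothetical, deliberately narrow example; the actions, Region, and account ID are placeholders to replace with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iot:Connect", "iot:Publish"],
      "Resource": "arn:aws:iot:us-west-2:111122223333:*"
    }
  ]
}
```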

1. From the AWS Explorer, expand the **IoT** service section.

1. Open the context (right-click) menu for the **Policies** subsection and choose **Create Policy from Document** to open the **Policy Name** input field.

1. Enter a name and follow the prompts to open a dialog asking you to select a JSON document from your file system.

1. Choose the JSON file that contains your policy definitions. After this step is complete, the policy is available in the AWS Explorer.<a name="policy-version-upload"></a>

**To upload a new AWS IoT policy version**

You can create a new version of a policy by uploading a JSON document to the policy.
**Note**  
The new JSON document must be present on your file system to create a new version using the AWS Explorer.

1. From the AWS Explorer, expand the **IoT** service section.

1.  Expand the **Policies** subsection to view your AWS IoT policies.

1. Open the context (right-click) menu for the policy that you want to update and choose **Create new version from Document**.

1. When the dialog opens, choose the JSON file that contains the updates to your policy definitions. 

   The new version is accessible from your policy in the AWS Explorer.<a name="policy-version-edit"></a>

**To edit an AWS IoT policy version**

You can open and edit a policy document using AWS Cloud9. When you finish editing the document, save it to your file system. Then, upload it to your AWS IoT service from the AWS Explorer.

1. From the AWS Explorer, expand the **IoT** service section.

1. Expand the **Policies** subsection and locate the policy you want to update.

1. Expand the policy that you want to update and then open the context (right-click) menu for the policy version that you want to edit.

1. To open the policy version in AWS Cloud9, choose **View** from the context menu.

1. When the policy document is open, edit it and save your changes.
**Note**  
At this point, the changes that you made to the policy are only saved to your local file system. To update the version and track it with the AWS Explorer, repeat the steps in [Upload a new policy version](#policy-version-upload).<a name="policy-version-default"></a>

**To select a new policy version default**

1. From the AWS Explorer, expand the **IoT** service section.

1. Expand the **Policies** subsection and locate the policy that you want to update.

1. Expand the policy that you want to update, and then open the context (right-click) menu for the policy version that you want to set and choose **Set as Default**. 

   When this is complete, the new default version that you selected has a star next to it.<a name="policy-delete"></a>

**To delete policies**
**Note**  
Before you can delete a policy or a policy version, make sure that the following conditions are met:  
You can't delete a policy if that policy is attached to a certificate.
You can't delete a policy if that policy has any non-default versions.
You can only delete the default version of a policy if a new default version is selected or the entire policy is deleted.
Before you delete an entire policy, you must delete all of the non-default versions of that same policy.

1. From the AWS Explorer, expand the **IoT** service section.

1. Expand the **Policies** subsection and locate the policy that you want to update.

1. Expand the policy that you want to update, open the context (right-click) menu for the policy version that you want to delete, and choose **Delete**.

1. When a version is deleted, it's no longer visible from the AWS Explorer.

1. If only the default version of a policy is left, open the context (right-click) menu for the parent policy and choose **Delete**.

# Working with Amazon Elastic Container Service
<a name="ecs"></a>

The AWS Cloud9 IDE provides some support for [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/). You can use the AWS Cloud9 IDE to manage Amazon ECS resources. For example, you can create task definitions.

**Topics**
+ [Amazon ECS Exec in AWS Toolkit for AWS Cloud9](ecs-cloud9-exec.md)

# Amazon ECS Exec in AWS Toolkit for AWS Cloud9
<a name="ecs-cloud9-exec"></a>

You can issue single commands in an Amazon Elastic Container Service (Amazon ECS) container with the AWS Toolkit for AWS Cloud9. You can do this using the Amazon ECS Exec feature. 

**Important**  
Enabling and disabling Amazon ECS Exec changes the state of your Amazon ECS resources in your AWS account. Changes include stopping and restarting the service. Moreover, altering the state of resources while Amazon ECS Exec is enabled can lead to unpredictable results. For more information about Amazon ECS, see [Using Amazon ECS Exec for Debugging](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html#ecs-exec-considerations) in the *Amazon ECS Developer Guide*.

## Amazon ECS Exec prerequisites
<a name="ecs-exec-prereq"></a>

Before you can use the Amazon ECS Exec feature, there are certain prerequisite conditions that you must meet.

### Amazon ECS requirements
<a name="ecs-requirements"></a>

Amazon ECS Exec has different version requirements depending on whether your tasks are hosted on Amazon EC2 or AWS Fargate.
+ If you use Amazon EC2, you must use an Amazon ECS optimized AMI that was released after January 20, 2021, with an agent version 1.50.2 or later. For more information, see [Amazon ECS optimized AMIs](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) in the *Amazon ECS Developer Guide*.
+ If you use AWS Fargate, you must use platform version 1.4.0 or later. For more information, see [AWS Fargate platform versions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) in the *Amazon ECS Developer Guide*.

### AWS account configuration and IAM permissions
<a name="ecs-configuration"></a>

To use the Amazon ECS Exec feature, you must have an existing Amazon ECS cluster associated with your AWS account. Amazon ECS Exec uses Systems Manager to establish a connection with the containers in your cluster. Amazon ECS requires specific task IAM role permissions to communicate with the SSM service.

For information about the IAM role and policy that's specific to Amazon ECS Exec, see [IAM permissions required for ECS Exec](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html#ecs-exec-enabling-and-using) in the *Amazon ECS Developer Guide*.
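
As a sketch of what that task role permission set looks like, the SSM messages actions that ECS Exec relies on can be granted with a statement like the following (a minimal example; scope the `Resource` down as appropriate for your account):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }
  ]
}
```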

## Working with Amazon ECS Exec
<a name="working-with-ecs-exec"></a>

You can enable or disable Amazon ECS Exec directly from the AWS Explorer in the AWS Toolkit for AWS Cloud9. After you enable Amazon ECS Exec, you can choose containers from the Amazon ECS menu and run commands against them.

### Enabling Amazon ECS Exec
<a name="enabling-exec"></a>

1. From the AWS Explorer, locate and expand the Amazon ECS menu.

1. Expand the cluster with the service that you want to modify.

1. Open the context menu for (right-click) the service and choose **Enable Command Execution**.

**Important**  
This step starts a new deployment of your service and might take a few minutes. For more information, see the note at the beginning of this section.

### Disabling Amazon ECS Exec
<a name="disabling-ecs-exec"></a>

1. From the AWS Explorer, locate and expand the Amazon ECS menu.

1. Expand the cluster that contains the service that you want.

1. Open the context menu for (right-click) the service and choose **Disable Command Execution**.

**Important**  
This step starts a new deployment of your service and might take a few minutes. For more information, see the note at the beginning of this section.

### Running commands against a Container
<a name="run-commands-container"></a>

To run commands against a container using the AWS Explorer, Amazon ECS Exec must be enabled. If it's not enabled, see the [Enabling Amazon ECS Exec](#enabling-exec) procedure in this section.

1. From the AWS Explorer, locate and expand the Amazon ECS menu.

1. Expand the cluster that contains the service that you want.

1. Expand the service to list the associated containers.

1. Open the context menu for (right-click) the container and choose **Run Command in Container**.

1. A prompt opens with a list of running tasks. Choose the **Task ARN** that you want.
**Note**  
If only one task is running, a prompt doesn't open. Instead, the task is auto-selected.

1. When prompted, enter the command that you want to run and press **Enter** to proceed.
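
The same action can also be performed from a terminal with the AWS CLI's `aws ecs execute-command` (it requires the Session Manager plugin). The following sketch composes such a call with hypothetical cluster, task, and container names, printing the command rather than executing it:

```shell
# CLI equivalent of "Run Command in Container", printed rather than executed.
# Cluster, task, and container names are hypothetical placeholders.
CLUSTER=my-cluster
TASK=0123456789abcdef0
CONTAINER=web
CMD="aws ecs execute-command --cluster ${CLUSTER} --task ${TASK} --container ${CONTAINER} --interactive --command /bin/sh"
echo "${CMD}"
```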

# Working with Amazon EventBridge
<a name="eventbridge"></a>

The AWS Toolkit for AWS Cloud9 provides support for [Amazon EventBridge](https://aws.amazon.com/eventbridge/). Using the AWS Toolkit for AWS Cloud9, you can work with certain aspects of EventBridge, such as schemas.

**Topics**
+ [Working with Amazon EventBridge Schemas](eventbridge-schemas.md)

# Working with Amazon EventBridge Schemas
<a name="eventbridge-schemas"></a>

You can use the AWS Toolkit for AWS Cloud9 to perform various operations on [Amazon EventBridge schemas](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-schemas.html).

## Prerequisites
<a name="eventbridge-schemas-prereq"></a>

The EventBridge schema that you want to work with must be available in your AWS account. If it isn't available, create or upload the schema. For more information, see [Amazon EventBridge Schemas](https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-schemas.html) in the [Amazon EventBridge User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/).

## View an available Schema
<a name="eventbridge-schemas-view"></a>

1. In the **AWS Explorer**, expand **Schemas**.

1. Expand the name of the registry that contains the schema that you want to view. For example, many of the schemas that AWS supplies are in the **aws.events** registry.

1. To view a schema in the editor, open the context (right-click) menu for the schema, and then choose **View Schema**.  
![\[View an EventBridge schema.\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/schema-eventbridge.png)

## Find an available Schema
<a name="eventbridge-schemas-find"></a>

In the **AWS Explorer**, do one or more of the following:
+ Start entering the title of the schema that you want to find. The **AWS Explorer** highlights the schema titles that contain a match. (A registry must be expanded for you to see the highlighted titles.)
+ Open the context (right-click) menu for **Schemas**, and then choose **Search Schemas**. Or, expand **Schemas**, open the context (right-click) menu for the registry that contains the schema that you want to find, and then choose **Search Schemas in Registry**. In the **EventBridge Schemas Search** dialog box, start entering the title of the schema that you want to find. The dialog box displays the schema titles that contain a match.

  To display the schema in the dialog box, select the title of the schema.

## Generate code for an available Schema
<a name="eventbridge-schemas-generate-code"></a>

1. In the **AWS Explorer**, expand **Schemas**.

1. Expand the name of the registry that contains the schema that you want to generate code for.

1. Open the context (right-click) menu for the title of the schema, and then choose **Download code bindings**.

1. In the resulting wizard pages, choose the following:
   + The **Version** of the schema
   + The code binding language
   + The workspace folder where you want to store the generated code on your local development machine