

 AWS Cloud9 is no longer available to new customers. Existing customers of AWS Cloud9 can continue to use the service as normal. [Learn more](https://aws.amazon.com/blogs/devops/how-to-migrate-from-aws-cloud9-to-aws-ide-toolkits-or-aws-cloudshell/)

# Tutorials for AWS Cloud9
<a name="tutorials"></a>

Are you new to AWS Cloud9? Take a tour of the IDE in [Getting started: basic tutorials](tutorials-basic.md).

Experiment with these tutorials and sample code to increase your knowledge and confidence using AWS Cloud9 with various programming languages and AWS services.

**Topics**
+ [AWS CLI and aws-shell tutorial](sample-aws-cli.md)
+ [AWS CodeCommit tutorial](sample-codecommit.md)
+ [Amazon DynamoDB tutorial](sample-dynamodb.md)
+ [AWS CDK tutorial](sample-cdk.md)
+ [LAMP tutorial](sample-lamp.md)
+ [WordPress tutorial](sample-wordpress.md)
+ [Java tutorial](sample-java.md)
+ [C++ tutorial](sample-cplusplus.md)
+ [Python tutorial](sample-python.md)
+ [.NET tutorial](sample-dotnetcore.md)
+ [Node.js tutorial](sample-nodejs.md)
+ [PHP tutorial](sample-php.md)
+ [Ruby tutorial](tutorial-ruby.md)
+ [Go tutorial](sample-go.md)
+ [TypeScript tutorial](sample-typescript.md)
+ [Docker tutorial](sample-docker.md)
+ [Related tutorials](#samples-additonal)

# AWS CLI and aws-shell tutorial for AWS Cloud9
<a name="sample-aws-cli"></a>

The following tutorial shows you how to set up the AWS Command Line Interface (AWS CLI), the aws-shell, or both in an AWS Cloud9 development environment. The AWS CLI and the aws-shell are unified tools that provide a consistent interface for interacting with all parts of AWS. You can use the AWS CLI instead of the AWS Management Console to quickly run commands that interact with AWS, and many of these commands can also be run in AWS CloudShell.

For more information about the AWS CLI, see the [AWS Command Line Interface User Guide](https://docs.aws.amazon.com/cli/latest/userguide/). For the aws-shell, see the following resources:
+  [aws-shell](https://github.com/awslabs/aws-shell) on the GitHub website
+  [aws-shell](https://pypi.python.org/pypi/aws-shell) on the pip website

For a list of commands you can run with the AWS CLI to interact with AWS, see the [AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/). You can use the same commands with the aws-shell, except that you start commands without the `aws` prefix.

Creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2 and Amazon S3. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

**Topics**
+ [Prerequisites](#sample-aws-cli-prereqs)
+ [Step 1: Install the AWS CLI, the aws-shell, or both in your environment](#sample-aws-cli-install)
+ [Step 2: Set up credentials management in your environment](#sample-aws-cli-creds)
+ [Step 3: Run basic commands with the AWS CLI or the aws-shell in your environment](#sample-aws-cli-run)
+ [Step 4: Clean up](#sample-aws-cli-clean-up)

## Prerequisites
<a name="sample-aws-cli-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Install the AWS CLI, the aws-shell, or both in your environment
<a name="sample-aws-cli-install"></a>

In this step, you use the AWS Cloud9 IDE to install the AWS CLI, the aws-shell, or both in your environment so you can run commands to interact with AWS.

If you are using an AWS Cloud9 EC2 development environment and you only want to use the AWS CLI, you can skip ahead to [Step 3: Run basic commands with the AWS CLI or the aws-shell in your environment](#sample-aws-cli-run). This is because the AWS CLI is already installed in an EC2 environment, and a set of AWS access credentials is already set up in the environment. For more information, see [AWS managed temporary credentials](security-iam.md#auth-and-access-control-temporary-managed-credentials).

If you are not using an EC2 environment, do the following to install the AWS CLI:

1. With your environment open, in the IDE, check whether the AWS CLI is already installed. In the terminal, run the **`aws --version`** command. (To start a new terminal session, on the menu bar, choose **Window**, **New Terminal**.) If the AWS CLI is installed, the output displays its version number, along with information such as the Python version and the operating system of your Amazon EC2 instance or your own server. In that case, skip ahead to [Step 2: Set up credentials management in your environment](#sample-aws-cli-creds).

1. To install the AWS CLI, see [Installing the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) in the *AWS Command Line Interface User Guide*. For example, for an EC2 environment running Amazon Linux, run these three commands, one at a time, in the terminal to install the AWS CLI.

   ```
   sudo yum -y update          # Install the latest system updates.
   sudo yum -y install aws-cli # Install the AWS CLI.
   aws --version               # Confirm the AWS CLI was installed.
   ```

   For an EC2 environment running Ubuntu Server, run these three commands instead, one at a time, in the terminal to install the AWS CLI.

   ```
   sudo apt update             # Install the latest system updates.
   sudo apt install -y awscli  # Install the AWS CLI.
   aws --version               # Confirm the AWS CLI was installed.
   ```

If you want to install the aws-shell, do the following:

1. With your environment open, in the IDE, check whether the aws-shell is already installed. In the terminal, run the **`aws-shell`** command. (To start a new terminal session, on the menu bar, choose **Window**, **New Terminal**.) If the aws-shell is installed, the `aws>` prompt is displayed. In that case, skip ahead to [Step 2: Set up credentials management in your environment](#sample-aws-cli-creds).

1. To install the aws-shell, you use pip. To use pip, you must have Python installed.

   To check whether Python is already installed (and to install it if needed), follow the instructions in [Step 1: Install Python](sample-python.md#sample-python-install) in the *Python Sample*, and then return to this topic.

   To check whether pip is already installed, in the terminal, run the **`pip --version`** command. If pip is installed, the version number is displayed. If pip is not installed, install it by running these three commands, one at a time, in the terminal.

   ```
   wget https://bootstrap.pypa.io/get-pip.py # Get the pip install file.
   sudo python get-pip.py                    # Install pip. (You might need to run 'sudo python2 get-pip.py' or 'sudo python3 get-pip.py' instead, depending on how Python is installed.)
   rm get-pip.py                             # Delete the pip install file, as it is no longer needed.
   ```

1. To use pip to install the aws-shell, run the following command.

   ```
   sudo pip install aws-shell
   ```

## Step 2: Set up credentials management in your environment
<a name="sample-aws-cli-creds"></a>

Each time you use the AWS CLI or the aws-shell to call an AWS service, you must provide a set of credentials with the call. These credentials determine whether the AWS CLI or the aws-shell has the appropriate permissions to make that call. If the credentials don't cover the appropriate permissions, the call will fail.

If you are using an AWS Cloud9 EC2 development environment, you can skip ahead to [Step 3: Run basic commands with the AWS CLI or the aws-shell in your environment](#sample-aws-cli-run). This is because credentials are already set up in an EC2 environment. For more information, see [AWS managed temporary credentials](security-iam.md#auth-and-access-control-temporary-managed-credentials).

If you are not using an EC2 environment, you must manually store your credentials within the environment. To do this, follow the instructions in [Calling AWS services from an environment in AWS Cloud9](credentials.md), and then return to this topic.
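For reference, manually stored credentials typically live in the shared AWS credentials file at `~/.aws/credentials`. A minimal file has the following shape (the key values shown are the placeholder examples from the AWS documentation, not real credentials):

```
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```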

## Step 3: Run basic commands with the AWS CLI or the aws-shell in your environment
<a name="sample-aws-cli-run"></a>

In this step, you use the AWS CLI or the aws-shell in your environment to create a bucket in Amazon S3, list your available buckets, and then delete the bucket.

1. If you want to use the aws-shell but haven't started it yet, start the aws-shell by running the `aws-shell` command. The `aws>` prompt is displayed.

1. Create a bucket. Run the **`aws s3 mb`** command with the AWS CLI or the **`s3 mb`** command with the aws-shell, supplying the name of the bucket to create. In this example, we use a bucket named `cloud9-123456789012-bucket`, where `123456789012` is your AWS account ID. If you use a different name, substitute it throughout this step.

   ```
   aws s3 mb s3://cloud9-123456789012-bucket # For the AWS CLI.
   s3 mb s3://cloud9-123456789012-bucket     # For the aws-shell.
   ```
**Note**  
Bucket names must be unique across all of AWS, not just your AWS account. The preceding suggested bucket name can help you come up with a unique bucket name. If you get a message that contains the error `BucketAlreadyExists`, you must run the command again with a different bucket name.

1. List your available buckets. Run the **`aws s3 ls`** command with the AWS CLI or the **`s3 ls`** command with the aws-shell. A list of your available buckets is displayed.

1. Delete the bucket. Run the **`aws s3 rb`** command with the AWS CLI or the **`s3 rb`** command with the aws-shell, supplying the name of the bucket to delete.

   ```
   aws s3 rb s3://cloud9-123456789012-bucket # For the AWS CLI.
   s3 rb s3://cloud9-123456789012-bucket     # For the aws-shell.
   ```

   To confirm whether the bucket was deleted, run the **`aws s3 ls`** command again with the AWS CLI or the **`s3 ls`** command again with the aws-shell. The name of the bucket that was deleted should no longer appear in the list.
**Note**  
You don't have to delete the bucket if you want to keep using it. For more information, see [Add an Object to a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/uploading-an-object-bucket.html) in the *Amazon Simple Storage Service User Guide*. See also [s3 commands](https://docs.aws.amazon.com/cli/latest/reference/s3/rm.html) in the *AWS CLI Command Reference*. (Remember, if you don't delete the bucket, it might result in ongoing charges to your AWS account.)

To continue experimenting with the AWS CLI, see [Working with Amazon Web Services](https://docs.aws.amazon.com/cli/latest/userguide/chap-working-with-services.html) in the *AWS Command Line Interface User Guide* as well as the [AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/). To continue experimenting with the aws-shell, see the [AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/), noting that you start commands without the `aws` prefix.
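If the suggested bucket name is already taken and you see `BucketAlreadyExists`, one common approach (a sketch, not part of the tutorial, using an arbitrary `cloud9-…-bucket` naming pattern) is to append the current Unix timestamp to make the name likely unique:

```shell
# Sketch: build a likely-unique bucket name by appending the current
# Unix timestamp. The 'cloud9-' prefix and '-bucket' suffix are arbitrary.
SUFFIX=$(date +%s)
BUCKET_NAME="cloud9-${SUFFIX}-bucket"
echo "Using bucket name: ${BUCKET_NAME}"
# aws s3 mb "s3://${BUCKET_NAME}"   # Uncomment to actually create the bucket.
```

Bucket names must be globally unique and lowercase, so a numeric timestamp suffix satisfies both constraints.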

## Step 4: Clean up
<a name="sample-aws-cli-clean-up"></a>

If you're using the aws-shell, you can stop using it by running the **`.exit`** or **`.quit`** command.

To prevent ongoing charges to your AWS account after you're done using this sample, you should delete the environment. For instructions, see [Deleting an environment in AWS Cloud9](delete-environment.md).

# AWS CodeCommit tutorial for AWS Cloud9
<a name="sample-codecommit"></a>

You can use the AWS CodeCommit tutorial to set up an AWS Cloud9 development environment to interact with a remote code repository in CodeCommit. CodeCommit is a source code control service that you can use to privately store and manage Git repositories in the AWS Cloud. For more information about CodeCommit, see the [AWS CodeCommit User Guide](https://docs.aws.amazon.com/codecommit/latest/userguide/).

Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2 and CodeCommit. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [AWS CodeCommit Pricing](https://aws.amazon.com/codecommit/pricing/).
+  [Prerequisites](#sample-codecommit-prereqs) 
+  [Step 1: Set up your IAM group with required access permissions](#sample-codecommit-permissions) 
+  [Step 2: Create a repository in AWS CodeCommit](#sample-codecommit-create-repo) 
+  [Step 3: Connect your environment to the remote repository](#sample-codecommit-connect-repo) 
+  [Step 4: Clone the remote repository into your environment](#sample-codecommit-clone-repo) 
+  [Step 5: Add files to the repository](#sample-codecommit-add-files) 
+  [Step 6: Clean up](#sample-codecommit-clean-up) 

## Prerequisites
<a name="sample-codecommit-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Set up your IAM group with required access permissions
<a name="sample-codecommit-permissions"></a>

If your AWS credentials are associated with an administrator user in your AWS account and you want to use that user to work with CodeCommit, skip ahead to [Step 2: Create a Repository in AWS CodeCommit](#sample-codecommit-create-repo).

You can complete this step using the [AWS Management Console](#sample-codecommit-permissions-console) or the [AWS Command Line Interface (AWS CLI)](#sample-codecommit-permissions-cli).

### Set up your IAM group with required access permissions using the console
<a name="sample-codecommit-permissions-console"></a>

1. Sign in to the AWS Management Console, if you aren't already signed in.

   For this step, we recommend you sign in using credentials for an administrator user in your AWS account. If you cannot do this, check with your AWS account administrator.

1. Open the IAM console. To do this, in the console's navigation bar, choose **Services**. Then, choose **IAM**.

1. Choose **Groups**.

1. Choose the group's name.

1. On the **Permissions** tab, for **Managed Policies**, choose **Attach Policy**.

1. In the list of policy names, select one of the following boxes:
   + Select **AWSCodeCommitPowerUser** for access to all of the functionality of CodeCommit and repository-related resources. However, this doesn't allow you to delete CodeCommit repositories or create or delete repository-related resources in other AWS services, such as Amazon CloudWatch Events.
   + Select **AWSCodeCommitFullAccess** for full control over CodeCommit repositories and related resources in the AWS account. This includes the ability to delete repositories.

   If you don't see either of these policy names in the list, enter the policy names in the **Filter** box to display them.

1. Choose **Attach Policy**.

To see the list of access permissions that these AWS managed policies give to a group, see [AWS Managed (Predefined) Policies for AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control-iam-identity-based-access-control.html#managed-policies) in the *AWS CodeCommit User Guide*.

Skip ahead to [Step 2: Create a Repository in AWS CodeCommit](#sample-codecommit-create-repo).

### Set up your IAM group with required access permissions using the AWS CLI
<a name="sample-codecommit-permissions-cli"></a>

Run the IAM `attach-group-policy` command, specifying the group's name and the Amazon Resource Name (ARN) of the AWS managed policy that describes the required access permissions. The syntax is as follows.

```
aws iam attach-group-policy --group-name MyGroup --policy-arn POLICY_ARN
```

In the preceding command, replace `MyGroup` with the name of the group. Replace `POLICY_ARN` with the ARN of the AWS managed policy:
+  `arn:aws:iam::aws:policy/AWSCodeCommitPowerUser` for access to all of the functionality of CodeCommit and repository-related resources. However, it doesn't allow you to delete CodeCommit repositories or create or delete repository-related resources in other AWS services, such as Amazon CloudWatch Events.
+  `arn:aws:iam::aws:policy/AWSCodeCommitFullAccess` for full control over CodeCommit repositories and related resources in the AWS account. This includes the ability to delete repositories.

To see the list of access permissions that these AWS managed policies give to a group, see [AWS Managed (Predefined) Policies for AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control-iam-identity-based-access-control.html#managed-policies) in the *AWS CodeCommit User Guide*.

## Step 2: Create a repository in CodeCommit
<a name="sample-codecommit-create-repo"></a>

In this step, you create a remote code repository in CodeCommit by using the CodeCommit console.

If you already have a CodeCommit repository, skip ahead to [Step 3: Connect Your Environment to the Remote Repository](#sample-codecommit-connect-repo).

You can complete this step using the [AWS Management Console](#sample-codecommit-create-repo-console) or the [AWS Command Line Interface (AWS CLI)](#sample-codecommit-create-repo-cli).

### Create a repository in CodeCommit using the console
<a name="sample-codecommit-create-repo-console"></a>

1. If you're signed in to the AWS Management Console as the administrator user from the previous step and you don't want to use that user to create the repository, sign out of the AWS Management Console.

1. Open the CodeCommit console, at [https://console.aws.amazon.com/codecommit](https://console.aws.amazon.com/codecommit).

1. In the console's navigation bar, use the Region selector to choose the AWS Region that you want to create the repository in (for example, **US East (Ohio)**).

1. If a welcome page is displayed, choose **Get started**. Otherwise, choose **Create repository**.

1. On the **Create repository** page, for **Repository name**, enter a name for your new repository (for example, `MyDemoCloud9Repo`). If you choose a different name, substitute it throughout this sample.

1. (Optional) For **Description**, enter something about the repository. For example, you can enter: `This is a demonstration repository for the AWS Cloud9 sample.` 

1. Choose **Create repository**. A **Connect to your repository** pane is displayed. Choose **Close**, as you will connect to your repository in a different way later in this topic.

Skip ahead to [Step 3: Connect Your Environment to the Remote Repository](#sample-codecommit-connect-repo).

### Create a repository in CodeCommit using the AWS CLI
<a name="sample-codecommit-create-repo-cli"></a>

Run the AWS CodeCommit `create-repository` command. Specify the repository's name, an optional description, and the AWS Region to create the repository in.

```
aws codecommit create-repository --repository-name MyDemoCloud9Repo --repository-description "This is a demonstration repository for the AWS Cloud9 sample." --region us-east-2
```

In the preceding command, replace `us-east-2` with the ID of the AWS Region to create the repository in. For a list of supported Regions, see [AWS CodeCommit](https://docs.aws.amazon.com/general/latest/gr/rande.html#codecommit_region) in the *Amazon Web Services General Reference*.

If you choose to use a different repository name, substitute it throughout this sample.

## Step 3: Connect your environment to the remote repository
<a name="sample-codecommit-connect-repo"></a>

In this step, you use the AWS Cloud9 IDE to connect to the CodeCommit repository that you created or identified in the previous step.

**Note**  
If you prefer working with Git through a visual interface, you can clone the remote repository. Then, you can add files using the [Git panel](source-control-gitpanel.md) feature that's available in the IDE.

Complete one of the following sets of procedures based on your type of AWS Cloud9 development environment.



|  **Environment type**  |  **Follow these procedures**  | 
| --- | --- | 
|  EC2 environment  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/sample-codecommit.html)  | 
|  SSH environment  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/sample-codecommit.html)  | 

## Step 4: Clone the remote repository into your environment
<a name="sample-codecommit-clone-repo"></a>

In this step, you use the AWS Cloud9 IDE to clone the remote repository in CodeCommit into your environment.

To clone the repository, run the **`git clone`** command. Replace `CLONE_URL` with the repository's clone URL.

```
git clone CLONE_URL
```

For an EC2 environment, you supply an HTTPS clone URL that starts with `https://`. For an SSH environment, you supply an SSH clone URL that starts with `ssh://`.

To get the repository's full clone URL, see [Use the AWS CodeCommit Console to View Repository Details](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-view-repository-details.html#how-to-view-repository-details-console) in the *AWS CodeCommit User Guide*.

If your repository doesn't have any files in it, a warning message is displayed, such as `You appear to have cloned an empty repository.` This is expected. You will address this warning later in this tutorial.

## Step 5: Add files to the repository
<a name="sample-codecommit-add-files"></a>

In this step, you create three simple files in the cloned repository in your AWS Cloud9 environment. Next, you add the files to the Git staging area in your cloned repository. Last, you commit the staged files and push the commit to your remote repository in CodeCommit.

If the cloned repository already has files in it, you're done and can skip the rest of this sample.

**To add files to the repository**

1. Create a new file. On the menu bar, choose **File**, **New File**.

1. Enter the following content into the file, and then choose **File**, **Save** to save the file as `bird.txt` in the `MyDemoCloud9Repo` directory in your AWS Cloud9 environment.

   ```
   bird.txt
   --------
   Birds are a group of endothermic vertebrates, characterized by feathers,
   toothless beaked jaws, the laying of hard-shelled eggs, a high metabolic
   rate, a four-chambered heart, and a lightweight but strong skeleton.
   ```
**Note**  
To confirm that you're saving this file in the correct directory, in the **Save As** dialog box, choose the `MyDemoCloud9Repo` folder. Then, make sure **Folder** displays `/MyDemoCloud9Repo`.

1. Create two more files, named `insect.txt` and `reptile.txt`, with the following content. Save the files in the same `MyDemoCloud9Repo` directory.

   ```
   insect.txt
   ----------
   Insects are a class of invertebrates within the arthropod phylum that
   have a chitinous exoskeleton, a three-part body (head, thorax, and abdomen),
   three pairs of jointed legs, compound eyes, and one pair of antennae.
   ```

   ```
   reptile.txt
   -----------
   Reptiles are tetrapod (four-limbed vertebrate) animals in the class
   Reptilia, comprising today's turtles, crocodilians, snakes,
   amphisbaenians, lizards, tuatara, and their extinct relatives.
   ```

1. In the terminal, run the **`cd`** command to switch to the `MyDemoCloud9Repo` directory.

   ```
   cd MyDemoCloud9Repo
   ```

1. Confirm that the files were successfully saved in the `MyDemoCloud9Repo` directory by running the **`git status`** command. All three files will be listed as untracked files.

   ```
   Untracked files:
     (use "git add <file>..." to include in what will be committed)
   
           bird.txt
           insect.txt
           reptile.txt
   ```

1. Add the files to the Git staging area by running the **`git add`** command.

   ```
   git add --all
   ```

1. Confirm that the files were successfully added to the Git staging area by running the **`git status`** command again. All three files are now listed as changes to commit.

   ```
   Changes to be committed:
     (use "git rm --cached <file>..." to unstage)
   
           new file:   bird.txt
           new file:   insect.txt
           new file:   reptile.txt
   ```

1. Commit the staged files by running the **`git commit`** command.

   ```
   git commit -m "Added information about birds, insects, and reptiles."
   ```

1. Push the commit to your remote repository in CodeCommit by running the **`git push`** command.

   ```
   git push -u origin master
   ```

1. Confirm whether the files were successfully pushed. Open the CodeCommit console, if it isn't already open, at [https://console.aws.amazon.com/codecommit](https://console.aws.amazon.com/codecommit).

1. In the top navigation bar, near the right edge, choose the AWS Region where you created the repository (for example, **US East (Ohio)**).

1. On the **Dashboard** page, choose **MyDemoCloud9Repo**. The three files are displayed.

To continue experimenting with your CodeCommit repository, see [Browse the Contents of Your Repository](https://docs.aws.amazon.com/codecommit/latest/userguide/getting-started-cc.html#getting-started-cc-browse) in the *AWS CodeCommit User Guide*.

If you're new to Git and you don't want to mess up your CodeCommit repository, experiment with a sample Git repository on the [Try Git](https://try.github.io/) website.

## Step 6: Clean up
<a name="sample-codecommit-clean-up"></a>

To prevent ongoing charges to your AWS account after you're done using this sample, delete the CodeCommit repository. For instructions, see [Delete an AWS CodeCommit Repository](https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-delete-repository.html) in the *AWS CodeCommit User Guide*.

Make sure also to delete the environment. For instructions, see [Deleting an Environment](delete-environment.md).

# Amazon DynamoDB tutorial for AWS Cloud9
<a name="sample-dynamodb"></a>

This tutorial enables you to set up an AWS Cloud9 development environment to work with Amazon DynamoDB.

DynamoDB is a fully managed NoSQL database service. You can use DynamoDB to create a database table that can store and retrieve any amount of data, and serve any level of request traffic. DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity specified and the amount of data stored, while maintaining consistent and fast performance. For more information, see [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) on the AWS website.

Creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2 and DynamoDB. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [Amazon DynamoDB Pricing](https://aws.amazon.com/dynamodb/pricing/).

For information about additional AWS database offerings, see [Amazon Relational Database Service (RDS)](https://aws.amazon.com/rds/), [Amazon ElastiCache](https://aws.amazon.com/elasticache/), and [Amazon Redshift](https://aws.amazon.com/redshift/) on the AWS website. See also [AWS Database Migration Service](https://aws.amazon.com/dms/) on the AWS website.
+  [Prerequisites](#sample-dynamodb-prereqs) 
+  [Step 1: Install and configure the AWS CLI, the aws-shell, or both in your environment](#sample-dynamodb-cli-setup) 
+  [Step 2: Create a table](#sample-dynamodb-create-table) 
+  [Step 3: Add an item to the table](#sample-dynamodb-add-item) 
+  [Step 4: Add multiple items to the table](#sample-dynamodb-add-items) 
+  [Step 5: Create a global secondary index](#sample-dynamodb-create-index) 
+  [Step 6: Get items from the table](#sample-dynamodb-get-items) 
+  [Step 7: Clean up](#sample-dynamodb-clean-up) 

## Prerequisites
<a name="sample-dynamodb-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Install and configure the AWS CLI, the aws-shell, or both in your environment
<a name="sample-dynamodb-cli-setup"></a>

In this step, you use the AWS Cloud9 IDE to install and configure the AWS CLI, the aws-shell, or both in your environment so you can run commands to interact with DynamoDB. Then you use the AWS CLI to run a basic DynamoDB command to test your installation and configuration.

1. To set up credentials management for the AWS CLI or the aws-shell and to install the AWS CLI, the aws-shell, or both in your environment, follow Steps 1 and 2 in the [AWS CLI and aws-shell tutorial](sample-aws-cli.md), and then return to this topic. If you already installed and configured the AWS CLI, the aws-shell, or both in your environment, you don't need to do it again.

1. Test the installation and configuration of the AWS CLI, the aws-shell, or both by running the DynamoDB **`list-tables`** command from a terminal session in your environment to list your existing DynamoDB tables, if there are any. To start a new terminal session, on the menu bar, choose **Window**, **New Terminal**.

   ```
   aws dynamodb list-tables # For the AWS CLI.
   dynamodb list-tables     # For the aws-shell.
   ```
**Note**  
Throughout this sample, if you're using the aws-shell, omit `aws` from each command that starts with `aws`. To start the aws-shell, run the **`aws-shell`** command. To stop using the aws-shell, run the **`.exit`** or **`.quit`** command.

   If this command succeeds, it outputs a `TableNames` array that lists your existing DynamoDB tables. If you have no DynamoDB tables yet, the `TableNames` array is empty.

   ```
   {
     "TableNames": []
   }
   ```

   If you do have any DynamoDB tables, the `TableNames` array contains a list of the table names.
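
If you're scripting around the AWS CLI, you can check that JSON output programmatically. The following is a minimal Python sketch; the `output` string is a stand-in for text captured from the `list-tables` command:

```python
import json

# Stand-in for output captured from: aws dynamodb list-tables --output json
output = '{ "TableNames": ["Weather"] }'

tables = json.loads(output)["TableNames"]
exists = "Weather" in tables  # True if the Weather table is listed
```

In a shell script, you could capture the same list with `aws dynamodb list-tables --output json` and parse it the same way.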

## Step 2: Create a table
<a name="sample-dynamodb-create-table"></a>

In this step, you create a table in DynamoDB and specify the table's name, layout, simple primary key, and data throughput settings.

This sample table, named `Weather`, contains information about weather forecasts for a few cities in the United States. The table holds the following types of information (in DynamoDB, each piece of information is known as an *attribute*):
+ Required unique city ID (`CityID`)
+ Required forecast date (`Date`)
+ City name (`City`)
+ State name (`State`)
+ Forecast weather conditions (`Conditions`)
+ Forecast temperatures (`Temperatures`)
  + Forecast high, in degrees Fahrenheit (`HighF`)
  + Forecast low, in degrees Fahrenheit (`LowF`)

To create the table, in a terminal session in the AWS Cloud9 IDE, run the DynamoDB **`create-table`** command.

```
aws dynamodb create-table \
--table-name Weather \
--attribute-definitions \
  AttributeName=CityID,AttributeType=N AttributeName=Date,AttributeType=S \
--key-schema \
  AttributeName=CityID,KeyType=HASH AttributeName=Date,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
```

In this command:
+  `--table-name` represents the table name (`Weather` in this sample). Table names must be unique within each AWS Region in your AWS account.
+  `--attribute-definitions` represents the attributes that are used to uniquely identify the table items. Each of this table's items is uniquely identified by a combination of a numeric `CityID` attribute and a `Date` attribute represented as an ISO-8601 formatted string.
+  `--key-schema` represents the table's key schema. This table has a composite primary key of `CityID` and `Date`. This means that each of the table items must have a `CityID` attribute value and a `Date` attribute value, but no two items in the table can have both the same `CityID` attribute value and `Date` attribute value.
+  `--provisioned-throughput` represents the table's read-write capacity. With these settings, DynamoDB allows up to 5 strongly consistent reads per second for items up to 4 KB in size, or up to 5 eventually consistent reads per second for items up to 4 KB in size. It also allows up to 5 writes per second for items up to 1 KB in size.
**Note**  
Setting higher provisioned throughput might result in additional charges to your AWS account.  
For more information about this and other DynamoDB commands, see [dynamodb](https://docs.aws.amazon.com/cli/latest/reference/dynamodb/index.html) in the *AWS CLI Command Reference*.
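
To estimate how much throughput a workload needs, you can apply DynamoDB's capacity-unit rules directly: one read capacity unit covers one strongly consistent read per second of an item up to 4 KB, and one write capacity unit covers one write per second of an item up to 1 KB. The following is a minimal Python sketch of that arithmetic; the item sizes and rates are made-up examples:

```python
import math

def read_capacity_units(item_size_kb: float, reads_per_second: int) -> int:
    # Each strongly consistent read of an item up to 4 KB costs 1 RCU.
    return math.ceil(item_size_kb / 4) * reads_per_second

def write_capacity_units(item_size_kb: float, writes_per_second: int) -> int:
    # Each write of an item up to 1 KB costs 1 WCU.
    return math.ceil(item_size_kb / 1) * writes_per_second

# A 3 KB item read 5 times per second fits within the table's 5 RCUs.
rcus = read_capacity_units(3, 5)    # 5
# A 2.5 KB item written twice per second needs 6 WCUs.
wcus = write_capacity_units(2.5, 2)  # 6
```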

If this command succeeds, it displays summary information about the new table that is being created. To confirm that the table was created successfully, run the DynamoDB **`describe-table`** command, specifying the table's name (`--table-name`).

```
aws dynamodb describe-table --table-name Weather
```

When the table is successfully created, the `TableStatus` value changes from `CREATING` to `ACTIVE`. Do not proceed past this step until the table is successfully created.

## Step 3: Add an item to the table
<a name="sample-dynamodb-add-item"></a>

In this step, you add an item to the table you just created.

1. Create a file named `weather-item.json` with the following content. To create a new file, on the menu bar, choose **File**, **New File**. To save the file, choose **File**, **Save**.

   ```
   {
     "CityID": { "N": "1" },
     "Date": { "S": "2017-04-12" },
     "City": { "S": "Seattle" },
     "State": { "S": "WA" },
     "Conditions": { "S": "Rain" },
     "Temperatures": { "M": {
         "HighF": { "N": "59" },
         "LowF": { "N": "46" }
       }
     }
   }
   ```

   In this code, `N` represents an attribute value that is a number. `S` is a string attribute value. `M` is a map attribute, which is a set of attribute-value pairs. You must specify an attribute's data type whenever you work with items. For additional available attribute data types, see [Data Types](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.NamingRulesDataTypes.html#HowItWorks.DataTypes) in the *Amazon DynamoDB Developer Guide*.

1. Run the DynamoDB **`put-item`** command, specifying the table's name (`--table-name`) and the path to the JSON-formatted item (`--item`).

   ```
   aws dynamodb put-item \
   --table-name Weather \
   --item file://weather-item.json
   ```

   If the command succeeds, it runs without error, and no confirmation message is displayed.

1. To confirm the table's current contents, run the DynamoDB **`scan`** command, specifying the table's name (`--table-name`).

   ```
   aws dynamodb scan --table-name Weather
   ```

   If the command succeeds, summary information about the table and the item you just added is displayed.
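
The typed attribute-value format that these commands use maps mechanically onto plain values, so you can generate it instead of hand-writing it. The following is a minimal Python sketch that covers only the three types this sample uses (`N`, `S`, and `M`); it's an illustration, not a replacement for the AWS SDKs' own serializers:

```python
def to_dynamodb(value):
    """Convert a plain Python value to DynamoDB's typed wire format.

    Handles only the types this sample uses: numbers (N), strings (S),
    and nested dicts (M).
    """
    if isinstance(value, bool):
        raise TypeError("booleans are not handled in this sketch")
    if isinstance(value, (int, float)):
        return {"N": str(value)}
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, dict):
        return {"M": {k: to_dynamodb(v) for k, v in value.items()}}
    raise TypeError(f"unsupported type: {type(value).__name__}")

item = {
    "CityID": 1,
    "Date": "2017-04-12",
    "Temperatures": {"HighF": 59, "LowF": 46},
}
typed = {k: to_dynamodb(v) for k, v in item.items()}
```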

## Step 4: Add multiple items to the table
<a name="sample-dynamodb-add-items"></a>

In this step, you add several more items to the `Weather` table.

1. Create a file named `more-weather-items.json` with the following content.

   ```
   {
     "Weather": [
       {
         "PutRequest": {
           "Item": {
             "CityID": { "N": "1" },
             "Date": { "S": "2017-04-13" },
             "City": { "S": "Seattle" },
             "State": { "S": "WA" },
             "Conditions": { "S": "Rain" },
             "Temperatures": { "M": {
                 "HighF": { "N": "52" },
                 "LowF": { "N": "43" }
               }
             }
           }
         }
       },
       {
         "PutRequest": {
           "Item": {
             "CityID": { "N": "1" },
             "Date": { "S": "2017-04-14" },
             "City": { "S": "Seattle" },
             "State": { "S": "WA" },
             "Conditions": { "S": "Rain" },
             "Temperatures": { "M": {
                 "HighF": { "N": "49" },
                 "LowF": { "N": "43" }
               }
             }
           }
         }
       },
       {
         "PutRequest": {
           "Item": {
             "CityID": { "N": "2" },
             "Date": { "S": "2017-04-12" },
             "City": { "S": "Portland" },
             "State": { "S": "OR" },
             "Conditions": { "S": "Thunderstorms" },
             "Temperatures": { "M": {
                 "HighF": { "N": "59" },
                 "LowF": { "N": "43" }
               }
             }
           }
         }
       },
       {
         "PutRequest": {
           "Item": {
             "CityID": { "N": "2" },
             "Date": { "S": "2017-04-13" },
             "City": { "S": "Portland" },
             "State": { "S": "OR" },
             "Conditions": { "S": "Rain" },
             "Temperatures": { "M": {
                 "HighF": { "N": "51" },
                 "LowF": { "N": "41" }
               }
             }
           }
         }
       },
       {
         "PutRequest": {
           "Item": {
             "CityID": { "N": "2" },
             "Date": { "S": "2017-04-14" },
             "City": { "S": "Portland" },
             "State": { "S": "OR" },
             "Conditions": { "S": "Rain Showers" },
             "Temperatures": { "M": {
                 "HighF": { "N": "49" },
                 "LowF": { "N": "39" }
               }
             }
           }
         }
       },
       {
         "PutRequest": {
           "Item": {
             "CityID": { "N": "3" },
             "Date": { "S": "2017-04-12" },
             "City": { "S": "Portland" },
             "State": { "S": "ME" },
             "Conditions": { "S": "Rain" },
             "Temperatures": { "M": {
                 "HighF": { "N": "59" },
                 "LowF": { "N": "40" }
               }
             }
           }
         }
       },
       {
         "PutRequest": {
           "Item": {
             "CityID": { "N": "3" },
             "Date": { "S": "2017-04-13" },
             "City": { "S": "Portland" },
             "State": { "S": "ME" },
             "Conditions": { "S": "Partly Sunny" },
             "Temperatures": { "M": {
                 "HighF": { "N": "54" },
                 "LowF": { "N": "37" }
               }
             }
           }
         }
       },
       {
         "PutRequest": {
           "Item": {
             "CityID": { "N": "3" },
             "Date": { "S": "2017-04-14" },
             "City": { "S": "Portland" },
             "State": { "S": "ME" },
             "Conditions": { "S": "Mostly Sunny" },
             "Temperatures": { "M": {
                 "HighF": { "N": "53" },
                 "LowF": { "N": "37" }
               }
             }
           }
         }
       }
     ]
   }
   ```

   In this code, 8 `Item` objects define the 8 items to add to the table, similar to the single item defined in the previous step. However, when you run the DynamoDB **`batch-write-item`** command in the next step, you must provide a JSON-formatted object that includes each `Item` object in a containing `PutRequest` object. Then you must include those `PutRequest` objects in a parent array that has the same name as the table.

1. Run the DynamoDB **`batch-write-item`** command, specifying the path to the JSON-formatted items to add (`--request-items`).

   ```
   aws dynamodb batch-write-item \
   --request-items file://more-weather-items.json
   ```

   If the command succeeds, it displays the following message, confirming that the items were successfully added.

   ```
   {
     "UnprocessedItems": {}
   }
   ```

1. To confirm the table's current contents, run the DynamoDB **`scan`** command again.

   ```
   aws dynamodb scan --table-name Weather
   ```

   If the command succeeds, 9 items are now displayed.
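
A single `batch-write-item` call accepts at most 25 put or delete requests, so larger data sets must be split across multiple calls. The following minimal Python sketch shows one way to build request payloads in the same shape as `more-weather-items.json`; the placeholder items are made up for illustration:

```python
def to_batch_requests(table_name, items, batch_size=25):
    """Split items into batch-write-item request payloads.

    BatchWriteItem accepts at most 25 put or delete requests per call,
    so each returned payload holds up to `batch_size` PutRequests.
    """
    batches = []
    for start in range(0, len(items), batch_size):
        chunk = items[start:start + batch_size]
        batches.append({table_name: [{"PutRequest": {"Item": it}} for it in chunk]})
    return batches

# 60 placeholder items produce three payloads: 25 + 25 + 10.
items = [{"CityID": {"N": str(i)}} for i in range(60)]
payloads = to_batch_requests("Weather", items)
```

Each payload could then be written to a file and passed to `--request-items`, one call per payload.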

## Step 5: Create a global secondary index
<a name="sample-dynamodb-create-index"></a>

Running the DynamoDB **`scan`** command to get information about items can be slow, especially as a table grows in size or if the type of information you want to get is complex. You can create one or more secondary indexes to speed things up and make getting information easier. In this step, you learn about the two types of secondary indexes that DynamoDB supports to do just that. These are known as a *local secondary index* and a *global secondary index*. Then you create a global secondary index.

To understand these secondary index types, you first need to know about primary keys, which uniquely identify a table's items. DynamoDB supports a *simple primary key* or a *composite primary key*. A simple primary key has a single attribute, and that attribute value must be unique for each item in the table. This attribute is also known as a *partition key* (or a *hash attribute*), which DynamoDB can use to partition items for faster access. A table can also have a composite primary key, which contains two attributes. The first attribute is the partition key, and the second is a *sort key* (also known as a *range attribute*). In a table with a composite primary key, any two items can have the same partition key value, but they cannot also have the same sort key value. The `Weather` table has a composite primary key.

A local secondary index has the same partition key as the table itself, but this index type can have a different sort key. A global secondary index can have a partition key and a sort key that are both different from the table itself.

For example, you can already use the primary key to access `Weather` items by `CityID`. To access `Weather` items by `State`, you could create a local secondary index that has a partition key of `CityID` (it must be the same as the table itself) and a sort key of `State`. To access `Weather` items by `City`, you could create a global secondary index that has a partition key of `City` and a sort key of `Date`.

You can create local secondary indexes only while you are creating a table. Because the `Weather` table already exists, you cannot add any local secondary indexes to it. However, you can add global secondary indexes. Practice adding one now.
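
The composite primary key rule can be summed up as: the combination of partition key value and sort key value must be unique across the table. The following is a minimal Python sketch of that rule, using a few of this sample's items:

```python
def has_unique_composite_keys(items, partition_key, sort_key):
    """Return True if no two items share both key attribute values."""
    seen = set()
    for item in items:
        pair = (item[partition_key], item[sort_key])
        if pair in seen:
            return False
        seen.add(pair)
    return True

items = [
    {"CityID": 1, "Date": "2017-04-12"},  # Seattle
    {"CityID": 1, "Date": "2017-04-13"},  # same city, different date: allowed
    {"CityID": 2, "Date": "2017-04-12"},  # different city, same date: allowed
]
valid = has_unique_composite_keys(items, "CityID", "Date")  # True
```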

**Note**  
Creating secondary indexes might result in additional charges to your AWS account.

1. Create a file named `weather-global-index.json` with the following content.

   ```
   [
     {
       "Create": {
         "IndexName": "weather-global-index",
         "KeySchema": [
           {
             "AttributeName": "City",
             "KeyType": "HASH"
           },
           {
             "AttributeName": "Date",
             "KeyType": "RANGE"
           }
         ],
         "Projection": {
           "ProjectionType": "INCLUDE",
           "NonKeyAttributes": [
             "State",
             "Conditions",
             "Temperatures"
           ]
         },
         "ProvisionedThroughput": {
           "ReadCapacityUnits": 5,
           "WriteCapacityUnits": 5
         }
       }
     }
   ]
   ```

   In this code:
   + The name of the global secondary index is `weather-global-index`.
   + The `City` attribute is the partition key (hash attribute), and the `Date` attribute is the sort key (range attribute).
   +  `Projection` defines the attributes to retrieve by default (in addition to the hash attribute and any range attribute) for every item matching a table search that uses this index. In this sample, the `State`, `Conditions`, `HighF` (part of `Temperatures`), and `LowF` (also part of `Temperatures`) attributes (as well as the `City` and `Date` attributes) are retrieved for every matching item.
   + Similar to tables, a global secondary index must define its provisioned throughput settings.
+ The `IndexName`, `KeySchema`, `Projection`, and `ProvisionedThroughput` settings must be contained in a `Create` object, which defines the global secondary index to create when you run the DynamoDB **`update-table`** command in the next step.

1. Run the DynamoDB **`update-table`** command.

   ```
   aws dynamodb update-table \
   --table-name Weather \
   --attribute-definitions \
     AttributeName=City,AttributeType=S AttributeName=Date,AttributeType=S \
   --global-secondary-index-updates file://weather-global-index.json
   ```

   In this command:
   +  `--table-name` is the name of the table to update.
   +  `--attribute-definitions` are the attributes to include in the index. The partition key is always listed first, and any sort key is always listed second.
   +  `--global-secondary-index-updates` is the path to the file that defines the global secondary index.

   If this command succeeds, it displays summary information about the new global secondary index that is being created. To confirm that the global secondary index was created successfully, run the DynamoDB **`describe-table`** command, specifying the table's name (`--table-name`).

   ```
   aws dynamodb describe-table --table-name Weather
   ```

   When the global secondary index is successfully created, the `TableStatus` value changes from `UPDATING` to `ACTIVE`, and the `IndexStatus` value changes from `CREATING` to `ACTIVE`. Do not proceed past this step until the global secondary index is successfully created. This can take several minutes.

## Step 6: Get items from the table
<a name="sample-dynamodb-get-items"></a>

There are many ways to get items from tables. In this step, you get items by using the table's primary key, by using the table's other attributes, and by using the global secondary index.

### To get a single item from a table based on the item's primary key value
<a name="w2aac31c21c25b5"></a>

If you know an item's primary key value, you can get the matching item by running the DynamoDB **`get-item`**, **`scan`**, or **`query`** command. The following are the main differences between these commands:
+  ** `get-item` ** returns a set of attributes for the item with the given primary key.
+  ** `scan` ** returns one or more items and item attributes by accessing every item in a table or a secondary index.
+  ** `query` ** finds items based on primary key values. You can query any table or secondary index that has a composite primary key (a partition key and a sort key).

In this sample, here's how to use each of these commands to get the item that contains the `CityID` attribute value of `1` and the `Date` attribute value of `2017-04-12`.

1. To run the DynamoDB **`get-item`** command, specify the name of the table (`--table-name`), the primary key value (`--key`), and the attribute values for the item to display (`--projection-expression`). Because `Date` is a reserved keyword in DynamoDB, you must also provide an alias for the `Date` attribute value (`--expression-attribute-names`). (`State` is also a reserved keyword, so you will see an alias provided for it in later steps.)

   ```
   aws dynamodb get-item \
   --table-name Weather \
   --key '{ "CityID": { "N": "1" }, "Date": { "S": "2017-04-12" } }' \
   --projection-expression \
     "City, #D, Conditions, Temperatures.HighF, Temperatures.LowF" \
   --expression-attribute-names '{ "#D": "Date" }'
   ```

   In this and the other commands, to display all of the item's attributes, don't include `--projection-expression`. Because this example doesn't include `--projection-expression`, it also doesn't need `--expression-attribute-names`.

   ```
   aws dynamodb get-item \
   --table-name Weather \
   --key '{ "CityID": { "N": "1" }, "Date": { "S": "2017-04-12" } }'
   ```

1. To run the DynamoDB **`scan`** command, specify:
   + The name of the table (`--table-name`).
   + The search to run (`--filter-expression`).
   + The search criteria to use (`--expression-attribute-values`).
   + The kinds of attributes to display for the matching item (`--select`).
   + The attribute values for the item to display (`--projection-expression`).
   + If any of your attributes are using reserved keywords in DynamoDB, aliases for those attributes (`--expression-attribute-names`).

   ```
   aws dynamodb scan \
   --table-name Weather \
   --filter-expression "(CityID = :cityID) and (#D = :date)" \
   --expression-attribute-values \
     '{ ":cityID": { "N": "1" }, ":date": { "S": "2017-04-12" } }' \
   --select SPECIFIC_ATTRIBUTES \
   --projection-expression \
     "City, #D, Conditions, Temperatures.HighF, Temperatures.LowF" \
   --expression-attribute-names '{ "#D": "Date" }'
   ```

1. To run the DynamoDB **`query`** command, specify:
   + The name of the table (`--table-name`).
   + The search to run (`--key-condition-expression`).
   + The attribute values to use in the search (`--expression-attribute-values`).
   + The kinds of attributes to display for the matching item (`--select`).
   + The attribute values for the item to display (`--projection-expression`).
   + If any of your attributes are using reserved keywords in DynamoDB, aliases for those attributes (`--expression-attribute-names`).

   ```
   aws dynamodb query \
   --table-name Weather \
   --key-condition-expression "(CityID = :cityID) and (#D = :date)" \
   --expression-attribute-values \
     '{ ":cityID": { "N": "1" }, ":date": { "S": "2017-04-12" } }' \
   --select SPECIFIC_ATTRIBUTES \
   --projection-expression \
     "City, #D, Conditions, Temperatures.HighF, Temperatures.LowF" \
   --expression-attribute-names '{ "#D": "Date" }'
   ```

   Notice that the ** `scan` ** command needed to scan all 9 items to get the result, while the ** `query` ** command only needed to scan for 1 item.
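
This difference matters because the cost of a scan grows with the size of the table, while the cost of a key-based query doesn't. The following minimal Python sketch models the two access patterns over in-memory data; the count of items examined stands in for consumed read capacity:

```python
def scan(items, predicate):
    """Examine every item; return (matches, items_examined)."""
    matches = [it for it in items if predicate(it)]
    return matches, len(items)

def query(index, key):
    """Look up items directly by primary key; return (matches, items_examined)."""
    matches = index.get(key, [])
    return matches, len(matches)

# Nine items, mirroring this sample's table (3 cities x 3 dates).
items = [{"CityID": c, "Date": f"2017-04-{d}"} for c in (1, 2, 3) for d in (12, 13, 14)]
index = {}
for it in items:
    index.setdefault((it["CityID"], it["Date"]), []).append(it)

_, scanned = scan(items, lambda it: it["CityID"] == 1 and it["Date"] == "2017-04-12")
_, queried = query(index, (1, "2017-04-12"))  # scanned == 9, queried == 1
```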

### To get multiple items from a table based on the items' primary key values
<a name="w2aac31c21c25b7"></a>

If you know the items' primary key values, you can get the matching items by running the DynamoDB **`batch-get-item`** command. In this sample, here's how to get the items that contain the `CityID` attribute value of `3` and `Date` attribute values of `2017-04-13` or `2017-04-14`.

Run the DynamoDB **`batch-get-item`** command, specifying the path to a file describing the items to get (`--request-items`).

```
aws dynamodb batch-get-item --request-items file://batch-get-item.json
```

For this sample, the code in the `batch-get-item.json` file specifies to search the `Weather` table for items with a `CityID` of `3` and a `Date` of `2017-04-13` or `2017-04-14`. For each item found, the attribute values for `City`, `State`, `Date`, and `HighF` (part of `Temperatures`) are displayed, if they exist.

```
{
  "Weather" : {
    "Keys": [
      {
        "CityID": { "N": "3" },
        "Date": { "S": "2017-04-13" }
      },
      {
        "CityID": { "N": "3" },
        "Date": { "S": "2017-04-14" }
      }
    ],
    "ProjectionExpression": "City, #S, #D, Temperatures.HighF",
    "ExpressionAttributeNames": { "#S": "State", "#D": "Date" }
  }
}
```

### To get all matching items from a table
<a name="w2aac31c21c25b9"></a>

If you know something about the attributes' values in the table, you can get matching items by running the DynamoDB **`scan`** command. In this sample, here's how to get the dates when the `Conditions` attribute value contains `Sunny` and the `HighF` attribute value (part of `Temperatures`) is greater than `53`.

Run the DynamoDB **`scan`** command, specifying:
+ The name of the table (`--table-name`).
+ The search to run (`--filter-expression`).
+ The search criteria to use (`--expression-attribute-values`).
+ The kinds of attributes to display for the matching item (`--select`).
+ The attribute values for the item to display (`--projection-expression`).
+ If any of your attributes are using reserved keywords in DynamoDB, aliases for those attributes (`--expression-attribute-names`).

```
aws dynamodb scan \
--table-name Weather \
--filter-expression \
  "(contains (Conditions, :sun)) and (Temperatures.HighF > :h)" \
--expression-attribute-values \
  '{ ":sun": { "S" : "Sunny" }, ":h": { "N" : "53" } }' \
--select SPECIFIC_ATTRIBUTES \
--projection-expression "City, #S, #D, Conditions, Temperatures.HighF" \
--expression-attribute-names '{ "#S": "State", "#D": "Date" }'
```

### To get all matching items from a global secondary index
<a name="w2aac31c21c25c11"></a>

To search using a global secondary index, use the DynamoDB **`query`** command. In this sample, here's how to use the `weather-global-index` secondary index to get the forecast conditions for cities named `Portland` for the dates of `2017-04-13` and `2017-04-14`.

Run the DynamoDB **`query`** command, specifying:
+ The name of the table (`--table-name`).
+ The name of the global secondary index (`--index-name`).
+ The search to run (`--key-condition-expression`).
+ The attribute values to use in the search (`--expression-attribute-values`).
+ The kinds of attributes to display for the matching item (`--select`).
+ If any of your attributes are using reserved keywords in DynamoDB, aliases for those attributes (`--expression-attribute-names`).

```
aws dynamodb query \
--table-name Weather \
--index-name weather-global-index \
--key-condition-expression "(City = :city) and (#D between :date1 and :date2)" \
--expression-attribute-values \
  '{ ":city": { "S" : "Portland" }, ":date1": { "S": "2017-04-13" }, ":date2": { "S": "2017-04-14" } }' \
--select SPECIFIC_ATTRIBUTES \
--projection-expression "City, #S, #D, Conditions, Temperatures.HighF" \
--expression-attribute-names '{ "#S": "State", "#D": "Date" }'
```

## Step 7: Clean up
<a name="sample-dynamodb-clean-up"></a>

To prevent ongoing charges to your AWS account after you're done using this sample, you should delete the table. Deleting the table deletes the global secondary index as well. You should also delete your environment.

To delete the table, run the DynamoDB **`delete-table`** command, specifying the table's name (`--table-name`).

```
aws dynamodb delete-table --table-name Weather
```

If the command succeeds, information about the table is displayed, including the `TableStatus` value of `DELETING`.

To confirm that the table was deleted successfully, run the DynamoDB **`describe-table`** command, specifying the table's name (`--table-name`).

```
aws dynamodb describe-table --table-name Weather
```

If the table is successfully deleted, a message containing the phrase `Requested resource not found` is displayed.

To delete your environment, see [Deleting an Environment](delete-environment.md).

# AWS CDK tutorial for AWS Cloud9
<a name="sample-cdk"></a>

This tutorial shows you how to work with the AWS Cloud Development Kit (AWS CDK) in an AWS Cloud9 development environment. The AWS CDK is a set of software tools and libraries that developers can use to model AWS infrastructure components as code.

The AWS CDK includes the AWS Construct Library that you can use to quickly resolve many tasks on AWS. For example, you can use the `Fleet` construct to fully and securely deploy code to a fleet of hosts. You can create your own constructs to model various elements of your architectures, share them with others, or publish them to the community. For more information, see the [AWS Cloud Development Kit Developer Guide](https://docs.aws.amazon.com/cdk/v2/guide/home.html).

Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2, Amazon SNS, and Amazon SQS. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/), [Amazon SNS Pricing](https://aws.amazon.com/sns/pricing/), and [Amazon SQS Pricing](https://aws.amazon.com/sqs/pricing/).

**Topics**
+ [

## Prerequisites
](#sample-cdk-prereqs)
+ [

## Step 1: Install required tools
](#sample-cdk-install)
+ [

## Step 2: Add code
](#sample-cdk-code)
+ [

## Step 3: Run the code
](#sample-cdk-run)
+ [

## Step 4: Clean up
](#sample-cdk-clean-up)

## Prerequisites
<a name="sample-cdk-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Install required tools
<a name="sample-cdk-install"></a>

In this step, you install all of the tools in your environment that the AWS CDK needs to run a sample that is written in the TypeScript programming language.

1.  [Node Version Manager](#sample-cdk-install-nvm), or ** `nvm` **, which you use to install Node.js later.

1.  [Node.js](#sample-cdk-install-nodejs), which is required by the sample and contains Node Package Manager, or ** `npm` **, which you use to install TypeScript and the AWS CDK later.

1.  [TypeScript](#sample-cdk-install-typescript), which is required by this sample. (The AWS CDK also provides support for several other programming languages.)

1. The [AWS CDK](#sample-cdk-install-cdk).

### Step 1.1: Install Node Version Manager (nvm)
<a name="sample-cdk-install-nvm"></a>

1. In a terminal session in the AWS Cloud9 IDE, ensure the latest security updates and bug fixes are installed. To do this, run the ** `yum update` ** (for Amazon Linux) or ** `apt update` ** command (for Ubuntu Server). (To start a new terminal session, on the menu bar, choose **Window**, **New Terminal**.)

   For Amazon Linux:

   ```
   sudo yum -y update
   ```

   For Ubuntu Server:

   ```
   sudo apt update
   ```

1. Confirm whether ** `nvm` ** is already installed. To do this, run the ** `nvm` ** command with the ** `--version` ** option.

   ```
   nvm --version
   ```

   If successful, the output contains the ** `nvm` ** version number, and you can skip ahead to [Step 1.2: Install Node.js](#sample-cdk-install-nodejs).

1. Download and install ** `nvm` **. To do this, run the install script. In this example, v0.33.0 is installed, but you can check for the latest version of ** `nvm` ** [here](https://github.com/nvm-sh/nvm#installing-and-updating).

   ```
   curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
   ```

1. Start using ** `nvm` **. You can either close the terminal session and then restart it, or source the `~/.bashrc` file that contains the commands to load ** `nvm` **.

   ```
   . ~/.bashrc
   ```

### Step 1.2: Install Node.js
<a name="sample-cdk-install-nodejs"></a>

1. Confirm whether you already have Node.js installed, and if you do, confirm that the installed version is 16.17.0 or greater. **This sample has been tested with Node.js 16.17.0.** To check, with the terminal session still open in the IDE, run the ** `node` ** command with the ** `--version` ** option.

   ```
   node --version
   ```

   If you do have Node.js installed, the output contains the version number. If the version number is v16.17.0 or greater, skip ahead to [Step 1.3: Install TypeScript](#sample-cdk-install-typescript).

1. Install Node.js 16 by running the ** `nvm` ** command with the ** `install` ** action.
**Note**  
You can also run **`nvm install node`** to install the long-term support (LTS) version of Node.js. AWS Cloud9 support tracks the LTS version of Node.js. 

   ```
   nvm install v16
   ```

1. Start using Node.js 16. To do this, run the ** `nvm` ** command with the ** `alias` ** action, the version number to alias, and the version to use for that alias, as follows.

   ```
   nvm alias default 16
   ```
**Note**  
The preceding command sets Node.js 16 as the default version of Node.js. Alternatively, you can run the ** `nvm` ** command along with the ** `use` ** action instead of the ** `alias` ** action (for example, ** `nvm use 16.17.0` **). However, the ** `use` ** action causes that version of Node.js to run only while the current terminal session is running.

1. To confirm that you're using Node.js 16, run the ** `node --version` ** command again. If the correct version is installed, the output contains a version number that starts with v16.

### Step 1.3: Install TypeScript
<a name="sample-cdk-install-typescript"></a>

1. Confirm whether you already have TypeScript installed. To do this, with the terminal session still open in the IDE, run the command line TypeScript compiler with the ** `--version` ** option.

   ```
   tsc --version
   ```

   If you do have TypeScript installed, the output contains the TypeScript version number. If TypeScript is installed, skip ahead to [Step 1.4: Install the AWS CDK](#sample-cdk-install-cdk).

1. Install TypeScript. To do this, run the ** `npm` ** command with the ** `install` ** action, the ** `-g` ** option, and the name of the TypeScript package. This installs TypeScript as a global package in the environment.

   ```
   npm install -g typescript
   ```

1. Confirm that TypeScript is installed. To do this, run the command line TypeScript compiler with the ** `--version` ** option.

   ```
   tsc --version
   ```

   If TypeScript is installed, the output contains the TypeScript version number.

### Step 1.4: Install the AWS CDK
<a name="sample-cdk-install-cdk"></a>

1. Confirm whether you already have the AWS CDK installed. To do this, with the terminal session still open in the IDE, run the ** `cdk` ** command with the ** `--version` ** option.

   ```
   cdk --version
   ```

   If the AWS CDK is installed, the output contains the AWS CDK version and build numbers. Skip ahead to [Step 2: Add code](#sample-cdk-code).

1. Install the AWS CDK by running the **`npm`** command along with the **`install`** action, the name of the AWS CDK package to install, and the **`-g`** option to install the package globally in the environment.

   ```
   npm install -g aws-cdk
   ```

1. Confirm that the AWS CDK is installed and correctly referenced. To do this, run the **`cdk`** command with the **`--version`** option.

   ```
   cdk --version
   ```

   If successful, the AWS CDK version and build numbers are displayed.

## Step 2: Add code
<a name="sample-cdk-code"></a>

In this step, you create a sample TypeScript project that contains all of the source code you need for the AWS CDK to programmatically deploy an AWS CloudFormation stack. This stack creates an Amazon SNS topic and an Amazon SQS queue in your AWS account and then subscribes the queue to the topic.

1. With the terminal session still open in the IDE, create a directory to store the project's source code, for example a `~/environment/hello-cdk` directory in your environment. Then switch to that directory.

   ```
   rm -rf ~/environment/hello-cdk # Remove this directory if it already exists.
   mkdir ~/environment/hello-cdk  # Create the directory.
   cd ~/environment/hello-cdk     # Switch to the directory.
   ```

1. Set up the directory as a TypeScript language project for the AWS CDK. To do this, run the **`cdk`** command with the **`init`** action, the **`sample-app`** template, and the **`--language`** option along with the name of the programming language.

   ```
   cdk init sample-app --language typescript
   ```

   This creates the following files and subdirectories in the directory.
   + A hidden `.git` subdirectory and a hidden `.gitignore` file, which makes the project compatible with source control tools such as Git.
   + A `lib` subdirectory, which includes a `hello-cdk-stack.ts` file. This file contains the code for your AWS CDK stack. This code is described in the next step in this procedure.
   + A `bin` subdirectory, which includes a `hello-cdk.ts` file. This file contains the entry point for your AWS CDK app.
   + A `node_modules` subdirectory, which contains supporting code packages that the app and stack can use as needed.
   + A hidden `.npmignore` file, which lists the types of subdirectories and files that **`npm`** doesn't need when it builds the code.
   + A `cdk.json` file, which contains information to make running the **`cdk`** command easier.
   + A `package-lock.json` file, which contains information that **`npm`** can use to reduce possible build and run errors.
   + A `package.json` file, which contains information to make running the **`npm`** command easier and with possibly fewer build and run errors.
   + A `README.md` file, which lists useful commands you can run with **`npm`** and the AWS CDK.
   + A `tsconfig.json` file, which contains information to make running the **`tsc`** command easier and with possibly fewer build and run errors.
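
   A quick way to spot-check the scaffold is a shell loop over the expected entries. The following sketch mocks the key entries in a scratch directory so the check itself is clear; in your environment, point `proj` at `~/environment/hello-cdk` instead.

   ```shell
   # Mock a few of the entries cdk init creates, then verify each one exists.
   proj=$(mktemp -d)                      # stand-in for ~/environment/hello-cdk
   mkdir "$proj/lib" "$proj/bin"
   touch "$proj/cdk.json" "$proj/package.json" "$proj/tsconfig.json"
   for entry in lib bin cdk.json package.json tsconfig.json; do
     [ -e "$proj/$entry" ] && echo "found $entry"
   done
   ```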

1. In the **Environment** window, open the `lib/hello-cdk-stack.ts` file, and browse the following code in that file.

   ```
   import sns = require('@aws-cdk/aws-sns');
   import sqs = require('@aws-cdk/aws-sqs');
   import cdk = require('@aws-cdk/cdk');
   
   export class HelloCdkStack extends cdk.Stack {
     constructor(parent: cdk.App, name: string, props?: cdk.StackProps) {
       super(parent, name, props);
   
       const queue = new sqs.Queue(this, 'HelloCdkQueue', {
         visibilityTimeoutSec: 300
       });
   
       const topic = new sns.Topic(this, 'HelloCdkTopic');
   
       topic.subscribeQueue(queue);
     }
   }
   ```
   + The `Stack`, `App`, `StackProps`, `Queue`, and `Topic` classes represent a CloudFormation stack and its properties, an executable program, an Amazon SQS queue, and an Amazon SNS topic, respectively.
   + The `HelloCdkStack` class represents the CloudFormation stack for this application. This stack contains the new Amazon SQS queue and Amazon SNS topic for this application.

1. In the **Environment** window, open the `bin/hello-cdk.ts` file, and browse the following code in that file.

   ```
   #!/usr/bin/env node
   import cdk = require('@aws-cdk/cdk');
   import { HelloCdkStack } from '../lib/hello-cdk-stack';
   
   const app = new cdk.App();
   new HelloCdkStack(app, 'HelloCdkStack');
   app.run();
   ```

   This code loads, instantiates, and then runs the `HelloCdkStack` class from the `lib/hello-cdk-stack.ts` file.

1. Use **`npm`** to run the TypeScript compiler to check for coding errors and to enable the AWS CDK to execute the project's `bin/hello-cdk.js` file. To do this, from the project's root directory, run the **`npm`** command with the **`run`** action, specifying the **`build`** command value in the `package.json` file, as follows.

   ```
   npm run build
   ```

   The preceding command runs the TypeScript compiler, which adds supporting `bin/hello-cdk.d.ts` and `lib/hello-cdk-stack.d.ts` files. The compiler also transpiles the `hello-cdk.ts` and `hello-cdk-stack.ts` files into `hello-cdk.js` and `hello-cdk-stack.js` files.
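
   The compiler's output naming follows a simple rule: each `.ts` source produces a matching `.js` file and a `.d.ts` declaration file. The following sketch shows that mapping with shell parameter expansion (the file names are this tutorial's; no compiler is needed to see the rule).

   ```shell
   # For each TypeScript source, print the output files that tsc generates.
   for src in hello-cdk.ts hello-cdk-stack.ts; do
     base=${src%.ts}                          # strip the .ts extension
     echo "$src -> $base.js + $base.d.ts"
   done
   ```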

## Step 3: Run the code
<a name="sample-cdk-run"></a>

In this step, you instruct the AWS CDK to create a CloudFormation stack template based on the code in the `bin/hello-cdk.js` file. You then instruct the AWS CDK to deploy the stack, which creates the Amazon SNS topic and Amazon SQS queue and then subscribes the queue to the topic. You then confirm that the topic and queue were successfully deployed by sending a message from the topic to the queue.

1. Have the AWS CDK create the CloudFormation stack template. To do this, with the terminal session still open in the IDE, from the project's root directory, run the **`cdk`** command with the **`synth`** action and the name of the stack.

   ```
   cdk synth HelloCdkStack
   ```

   If successful, the output displays the CloudFormation stack template's `Resources` section.

1. The first time that you deploy an AWS CDK app into an environment for a specific AWS account and AWS Region combination, you must install a *bootstrap stack*. This stack includes resources that the AWS CDK needs to complete its operations. For example, this stack includes an Amazon S3 bucket that the AWS CDK uses to store templates and assets during its deployment processes. To install the bootstrap stack, run the **`cdk`** command with the **`bootstrap`** action.

   ```
   cdk bootstrap
   ```
**Note**  
If you run `cdk bootstrap` without specifying any options, the default AWS account and AWS Region are used. You can also bootstrap a specific environment by specifying a profile and account/Region combination. For example:  

   ```
   cdk bootstrap --profile test 123456789012/us-east-1
   ```

1. Have the AWS CDK run the CloudFormation stack template to deploy the stack. To do this, from the project's root directory, run the **`cdk`** command with the **`deploy`** action and the name of the stack.

   ```
   cdk deploy HelloCdkStack
   ```

   If successful, the output displays that the `HelloCdkStack` stack deployed without errors.
**Note**  
If the output displays a message that the stack does not define an environment and that AWS credentials could not be obtained from standard locations or no region was configured, make sure that your AWS credentials are set correctly in the IDE, and then run the **`cdk deploy`** command again. For more information, see [Calling AWS services from an environment in AWS Cloud9](credentials.md).

1. To confirm that the Amazon SNS topic and Amazon SQS queue were successfully deployed, send a message to the topic, and then check the queue for the received message. To do this, you can use a tool such as the AWS Command Line Interface (AWS CLI) or the AWS CloudShell. For more information about these tools, see the [AWS CLI and aws-shell tutorial for AWS Cloud9](sample-aws-cli.md).

   For example, to send a message to the topic, with the terminal session still open in the IDE, use the AWS CLI to run the Amazon SNS **`publish`** command, supplying the message's subject and body, the AWS Region for the topic, and the topic's Amazon Resource Name (ARN).

   ```
   aws sns publish --subject "Hello from the AWS CDK" --message "This is a message from the AWS CDK." --topic-arn arn:aws:sns:us-east-2:123456789012:HelloCdkStack-HelloCdkTopic1A234567-8BCD9EFGHIJ0K
   ```

   In the preceding command, replace `arn:aws:sns:us-east-2:123456789012:HelloCdkStack-HelloCdkTopic1A234567-8BCD9EFGHIJ0K` with the ARN that CloudFormation assigns to the topic. To get the ARN, you can run the Amazon SNS **`list-topics`** command.

   ```
   aws sns list-topics --output table --query 'Topics[*].TopicArn'
   ```

   If successful, the output of the **`publish`** command displays the `MessageId` value for the message that was published.

   To check the queue for the received message, run the Amazon SQS **`receive-message`** command, supplying the queue's URL.

   ```
   aws sqs receive-message --queue-url https://queue.amazonaws.com/123456789012/HelloCdkStack-HelloCdkQueue1A234567-8BCD9EFGHIJ0K
   ```

   In the preceding command, replace `https://queue.amazonaws.com/123456789012/HelloCdkStack-HelloCdkQueue1A234567-8BCD9EFGHIJ0K` with the URL that CloudFormation assigns to the queue. To get the URL, you can run the Amazon SQS **`list-queues`** command.

   ```
   aws sqs list-queues --output table --query 'QueueUrls[*]'
   ```

   If successful, the output of the **`receive-message`** command displays information about the message that was received.

## Step 4: Clean up
<a name="sample-cdk-clean-up"></a>

To prevent ongoing charges to your AWS account after you're done using this sample, you should delete the CloudFormation stack. This deletes the Amazon SNS topic and Amazon SQS queue. You should also delete the environment.

### Step 4.1: Delete the stack
<a name="step-4-1-delete-the-stack"></a>

With the terminal session still open in the IDE, from the project's root directory, run the **`cdk`** command with the **`destroy`** action and the stack's name.

```
cdk destroy HelloCdkStack
```

When prompted to delete the stack, type `y`, and then press `Enter`.

If successful, the output displays that the `HelloCdkStack` stack was deleted without errors.

### Step 4.2: Delete the environment
<a name="step-4-2-delete-the-envtitle"></a>

To delete the environment, see [Deleting an environment in AWS Cloud9](delete-environment.md).

# LAMP tutorial for AWS Cloud9
<a name="sample-lamp"></a>

This tutorial enables you to set up and run LAMP (Linux, Apache HTTP Server, MySQL, and PHP) within an AWS Cloud9 development environment.

Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for AWS services such as Amazon Elastic Compute Cloud (Amazon EC2). For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/).

**Topics**
+ [Prerequisites](#sample-lamp-prereqs)
+ [Step 1: Install the tools](#sample-lamp-install-tools)
+ [Step 2: Set up MySQL](#sample-lamp-setup-mysql)
+ [Step 3: Set up a website](#sample-lamp-apache)
+ [Step 4: Clean up](#sample-lamp-clean-up)

## Prerequisites
<a name="sample-lamp-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Install the tools
<a name="sample-lamp-install-tools"></a>

In this step, you install the following tools:
+ Apache HTTP Server, a web server host.
+ PHP, a scripting language that is especially suited for web development and can be embedded into HTML. 
+ MySQL, a database management system.

You then finish this step by starting Apache HTTP Server and then MySQL.

1. Ensure that the latest security updates and bug fixes are installed on the instance. To do this, in a terminal session in the AWS Cloud9 IDE, run the **`yum update`** (for Amazon Linux) or **`apt update`** (for Ubuntu Server) command. (To start a new terminal session, on the menu bar, choose **Window**, **New Terminal**.)

   For Amazon Linux:

   ```
   sudo yum -y update
   ```

   For Ubuntu Server:

   ```
   sudo apt -y update
   ```

1. Check whether Apache HTTP Server is already installed. To do this, run the **`httpd -v`** (for Amazon Linux) or **`apache2 -v`** (for Ubuntu Server) command. 

   If successful, the output contains the Apache HTTP Server version number. 

   If you see an error, then install Apache HTTP Server by running the **`install`** command.

   For Amazon Linux:

   ```
   sudo yum install -y httpd24
   ```

   For Ubuntu Server:

   ```
   sudo apt install -y apache2
   ```

1. Confirm whether PHP is already installed by running the **`php -v`** command. 

   If successful, the output contains the PHP version number. 

   If you see an error, then install PHP by running the **`install`** command.

   For Amazon Linux:

   ```
   sudo yum install -y php56
   ```

   For Ubuntu Server:

   ```
   sudo apt install -y php libapache2-mod-php php-xml
   ```

1. Confirm whether MySQL is already installed by running the **`mysql --version`** command. 

   If successful, the output contains the MySQL version number. 

   If you see an error, then install MySQL by running the **`install`** command.

   For Amazon Linux:

   ```
   sudo yum install -y mysql-server
   ```

   For Ubuntu Server:

   ```
   sudo apt install -y mysql-server
   ```

1. After you install Apache HTTP Server, PHP, and MySQL, start Apache HTTP Server, and then confirm it has started, by running the following command.

   For Amazon Linux (you might need to run this command twice):

   ```
   sudo service httpd start && sudo service httpd status
   ```

   For Ubuntu Server (to return to the command prompt, press `q`):

   ```
   sudo service apache2 start && sudo service apache2 status
   ```

1. Start MySQL, and then confirm it has started, by running the following command.

   For Amazon Linux:

   ```
   sudo service mysqld start && sudo service mysqld status
   ```

   For Ubuntu Server (to return to the command prompt, press `q`):

   ```
   sudo service mysql start && sudo service mysql status
   ```

## Step 2: Set up MySQL
<a name="sample-lamp-setup-mysql"></a>

In this step, you set up MySQL to follow MySQL security best practices. These best practices include setting a password for root accounts and removing root accounts that are accessible from outside the local host. Other best practices to be mindful of are removing anonymous users, removing the test database, and removing privileges that permit anyone to access databases with names that start with `test_`.

You then finish this step by practicing starting and exiting the MySQL command line client.

1. Implement MySQL security best practices for the MySQL installation by running the following command in a terminal session in the AWS Cloud9 IDE.

   ```
   sudo mysql_secure_installation
   ```

1. When prompted, answer the following questions as specified.

   For Amazon Linux: 

   1. **Enter current password for root (enter for none)** – Press `Enter` (for no password).

   1. **Set root password** – Type `Y`, and then press `Enter`.

   1. **New password** – Type a password, and then press `Enter`.

   1. **Re-enter new password** – Type the password again, and then press `Enter`. (Be sure to store the password in a secure location for later use.)

   1. **Remove anonymous users** – Type `Y`, and then press `Enter`.

   1. **Disallow root login remotely** – Type `Y`, and then press `Enter`.

   1. **Remove test database and access to it** – Type `Y`, and then press `Enter`.

   1. **Reload privilege tables now** – Type `Y`, and then press `Enter`.

   For Ubuntu Server:

   1. **Would you like to set up VALIDATE PASSWORD plugin** – Enter `y`, and then press `Enter`.

   1. **There are three levels of password validation policy** – Enter `0`, `1`, or `2`, and then press `Enter`.

   1. **New password** – Enter a password, and then press `Enter`.

   1. **Re-enter new password** – Enter the password again, and then press `Enter`. Make sure that you store the password in a secure location for later use.

   1. **Do you wish to continue with the password provided** – Enter `y`, and then press `Enter`.

   1. **Remove anonymous users** – Enter `y`, and then press `Enter`.

   1. **Disallow root login remotely** – Enter `y`, and then press `Enter`.

   1. **Remove test database and access to it** – Enter `y`, and then press `Enter`.

   1. **Reload privilege tables now** – Enter `y`, and then press `Enter`.

1. To interact directly with MySQL, start the MySQL command line client as the root user by running the following command. When prompted, type the root user's password that you set earlier, and then press `Enter`. The prompt changes to `mysql>` while you are in the MySQL command line client.

   ```
   sudo mysql -uroot -p
   ```

1. To exit the MySQL command line client, run the following command. The prompt changes back to `$`.

   ```
   exit;
   ```

## Step 3: Set up a website
<a name="sample-lamp-apache"></a>

In this step, you set up the default website root for the Apache HTTP Server with recommended owners and access permissions. You then create a PHP-based webpage within that default website root. 

You then enable incoming web traffic to view that webpage by setting up the security group in Amazon EC2 and network access control list (network ACL) in Amazon Virtual Private Cloud (Amazon VPC) that are associated with this EC2 environment. Each EC2 environment must be associated with both a security group in Amazon EC2 and a network ACL in Amazon VPC. However, even though the default network ACL in an AWS account allows all incoming and outgoing traffic for the environment, the default security group allows only incoming traffic using SSH over port 22. For more information, see [VPC settings for AWS Cloud9 Development Environments](vpc-settings.md).

You then finish this step by successfully viewing the webpage from outside of the AWS Cloud9 IDE.

1. Set up the default website root for the Apache HTTP Server (`/var/www/html`) with recommended owners and access permissions. To do this, run the following six commands, one at a time in the following order, in a terminal session in the AWS Cloud9 IDE. To understand what each command does, read the information after the `#` character after each command.

   For Amazon Linux:

   ```
   sudo groupadd web-content # Create a group named web-content.
   
   sudo usermod -G web-content -a ec2-user # Add the user ec2-user (your default user for this environment) to the group web-content.
   
   sudo usermod -G web-content -a apache # Add the user apache (Apache HTTP Server) to the group web-content.
   
   sudo chown -R ec2-user:web-content /var/www/html # Change the owner of /var/www/html and its files to user ec2-user and group web-content.
   
   sudo find /var/www/html -type f -exec chmod u=rw,g=rx,o=rx {} \; # Change all file permissions within /var/www/html to user read/write, group read-only, and others read/execute. 
   
   sudo find /var/www/html -type d -exec chmod u=rwx,g=rx,o=rx {} \; # Change /var/www/html directory permissions to user read/write/execute, group read/execute, and others read/execute.
   ```

   For Ubuntu Server:

   ```
   sudo groupadd web-content # Create a group named web-content.
   
   sudo usermod -G web-content -a ubuntu # Add the user ubuntu (your default user for this environment) to the group web-content.
   
   sudo usermod -G web-content -a www-data # Add the user www-data (Apache HTTP Server) to the group web-content.
   
   sudo chown -R ubuntu:web-content /var/www/html # Change the owner of /var/www/html and its files to user ubuntu and group web-content.
   
   sudo find /var/www/html -type f -exec chmod u=rw,g=rx,o=rx {} \; # Change all file permissions within /var/www/html to user read/write, group read-only, and others read/execute. 
   
   sudo find /var/www/html -type d -exec chmod u=rwx,g=rx,o=rx {} \; # Change /var/www/html directory permissions to user read/write/execute, group read/execute, and others read/execute.
   ```

1. Create a PHP-based webpage named `index.php` in the default website root folder for the Apache HTTP Server (which is `/var/www/html`) by running the following command.

   For Amazon Linux:

   ```
   sudo touch /var/www/html/index.php && sudo chown -R ec2-user:web-content /var/www/html/index.php && sudo chmod u=rw,g=rx,o=rx /var/www/html/index.php && sudo printf '%s\n%s\n%s' '<?php' '  phpinfo();' '?>' >> /var/www/html/index.php
   ```

   The preceding command for Amazon Linux also changes the file's owner to `ec2-user`, changes the file's group to `web-content`, and changes the file's permissions to read/write for the user, and read/execute for the group and others. 

   For Ubuntu Server:

   ```
   sudo touch /var/www/html/index.php && sudo chown -R ubuntu:web-content /var/www/html/index.php && sudo chmod u=rw,g=rx,o=rx /var/www/html/index.php && sudo printf '%s\n%s\n%s' '<?php' '  phpinfo();' '?>' >> /var/www/html/index.php
   ```

   The preceding command for Ubuntu Server also changes the file's owner to `ubuntu`, changes the file's group to `web-content`, and changes the file's permissions to read/write for the user, and read/execute for the group and others. 

   If successful, the preceding commands create the `index.php` file with the following contents.

   ```
   <?php
     phpinfo();
   ?>
   ```
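
   The single `printf` in the command above is what writes those three lines. You can see the same pattern in isolation by writing to a temporary file, where no `sudo` or ownership changes are needed:

   ```shell
   # Write the three-line PHP page with one printf, then show the result.
   tmpfile=$(mktemp)
   printf '%s\n%s\n%s' '<?php' '  phpinfo();' '?>' >> "$tmpfile"
   cat "$tmpfile"   # prints the <?php / phpinfo(); / ?> block
   ```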

1. Enable incoming web traffic over port 80 to view the new webpage by setting up the network ACL in Amazon VPC and the security group in Amazon EC2 that are associated with this EC2 environment. To do this, run the following eight commands, one at a time in the following order. To understand what each command does, read the information after the `#` character for each command.
**Important**  
Running the following commands enables incoming web traffic over port 80 for **all** EC2 environments and Amazon EC2 instances that are associated with the security group and network ACL for this environment. This might result in unexpectedly enabling incoming web traffic over port 80 for EC2 environments and Amazon EC2 instances other than this one.
**Note**  
The following second through fourth commands enable the security group to allow incoming web traffic over port 80. If you have a default security group, which only allows incoming SSH traffic over port 22, then you must run the first command followed by these second through fourth commands. However, if you have a custom security group that already allows incoming web traffic over port 80, you can skip running those commands.  
The following fifth through eighth commands enable the network ACL to allow incoming web traffic over port 80. If you have a default network ACL, which already allows all incoming traffic over all ports, then you can safely skip running those commands. However, suppose that you have a custom network ACL that doesn't allow incoming web traffic over port 80. Then, run the first command followed by these fifth through eighth commands. 

   ```
   MY_INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id) # Get the ID of the instance for the environment, and store it temporarily.
              
   MY_SECURITY_GROUP_ID=$(aws ec2 describe-instances --instance-id $MY_INSTANCE_ID --query 'Reservations[].Instances[0].SecurityGroups[0].GroupId' --output text) # Get the ID of the security group associated with the instance, and store it temporarily.
   
   aws ec2 authorize-security-group-ingress --group-id $MY_SECURITY_GROUP_ID --protocol tcp --cidr 0.0.0.0/0 --port 80 # Add an inbound rule to the security group to allow all incoming IPv4-based traffic over port 80.
   
   aws ec2 authorize-security-group-ingress --group-id $MY_SECURITY_GROUP_ID --ip-permissions IpProtocol=tcp,Ipv6Ranges='[{CidrIpv6=::/0}]',FromPort=80,ToPort=80 # Add an inbound rule to the security group to allow all incoming IPv6-based traffic over port 80.
   
   MY_SUBNET_ID=$(aws ec2 describe-instances --instance-id $MY_INSTANCE_ID --query 'Reservations[].Instances[0].SubnetId' --output text) # Get the ID of the subnet associated with the instance, and store it temporarily.
   
   MY_NETWORK_ACL_ID=$(aws ec2 describe-network-acls --filters Name=association.subnet-id,Values=$MY_SUBNET_ID --query 'NetworkAcls[].Associations[0].NetworkAclId' --output text) # Get the ID of the network ACL associated with the subnet, and store it temporarily.
   
   aws ec2 create-network-acl-entry --network-acl-id $MY_NETWORK_ACL_ID --ingress --protocol tcp --rule-action allow --rule-number 10000 --cidr-block 0.0.0.0/0 --port-range From=80,To=80 # Add an inbound rule to the network ACL to allow all IPv4-based traffic over port 80. Advanced users: change this suggested rule number as desired.
   
   aws ec2 create-network-acl-entry --network-acl-id $MY_NETWORK_ACL_ID --ingress --protocol tcp --rule-action allow --rule-number 10100 --ipv6-cidr-block ::/0 --port-range From=80,To=80 # Add an inbound rule to the network ACL to allow all IPv6-based traffic over port 80. Advanced users: change this suggested rule number as desired.
   ```

1. Get the URL to the `index.php` file within the web server root. To do this, run the following command, and use a new web browser tab or a different web browser separate from the AWS Cloud9 IDE to go to the URL that is displayed. If successful, the webpage displays information about Apache HTTP Server, MySQL, PHP, and other related settings.

   ```
   MY_PUBLIC_IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4) && echo http://$MY_PUBLIC_IP/index.php # Get the URL to the index.php file within the web server root.
   ```
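
   The command above composes the URL from the instance's public IPv4 address returned by the metadata query. The composition itself is just string interpolation; here is the same pattern with a placeholder documentation-only address (`192.0.2.10`) in place of the metadata query:

   ```shell
   # Placeholder: in your environment, MY_PUBLIC_IP comes from the metadata query above.
   MY_PUBLIC_IP=192.0.2.10
   echo "http://$MY_PUBLIC_IP/index.php"   # -> http://192.0.2.10/index.php
   ```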

## Step 4: Clean up
<a name="sample-lamp-clean-up"></a>

Suppose that you want to keep using this environment but you want to disable incoming web traffic over port 80. Then, run the following eight commands, one at a time in the following order, to delete the corresponding incoming traffic rules that you set earlier in the security group and network ACL that are associated with the environment. To understand what each command does, read the information after the `#` character for each command.

**Important**  
Running the following commands disables incoming web traffic over port 80 for **all** EC2 environments and Amazon EC2 instances that are associated with the security group and network ACL for this environment. This might result in unexpectedly disabling incoming web traffic over port 80 for EC2 environments and Amazon EC2 instances other than this one.

**Note**  
The following fifth through eighth commands remove the existing rules from the network ACL that allow incoming web traffic over port 80. If you have a default network ACL, which already allows all incoming traffic over all ports, then you can skip running those commands. However, suppose that you have a custom network ACL with existing rules that allow incoming web traffic over port 80 and you want to delete those rules. Then, you need to run the first command followed by these fifth through eighth commands. 

```
MY_INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id) # Get the ID of the instance for the environment, and store it temporarily.
           
MY_SECURITY_GROUP_ID=$(aws ec2 describe-instances --instance-id $MY_INSTANCE_ID --query 'Reservations[].Instances[0].SecurityGroups[0].GroupId' --output text) # Get the ID of the security group associated with the instance, and store it temporarily.

aws ec2 revoke-security-group-ingress --group-id $MY_SECURITY_GROUP_ID --protocol tcp --cidr 0.0.0.0/0 --port 80 # Delete the existing inbound rule from the security group to block all incoming IPv4-based traffic over port 80.

aws ec2 revoke-security-group-ingress --group-id $MY_SECURITY_GROUP_ID --ip-permissions IpProtocol=tcp,Ipv6Ranges='[{CidrIpv6=::/0}]',FromPort=80,ToPort=80 # Delete the existing inbound rule from the security group to block all incoming IPv6-based traffic over port 80.

MY_SUBNET_ID=$(aws ec2 describe-instances --instance-id $MY_INSTANCE_ID --query 'Reservations[].Instances[0].SubnetId' --output text) # Get the ID of the subnet associated with the instance, and store it temporarily.

MY_NETWORK_ACL_ID=$(aws ec2 describe-network-acls --filters Name=association.subnet-id,Values=$MY_SUBNET_ID --query 'NetworkAcls[].Associations[0].NetworkAclId' --output text) # Get the ID of the network ACL associated with the subnet, and store it temporarily.

aws ec2 delete-network-acl-entry --network-acl-id $MY_NETWORK_ACL_ID --ingress --rule-number 10000 # Delete the existing inbound rule from the network ACL to block all IPv4-based traffic over port 80. Advanced users: if you originally created this rule with a different number, change this suggested rule number to match.

aws ec2 delete-network-acl-entry --network-acl-id $MY_NETWORK_ACL_ID --ingress --rule-number 10100 # Delete the existing inbound rule from the network ACL to block all IPv6-based traffic over port 80. Advanced users: if you originally created this rule with a different number, change this suggested rule number to match.
```

If you're done using this environment, delete the environment to prevent ongoing charges to your AWS account. For instructions, see [Deleting an environment in AWS Cloud9](delete-environment.md).

# WordPress tutorial for AWS Cloud9
<a name="sample-wordpress"></a>

This tutorial enables you to install and run WordPress within an AWS Cloud9 development environment. WordPress is an open-source content management system (CMS) that's widely used for delivering web content. 

**Note**  
Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon Elastic Compute Cloud (Amazon EC2). For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/).

## Prerequisites
<a name="sample-wordpress-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).
+ **You have an up-to-date EC2 instance with all the latest software packages**. In the AWS Cloud9 IDE terminal window, you can run `yum update` with the `-y` option to install updates without asking for confirmation. If you would like to examine the updates before installing, you can omit this option. 

  ```
  sudo yum update -y
  ```

## Installation overview
<a name="task-overview"></a>

Installing WordPress on your environment's EC2 instance involves the following steps:

1. Installing and configuring MariaDB Server, which is an open-source relational database that stores information for WordPress installations 

1. Installing and configuring WordPress, which includes editing the `wordpress.conf` configuration file

1. Configuring the Apache server that hosts the WordPress site

1. Previewing the WordPress web content that's hosted by the Apache server

## Step 1: Installing and configuring MariaDB Server
<a name="wp-install-configure-mariadb"></a>

1. In the AWS Cloud9 IDE, choose **Window**, **New Terminal** and enter the following commands to install and start a MariaDB Server installation:

   ```
   sudo yum install -y mariadb-server
   sudo systemctl start mariadb
   ```

1. Next, run the `mysql_secure_installation` script to improve the security of your MariaDB Server installation. 

   When providing responses to the script, press **Enter** for the first question to keep the root password blank. Press **n** for `Set root password?` and then **y** for each of the rest of the security options.

   ```
   mysql_secure_installation
   ```

1. Now create a database to store WordPress information using the MariaDB client.

   (Press **Enter** when asked for your password.)

   ```
   sudo mysql -u root -p
   MariaDB [(none)]> create database wp_test;
   MariaDB [(none)]> grant all privileges on wp_test.* to 'wp_user'@'localhost' identified by 'YourSecurePassword';
   ```

1. To log out of the MariaDB client, run the `exit` command.

## Step 2: Installing and configuring WordPress
<a name="wp-install-configure-wordpress"></a>

1. In the IDE terminal window, navigate to the `environment` directory and then create the directories `config` and `wordpress`. Then run the `touch` command to create a file called `wordpress.conf` in the `config` directory:

   ```
   cd /home/ec2-user/environment
   mkdir config wordpress
   touch config/wordpress.conf
   ```

1. Use the IDE editor or vim to update `wordpress.conf` with host configuration information that allows the Apache server to serve WordPress content:

   ```
   # Ensure that Apache listens on port 8080
   Listen 8080
   <VirtualHost *:8080>
       DocumentRoot "/var/www/wordpress"
       ServerName www.example.org
       # Other directives here
   </VirtualHost>
   ```

1. Now run the following commands to retrieve the required archive file and install WordPress: 

   ```
   cd /home/ec2-user/environment
   wget https://wordpress.org/latest.tar.gz
   tar xvf latest.tar.gz
   ```

1. Run the `touch` command to create a file called `wp-config.php` in the `environment/wordpress` directory:

   ```
   touch wordpress/wp-config.php
   ```

1. Use the IDE editor or vim to update `wp-config.php` and replace the sample data with your setup: 

   ```
   // ** MySQL settings - You can get this info from your web host ** //
   /** The name of the database for WordPress */
   define( 'DB_NAME', 'wp_test' );
   
   /** MySQL database username */
   define( 'DB_USER', 'wp_user' );
   
   /** MySQL database password */
   define( 'DB_PASSWORD', 'YourSecurePassword' );
   
   /** MySQL hostname */
   define( 'DB_HOST', 'localhost' );
   
   /** Database Charset to use in creating database tables. */
   define( 'DB_CHARSET', 'utf8' );
   
   /** The Database Collate type. Don't change this if in doubt. */
   define( 'DB_COLLATE', '' );
   
   define('FORCE_SSL', true);
   
   if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') $_SERVER['HTTPS'] = 'on';
   ```
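
   The `DB_PASSWORD` value above must match the password you granted to `wp_user` in Step 1. One way to keep the two in sync is to generate a random password once and paste the same value into both places; a minimal sketch (the variable name is only illustrative):

   ```
   # Generate a random 32-character hex password; use the same value in the
   # MariaDB GRANT statement and in the DB_PASSWORD entry of wp-config.php.
   DB_PASSWORD="$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')"
   echo "$DB_PASSWORD"
   ```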

## Step 3: Configuring your Apache HTTP Server
<a name="wp-install-configure-apache"></a>

1. In the AWS Cloud9 IDE terminal window, make sure that you have Apache installed: 

   ```
   httpd -v
   ```

   If the Apache server isn't installed, run the following command:

   ```
   sudo yum install -y httpd 
   ```

1. Navigate to the `/etc/httpd/conf.d` directory, which is the location for Apache's virtual host configuration files. Then use the `ln` command to link the `wordpress.conf` you created earlier to the current working directory (`/etc/httpd/conf.d`):

   ```
   cd /etc/httpd/conf.d
   sudo ln -s /home/ec2-user/environment/config/wordpress.conf
   ```

1. Now navigate to the `/var/www` directory, which is the default root folder for Apache servers. Then use the `ln` command to link the `wordpress` directory you created earlier to the current working directory (`/var/www`): 

   ```
   cd /var/www
   sudo ln -s /home/ec2-user/environment/wordpress
   ```

1. Run the `chmod` command so that the Apache server can traverse your home directory and serve content in the `wordpress` subdirectory:

   ```
   sudo chmod +x /home/ec2-user/
   ```

1. Now restart the Apache server to allow it to detect the new configurations: 

   ```
   sudo service httpd restart
   ```

## Step 4: Previewing WordPress web content
<a name="wp-preview-wordpress"></a>

1. Using the AWS Cloud9 IDE, create a new file called `index.html` in the following directory: `environment/wordpress`.

1. Add HTML-formatted text to `index.html`. For example:

   ```
   <h1>Hello World!</h1>
   ```

1. In the **Environment** window, choose the `index.html` file, and then choose **Preview**, **Preview Running Application**.

   The web page, which displays the *Hello World!* message, appears in the application preview tab. To view the web content in your preferred browser, choose **Pop Out Into a New Window**.

   If you delete the `index.html` file and refresh the application preview tab, the WordPress configuration page is displayed. 

## Managing mixed content errors
<a name="wp-allow-mixed"></a>

Web browsers display mixed content errors for a WordPress site if it's loading HTTPS and HTTP scripts or content at the same time. The wording of error messages depends on the web browser that you're using, but you're informed that your connection to a site is insecure or not fully secure. And your web browser blocks access to the mixed content.

**Important**  
By default, all web pages that you access in the application preview tab of the AWS Cloud9 IDE automatically use the HTTPS protocol. If a page's URI features the insecure `http` protocol, it's automatically replaced by `https`. And you can't access the insecure content by manually changing `https` back to `http`.  
For guidance on implementing HTTPS for your web site, see the [WordPress documentation](https://wordpress.org/support/article/https-for-wordpress/).

# Java tutorial for AWS Cloud9
<a name="sample-java"></a>

**Important**  
If you're using an AWS Cloud9 development environment that's backed by an EC2 instance with 2 GiB or more of memory, we recommend that you activate enhanced Java support. This provides access to productivity features such as code completion, linting for errors, context-specific actions, and debugging options such as breakpoints and stepping.  
For more information, see [Enhanced support for Java development](enhanced-java.md).

This tutorial enables you to run some Java code in an AWS Cloud9 development environment.

Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2 and Amazon S3. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

**Topics**
+ [Prerequisites](#sample-java-prerequisites)
+ [Step 1: Install required tools](#sample-java-install)
+ [Step 2: Add code](#sample-java-code)
+ [Step 3: Build and run the code](#sample-java-run)
+ [Step 4: Set up to use the AWS SDK for Java](#sample-java-sdk)
+ [Step 5: Set up AWS credentials management in your environment](#sample-java-sdk-creds)
+ [Step 6: Add AWS SDK code](#sample-java-sdk-code)
+ [Step 7: Build and run the AWS SDK code](#sample-java-sdk-run)
+ [Step 8: Clean up](#sample-java-clean-up)

## Prerequisites
<a name="sample-java-prerequisites"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Install required tools
<a name="sample-java-install"></a>

In this step, you install a set of Java development tools in your AWS Cloud9 development environment. If you already have a set of Java development tools such as the Oracle JDK or OpenJDK installed in your environment, you can skip ahead to [Step 2: Add code](#sample-java-code). This sample was developed with OpenJDK 8, which you can install in your environment by completing the following procedure.

1. Confirm whether OpenJDK 8 is already installed. To do this, in a terminal session in the AWS Cloud9 IDE, run the command line version of the Java runner with the `-version` option. (To start a new terminal session, on the menu bar, choose **Window**, **New Terminal**.)

   ```
   java -version
   ```

   Based on the output of the preceding command, do one of the following:
   + If the output states that the `java` command isn't found, continue with step 2 in this procedure to install OpenJDK 8.
   + If the output contains values starting with `Java(TM)`, `Java Runtime Environment`, `Java SE`, `J2SE`, or `Java2`, the OpenJDK isn't installed or isn't set as the default Java development toolset. Continue with step 2 in this procedure to install OpenJDK 8, and then switch to using OpenJDK 8.
   + If the output contains values starting with `java version 1.8` and `OpenJDK`, skip ahead to [Step 2: Add code](#sample-java-code). OpenJDK 8 is installed correctly for this sample.
   + If the output contains a `java version` less than `1.8` and values starting with `OpenJDK`, continue with step 2 in this procedure to upgrade the installed OpenJDK version to OpenJDK 8.
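
   If you want to script that decision, the following sketch classifies a `java -version` banner. The sample string is illustrative; on your instance, you could capture the real one with `ver="$(java -version 2>&1 | head -n 1)"`:

   ```
   # Classify a sample "java -version" banner string.
   ver='openjdk version "1.8.0_392"'
   case "$ver" in
     *'openjdk version "1.8'*) echo "OpenJDK 8: ready for this sample" ;;
     *openjdk*)                echo "OpenJDK found, but not version 8" ;;
     *)                        echo "OpenJDK is not the default toolset" ;;
   esac
   ```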

1. Ensure the latest security updates and bug fixes are installed. To do this, run the yum tool (for Amazon Linux) or the apt tool (for Ubuntu Server) with the `update` command.

   For Amazon Linux:

   ```
   sudo yum -y update
   ```

   For Ubuntu Server:

   ```
   sudo apt update
   ```

1. Install OpenJDK 8. To do this, run the yum tool (for Amazon Linux) or the apt tool (for Ubuntu Server) with the `install` command, specifying the OpenJDK 8 package.

   For Amazon Linux:

   ```
   sudo yum -y install java-1.8.0-openjdk-devel
   ```

   For Ubuntu Server:

   ```
   sudo apt install -y openjdk-8-jdk
   ```

   For more information, see [How to download and install prebuilt OpenJDK packages](https://openjdk.org/install/) on the OpenJDK website.

1. Switch or upgrade the default Java development toolset to OpenJDK 8. To do this, run the `update-alternatives` command with the `--config` option. Run this command twice to switch or upgrade the command line versions of the Java runner and compiler.

   ```
   sudo update-alternatives --config java
   sudo update-alternatives --config javac
   ```

   At each prompt, type the selection number for OpenJDK 8 (the one that contains `java-1.8`).

1. Confirm that the command line versions of the Java runner and compiler are using OpenJDK 8. To do this, run the command line versions of the Java runner and compiler with the `-version` option.

   ```
   java -version
   javac -version
   ```

   If OpenJDK 8 is installed and set correctly, the Java runner version output contains a value starting with `openjdk version 1.8`, and the Java compiler version output starts with the value `javac 1.8`.

## Step 2: Add code
<a name="sample-java-code"></a>

In the AWS Cloud9 IDE, create a file with the following code, and save the file with the name `hello.java`. (To create a file, on the menu bar, choose **File**, **New File**. To save the file, choose **File**, **Save**.)

```
public class hello {

  public static void main(String []args) {
    System.out.println("Hello, World!");

    System.out.println("The sum of 2 and 3 is 5.");

    int sum = Integer.parseInt(args[0]) + Integer.parseInt(args[1]);

    System.out.format("The sum of %s and %s is %s.\n",
      args[0], args[1], Integer.toString(sum));
  }
}
```

## Step 3: Build and run the code
<a name="sample-java-run"></a>

1. Use the command line version of the Java compiler to compile the `hello.java` file into a `hello.class` file. To do this, using the terminal in the AWS Cloud9 IDE, from the same directory as the `hello.java` file, run the Java compiler, specifying the `hello.java` file.

   ```
   javac hello.java
   ```

1. Use the command line version of the Java runner to run the `hello.class` file. To do this, from the same directory as the `hello.class` file, run the Java runner, specifying the name of the `hello` class that was declared in the `hello.java` file, with two integers to add (for example, `5` and `9`).

   ```
   java hello 5 9
   ```

1. Compare your output.

   ```
   Hello, World!
   The sum of 2 and 3 is 5.
   The sum of 5 and 9 is 14.
   ```

## Step 4: Set up to use the AWS SDK for Java
<a name="sample-java-sdk"></a>

You can enhance this sample to use the AWS SDK for Java to create an Amazon S3 bucket, list your available buckets, and then delete the bucket you just created.

In this step, you install [Apache Maven](https://maven.apache.org/) or [Gradle](https://gradle.org/) in your environment. Maven and Gradle are common build automation systems that can be used with Java projects. After you install Maven or Gradle, you use it to generate a new Java project. In this new project, you add a reference to the AWS SDK for Java. This AWS SDK for Java provides a convenient way to interact with AWS services such as Amazon S3, from your Java code.

**Topics**
+ [Set up with Maven](#sample-java-sdk-maven)
+ [Set up with Gradle](#sample-java-sdk-gradle)

### Set up with Maven
<a name="sample-java-sdk-maven"></a>

1. Install Maven in your environment. To see whether Maven is already installed, using the terminal in the AWS Cloud9 IDE, run Maven with the `-version` option.

   ```
   mvn -version
   ```

   If successful, the output contains the Maven version number. If Maven is already installed, skip ahead to step 4 in this procedure to use Maven to generate a new Java project in your environment.

1. Install Maven by using the terminal to run the following commands. 

   For Amazon Linux, the following commands get information about the package repository where Maven is stored, and then use this information to install Maven.

   ```
   sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
   sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo
   sudo yum install -y apache-maven
   ```

   For more information about the preceding commands, see [Extra Packages for Enterprise Linux (EPEL)](https://docs.fedoraproject.org/en-US/epel/) on the Fedora Project Wiki website.

   For Ubuntu Server, run the following command instead.

   ```
   sudo apt install -y maven
   ```

1. Confirm the installation by running Maven with the `-version` option.

   ```
   mvn -version
   ```

1. Use Maven to generate a new Java project. To do this, use the terminal to run the following command from the directory where you want Maven to generate the project (for example, the root directory of your environment).

   ```
   mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
   ```

   The preceding command creates the following directory structure for the project in your environment.

   ```
   my-app
     |- src
     |   |- main
     |   |   `- java
     |   |       `- com
     |   |           `- mycompany
     |   |               `- app
     |   |                   `- App.java
     |   `- test
     |       `- java
     |           `- com
     |               `- mycompany
     |                   `- app
     |                       `- AppTest.java
     `- pom.xml
   ```

   For more information about the preceding directory structure, see [Maven Quickstart Archetype](https://maven.apache.org/archetypes/maven-archetype-quickstart/) and [Introduction to the Standard Directory Layout](https://maven.apache.org/guides/introduction/introduction-to-the-standard-directory-layout.html) on the Apache Maven Project website.

1. Modify the Project Object Model (POM) file for the project. (A POM file defines a Maven project's settings.) To do this, from the **Environment** window, open the `my-app/pom.xml` file. In the editor, replace the file's current contents with the following code, and then save the `pom.xml` file.

   ```
   <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
     <modelVersion>4.0.0</modelVersion>
     <groupId>com.mycompany.app</groupId>
     <artifactId>my-app</artifactId>
     <packaging>jar</packaging>
     <version>1.0-SNAPSHOT</version>
     <build>
       <plugins>
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-assembly-plugin</artifactId>
           <version>3.6.0</version>
           <configuration>
             <descriptorRefs>
               <descriptorRef>jar-with-dependencies</descriptorRef>
             </descriptorRefs>
             <archive>
               <manifest>
                 <mainClass>com.mycompany.app.App</mainClass>
               </manifest>
             </archive>
           </configuration>
           <executions>
             <execution>
               <phase>package</phase>
                 <goals>
                   <goal>single</goal>
                 </goals>
             </execution>
           </executions>
         </plugin>
       </plugins>
     </build>
     <dependencies>
       <dependency>
         <groupId>junit</groupId>
         <artifactId>junit</artifactId>
         <version>3.8.1</version>
         <scope>test</scope>
       </dependency>
       <dependency>
         <groupId>com.amazonaws</groupId>
         <artifactId>aws-java-sdk</artifactId>
         <version>1.11.330</version>
       </dependency>
     </dependencies>
   </project>
   ```

   The preceding POM file includes project settings that specify declarations such as the following:
   + The `artifactId` setting of `my-app` sets the project's root directory name, and the `groupId` setting of `com.mycompany.app` sets the `com/mycompany/app` subdirectory structure and the `package` declaration in the `App.java` and `AppTest.java` files.
   + The `artifactId` setting of `my-app`, with the `packaging` setting of `jar`, the `version` setting of `1.0-SNAPSHOT`, and the `descriptorRef` setting of `jar-with-dependencies` set the output JAR file's name of `my-app-1.0-SNAPSHOT-jar-with-dependencies.jar`.
   + The `plugin` section declares that a single JAR, which includes all dependencies, will be built.
   + The `dependency` section with the `groupId` setting of `com.amazonaws` and the `artifactId` setting of `aws-java-sdk` includes the AWS SDK for Java library files. The AWS SDK for Java version to use is declared by the `version` setting. To use a different version, replace this version number.
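
   To illustrate how those settings determine the output file name, the JAR name can be recomputed from the POM values like this (the shell variables are only for demonstration; Maven does this for you during `mvn package`):

   ```
   # Recompute the JAR file name that "mvn package" will produce
   # from the artifactId, version, and descriptorRef POM settings.
   artifactId=my-app
   version=1.0-SNAPSHOT
   descriptorRef=jar-with-dependencies
   echo "${artifactId}-${version}-${descriptorRef}.jar"
   ```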

Skip ahead to [Step 5: Set up AWS credentials management in your environment](#sample-java-sdk-creds).

### Set up with Gradle
<a name="sample-java-sdk-gradle"></a>

1. Install Gradle in your environment. To see whether Gradle is already installed, using the terminal in the AWS Cloud9 IDE, run Gradle with the `-version` option.

   ```
   gradle -version
   ```

   If successful, the output contains the Gradle version number. If Gradle is already installed, skip ahead to step 4 in this procedure to use Gradle to generate a new Java project in your environment.

1. Install Gradle by using the terminal to run the following commands. These commands install and run the SDKMAN! tool, and then use SDKMAN! to install the latest version of Gradle.

   ```
   curl -s "https://get.sdkman.io" | bash
   source "$HOME/.sdkman/bin/sdkman-init.sh"
   sdk install gradle
   ```

   For more information about the preceding commands, see [Installation](https://sdkman.io/install) on the SDKMAN! website and [Install with a package manager](https://gradle.org/install/#with-a-package-manager) on the Gradle website.

1. Confirm the installation by running Gradle with the `-version` option.

   ```
   gradle -version
   ```

1. Use Gradle to generate a new Java project in your environment. To do this, use the terminal to run the following commands to create a directory for the project, and then switch to that directory.

   ```
   mkdir my-app
   cd my-app
   ```

1. Run the following command to have Gradle generate a new Java application project in the `my-app` directory in your environment.

   ```
   gradle init --type java-application
   ```

   The preceding command creates the following directory structure for the project in your environment.

   ```
   my-app
     |- .gradle
     |   `- (various supporting project folders and files)
     |- gradle
     |   `- (various supporting project folders and files)
     |- src
     |   |- main
     |   |    `- java
     |   |         `- App.java
     |   `- test
     |        `- java
     |             `- AppTest.java
     |- build.gradle
     |- gradlew
     |- gradlew.bat
     `- settings.gradle
   ```

1. Modify the `AppTest.java` for the project. (If you do not do this, the project might not build or run as expected). To do this, from the **Environment** window, open the `my-app/src/test/java/AppTest.java` file. In the editor, replace the file's current contents with the following code, and then save the `AppTest.java` file.

   ```
   import org.junit.Test;
   import static org.junit.Assert.*;
   
   public class AppTest {
     @Test public void testAppExists () {
       try {
         Class.forName("com.mycompany.app.App");
       } catch (ClassNotFoundException e) {
         fail("Should have a class named App.");
       }
     }
   }
   ```

1. Modify the `build.gradle` file for the project. (A `build.gradle` file defines a Gradle project's settings.) To do this, from the **Environment** window, open the `my-app/build.gradle` file. In the editor, replace the file's current contents with the following code, and then save the `build.gradle` file.

   ```
   apply plugin: 'java'
   apply plugin: 'application'
   
   repositories {
     jcenter()
     mavenCentral()
   }
   
   buildscript {
     repositories {
       mavenCentral()
     }
     dependencies {
       classpath "io.spring.gradle:dependency-management-plugin:1.0.3.RELEASE"
     }
   }
   
   apply plugin: "io.spring.dependency-management"
   
   dependencyManagement {
     imports {
       mavenBom 'com.amazonaws:aws-java-sdk-bom:1.11.330'
     }
   }
   
   dependencies {
     compile 'com.amazonaws:aws-java-sdk-s3'
     testCompile group: 'junit', name: 'junit', version: '4.12'
   }
   
   run {
     if (project.hasProperty("appArgs")) {
       args Eval.me(appArgs)
     }
   }
   
   mainClassName = 'App'
   ```

   The preceding `build.gradle` file includes project settings that specify declarations such as the following:
   + The `io.spring.dependency-management` plugin is used to import the AWS SDK for Java Maven Bill of Materials (BOM) to manage AWS SDK for Java dependencies for the project. `classpath` declares the version to use. To use a different version, replace this version number.
   +  `com.amazonaws:aws-java-sdk-s3` includes the Amazon S3 portion of the AWS SDK for Java library files. `mavenBom` declares the version to use. If you want to use a different version, replace this version number.

## Step 5: Set up AWS credentials management in your environment
<a name="sample-java-sdk-creds"></a>

Each time you use the AWS SDK for Java to call an AWS service, you must provide a set of AWS credentials with the call. These credentials determine whether the AWS SDK for Java has the appropriate permissions to make that call. If the credentials don't cover the appropriate permissions, the call will fail.

In this step, you store your credentials within the environment. To do this, follow the instructions in [Calling AWS services from an environment in AWS Cloud9](credentials.md), and then return to this topic.

For additional information, see [Set up AWS Credentials and Region for Development](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/setup.html#setup-credentials) in the *AWS SDK for Java Developer Guide*.
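
For reference, the `ProfileCredentialsProvider` used in the code for Step 6 reads credentials from the shared credentials file at `~/.aws/credentials`, which has the following shape (placeholder values shown):

```
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```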

## Step 6: Add AWS SDK code
<a name="sample-java-sdk-code"></a>

In this step, you add code to interact with Amazon S3 to create a bucket, list your available buckets, and then delete the bucket you just created.

From the **Environment** window, open the `my-app/src/main/java/com/mycompany/app/App.java` file for Maven or the `my-app/src/main/java/App.java` file for Gradle. In the editor, replace the file's current contents with the following code, and then save the `App.java` file.

```
package com.mycompany.app;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.CreateBucketRequest;

import java.util.List;

public class App {

    private static AmazonS3 s3;

    public static void main(String[] args) {
        if (args.length < 2) {
            System.out.format("Usage: <the bucket name> <the AWS Region to use>\n" +
                    "Example: my-test-bucket us-east-2\n");
            return;
        }

        String bucket_name = args[0];
        String region = args[1];

        s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new ProfileCredentialsProvider())
                .withRegion(region)
                .build();

        // List current buckets.
        ListMyBuckets();

        // Create the bucket.
        if (s3.doesBucketExistV2(bucket_name)) {
            System.out.format("\nCannot create the bucket. \n" +
                    "A bucket named '%s' already exists.", bucket_name);
            return;
        } else {
            try {
                System.out.format("\nCreating a new bucket named '%s'...\n\n", bucket_name);
                s3.createBucket(new CreateBucketRequest(bucket_name, region));
            } catch (AmazonS3Exception e) {
                System.err.println(e.getErrorMessage());
            }
        }

        // Confirm that the bucket was created.
        ListMyBuckets();

        // Delete the bucket.
        try {
            System.out.format("\nDeleting the bucket named '%s'...\n\n", bucket_name);
            s3.deleteBucket(bucket_name);
        } catch (AmazonS3Exception e) {
            System.err.println(e.getErrorMessage());
        }

        // Confirm that the bucket was deleted.
        ListMyBuckets();

    }

    private static void ListMyBuckets() {
        List<Bucket> buckets = s3.listBuckets();
        System.out.println("My buckets now are:");

        for (Bucket b : buckets) {
            System.out.println(b.getName());
        }
    }

}
```

## Step 7: Build and run the AWS SDK code
<a name="sample-java-sdk-run"></a>

To run the code from the previous step, run the following commands from the terminal. These commands use Maven or Gradle to create an executable JAR file for the project, and then use the Java runner to run the JAR. The JAR runs with the name of the bucket to create in Amazon S3 (for example, `my-test-bucket`) and the ID of the AWS Region to create the bucket in as input (for example, `us-east-2`).

For Maven, run the following commands.

```
cd my-app
mvn package
java -cp target/my-app-1.0-SNAPSHOT-jar-with-dependencies.jar com.mycompany.app.App my-test-bucket us-east-2
```

For Gradle, run the following commands.

```
gradle build
gradle run -PappArgs="['my-test-bucket', 'us-east-2']"
```

Compare your results to the following output.

```
My buckets now are:

Creating a new bucket named 'my-test-bucket'...

My buckets now are:

my-test-bucket

Deleting the bucket named 'my-test-bucket'...

My buckets now are:
```

## Step 8: Clean up
<a name="sample-java-clean-up"></a>

To prevent ongoing charges to your AWS account after you're done using this sample, you should delete the environment. For instructions, see [Deleting an environment in AWS Cloud9](delete-environment.md).

# C++ tutorial for AWS Cloud9
<a name="sample-cplusplus"></a>

This tutorial enables you to run C++ code in an AWS Cloud9 development environment. The code also uses resources provided by the [AWS SDK for C++](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/welcome.html), a modularized, cross-platform, open-source library you can use to connect to Amazon Web Services.

Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2 and Amazon S3. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

**Topics**
+ [Prerequisites](#sample-cplusplus-prereqs)
+ [Step 1: Install g++ and required dev packages](#sample-cplusplus-install)
+ [Step 2: Install CMake](#install-cmake)
+ [Step 3: Obtain and build the SDK for C++](#install-cmake)
+ [Step 4: Create C++ and CMakeLists files](#sample-cplusplus-sdk-code)
+ [Step 5: Build and run the C++ code](#build-and-run-cpp)
+ [Step 6: Clean up](#sample-cplusplus-clean-up)

## Prerequisites
<a name="sample-cplusplus-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Install g++ and required dev packages
<a name="sample-cplusplus-install"></a>

To build and run a C++ application, you need a utility such as `g++`, which is a C++ compiler provided by the [GNU Compiler Collection (GCC)](https://gcc.gnu.org/).

You also need to add header files (`-dev` packages) for `libcurl`, `libopenssl`, `libuuid`, `zlib`, and, optionally, `libpulse` for Amazon Polly support. 

The process of installing development tools varies slightly depending on whether you're using an Amazon Linux/Amazon Linux 2 instance or an Ubuntu instance.

------
#### [ Amazon Linux-based systems ]

You can check whether you already have `g++` installed by running the following command in the AWS Cloud9 terminal:

```
g++ --version
```

If `g++` isn't installed, you can install it as part of the package group called "Development Tools". These tools are added to an instance with the `yum groupinstall` command:

```
sudo yum groupinstall "Development Tools"
```

Run `g++ --version` again to confirm that the compiler has been installed.

Now install the packages for the required libraries using your system’s package manager: 

```
sudo yum install libcurl-devel openssl-devel libuuid-devel pulseaudio-libs-devel
```

------
#### [ Ubuntu-based systems ]

You can check whether you already have `g++` installed by running the following command in the AWS Cloud9 terminal:

```
g++ --version
```

If `g++` isn't installed, you can install it on an Ubuntu-based system by running the following commands:

```
sudo apt update
sudo apt install build-essential
sudo apt-get install manpages-dev
```

Run `g++ --version` again to confirm that the compiler has been installed.

Now install the packages for the required libraries using your system’s package manager: 

```
sudo apt-get install libcurl4-openssl-dev libssl-dev uuid-dev zlib1g-dev libpulse-dev
```

------

## Step 2: Install CMake
<a name="install-cmake"></a>

You need to install the `cmake` tool, which automates the process of building executable files from source code.

1. In the IDE terminal window, run the following command to obtain the required archive:

   ```
   wget https://cmake.org/files/v3.18/cmake-3.18.0.tar.gz
   ```

1. Extract the files from the archive and navigate to the directory that contains the unpacked files:

   ```
   tar xzf cmake-3.18.0.tar.gz
   cd cmake-3.18.0
   ```

1. Next, run a bootstrap script and install `cmake` by running the following commands:

   ```
   ./bootstrap
   make
   sudo make install
   ```

1. Confirm you've installed the tool by running the following command:

   ```
   cmake --version
   ```

## Step 3: Obtain and build the SDK for C++
<a name="install-cmake"></a>

To set up the AWS SDK for C++, you can either build the SDK yourself directly from the source or download the libraries using a package manager. You can find details on the available options in [Getting Started Using the AWS SDK for C++](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/getting-started.html) in the *AWS SDK for C++ Developer Guide*. 

This sample demonstrates using `git` to clone the SDK source code and `cmake` to build the SDK for C++.

1. Clone the remote repository and get all git submodules recursively for your AWS Cloud9 environment by running the following command in the terminal:

   ```
   git clone --recurse-submodules https://github.com/aws/aws-sdk-cpp
   ```

1. Navigate to the new `aws-sdk-cpp` directory, create a subdirectory to build the AWS SDK for C++ into, and then navigate to that subdirectory:

   ```
   cd aws-sdk-cpp
   mkdir sdk_build
   cd sdk_build
   ```

1. 
**Note**  
To save time, this step builds only the Amazon S3 portion of the AWS SDK for C++. If you want to build the complete SDK, omit `-DBUILD_ONLY=s3` from the `cmake` command.  
Building the complete SDK for C++ can take more than an hour to complete, depending on the computing resources available to your Amazon EC2 instance or your own server.

   Use `cmake` to build the Amazon S3 portion of the SDK for C++ into the `sdk_build` directory by running the following command:

   ```
   cmake .. -DBUILD_ONLY=s3
   ```

1. Now run the `make install` command so that the built SDK can be accessed:

   ```
   sudo make install
   cd ..
   ```

## Step 4: Create C++ and CMakeLists files
<a name="sample-cplusplus-sdk-code"></a>

In this step, you create a C++ file that allows users of the project to interact with Amazon S3 buckets.

You also create a `CMakeLists.txt` file that provides instructions that are used by `cmake` to build your C++ application.

1. In the AWS Cloud9 IDE, create a file with this content, and save the file with the name `s3-demo.cpp` at the root (`/`) of your environment.

   ```
   #include <iostream>
   #include <aws/core/Aws.h>
   #include <aws/s3/S3Client.h>
   #include <aws/s3/model/Bucket.h>
   #include <aws/s3/model/CreateBucketConfiguration.h>
   #include <aws/s3/model/CreateBucketRequest.h>
   #include <aws/s3/model/DeleteBucketRequest.h>
   
   // Look for a bucket among all currently available Amazon S3 buckets.
   bool FindTheBucket(const Aws::S3::S3Client &s3Client,
                      const Aws::String &bucketName) {
   
       Aws::S3::Model::ListBucketsOutcome outcome = s3Client.ListBuckets();
   
       if (outcome.IsSuccess()) {
   
           std::cout << "Looking for a bucket named '" << bucketName << "'..."
                     << std::endl << std::endl;
   
           Aws::Vector<Aws::S3::Model::Bucket> bucket_list =
                   outcome.GetResult().GetBuckets();
   
           for (Aws::S3::Model::Bucket const &bucket: bucket_list) {
               if (bucket.GetName() == bucketName) {
                   std::cout << "Found the bucket." << std::endl << std::endl;
   
                   return true;
               }
           }
   
           std::cout << "Could not find the bucket." << std::endl << std::endl;
       } else {
           std::cerr << "listBuckets error: "
                     << outcome.GetError().GetMessage() << std::endl;
       }
   
       return outcome.IsSuccess();
   }
   
   // Create an Amazon S3 bucket.
   bool CreateTheBucket(const Aws::S3::S3Client &s3Client,
                        const Aws::String &bucketName,
                        const Aws::String &region) {
   
       std::cout << "Creating a bucket named '"
                 << bucketName << "'..." << std::endl << std::endl;
   
       Aws::S3::Model::CreateBucketRequest request;
       request.SetBucket(bucketName);
   
       if (region != "us-east-1") {
           Aws::S3::Model::CreateBucketConfiguration createBucketConfig;
           createBucketConfig.SetLocationConstraint(
                   Aws::S3::Model::BucketLocationConstraintMapper::GetBucketLocationConstraintForName(
                           region));
           request.SetCreateBucketConfiguration(createBucketConfig);
       }
   
       Aws::S3::Model::CreateBucketOutcome outcome =
               s3Client.CreateBucket(request);
   
       if (outcome.IsSuccess()) {
           std::cout << "Bucket created." << std::endl << std::endl;
       } else {
           std::cerr << "createBucket error: "
                     << outcome.GetError().GetMessage() << std::endl;
       }
   
       return outcome.IsSuccess();
   }
   
   // Delete an existing Amazon S3 bucket.
   bool DeleteTheBucket(const Aws::S3::S3Client &s3Client,
                        const Aws::String &bucketName) {
   
       std::cout << "Deleting the bucket named '"
                 << bucketName << "'..." << std::endl << std::endl;
   
       Aws::S3::Model::DeleteBucketRequest request;
       request.SetBucket(bucketName);
   
       Aws::S3::Model::DeleteBucketOutcome outcome =
               s3Client.DeleteBucket(request);
   
       if (outcome.IsSuccess()) {
           std::cout << "Bucket deleted." << std::endl << std::endl;
       } else {
           std::cerr << "deleteBucket error: "
                     << outcome.GetError().GetMessage() << std::endl;
       }
   
       return outcome.IsSuccess();
   }
   
   #ifndef EXCLUDE_MAIN_FUNCTION
   // Create an S3 bucket and then delete it.
   // Before and after creating the bucket, and again after deleting the bucket,
   // try to determine whether that bucket still exists. 
   int main(int argc, char *argv[]) {
   
       if (argc < 3) {
           std::cout << "Usage: s3-demo <bucket name> <AWS Region>" << std::endl
                     << "Example: s3-demo my-bucket us-east-1" << std::endl;
           return 1;
       }
   
       Aws::SDKOptions options;
       Aws::InitAPI(options);
       {
           Aws::String bucketName = argv[1];
           Aws::String region = argv[2];
   
           Aws::Client::ClientConfiguration config;
   
           config.region = region;
   
           Aws::S3::S3Client s3Client(config);
   
           if (!FindTheBucket(s3Client, bucketName)) {
               return 1;
           }
   
           if (!CreateTheBucket(s3Client, bucketName, region)) {
               return 1;
           }
   
           if (!FindTheBucket(s3Client, bucketName)) {
               return 1;
           }
   
           if (!DeleteTheBucket(s3Client, bucketName)) {
               return 1;
           }
   
           if (!FindTheBucket(s3Client, bucketName)) {
               return 1;
           }
       }
       Aws::ShutdownAPI(options);
   
       return 0;
   }
   #endif  // EXCLUDE_MAIN_FUNCTION
   ```

1. Create a second file with this content, and save the file with the name `CMakeLists.txt` at the root (`/`) of your environment. This file enables you to build your code into an executable file.

   ```
   # A minimal CMakeLists.txt file for the AWS SDK for C++.
   
   # The minimum version of CMake that will work.
   cmake_minimum_required(VERSION 2.8)
   
   # The project name.
   project(s3-demo)
   
   # Locate the AWS SDK for C++ package.
   set(AWSSDK_ROOT_DIR "/usr/local/")
   set(BUILD_SHARED_LIBS ON)
   find_package(AWSSDK REQUIRED COMPONENTS s3)
   
   # The executable name and its source files.
   add_executable(s3-demo s3-demo.cpp)
   
   # The libraries used by your executable.
   target_link_libraries(s3-demo ${AWSSDK_LINK_LIBRARIES})
   ```

## Step 5: Build and run the C++ code
<a name="build-and-run-cpp"></a>

1. In the root directory of your environment, where you saved the `s3-demo.cpp` and `CMakeLists.txt` files, run `cmake` and `make` to build your project:

   ```
   cmake .
   make
   ```

1. You can now run your program from the command line. In the following command, replace `my-unique-bucket-name` with a unique name for the Amazon S3 bucket and, if necessary, replace `us-east-1` with the identifier of another AWS Region where you want to create a bucket.

   ```
   ./s3-demo my-unique-bucket-name us-east-1
   ```

   If the program runs successfully, output similar to the following is returned: 

   ```
   Looking for a bucket named 'my-unique-bucket-name'...
   
   Could not find the bucket.
   
   Creating a bucket named 'my-unique-bucket-name'...
   
   Bucket created.
   
   Looking for a bucket named 'my-unique-bucket-name'...
   
   Found the bucket.
   
   Deleting the bucket named 'my-unique-bucket-name'...
   
   Bucket deleted.
   
   Looking for a bucket named 'my-unique-bucket-name'...
   
   Could not find the bucket.
   ```

## Step 6: Clean up
<a name="sample-cplusplus-clean-up"></a>

To prevent ongoing charges to your AWS account after you're finished with this sample, delete the environment. For instructions, see [Deleting an environment in AWS Cloud9](delete-environment.md).

# Python tutorial for AWS Cloud9
<a name="sample-python"></a>

This tutorial shows you how to run Python code in an AWS Cloud9 development environment.

Following this tutorial might result in charges to your AWS account. These include possible charges for services such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

**Topics**
+ [Prerequisites](#sample-python-prereqs)
+ [Step 1: Install Python](#sample-python-install)
+ [Step 2: Add code](#sample-python-code)
+ [Step 3: Run the code](#sample-python-run)
+ [Step 4: Install and configure the AWS SDK for Python (Boto3)](#sample-python-sdk)
+ [Step 5: Add AWS SDK code](#sample-python-sdk-code)
+ [Step 6: Run the AWS SDK code](#sample-python-sdk-run)
+ [Step 7: Clean up](#sample-python-clean-up)

## Prerequisites
<a name="sample-python-prereqs"></a>

Before you use this tutorial, be sure to meet the following requirements.
+ **You have an AWS Cloud9 EC2 development environment**

  This tutorial assumes that you have an EC2 environment, and that the environment is connected to an Amazon EC2 instance running Amazon Linux or Ubuntu Server. See [Creating an EC2 Environment](create-environment-main.md) for details.

  If you have a different type of environment or operating system, you might need to adapt this tutorial's instructions.
+ **You have opened the AWS Cloud9 IDE for that environment**

  When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. See [Opening an environment in AWS Cloud9](open-environment.md) for details.

## Step 1: Install Python
<a name="sample-python-install"></a>

1. In a terminal session in the AWS Cloud9 IDE, confirm whether Python is already installed by running the `python --version` command. (To start a new terminal session, on the menu bar choose **Window**, **New Terminal**.) If Python is installed, skip ahead to [Step 2: Add code](#sample-python-code).

1. Run the `yum update` (for Amazon Linux) or `apt update` (for Ubuntu Server) command to help ensure the latest security updates and bug fixes are installed.

   For Amazon Linux:

   ```
   sudo yum -y update
   ```

   For Ubuntu Server:

   ```
   sudo apt update
   ```

1. Install Python by running the `install` command.

   For Amazon Linux:

   ```
   sudo yum -y install python3
   ```

   For Ubuntu Server:

   ```
   sudo apt-get install python3
   ```

## Step 2: Add code
<a name="sample-python-code"></a>

In the AWS Cloud9 IDE, create a file with the following content and save the file with the name `hello.py`. (To create a file, on the menu bar choose **File**, **New File**. To save the file, choose **File**, **Save**.)

```
import sys

print('Hello, World!')

print('The sum of 2 and 3 is 5.')

sum = int(sys.argv[1]) + int(sys.argv[2])

print('The sum of {0} and {1} is {2}.'.format(sys.argv[1], sys.argv[2], sum))
```

## Step 3: Run the code
<a name="sample-python-run"></a>

1. In the AWS Cloud9 IDE, on the menu bar choose **Run**, **Run Configurations**, **New Run Configuration**.

1. On the **[New] - Stopped** tab, enter `hello.py 5 9` for **Command**. In the code, `5` represents `sys.argv[1]`, and `9` represents `sys.argv[2]`.

1. Choose **Run** and compare your output.

   ```
   Hello, World!
   The sum of 2 and 3 is 5.
   The sum of 5 and 9 is 14.
   ```

1. By default, AWS Cloud9 automatically selects a runner for your code. To change the runner, choose **Runner**, and then choose **Python 2** or **Python 3**.
**Note**  
You can create custom runners for specific versions of Python. For details, see [Create a Builder or Runner](build-run-debug.md#build-run-debug-create-builder-runner).
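A custom runner is a small JSON file saved in the `/.c9/runners` folder of your environment. As an illustrative sketch only (the `python3.8` command and the `Python3.8.run` file name are hypothetical; adjust them to the version installed in your environment), such a runner might look like this:

```
{
  "cmd" : ["python3.8", "$file", "$args"],
  "info" : "Running $file..."
}
```

After saving the file, the runner appears in the **Runner** list under the name you gave the file.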

## Step 4: Install and configure the AWS SDK for Python (Boto3)
<a name="sample-python-sdk"></a>

The AWS SDK for Python (Boto3) enables you to use Python code to interact with AWS services like Amazon S3. For example, you can use the SDK to create an Amazon S3 bucket, list your available buckets, and then delete the bucket you just created.

### Install pip
<a name="sample-python-sdk-install-pip"></a>

In the AWS Cloud9 IDE, confirm whether `pip` is already installed for the active version of Python by running the `python -m pip --version` command. If `pip` is installed, skip to the next section.

To install `pip`, run the following commands. Because `sudo` runs in a different environment from your user, you must specify the version of Python to use if it differs from the currently aliased version.

```
curl -O https://bootstrap.pypa.io/get-pip.py # Get the install script.
sudo python3 get-pip.py                     # Install pip for Python 3.
python -m pip --version                      # Verify pip is installed.
rm get-pip.py                                # Delete the install script.
```

For more information, see [Installation](https://pip.pypa.io/en/stable/installing/) on the `pip` website.

### Install the AWS SDK for Python (Boto3)
<a name="sample-python-sdk-install-sdk"></a>

After you install `pip`, install the AWS SDK for Python (Boto3) by running the `pip install` command.

```
sudo python3 -m pip install boto3  # Install boto3 for Python 3.
python -m pip show boto3            # Verify boto3 is installed for the current version of Python.
```

For more information, see the "Installation" section of [Quickstart](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html) in the AWS SDK for Python (Boto3).

### Set up credentials in your environment
<a name="sample-python-sdk-credentials"></a>

Each time you use the AWS SDK for Python (Boto3) to call an AWS service, you must provide a set of credentials with the call. These credentials determine whether the SDK has the necessary permissions to make the call. If the credentials don't cover the necessary permissions, the call fails.

To store your credentials within the environment, follow the instructions in [Calling AWS services from an environment in AWS Cloud9](credentials.md), and then return to this topic.

For additional information, see [Credentials](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html) in the AWS SDK for Python (Boto3).
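For reference, one place Boto3 looks for credentials is the shared credentials file at `~/.aws/credentials`. A minimal sketch of that file's format follows; the key values are placeholders, not working credentials. Depending on how your environment manages credentials (see the topics linked above), you may not need to create this file manually:

```
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```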

## Step 5: Add AWS SDK code
<a name="sample-python-sdk-code"></a>

Add code that uses Amazon S3 to create a bucket, list your available buckets, and optionally delete the bucket you just created.

In the AWS Cloud9 IDE, create a file with the following content and save the file with the name `s3.py`.

```
import sys
import boto3
from botocore.exceptions import ClientError


def list_my_buckets(s3_resource):
    print("Buckets:\n\t", *[b.name for b in s3_resource.buckets.all()], sep="\n\t")


def create_and_delete_my_bucket(s3_resource, bucket_name, keep_bucket):
    list_my_buckets(s3_resource)

    try:
        print("\nCreating new bucket:", bucket_name)
        bucket = s3_resource.create_bucket(
            Bucket=bucket_name,
            CreateBucketConfiguration={
                "LocationConstraint": s3_resource.meta.client.meta.region_name
            },
        )
    except ClientError as e:
        print(
            f"Couldn't create a bucket for the demo. Here's why: "
            f"{e.response['Error']['Message']}"
        )
        raise

    bucket.wait_until_exists()
    list_my_buckets(s3_resource)

    if not keep_bucket:
        print("\nDeleting bucket:", bucket.name)
        bucket.delete()

        bucket.wait_until_not_exists()
        list_my_buckets(s3_resource)
    else:
        print("\nKeeping bucket:", bucket.name)


def main():
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("bucket_name", help="The name of the bucket to create.")
    parser.add_argument("region", help="The region in which to create your bucket.")
    parser.add_argument(
        "--keep_bucket",
        help="Keeps the created bucket. When not "
        "specified, the bucket is deleted "
        "at the end of the demo.",
        action="store_true",
    )

    args = parser.parse_args()
    s3_resource = (
        boto3.resource("s3", region_name=args.region)
        if args.region
        else boto3.resource("s3")
    )
    try:
        create_and_delete_my_bucket(s3_resource, args.bucket_name, args.keep_bucket)
    except ClientError:
        print("Exiting the demo.")


if __name__ == "__main__":
    main()
```

## Step 6: Run the AWS SDK code
<a name="sample-python-sdk-run"></a>

1. On the menu bar, choose **Run**, **Run Configurations**, **New Run Configuration**.

1. For **Command**, enter `s3.py my-test-bucket us-west-2`, where `my-test-bucket` is the name of the bucket to create, and `us-west-2` is the ID of the AWS Region where your bucket is created. By default, your bucket is deleted before the script exits. To keep your bucket, add `--keep_bucket` to your command. For a list of AWS Region IDs, see [Amazon Simple Storage Service Endpoints and Quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *AWS General Reference*.
**Note**  
Amazon S3 bucket names must be unique across AWS—not just your AWS account.

1. Choose **Run**, and compare your output.

   ```
   Buckets:
   
           a-pre-existing-bucket
   
   Creating new bucket: my-test-bucket
   Buckets:
   
           a-pre-existing-bucket
           my-test-bucket
   
   Deleting bucket: my-test-bucket
   Buckets:
   
           a-pre-existing-bucket
   ```

## Step 7: Clean up
<a name="sample-python-clean-up"></a>

To prevent ongoing charges to your AWS account after you're done with this tutorial, delete the AWS Cloud9 environment. For instructions, see [Deleting an environment in AWS Cloud9](delete-environment.md).

# .NET tutorial for AWS Cloud9
<a name="sample-dotnetcore"></a>

This tutorial enables you to run some .NET code in an AWS Cloud9 development environment.

Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2 and Amazon S3. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

**Topics**
+ [Prerequisites](#sample-dotnetcore-prereqs)
+ [Step 1: Install required tools](#sample-dotnetcore-setup)
+ [Step 2 (Optional): Install the .NET CLI extension for Lambda functions](#sample-dotnetcore-lambda)
+ [Step 3: Create a .NET console application project](#sample-dotnetcore-app)
+ [Step 4: Add code](#sample-dotnetcore-code)
+ [Step 5: Build and run the code](#sample-dotnetcore-run)
+ [Step 6: Create and set up a .NET console application project that uses the AWS SDK for .NET](#sample-dotnetcore-sdk)
+ [Step 7: Add AWS SDK code](#sample-dotnetcore-sdk-code)
+ [Step 8: Build and run the AWS SDK code](#sample-dotnetcore-sdk-run)
+ [Step 9: Clean up](#sample-dotnetcore-clean-up)

## Prerequisites
<a name="sample-dotnetcore-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Install required tools
<a name="sample-dotnetcore-setup"></a>

In this step, you install the .NET SDK into your environment, which is required to run this sample.

1. Confirm whether the latest version of the .NET SDK is already installed in your environment. To do this, in a terminal session in the AWS Cloud9 IDE, run the .NET Core command line interface (CLI) with the `--version` option.

   ```
   dotnet --version
   ```

   If the .NET Command Line Tools version is displayed, and the version is 2.0 or greater, skip ahead to [Step 3: Create a .NET console application project](#sample-dotnetcore-app). If the version is less than 2.0, or if an error such as `bash: dotnet: command not found` is displayed, continue on to install the .NET SDK.

1. For Amazon Linux, in a terminal session in the AWS Cloud9 IDE, run the following commands to help ensure the latest security updates and bug fixes are installed, and to install a `libunwind` package that the .NET SDK needs. (To start a new terminal session, on the menu bar, choose **Window, New Terminal**.)

   ```
   sudo yum -y update
   sudo yum -y install libunwind
   ```

   For Ubuntu Server, in a terminal session in the AWS Cloud9 IDE, run the following command to help ensure the latest security updates and bug fixes are installed. (To start a new terminal session, on the menu bar, choose **Window, New Terminal**.)

   ```
   sudo apt -y update
   ```

1. Download the .NET SDK installer script into your environment by running the following command.

   ```
   wget https://dot.net/v1/dotnet-install.sh
   ```

1. Make the installer script executable by the current user by running the following command.

   ```
   sudo chmod u=rx dotnet-install.sh
   ```

1. Run the installer script, which downloads and installs the .NET SDK, by running the following command.

   ```
   ./dotnet-install.sh -c Current
   ```

1. Add the .NET SDK to your `PATH`. To do this, in the shell profile for the environment (for example, the `.bashrc` file), add the `$HOME/.dotnet` subdirectory to the `PATH` variable for the environment, as follows.

   1. Open the `.bashrc` file for editing by using the `vi` command.

      ```
      vi ~/.bashrc
      ```

   1. For Amazon Linux, using the down arrow or `j` key, move to the line that starts with `export PATH`.

      For Ubuntu Server, move to the last line of the file by typing `G`.

   1. Using the right arrow or `$` key, move to the end of that line.

   1. Switch to insert mode by pressing the `i` key. (`-- INSERT --` appears at the end of the display.)

   1. For Amazon Linux, add the `$HOME/.dotnet` subdirectory to the `PATH` variable by typing `:$HOME/.dotnet`. Be sure to include the colon character (`:`). The line should now look similar to the following.

      ```
      export PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/.dotnet
      ```

      For Ubuntu Server, press the right arrow key and then press `Enter` twice, followed by typing the following line by itself at the end of the file.

      ```
      export PATH=$HOME/.dotnet:$PATH
      ```

   1. Save the file. To do this, press the `Esc` key (`-- INSERT --` disappears from the end of the display), type `:wq` (to write to and then quit the file), and then press `Enter`.
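   If you'd rather not edit `.bashrc` interactively in `vi`, you can append an equivalent export line with a single shell command instead. This adds a new line rather than editing the existing `export PATH` entry, but the effect on `PATH` is the same:

   ```
   echo 'export PATH=$PATH:$HOME/.dotnet' >> ~/.bashrc
   ```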

1. Load the .NET SDK by sourcing the `.bashrc` file.

   ```
   . ~/.bashrc
   ```

1. Confirm the .NET SDK is loaded by running the .NET CLI with the `--help` option.

   ```
   dotnet --help
   ```

   If successful, the .NET SDK version number is displayed, with additional usage information.

1. If you no longer want to keep the .NET SDK installer script in your environment, you can delete it as follows.

   ```
   rm dotnet-install.sh
   ```

## Step 2 (Optional): Install the .NET CLI extension for Lambda functions
<a name="sample-dotnetcore-lambda"></a>

Although not required for this tutorial, you can deploy AWS Lambda functions and AWS Serverless Application Model applications using the .NET CLI if you also install the `Amazon.Lambda.Tools` package. 

1. To install the package, run the following command:

   ```
   dotnet tool install -g Amazon.Lambda.Tools
   ```

1. Now set the `PATH` and `DOTNET_ROOT` environment variable to point to the installed Lambda tool. In the `.bashrc` file, find the `export PATH` section and edit it so that it appears similar to the following (see Step 1 for details on editing this file):

   ```
   export PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/.dotnet:$HOME/.dotnet/tools
   export DOTNET_ROOT=$HOME/.dotnet
   ```

## Step 3: Create a .NET console application project
<a name="sample-dotnetcore-app"></a>

In this step, you use .NET to create a project named `hello`. This project contains all of the files that .NET needs to run a simple application from the terminal in the IDE. The application's code is written in C#.

Create a .NET console application project. To do this, run the .NET CLI with the `new` command, specifying the console application project template type and the programming language to use (in this sample, C#).

The `-n` option outputs the project to a new directory named `hello`. You then navigate to that directory.

```
dotnet new console -lang C# -n hello
cd hello
```

The preceding command adds a subdirectory named `obj` with several files, and some additional standalone files, to the `hello` directory. You should note the following two key files:
+ The `hello/hello.csproj` file contains information about the console application project.
+ The `hello/Program.cs` file contains the application's code to run.
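For orientation, the generated `hello.csproj` is typically just a few lines similar to the following sketch (the `TargetFramework` value depends on the SDK version you installed, so yours may differ):

```
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>

</Project>
```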

## Step 4: Add code
<a name="sample-dotnetcore-code"></a>

In this step, you add some code to the application.

From the **Environment** window in the AWS Cloud9 IDE, open the `hello/Program.cs` file.

In the editor, replace the file's current contents with the following code, and then save the `Program.cs` file.

```
using System;

namespace hello
{
  class Program
  {
    static void Main(string[] args)
    {
     if (args.Length < 2) {
       Console.WriteLine("Please provide 2 numbers");
       return;
     }

     Console.WriteLine("Hello, World!");

     Console.WriteLine("The sum of 2 and 3 is 5.");

     int sum = Int32.Parse(args[0]) + Int32.Parse(args[1]);

     Console.WriteLine("The sum of {0} and {1} is {2}.",
     args[0], args[1], sum);

    }
  }
}
```

## Step 5: Build and run the code
<a name="sample-dotnetcore-run"></a>

In this step, you build the project and its dependencies into a set of binary files, including a runnable application file. Then you run the application.

1. In the IDE, create a builder for .NET as follows.

   1. On the menu bar, choose **Run, Build System, New Build System**.

   1. On the **My Builder.build** tab, replace the tab's contents with the following code.

      ```
      {
        "cmd" : ["dotnet", "build"],
        "info" : "Building..."
      }
      ```

   1. Choose **File, Save As**.

   1. For **Filename**, type `.NET.build`.

   1. For **Folder**, type `/.c9/builders`.

   1. Choose **Save**.

1. With the contents of the `Program.cs` file displayed in the editor, choose **Run, Build System, .NET**. Then choose **Run, Build**.

   This builder adds a `bin` subdirectory to the `hello` directory and a `Debug` subdirectory to the `hello/obj` subdirectory. Note the following three key files.
   + The `hello/bin/Debug/netcoreapp3.1/hello.dll` file is the runnable application file.
   + The `hello/bin/Debug/netcoreapp3.1/hello.deps.json` file lists the application's dependencies.
   + The `hello/bin/Debug/netcoreapp3.1/hello.runtimeconfig.json` file specifies the shared runtime and its version for the application.
**Note**  
The folder name, `netcoreapp3.1`, reflects the version of the .NET SDK used in this example. You may see a different number in the folder name depending on the version you've installed.

1. Create a runner for .NET as follows.

   1. On the menu bar, choose **Run, Run With, New Runner**.

   1. On the **My Runner.run** tab, replace the tab's contents with the following code.

      ```
      {
        "cmd" : ["dotnet", "run", "$args"],
        "working_dir": "$file_path",
        "info" : "Running..."
      }
      ```

   1. Choose **File, Save As**.

   1. For **Filename**, type `.NET.run`.

   1. For **Folder**, type `/.c9/runners`.

   1. Choose **Save**.

1. Run the application with two integers to add (for example, `5` and `9`) as follows.

   1. With the contents of the `Program.cs` file displayed in the editor, choose **Run, Run Configurations, New Run Configuration**.

   1. In the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **.NET**.

   1. In the **Command** box, type `hello 5 9`.

   1. Choose **Run**.

      By default, this runner instructs .NET to run the `hello.dll` file in the `hello/bin/Debug/netcoreapp3.1` directory.

      Compare your output to the following.

      ```
      Hello, World!
      The sum of 2 and 3 is 5.
      The sum of 5 and 9 is 14.
      ```

## Step 6: Create and set up a .NET console application project that uses the AWS SDK for .NET
<a name="sample-dotnetcore-sdk"></a>

You can enhance this sample to use the AWS SDK for .NET to create an Amazon S3 bucket, list your available buckets, and then delete the bucket you just created.

In this new project, you add a reference to the AWS SDK for .NET. The AWS SDK for .NET provides a convenient way to interact with AWS services such as Amazon S3, from your .NET code. You then set up AWS credentials management in your environment. The AWS SDK for .NET needs these credentials to interact with AWS services.

### To create the project
<a name="sample-dotnetcore-sdk-create"></a>

1. Create a .NET console application project. To do this, run the .NET CLI with the ** `new` ** command, specifying the console application project template type and the programming language to use. 

   The `-n` option specifies the name of the project, so the project is output to a new directory named `s3`. You then navigate to that directory.

   ```
   dotnet new console -lang C# -n s3
   cd s3
   ```

1. Add a project reference to the Amazon S3 package in the AWS SDK for .NET. To do this, run the .NET CLI with the ** `add package` ** command, specifying the name of the Amazon S3 package in NuGet. (NuGet defines how packages for .NET are created, hosted, and consumed, and provides the tools for each of those roles.)

   ```
   dotnet add package AWSSDK.S3
   ```

   When you add a project reference to the Amazon S3 package, NuGet also adds a project reference to the rest of the AWS SDK for .NET.
**Note**  
For the names and versions of other AWS related packages in NuGet, see [NuGet packages tagged with aws-sdk](https://www.nuget.org/packages?q=Tags%3A%22aws-sdk%22) on the NuGet website.
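
To verify which NuGet packages your project now references, you can list them with the .NET CLI. A minimal sketch, assuming you run it from the `s3` project directory and the `dotnet` CLI is installed; it prints a fallback message otherwise.

```shell
# List the NuGet package references for the current project, if dotnet is available.
command -v dotnet >/dev/null 2>&1 && dotnet list package || echo "dotnet not found"
```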

### To set up AWS credentials management
<a name="sample-dotnetcore-sdk-creds"></a>

Each time you use the AWS SDK for .NET to call an AWS service, you must provide a set of AWS credentials with the call. These credentials determine whether the AWS SDK for .NET has the appropriate permissions to make that call. If the credentials don't cover the appropriate permissions, the call will fail.

To store your credentials within the environment, follow the instructions in [Calling AWS services from an environment in AWS Cloud9](credentials.md), and then return to this topic.

For additional information, see [Configuring AWS Credentials](https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/net-dg-config-creds.html) in the *AWS SDK for .NET Developer Guide*.
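
One common way to supply these credentials (among the options covered in the linked topics) is through environment variables, which the AWS SDKs read automatically. The values below are AWS's documented placeholders, not working credentials:

```shell
# Placeholder values; replace them with credentials from your own AWS account.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_REGION=us-east-2
```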

## Step 7: Add AWS SDK code
<a name="sample-dotnetcore-sdk-code"></a>

In this step, you add code to interact with Amazon S3 to create a bucket, delete the bucket you just created, and then list your available buckets.

From the **Environment** window in the AWS Cloud9 IDE, open the `s3/Program.cs` file. In the editor, replace the file's current contents with the following code, and then save the `Program.cs` file.

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
using System;
using System.Threading.Tasks;
     
namespace s3
{
  class Program
  {
   async static Task Main(string[] args)
   {
    if (args.Length < 2) {
      Console.WriteLine("Usage: <the bucket name> <the AWS Region to use>");
      Console.WriteLine("Example: my-test-bucket us-east-2");
      return;
    }
     
    if (args[1] != "us-east-2") {
      Console.WriteLine("Cannot continue. The only supported AWS Region ID is " +
      "'us-east-2'.");
       return;
     }
         
      var bucketRegion = RegionEndpoint.USEast2;
      // Note: You could add more valid AWS Regions above as needed.
     
      using (var s3Client = new AmazonS3Client(bucketRegion)) {
      var bucketName = args[0];
        
      // Create the bucket.
      try
      {
       if (await AmazonS3Util.DoesS3BucketExistV2Async(s3Client, bucketName))
       {
         Console.WriteLine("Cannot continue. Cannot create bucket. \n" +
         "A bucket named '{0}' already exists.", bucketName);
         return;
       } else {
         Console.WriteLine("\nCreating the bucket named '{0}'...", bucketName);
         await s3Client.PutBucketAsync(bucketName);
         }
       }
       catch (AmazonS3Exception e)
       {
        Console.WriteLine("Cannot continue. {0}", e.Message);
       }
       catch (Exception e)
       {
        Console.WriteLine("Cannot continue. {0}", e.Message);
       }
        
       // Confirm that the bucket was created.
       if (await AmazonS3Util.DoesS3BucketExistV2Async(s3Client, bucketName))
       {
          Console.WriteLine("Created the bucket named '{0}'.", bucketName);
       } else {
         Console.WriteLine("Did not create the bucket named '{0}'.", bucketName);
       }
        
       // Delete the bucket.
       Console.WriteLine("\nDeleting the bucket named '{0}'...", bucketName);
       await s3Client.DeleteBucketAsync(bucketName);
        
       // Confirm that the bucket was deleted.
       if (await AmazonS3Util.DoesS3BucketExistV2Async(s3Client, bucketName))
       {
          Console.WriteLine("Did not delete the bucket named '{0}'.", bucketName);
       } else {
         Console.WriteLine("Deleted the bucket named '{0}'.", bucketName);
       }
        
        // List current buckets.
       Console.WriteLine("\nMy buckets now are:");
       var response = await s3Client.ListBucketsAsync();
        
       foreach (var bucket in response.Buckets)
       {
       Console.WriteLine(bucket.BucketName);
       }
      }
    }
  }
}
```

## Step 8: Build and run the AWS SDK code
<a name="sample-dotnetcore-sdk-run"></a>

In this step, you build the project and its dependencies into a set of binary files, including a runnable application file. Then you run the application.

1. Build the project. To do this, with the contents of the `s3/Program.cs` file displayed in the editor, on the menu bar, choose **Run, Build**.

1. Run the application with the name of the Amazon S3 bucket to create and the ID of the AWS Region to create the bucket in (for example, `my-test-bucket` and `us-east-2`) as follows.

   1. With the contents of the `s3/Program.cs` file still displayed in the editor, choose **Run, Run Configurations, New Run Configuration**.

   1. In the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **.NET**.

   1. In the **Command** box, type the name of the application, the name of the Amazon S3 bucket to create, and the ID of the AWS Region to create the bucket in (for example, `s3 my-test-bucket us-east-2`).

   1. Choose **Run**.

      By default, this runner instructs .NET to run the `s3.dll` file in the `s3/bin/Debug/netcoreapp3.1` directory.

      Compare your results to the following output.

      ```
      Creating the bucket named 'my-test-bucket'...
      Created the bucket named 'my-test-bucket'.
      
      Deleting the bucket named 'my-test-bucket'...
      Deleted the bucket named 'my-test-bucket'.
      
      My buckets now are:
      ```

## Step 9: Clean up
<a name="sample-dotnetcore-clean-up"></a>

To prevent ongoing charges to your AWS account after you're done using this sample, you should delete the environment. For instructions, see [Deleting an environment in AWS Cloud9](delete-environment.md).

# Node.js tutorial for AWS Cloud9
<a name="sample-nodejs"></a>

This tutorial enables you to run some Node.js scripts in an AWS Cloud9 development environment.

Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2 and Amazon S3. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

**Topics**
+ [

## Prerequisites
](#sample-nodejs-prereqs)
+ [

## Step 1: Install required tools
](#sample-nodejs-install)
+ [

## Step 2: Add code
](#sample-nodejs-code)
+ [

## Step 3: Run the code
](#sample-nodejs-run)
+ [

## Step 4: Install and configure the AWS SDK for JavaScript in Node.js
](#sample-nodejs-sdk)
+ [

## Step 5: Add AWS SDK code
](#sample-nodejs-sdk-code)
+ [

## Step 6: Run the AWS SDK code
](#sample-nodejs-sdk-run)
+ [

## Step 7: Clean up
](#sample-nodejs-clean-up)

## Prerequisites
<a name="sample-nodejs-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Install required tools
<a name="sample-nodejs-install"></a>

In this step, you install Node.js, which is required to run this sample.

1. In a terminal session in the AWS Cloud9 IDE, confirm whether Node.js is already installed by running the ** `node --version` ** command. (To start a new terminal session, on the menu bar, choose **Window**, **New Terminal**.) If successful, the output contains the Node.js version number. If Node.js is installed, skip ahead to [Step 2: Add code](#sample-nodejs-code).

1. Run the ** `yum update` ** (for Amazon Linux) or ** `apt update` ** (for Ubuntu Server) command to help ensure the latest security updates and bug fixes are installed.

   For Amazon Linux:

   ```
   sudo yum -y update
   ```

   For Ubuntu Server:

   ```
   sudo apt update
   ```

1. To install Node.js, begin by running this command to download Node Version Manager (nvm). (nvm is a simple Bash shell script that is useful for installing and managing Node.js versions. For more information, see [Node Version Manager](https://github.com/creationix/nvm/blob/master/README.md) on the GitHub website.)

   ```
   curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh | bash
   ```

1. To start using nvm, either close the terminal session and start it again, or source the `~/.bashrc` file that contains the commands to load nvm.

   ```
   . ~/.bashrc
   ```

1. Run this command to install Node.js 16 on Amazon Linux 2, Amazon Linux 1, and Ubuntu 18.04. Amazon Linux 1 and Ubuntu 18.04 instances support Node.js only up to v16.

   ```
   nvm install 16
   ```

   Run this command to install the latest version of Node.js on Amazon Linux 2023 and Ubuntu 22.04:

   ```
   nvm install --lts && nvm alias default lts/*
   ```
**Note**  
The latest AL2023 AWS Cloud9 image has Node.js 20 installed, and the latest Amazon Linux 2 AWS Cloud9 image has Node.js 18 installed. To install Node.js 18 manually on an Amazon Linux 2 instance, run the following command in the AWS Cloud9 IDE terminal:  

   ```
   C9_NODE_INSTALL_DIR=~/.nvm/versions/node/v18.17.1
   C9_NODE_URL=https://d3kgj69l4ph6w4.cloudfront.net/static/node-amazon/node-v18.17.1-linux-x64.tar.gz
   mkdir -p $C9_NODE_INSTALL_DIR
   curl -fsSL $C9_NODE_URL | tar xz --strip-components=1 -C "$C9_NODE_INSTALL_DIR"
   nvm alias default v18.17.1
   nvm use default
   echo -e 'nvm use default' >> ~/.bash_profile
   ```

## Step 2: Add code
<a name="sample-nodejs-code"></a>

In the AWS Cloud9 IDE, create a file with this content, and save the file with the name `hello.js`. (To create a file, on the menu bar, choose **File**, **New File**. To save the file, choose **File**, **Save**.)

```
console.log('Hello, World!');

console.log('The sum of 2 and 3 is 5.');

var sum = parseInt(process.argv[2], 10) + parseInt(process.argv[3], 10);

console.log('The sum of ' + process.argv[2] + ' and ' +
  process.argv[3] + ' is ' + sum + '.');
```
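
Before setting up a run configuration in the next step, you can also run the script directly from a terminal session, passing the two integers as arguments (this assumes the Node.js installation from Step 1):

```shell
# 5 and 9 become process.argv[2] and process.argv[3] in hello.js.
command -v node >/dev/null 2>&1 && node hello.js 5 9 || echo "node not found"
```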

## Step 3: Run the code
<a name="sample-nodejs-run"></a>

1. In the AWS Cloud9 IDE, on the menu bar, choose **Run**, **Run Configurations**, **New Run Configuration**.

1. On the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **Node.js**.

1. For **Command**, type `hello.js 5 9`. In the code, `5` represents `process.argv[2]`, and `9` represents `process.argv[3]`. (`process.argv[0]` represents the name of the runtime (`node`), and `process.argv[1]` represents the name of the file (`hello.js`).)

1. Choose the **Run** button, and compare your output.

   ```
   Hello, World!
   The sum of 2 and 3 is 5.
   The sum of 5 and 9 is 14.
   ```

![\[Node.js output after running the code in the AWS Cloud9 IDE\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/ide-nodejs-simple.png)


## Step 4: Install and configure the AWS SDK for JavaScript in Node.js
<a name="sample-nodejs-sdk"></a>

When running Node.js scripts in AWS Cloud9, you can choose between AWS SDK for JavaScript version 3 (V3) and the older AWS SDK for JavaScript version 2 (V2). As with V2, V3 enables you to easily work with Amazon Web Services, but has been written in TypeScript and adds several frequently requested features, such as modularized packages.

------
#### [ AWS SDK for JavaScript (V3) ]

You can enhance this sample to use the AWS SDK for JavaScript in Node.js to create an Amazon S3 bucket, list your available buckets, and then delete the bucket you just created.

In this step, you install and configure the Amazon S3 service client module of the AWS SDK for JavaScript in Node.js, which provides a convenient way to interact with Amazon S3 from your JavaScript code.

If you want to use other AWS services, you need to install their client modules separately. For more information about installing AWS service modules, see [Working with AWS services](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/working-with-services) in the *AWS SDK for JavaScript Developer Guide (V3)*. For information about how to get started with Node.js and the AWS SDK for JavaScript (V3), see [Get started with Node.js](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/getting-started-nodejs.html#getting-started-nodejs-setup-structure) in the *AWS SDK for JavaScript Developer Guide (V3)*.

After you install the AWS SDK for JavaScript in Node.js, you must set up credentials management in your environment. The AWS SDK for JavaScript in Node.js needs these credentials to interact with AWS services.

**To install the AWS SDK for JavaScript in Node.js**

Use npm to run the **`install`** command.

```
npm install @aws-sdk/client-s3
```

For more information, see [Installing the SDK for JavaScript](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/setting-up.html#installing-jssdk) in the *AWS SDK for JavaScript Developer Guide*.

**To set up credentials management in your environment**

Each time you use the AWS SDK for JavaScript in Node.js to call an AWS service, you must provide a set of credentials with the call. These credentials determine whether the AWS SDK for JavaScript in Node.js has the appropriate permissions to make that call. If the credentials do not cover the appropriate permissions, the call will fail.

In this step, you store your credentials within the environment. To do this, follow the instructions in [Calling AWS services from an environment in AWS Cloud9](credentials.md), and then return to this topic.

For additional information, see [Setting Credentials in Node.js](https://docs.aws.amazon.com/sdk-for-javascript/latest/developer-guide/setting-credentials-node.html) in the *AWS SDK for JavaScript Developer Guide*.

------
#### [ AWS SDK for JavaScript (V2) ]

You can enhance this sample to use the AWS SDK for JavaScript in Node.js to create an Amazon S3 bucket, list your available buckets, and then delete the bucket you just created.

In this step, you install and configure the AWS SDK for JavaScript in Node.js, which provides a convenient way to interact with AWS services such as Amazon S3, from your JavaScript code. After you install the AWS SDK for JavaScript in Node.js, you must set up credentials management in your environment. The AWS SDK for JavaScript in Node.js needs these credentials to interact with AWS services.

**To install the AWS SDK for JavaScript in Node.js**

Use npm to run the **`install`** command.

```
npm install aws-sdk
```

For more information, see [Installing the SDK for JavaScript](https://docs.aws.amazon.com/sdk-for-javascript/latest/developer-guide/installing-jssdk.html) in the *AWS SDK for JavaScript Developer Guide*.

**To set up credentials management in your environment**

Each time you use the AWS SDK for JavaScript in Node.js to call an AWS service, you must provide a set of credentials with the call. These credentials determine whether the AWS SDK for JavaScript in Node.js has the appropriate permissions to make that call. If the credentials do not cover the appropriate permissions, the call will fail.

In this step, you store your credentials within the environment. To do this, follow the instructions in [Calling AWS services from an environment in AWS Cloud9](credentials.md), and then return to this topic.

For additional information, see [Setting Credentials in Node.js](https://docs.aws.amazon.com/sdk-for-javascript/latest/developer-guide/setting-credentials-node.html) in the *AWS SDK for JavaScript Developer Guide*.

------

## Step 5: Add AWS SDK code
<a name="sample-nodejs-sdk-code"></a>

------
#### [ AWS SDK for JavaScript (V3) ]

In this step, you add some more code, this time to interact with Amazon S3 to create a bucket, list your available buckets, and then delete the bucket you just created. You will run this code later.

In the AWS Cloud9 IDE, create a file with this content, and save the file with the name `s3.js`.

```
import {
  CreateBucketCommand,
  DeleteBucketCommand,
  ListBucketsCommand,
  S3Client,
} from "@aws-sdk/client-s3";

const wait = async (milliseconds) => {
  return new Promise((resolve) => setTimeout(resolve, milliseconds));
};

export const main = async () => {
  const client = new S3Client({});
  const now = Date.now();
  const BUCKET_NAME = `easy-bucket-${now.toString()}`;

  const createBucketCommand = new CreateBucketCommand({ Bucket: BUCKET_NAME });
  const listBucketsCommand = new ListBucketsCommand({});
  const deleteBucketCommand = new DeleteBucketCommand({ Bucket: BUCKET_NAME });

  try {
    console.log(`Creating bucket ${BUCKET_NAME}.`);
    await client.send(createBucketCommand);
    console.log(`${BUCKET_NAME} created`);

    await wait(2000);

    console.log(`Here are your buckets:`);
    const { Buckets } = await client.send(listBucketsCommand);
    Buckets.forEach((bucket) => {
      console.log(` • ${bucket.Name}`);
    });

    await wait(2000);

    console.log(`Deleting bucket ${BUCKET_NAME}.`);
    await client.send(deleteBucketCommand);
    console.log(`${BUCKET_NAME} deleted`);
  } catch (err) {
    console.error(err);
  }
};

main();
```

------
#### [ AWS SDK for JavaScript (V2) ]

In this step, you add some more code, this time to interact with Amazon S3 to create a bucket, list your available buckets, and then delete the bucket you just created. You will run this code later.

In the AWS Cloud9 IDE, create a file with this content, and save the file with the name `s3.js`.

```
if (process.argv.length < 4) {
  console.log(
    "Usage: node s3.js <the bucket name> <the AWS Region to use>\n" +
      "Example: node s3.js my-test-bucket us-east-2"
  );
  process.exit(1);
}

var AWS = require("aws-sdk"); // To set the AWS credentials and region.
var async = require("async"); // To call AWS operations asynchronously.

var bucket_name = process.argv[2];
var region = process.argv[3];

AWS.config.update({
  region: region,
});

var s3 = new AWS.S3({ apiVersion: "2006-03-01" });

var create_bucket_params = {
  Bucket: bucket_name,
  CreateBucketConfiguration: {
    LocationConstraint: region,
  },
};

var delete_bucket_params = { Bucket: bucket_name };

// List all of your available buckets in this AWS Region.
function listMyBuckets(callback) {
  s3.listBuckets(function (err, data) {
    if (err) {
      console.log(err.code + ": " + err.message);
    } else {
      console.log("My buckets now are:\n");

      for (var i = 0; i < data.Buckets.length; i++) {
        console.log(data.Buckets[i].Name);
      }
    }

    callback(err);
  });
}

// Create a bucket in this AWS Region.
function createMyBucket(callback) {
  console.log("\nCreating a bucket named " + bucket_name + "...\n");

  s3.createBucket(create_bucket_params, function (err, data) {
    if (err) {
      console.log(err.code + ": " + err.message);
    }

    callback(err);
  });
}

// Delete the bucket you just created.
function deleteMyBucket(callback) {
  console.log("\nDeleting the bucket named " + bucket_name + "...\n");

  s3.deleteBucket(delete_bucket_params, function (err, data) {
    if (err) {
      console.log(err.code + ": " + err.message);
    }

    callback(err);
  });
}

// Call the AWS operations in the following order.
async.series([
  listMyBuckets,
  createMyBucket,
  listMyBuckets,
  deleteMyBucket,
  listMyBuckets,
]);
```

------

## Step 6: Run the AWS SDK code
<a name="sample-nodejs-sdk-run"></a>

1. If you're using the AWS SDK for JavaScript (V2), enable the code to call Amazon S3 operations asynchronously by using npm to run the ** `install` ** command. (The V3 code uses the SDK's built-in promises and doesn't require the `async` package.)

   ```
   npm install async
   ```

1. In the AWS Cloud9 IDE, on the menu bar, choose **Run**, **Run Configurations**, **New Run Configuration**.

1. On the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **Node.js**.

1. If you are using AWS SDK for JavaScript (V3), for **Command** type `s3.js`. If you are using AWS SDK for JavaScript (V2), for **Command** type `s3.js my-test-bucket us-east-2`, where `my-test-bucket` is the name of the bucket you want to create and then delete, and `us-east-2` is the ID of the AWS Region you want to create the bucket in. For more IDs, see [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.
**Note**  
Amazon S3 bucket names must be unique across AWS—not just your AWS account.
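
   Because bucket names must be globally unique, one common trick when experimenting is to append a timestamp to the name. A minimal sketch for generating such a name in the terminal (the `my-test-bucket` prefix is just an example):

   ```shell
   # Append the current Unix time to reduce the chance of a name collision.
   BUCKET_NAME="my-test-bucket-$(date +%s)"
   echo "$BUCKET_NAME"
   ```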

1. Choose the **Run** button, and compare your output.

   ```
   My buckets now are:
   
   Creating a new bucket named 'my-test-bucket'...
   
   My buckets now are:
   
   my-test-bucket
   
   Deleting the bucket named 'my-test-bucket'...
   
   My buckets now are:
   ```

## Step 7: Clean up
<a name="sample-nodejs-clean-up"></a>

To prevent ongoing charges to your AWS account after you're done using this sample, you should delete the environment. For instructions, see [Deleting an environment in AWS Cloud9](delete-environment.md).

# PHP tutorial for AWS Cloud9
<a name="sample-php"></a>

This tutorial enables you to run some PHP scripts in an AWS Cloud9 development environment.

Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2 and Amazon S3. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

**Topics**
+ [

## Prerequisites
](#sample-php-prereqs)
+ [

## Step 1: Install required tools
](#sample-php-install)
+ [

## Step 2: Add code
](#sample-php-code)
+ [

## Step 3: Run the code
](#sample-php-run)
+ [

## Step 4: Install and configure the AWS SDK for PHP
](#sample-php-sdk)
+ [

## Step 5: Add AWS SDK code
](#sample-php-sdk-code)
+ [

## Step 6: Run the AWS SDK code
](#sample-php-sdk-run)
+ [

## Step 7: Clean up
](#sample-php-clean-up)

## Prerequisites
<a name="sample-php-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Install required tools
<a name="sample-php-install"></a>

In this step, you install PHP, which is required to run this sample.

**Note**  
The following procedure installs PHP only. To install related tools such as an Apache web server and a MySQL database, see [Tutorial: Installing a LAMP Web Server on Amazon Linux](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html) in the *Amazon EC2 User Guide*.

1. In a terminal session in the AWS Cloud9 IDE, confirm whether PHP is already installed by running the ** `php --version` ** command. (To start a new terminal session, on the menu bar, choose **Window**, **New Terminal**.) If successful, the output contains the PHP version number. If PHP is installed, skip ahead to [Step 2: Add code](#sample-php-code).

1. Run the ** `yum update` ** (for Amazon Linux) or ** `apt update` ** (for Ubuntu Server) command to help ensure the latest security updates and bug fixes are installed.

   For Amazon Linux 2 and Amazon Linux:

   ```
   sudo yum -y update
   ```

   For Ubuntu Server:

   ```
   sudo apt update
   ```

1. Install PHP by running the ** `install` ** command.

   For Amazon Linux 2:

   ```
   sudo amazon-linux-extras install -y php7.2
   ```

   For Amazon Linux:

   ```
   sudo yum -y install php72
   ```
**Note**  
You can view your version of Amazon Linux using the following command:   

   ```
   cat /etc/system-release
   ```

   For Ubuntu Server:

   ```
   sudo apt install -y php php-xml
   ```

   For more information, see [Installation and Configuration](http://php.net/manual/en/install.php) on the PHP website.

## Step 2: Add code
<a name="sample-php-code"></a>

In the AWS Cloud9 IDE, create a file with this content, and save the file with the name `hello.php`. (To create a file, on the menu bar, choose **File**, **New File**. To save the file, choose **File**, **Save**, type `hello.php` for **Filename**, and then choose **Save**.)

```
<?php
  print('Hello, World!');

  print("\nThe sum of 2 and 3 is 5.");

  $sum = (int)$argv[1] + (int)$argv[2];

  print("\nThe sum of $argv[1] and $argv[2] is $sum.");
?>
```

**Note**  
The preceding code doesn't rely on any external files. However, if you ever include or require other PHP files in your file, and you want AWS Cloud9 to use those files to do code completion as you type, turn on the **Project, PHP Support, Enable PHP code completion** setting in **Preferences**, and then add the paths to those files to the **Project, PHP Support, PHP Completion Include Paths** setting. (To view and change your preferences, choose **AWS Cloud9, Preferences** on the menu bar.)
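
You can also run the script directly from a terminal session, passing the two integers as arguments (this assumes the PHP CLI from Step 1):

```shell
# 5 and 9 become $argv[1] and $argv[2] in hello.php.
command -v php >/dev/null 2>&1 && php hello.php 5 9 || echo "php not found"
```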

## Step 3: Run the code
<a name="sample-php-run"></a>

1. In the AWS Cloud9 IDE, on the menu bar, choose **Run**, **Run Configurations**, **New Run Configuration**.

1. On the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **PHP (cli)**.

1. For **Command**, type `hello.php 5 9`. In the code, `5` represents `$argv[1]`, and `9` represents `$argv[2]`. (`$argv[0]` represents the name of the file (`hello.php`).)

1. Choose the **Run** button, and compare your output.

   ```
   Hello, World!
   The sum of 2 and 3 is 5.
   The sum of 5 and 9 is 14.
   ```

![\[Output of running the PHP code in the AWS Cloud9 IDE\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/ide-php-simple.png)


## Step 4: Install and configure the AWS SDK for PHP
<a name="sample-php-sdk"></a>

You can enhance this sample to use the AWS SDK for PHP to create an Amazon S3 bucket, list your available buckets, and then delete the bucket you just created.

In this step, you install and configure the AWS SDK for PHP, which provides a convenient way to interact with AWS services such as Amazon S3, from your PHP code. Before you can install the AWS SDK for PHP, you should install [Composer](https://getcomposer.org/). After you install the AWS SDK for PHP, you must set up credentials management in your environment. The AWS SDK for PHP needs these credentials to interact with AWS services.

### To install Composer
<a name="sample-php-sdk-install-composer"></a>

Run the ** `curl` ** command with the silent (`-s`) and show error (`-S`) options, piping the Composer installer into PHP. The installer writes a PHP archive (PHAR) file, named `composer.phar` by convention, to the current directory.

```
curl -sS https://getcomposer.org/installer | php
```
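
To confirm that the installer ran successfully, you can ask the downloaded PHAR for its version (this assumes `composer.phar` is in the current directory):

```shell
# Print Composer's version from the downloaded PHAR, if it exists.
[ -f composer.phar ] && php composer.phar --version || echo "composer.phar not found"
```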

### To install the AWS SDK for PHP
<a name="sample-php-sdk-install-sdk"></a>

For Ubuntu Server, install additional packages that Composer needs to install the AWS SDK for PHP.

```
sudo apt install -y php-xml php-curl
```

For Amazon Linux or Ubuntu Server, use the **php** command to run the Composer installer to install the AWS SDK for PHP.

```
php composer.phar require aws/aws-sdk-php
```

This command creates several folders and files in your environment. The primary file you will use is `autoload.php`, which is in the `vendor` folder in your environment.

**Note**  
After installation, Composer might suggest that you install additional dependencies. You can do this by running a command such as the following, which specifies the list of dependencies to install.  

```
php composer.phar require psr/log ext-curl doctrine/cache aws/aws-php-sns-message-validator
```

For more information, see [Installation](https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/installation.html) in the *AWS SDK for PHP Developer Guide*.

### To set up credentials management in your environment
<a name="sample-php-sdk-creds"></a>

Each time you use the AWS SDK for PHP to call an AWS service, you must provide a set of credentials with the call. These credentials determine whether the AWS SDK for PHP has the appropriate permissions to make that call. If the credentials don't cover the appropriate permissions, the call will fail.

In this step, you store your credentials within the environment. To do this, follow the instructions in [Calling AWS services from an environment in AWS Cloud9](credentials.md), and then return to this topic.

For additional information, see the "Creating a client" section of [Basic Usage](https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/basic-usage.html) in the *AWS SDK for PHP Developer Guide*.

## Step 5: Add AWS SDK code
<a name="sample-php-sdk-code"></a>

In this step, you add some more code, this time to interact with Amazon S3 to create a bucket, list your available buckets, and then delete the bucket you just created. You will run this code later.

In the AWS Cloud9 IDE, create a file with this content, and save the file with the name `s3.php`.

```
<?php
require './vendor/autoload.php';

if ($argc < 4) {
    exit("Usage: php s3.php <the time zone> <the bucket name> <the AWS Region to use>\n" .
        "Example: php s3.php America/Los_Angeles my-test-bucket us-east-2");
}

$timeZone = $argv[1];
$bucketName = $argv[2];
$region = $argv[3];

date_default_timezone_set($timeZone);

$s3 = new Aws\S3\S3Client([
    'region' => $region,
    'version' => '2006-03-01'
]);

# Lists all of your available buckets in this AWS Region.
function listMyBuckets($s3)
{
    print("\nMy buckets now are:\n");

    $promise = $s3->listBucketsAsync();

    $result = $promise->wait();

    foreach ($result['Buckets'] as $bucket) {
        print("\n");
        print($bucket['Name']);
    }
}

listMyBuckets($s3);

# Create a new bucket.
print("\n\nCreating a new bucket named '$bucketName'...\n");

try {
    $promise = $s3->createBucketAsync([
        'Bucket' => $bucketName,
        'CreateBucketConfiguration' => [
            'LocationConstraint' => $region
        ]
    ]);

    $promise->wait();
} catch (Aws\S3\Exception\S3Exception $e) {
    if ($e->getAwsErrorCode() == 'BucketAlreadyExists') {
        exit("\nCannot create the bucket. " .
            "A bucket with the name '$bucketName' already exists. Exiting.");
    }
}

listMyBuckets($s3);

# Delete the bucket you just created.
print("\n\nDeleting the bucket named '$bucketName'...\n");

$promise = $s3->deleteBucketAsync([
    'Bucket' => $bucketName
]);

$promise->wait();

listMyBuckets($s3);

?>
```

## Step 6: Run the AWS SDK code
<a name="sample-php-sdk-run"></a>

1. In the AWS Cloud9 IDE, on the menu bar, choose **Run**, **Run Configurations**, **New Run Configuration**.

1. On the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **PHP (cli)**.

1. For **Command**, type `s3.php America/Los_Angeles my-test-bucket us-east-2`, where:
   +  `America/Los_Angeles` is your default time zone ID. For more IDs, see [List of Supported Timezones](http://php.net/manual/en/timezones.php) on the PHP website.
   +  `my-test-bucket` is the name of the bucket you want to create and then delete.
**Note**  
Amazon S3 bucket names must be unique across AWS—not just your AWS account.
   +  `us-east-2` is the ID of the AWS Region you want to create the bucket in. For more IDs, see [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.

1. Choose the **Run** button, and compare your output.

   ```
   My buckets now are:
   
   Creating a new bucket named 'my-test-bucket'...
   
   My buckets now are:
   
   my-test-bucket
   
   Deleting the bucket named 'my-test-bucket'...
   
   My buckets now are:
   ```

## Step 7: Clean up
<a name="sample-php-clean-up"></a>

To prevent ongoing charges to your AWS account after you're done using this sample, you should delete the environment. For instructions, see [Deleting an environment in AWS Cloud9](delete-environment.md).

### Troubleshooting issues with PHP runner for AWS Cloud9
<a name="sample-php-troubleshooting"></a>

If you encounter issues with the PHP CLI runner, ensure that the runner is set to PHP and that debugger mode is enabled.

# AWS SDK for Ruby in AWS Cloud9
<a name="tutorial-ruby"></a>

For information about using AWS Cloud9 with the AWS SDK for Ruby, see [Using AWS Cloud9 with the AWS SDK for Ruby](https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/cloud9-ruby.html) in the *AWS SDK for Ruby Developer Guide*.

**Note**  
Following this tutorial might result in charges to your AWS account. These include possible charges for services such as Amazon EC2 and Amazon S3. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

# Go tutorial for AWS Cloud9
<a name="sample-go"></a>

This tutorial enables you to run some Go code in an AWS Cloud9 development environment.

Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2 and Amazon S3. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

**Topics**
+ [Prerequisites](#sample-go-prereqs)
+ [Step 1: Install required tools](#sample-go-install)
+ [Step 2: Add code](#sample-go-code)
+ [Step 3: Run the code](#sample-go-run)
+ [Step 4: Install and configure the AWS SDK for Go](#sample-go-sdk)
+ [Step 5: Add AWS SDK code](#sample-go-sdk-code)
+ [Step 6: Run the AWS SDK code](#sample-go-sdk-run)
+ [Step 7: Clean up](#sample-go-clean-up)

## Prerequisites
<a name="sample-go-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Install required tools
<a name="sample-go-install"></a>

In this step, you install and configure Go, which is required to run this sample.

1. In a terminal session in the AWS Cloud9 IDE, confirm whether Go is already installed by running the ** `go version` ** command. (To start a new terminal session, on the menu bar, choose **Window**, **New Terminal**.) If successful, the output should contain the Go version number. Otherwise, an error message should be output. If Go is installed, skip ahead to [Step 2: Add code](#sample-go-code).

1. Run the ** `yum update` ** (for Amazon Linux) or ** `apt update` ** (for Ubuntu Server) command to help ensure the latest security updates and bug fixes are installed.

   For Amazon Linux:

   ```
   sudo yum -y update
   ```

   For Ubuntu Server:

   ```
   sudo apt update
   ```

1. To install Go, run these commands, one at a time.

   ```
   wget https://storage.googleapis.com/golang/go1.9.3.linux-amd64.tar.gz # Download the Go installer.
   sudo tar -C /usr/local -xzf ./go1.9.3.linux-amd64.tar.gz              # Install Go.
   rm ./go1.9.3.linux-amd64.tar.gz                                       # Delete the installer.
   ```

   The preceding commands assume the latest stable version of Go at the time this topic was written. For more information, see [Downloads](https://golang.org/dl/) on The Go Programming Language website.

1. Add the path to the Go binary to your `PATH` environment variable, like this.

   1. Open your shell profile file (for example, `~/.bashrc`) for editing.

   1. Find the line in the file that sets the `PATH` variable, and append `:/usr/local/go/bin` to the end of it, so that the line looks like this.

      ```
      PATH=$PATH:/usr/local/go/bin
      ```

   1. Save the file.

1. Source the `~/.bashrc` file so that the terminal can find the Go binary you just installed.

   ```
   . ~/.bashrc
   ```

1. Confirm that Go is now successfully installed and configured by running the ** `go version` ** command. If successful, the output contains the Go version number.

## Step 2: Add code
<a name="sample-go-code"></a>

In the AWS Cloud9 IDE, create a file with this content, and save the file with the name `hello.go`. (To create a file, on the menu bar, choose **File**, **New File**. To save the file, choose **File**, **Save**.)

```
package main

import (
  "fmt"
  "os"
  "strconv"
)

func main() {
  fmt.Printf("Hello, World!\n")

  fmt.Printf("The sum of 2 and 3 is 5.\n")

  first, _ := strconv.Atoi(os.Args[1])
  second, _ := strconv.Atoi(os.Args[2])
  sum := first + second

  fmt.Printf("The sum of %s and %s is %s.\n",
    os.Args[1], os.Args[2], strconv.Itoa(sum))
}
```
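The sample above discards the error values that `strconv.Atoi` returns, so a non-numeric argument silently becomes `0`. If you want to experiment beyond the tutorial, a more defensive version of the same summing logic might look like the following sketch (the `sumArgs` helper is illustrative, not part of the tutorial's sample):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// sumArgs adds integer arguments supplied as strings, returning an
// error instead of silently treating bad input as zero.
func sumArgs(args []string) (int, error) {
	total := 0
	for _, arg := range args {
		n, err := strconv.Atoi(arg)
		if err != nil {
			return 0, fmt.Errorf("argument %q is not a number", arg)
		}
		total += n
	}
	return total, nil
}

func main() {
	sum, err := sumArgs(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("The sum is %d.\n", sum)
}
```

You can save this as a separate file and run it the same way as `hello.go`.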

## Step 3: Run the code
<a name="sample-go-run"></a>

1. In the AWS Cloud9 IDE, on the menu bar, choose **Run**, **Run Configurations**, **New Run Configuration**.

1. On the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **Go**.
**Note**  
If **Go** is not available, you can create a custom runner for Go.  
On the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **New Runner**.
On the **My Runner.run** tab, replace the tab's contents with this code.  

      ```
      {
        "cmd" : ["go", "run", "$file", "$args"],
        "info" : "Running $project_path$file_name...",
        "selector" : "source.go"
      }
      ```
Choose **File**, **Save As** on the menu bar, and save the file as `Go.run` in the `/.c9/runners` folder.
On the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **Go**.
Choose the **hello.go** tab to make it active.

1. For **Command**, type `hello.go 5 9`. In the code, `5` represents `os.Args[1]`, and `9` represents `os.Args[2]`.  
![\[Output of running the Go code in the AWS Cloud9 IDE\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/ide-go-simple.png)

1. Choose the **Run** button, and compare your output.

   ```
   Hello, World!
   The sum of 2 and 3 is 5.
   The sum of 5 and 9 is 14.
   ```

## Step 4: Install and configure the AWS SDK for Go
<a name="sample-go-sdk"></a>

You can enhance this sample to use the AWS SDK for Go to create an Amazon S3 bucket, list your available buckets, and then delete the bucket you just created.

In this step, you install and configure the AWS SDK for Go, which provides a convenient way to interact with AWS services such as Amazon S3, from your Go code. Before you install the AWS SDK for Go, you must set your `GOPATH` environment variable. After you install the AWS SDK for Go and set your `GOPATH` environment variable, you must set up credentials management in your environment. The AWS SDK for Go needs these credentials to interact with AWS services.
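Once `GOPATH` is set in your shell profile, any program you launch from that shell inherits it. As a quick sanity check, the variable can also be read from Go itself — a minimal stdlib sketch (`describeGopath` is an illustrative name; from the terminal, ** `go env GOPATH` ** reports the effective value):

```go
package main

import (
	"fmt"
	"os"
)

// describeGopath reports whether GOPATH is set, using an injected
// lookup function so the logic can be exercised without touching
// the real environment.
func describeGopath(lookup func(string) (string, bool)) string {
	if path, ok := lookup("GOPATH"); ok {
		return "GOPATH is set to " + path
	}
	return "GOPATH is not set"
}

func main() {
	fmt.Println(describeGopath(os.LookupEnv))
}
```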

### To set your GOPATH environment variable
<a name="sample-go-sdk-set-gopath"></a>

1. Open your `~/.bashrc` file for editing.

1. After the last line in the file, type this code.

   ```
   GOPATH=~/environment/go

   export GOPATH
   ```

1. Save the file.

1. Source the `~/.bashrc` file so that the terminal can find the `GOPATH` environment variable you just set.

   ```
   . ~/.bashrc
   ```

1. Confirm that the `GOPATH` environment variable is successfully set by running the ** `echo $GOPATH` ** command. If successful, `/home/ec2-user/environment/go` or `/home/ubuntu/environment/go` should be output.

### To install the AWS SDK for Go
<a name="sample-go-sdk-install-sdk"></a>

Run the ** `go get` ** command, specifying the location of the AWS SDK for Go source.

```
go get -u github.com/aws/aws-sdk-go/...
```

Go installs the AWS SDK for Go source into the location specified by your `GOPATH` environment variable, which is the `go` folder in your environment.

### To set up credentials management in your environment
<a name="sample-go-sdk-creds"></a>

Each time you use the AWS SDK for Go to call an AWS service, you must provide a set of credentials with the call. These credentials determine whether the AWS SDK for Go has the appropriate permissions to make that call. If the credentials don't cover the appropriate permissions, the call will fail.

In this step, you store your credentials within the environment. To do this, follow the instructions in [Calling AWS services from an environment in AWS Cloud9](credentials.md), and then return to this topic.

For additional information, see [Specifying Credentials](https://docs.aws.amazon.com/sdk-for-go/latest/developer-guide/configuring-sdk.html#specifying-credentials) in the *AWS SDK for Go Developer Guide*.

## Step 5: Add AWS SDK code
<a name="sample-go-sdk-code"></a>

In this step, you add some more code, this time to interact with Amazon S3 to create a bucket, list your available buckets, and then delete the bucket you just created. You will run this code later.

In the AWS Cloud9 IDE, create a file with this content, and save the file with the name `s3.go`.

```
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {

	if len(os.Args) < 3 {
		fmt.Printf("Usage: go run s3.go <the bucket name> <the AWS Region to use>\n" +
			"Example: go run s3.go my-test-bucket us-east-2\n")
		os.Exit(1)
	}

	sess := session.Must(session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
	}))
	svc := s3.New(sess, &aws.Config{
		Region: aws.String(os.Args[2]),
	})

	listMyBuckets(svc)
	createMyBucket(svc, os.Args[1], os.Args[2])
	listMyBuckets(svc)
	deleteMyBucket(svc, os.Args[1])
	listMyBuckets(svc)
}

// List all of your available buckets in this AWS Region.
func listMyBuckets(svc *s3.S3) {
	result, err := svc.ListBuckets(nil)

	if err != nil {
		exitErrorf("Unable to list buckets, %v", err)
	}

	fmt.Printf("My buckets now are:\n\n")

	for _, b := range result.Buckets {
		fmt.Printf(aws.StringValue(b.Name) + "\n")
	}

	fmt.Printf("\n")
}

// Create a bucket in this AWS Region.
func createMyBucket(svc *s3.S3, bucketName string, region string) {
	fmt.Printf("\nCreating a new bucket named '" + bucketName + "'...\n\n")

	_, err := svc.CreateBucket(&s3.CreateBucketInput{
		Bucket: aws.String(bucketName),
		CreateBucketConfiguration: &s3.CreateBucketConfiguration{
			LocationConstraint: aws.String(region),
		},
	})

	if err != nil {
		exitErrorf("Unable to create bucket, %v", err)
	}

	// Wait until bucket is created before finishing
	fmt.Printf("Waiting for bucket %q to be created...\n", bucketName)

	err = svc.WaitUntilBucketExists(&s3.HeadBucketInput{
		Bucket: aws.String(bucketName),
	})

	if err != nil {
		exitErrorf("Error while waiting for the bucket to be created, %v", err)
	}
}

// Delete the bucket you just created.
func deleteMyBucket(svc *s3.S3, bucketName string) {
	fmt.Printf("\nDeleting the bucket named '" + bucketName + "'...\n\n")

	_, err := svc.DeleteBucket(&s3.DeleteBucketInput{
		Bucket: aws.String(bucketName),
	})

	if err != nil {
		exitErrorf("Unable to delete bucket, %v", err)
	}

	// Wait until bucket is deleted before finishing
	fmt.Printf("Waiting for bucket %q to be deleted...\n", bucketName)

	err = svc.WaitUntilBucketNotExists(&s3.HeadBucketInput{
		Bucket: aws.String(bucketName),
	})

	if err != nil {
		exitErrorf("Error while waiting for the bucket to be deleted, %v", err)
	}
}

// If there's an error, display it.
func exitErrorf(msg string, args ...interface{}) {
	fmt.Fprintf(os.Stderr, msg+"\n", args...)
	os.Exit(1)
}
```

## Step 6: Run the AWS SDK code
<a name="sample-go-sdk-run"></a>

1. In the AWS Cloud9 IDE, on the menu bar, choose **Run**, **Run Configurations**, **New Run Configuration**.

1. On the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **Go**.

1. For **Command**, type `s3.go YOUR_BUCKET_NAME THE_AWS_REGION`, where `YOUR_BUCKET_NAME` is the name of the bucket you want to create and then delete, and `THE_AWS_REGION` is the ID of the AWS Region you want to create the bucket in. For example, for the US East (Ohio) Region, use `us-east-2`. For more IDs, see [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.
**Note**  
Amazon S3 bucket names must be unique across AWS—not just your AWS account.

1. Choose the **Run** button, and compare your output.

   ```
   My buckets now are:
   
   Creating a new bucket named 'my-test-bucket'...
   
   My buckets now are:
   
   my-test-bucket
   
   Deleting the bucket named 'my-test-bucket'...
   
   My buckets now are:
   ```
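The bucket name you pass in must be globally unique, and it must also follow Amazon S3's naming syntax: 3 to 63 characters made of lowercase letters, numbers, dots, and hyphens, starting and ending with a letter or number. A rough client-side check can be sketched with the standard library (this is a simplification — the service enforces additional rules, such as disallowing adjacent dots, and uniqueness can only be verified by calling the service):

```go
package main

import (
	"fmt"
	"regexp"
)

// bucketNamePattern captures the core rules: 3-63 characters of
// lowercase letters, digits, dots, and hyphens, starting and ending
// with a letter or digit.
var bucketNamePattern = regexp.MustCompile(`^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$`)

func isValidBucketName(name string) bool {
	return bucketNamePattern.MatchString(name)
}

func main() {
	for _, name := range []string{"my-test-bucket", "Bad_Name", "ab"} {
		fmt.Printf("%-16s valid: %v\n", name, isValidBucketName(name))
	}
}
```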

## Step 7: Clean up
<a name="sample-go-clean-up"></a>

To prevent ongoing charges to your AWS account after you're done using this sample, you should delete the environment. For instructions, see [Deleting an environment in AWS Cloud9](delete-environment.md).

# TypeScript tutorial for AWS Cloud9
<a name="sample-typescript"></a>

This tutorial shows you how to work with TypeScript in an AWS Cloud9 development environment.

Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2 and Amazon S3. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/) and [Amazon S3 Pricing](https://aws.amazon.com/s3/pricing/).

**Topics**
+ [Prerequisites](#sample-typescript-prereqs)
+ [Step 1: Install required tools](#sample-typescript-install)
+ [Step 2: Add code](#sample-typescript-code)
+ [Step 3: Run the code](#sample-typescript-run)
+ [Step 4: Install and configure the AWS SDK for JavaScript in Node.js](#sample-typescript-sdk)
+ [Step 5: Add AWS SDK code](#sample-typescript-sdk-code)
+ [Step 6: Run the AWS SDK code](#sample-typescript-sdk-run)
+ [Step 7: Clean up](#sample-typescript-clean-up)

## Prerequisites
<a name="sample-typescript-prereqs"></a>

Before you use this sample, make sure that your setup meets the following requirements:
+ **You must have an existing AWS Cloud9 EC2 development environment.** This sample assumes that you already have an EC2 environment that's connected to an Amazon EC2 instance that runs Amazon Linux or Ubuntu Server. If you have a different type of environment or operating system, you might need to adapt this sample's instructions to set up related tools. For more information, see [Creating an environment in AWS Cloud9](create-environment.md).
+ **You have the AWS Cloud9 IDE for the existing environment already open.** When you open an environment, AWS Cloud9 opens the IDE for that environment in your web browser. For more information, see [Opening an environment in AWS Cloud9](open-environment.md).

## Step 1: Install required tools
<a name="sample-typescript-install"></a>

In this step, you install TypeScript by using Node Package Manager (** `npm` **). To install ** `npm` **, you use Node Version Manager (** `nvm` **). If you don't have ** `nvm` **, you install it in this step first.

1. In a terminal session in the AWS Cloud9 IDE, confirm whether TypeScript is already installed by running the command line TypeScript compiler with the ** `--version` ** option. (To start a new terminal session, on the menu bar, choose **Window**, **New Terminal**.) If successful, the output contains the TypeScript version number. If TypeScript is installed, skip ahead to [Step 2: Add code](#sample-typescript-code).

   ```
   tsc --version
   ```

1. Confirm whether ** `npm` ** is already installed by running ** `npm` ** with the ** `--version` ** option. If successful, the output contains the ** `npm` ** version number. If ** `npm` ** is installed, skip ahead to step 10 in this procedure to use ** `npm` ** to install TypeScript.

   ```
   npm --version
   ```

1. Run the ** `yum update` ** (for Amazon Linux) or ** `apt update` ** (for Ubuntu Server) command to help ensure the latest security updates and bug fixes are installed.

   For Amazon Linux:

   ```
   sudo yum -y update
   ```

   For Ubuntu Server:

   ```
   sudo apt update
   ```

1. To install ** `npm` **, begin by running the following command to download Node Version Manager (** `nvm` **). (** `nvm` ** is a simple Bash shell script that's useful for installing and managing Node.js versions. For more information, see [Node Version Manager](https://github.com/creationix/nvm/blob/master/README.md) on the GitHub website.)

   ```
   curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
   ```

1. To start using ** `nvm` **, either close the terminal session and start it again, or source the `~/.bashrc` file that contains the commands to load ** `nvm` **.

   ```
   . ~/.bashrc
   ```

1. Confirm that ** `nvm` ** is installed by running ** `nvm` ** with the ** `--version` ** option.

   ```
   nvm --version
   ```

1. Install the latest release of Node.js 16 by running ** `nvm` **. (** `npm` ** is included in Node.js.)

   ```
   nvm install v16
   ```

1. Confirm that Node.js is installed by running the command line version of Node.js with the ** `--version` ** option.

   ```
   node --version
   ```

1. Confirm that ** `npm` ** is installed by running ** `npm` ** with the ** `--version` ** option.

   ```
   npm --version
   ```

1. Install TypeScript by running ** `npm` ** with the ** `-g` ** option. This installs TypeScript as a global package in the environment.

   ```
   npm install -g typescript
   ```

1. Confirm that TypeScript is installed by running the command line TypeScript compiler with the ** `--version` ** option.

   ```
   tsc --version
   ```

## Step 2: Add code
<a name="sample-typescript-code"></a>

1. In the AWS Cloud9 IDE, create a file named `hello.ts`. (To create a file, on the menu bar, choose **File**, **New File**. To save the file, choose **File**, **Save**.)

1. In a terminal in the IDE, from the same directory as the `hello.ts` file, run ** `npm` ** to install the `@types/node` library.

   ```
   npm install @types/node
   ```

   This adds a `node_modules/@types/node` folder in the same directory as the `hello.ts` file. This new folder contains Node.js type definitions that TypeScript needs later in this procedure for the `console.log` and `process.argv` properties that you will add to the `hello.ts` file.

1. Add the following code to the `hello.ts` file:

   ```
   console.log('Hello, World!');
   
   console.log('The sum of 2 and 3 is 5.');
   
   const sum: number = parseInt(process.argv[2], 10) + parseInt(process.argv[3], 10);
   
   console.log('The sum of ' + process.argv[2] + ' and ' +
     process.argv[3] + ' is ' + sum + '.');
   ```

## Step 3: Run the code
<a name="sample-typescript-run"></a>

1. In the terminal, from the same directory as the `hello.ts` file, run the TypeScript compiler. Specify the `hello.ts` file and additional libraries to include.

   ```
   tsc hello.ts --lib es6
   ```

   TypeScript uses the `hello.ts` file and a set of ECMAScript 6 (ES6) library files to transpile the TypeScript code in the `hello.ts` file into equivalent JavaScript code in a file named `hello.js`.

1. In the **Environment** window, open the `hello.js` file.

1. On the menu bar, choose **Run**, **Run Configurations**, **New Run Configuration**.

1. On the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **Node.js**.

1. For **Command**, type `hello.js 5 9`. In the code, `5` represents `process.argv[2]`, and `9` represents `process.argv[3]`. (`process.argv[0]` represents the name of the runtime (`node`), and `process.argv[1]` represents the name of the file (`hello.js`).)

1. Choose **Run**, and compare your output. When you're done, choose **Stop**.

   ```
   Hello, World!
   The sum of 2 and 3 is 5.
   The sum of 5 and 9 is 14.
   ```

![\[Node.js output after running the code in the AWS Cloud9 IDE\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/ide-nodejs-simple.png)


**Note**  
Instead of creating a new run configuration in the IDE, you can also execute this code by running the command ** `node hello.js 5 9` ** from the terminal.

## Step 4: Install and configure the AWS SDK for JavaScript in Node.js
<a name="sample-typescript-sdk"></a>

You can enhance this sample to use the AWS SDK for JavaScript in Node.js to create an Amazon S3 bucket, list your available buckets, and then delete the bucket you just created.

In this step, you install and configure the AWS SDK for JavaScript in Node.js. The SDK provides a convenient way to interact with AWS services such as Amazon S3, from your JavaScript code. After you install the AWS SDK for JavaScript in Node.js, you must set up credentials management in your environment. The SDK needs these credentials to interact with AWS services.

### To install the AWS SDK for JavaScript in Node.js
<a name="sample-typescript-sdk-install-sdk"></a>

In a terminal session in the AWS Cloud9 IDE, from the same directory as the `hello.js` file from [Step 3: Run the code](#sample-typescript-run), run ** `npm` ** to install the AWS SDK for JavaScript in Node.js.

```
npm install aws-sdk
```

This command adds several folders to the `node_modules` folder from [Step 3: Run the code](#sample-typescript-run). These folders contain source code and dependencies for the AWS SDK for JavaScript in Node.js. For more information, see [Installing the SDK for JavaScript](https://docs.aws.amazon.com/sdk-for-javascript/latest/developer-guide/installing-jssdk.html) in the *AWS SDK for JavaScript Developer Guide*.

### To set up credentials management in your environment
<a name="sample-typescript-sdk-creds"></a>

Each time you use the AWS SDK for JavaScript in Node.js to call an AWS service, you must provide a set of credentials with the call. These credentials determine whether the AWS SDK for JavaScript in Node.js has the appropriate permissions to make that call. If the credentials don't cover the appropriate permissions, the call will fail.

In this step, you store your credentials within the environment. To do this, follow the instructions in [Calling AWS services from an environment in AWS Cloud9](credentials.md), and then return to this topic.

For additional information, see [Setting Credentials in Node.js](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/setting-credentials-node.html) in the *AWS SDK for JavaScript Developer Guide*.

## Step 5: Add AWS SDK code
<a name="sample-typescript-sdk-code"></a>

In this step, you add some more code, this time to interact with Amazon S3 to create a bucket, list your available buckets, and then delete the bucket you just created. You'll run this code later.

1. In the AWS Cloud9 IDE, in the same directory as the `hello.js` file in previous steps, create a file named `s3.ts`.

1. From a terminal in the AWS Cloud9 IDE, in the same directory as the `s3.ts` file, enable the code to call Amazon S3 operations asynchronously by running ** `npm` ** twice: once to install the TypeScript type definitions for the async library, and again to install the async library itself for JavaScript.

   ```
   npm install @types/async # For TypeScript.
   npm install async        # For JavaScript.
   ```

1. Add the following code to the `s3.ts` file:

   ```
   import * as async from 'async';
   import * as AWS from 'aws-sdk';
   
   if (process.argv.length < 4) {
     console.log('Usage: node s3.js <the bucket name> <the AWS Region to use>\n' +
       'Example: node s3.js my-test-bucket us-east-2');
     process.exit(1);
   }
   
   const bucket_name: string = process.argv[2];
   const region: string = process.argv[3];
   
   AWS.config.update({
     region: region
   });
   
   const s3: AWS.S3 = new AWS.S3({apiVersion: '2006-03-01'});
   
   const create_bucket_params: any = {
     Bucket: bucket_name,
     CreateBucketConfiguration: {
       LocationConstraint: region
     }
   };
   
   const delete_bucket_params: any = {
     Bucket: bucket_name
   };
   
   // List all of your available buckets in this AWS Region.
   function listMyBuckets(callback): void {
     s3.listBuckets(function(err, data) {
       if (err) {
         console.log(err.code + ": " + err.message);
       } else {
         console.log("My buckets now are:\n");
   
         for (let i: number = 0; i < data.Buckets.length; i++) {
           console.log(data.Buckets[i].Name);
         }
       }
   
       callback(err);
     });
   }
   
   // Create a bucket in this AWS Region.
   function createMyBucket(callback): void {
     console.log("\nCreating a bucket named '" + bucket_name + "'...\n");
   
     s3.createBucket(create_bucket_params, function(err, data) {
       if (err) {
         console.log(err.code + ": " + err.message);
       }
   
       callback(err);
     });
   }
   
   // Delete the bucket you just created.
   function deleteMyBucket(callback): void {
     console.log("\nDeleting the bucket named '" + bucket_name + "'...\n");
   
     s3.deleteBucket(delete_bucket_params, function(err, data) {
       if (err) {
         console.log(err.code + ": " + err.message);
       }
   
       callback(err);
     });
   }
   
   // Call the AWS operations in the following order.
   async.series([
     listMyBuckets,
     createMyBucket,
     listMyBuckets,
     deleteMyBucket,
     listMyBuckets
   ]);
   ```

## Step 6: Run the AWS SDK code
<a name="sample-typescript-sdk-run"></a>

1. In the terminal, from the same directory as the `s3.ts` file, run the TypeScript compiler. Specify the `s3.ts` file and additional libraries to include.

   ```
   tsc s3.ts --lib es6
   ```

   TypeScript uses the `s3.ts` file, the AWS SDK for JavaScript in Node.js, the async library, and a set of ECMAScript 6 (ES6) library files to transpile the TypeScript code in the `s3.ts` file into equivalent JavaScript code in a file named `s3.js`.

1. In the **Environment** window, open the `s3.js` file.

1. On the menu bar, choose **Run**, **Run Configurations**, **New Run Configuration**.

1. On the **[New] - Idle** tab, choose **Runner: Auto**, and then choose **Node.js**.

1. For **Command**, type `s3.js YOUR_BUCKET_NAME THE_AWS_REGION`, where `YOUR_BUCKET_NAME` is the name of the bucket you want to create and then delete, and `THE_AWS_REGION` is the ID of the AWS Region to create the bucket in. For example, for the US East (Ohio) Region, use `us-east-2`. For more IDs, see [Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.
**Note**  
Amazon S3 bucket names must be unique across AWS—not just your AWS account.

1. Choose **Run**, and compare your output. When you're done, choose **Stop**.

   ```
   My buckets now are:
   
   Creating a bucket named 'my-test-bucket'...
   
   My buckets now are:
   
   my-test-bucket
   
   Deleting the bucket named 'my-test-bucket'...
   
   My buckets now are:
   ```

## Step 7: Clean up
<a name="sample-typescript-clean-up"></a>

To prevent ongoing charges to your AWS account after you're done using this sample, you should delete the environment. For instructions, see [Deleting an environment in AWS Cloud9](delete-environment.md).

# Docker tutorial for AWS Cloud9
<a name="sample-docker"></a>

This tutorial shows you how to connect an AWS Cloud9 SSH development environment to a running Docker container inside of an Amazon Linux instance in Amazon EC2. This enables you to use the AWS Cloud9 IDE to work with code and files inside of a Docker container and to run commands on that container. For information about Docker, see [What is Docker](https://www.docker.com/what-docker) on the Docker website.

Following this tutorial and creating this sample might result in charges to your AWS account. These include possible charges for services such as Amazon EC2. For more information, see [Amazon EC2 Pricing](https://aws.amazon.com/ec2/pricing/).

**Topics**
+ [Prerequisites](#sample-docker-prereqs)
+ [Step 1: Install and run Docker](#sample-docker-install)
+ [Step 2: Build the image](#sample-docker-build)
+ [Step 3: Run the container](#sample-docker-run)
+ [Step 4: Create the environment](#sample-docker-env)
+ [Step 5: Run the code](#sample-docker-code)
+ [Step 6: Clean up](#sample-docker-clean-up)

## Prerequisites
<a name="sample-docker-prereqs"></a>
+  **You should have an Amazon EC2 instance running Amazon Linux or Ubuntu Server.** This sample assumes you already have an Amazon EC2 instance running Amazon Linux or Ubuntu Server in your AWS account. To launch an Amazon EC2 instance, see [Launch a Linux Virtual Machine](https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/). In the **Choose an Amazon Machine Image (AMI)** page of the wizard, choose an AMI whose display name starts with **Amazon Linux AMI** or **Ubuntu Server**.
+  **If the Amazon EC2 instance runs within an Amazon VPC, there are additional requirements.** See [VPC settings for AWS Cloud9 Development Environments](vpc-settings.md).
+  **The Amazon EC2 instance should have at least 8 to 16 GB of free disk space available.** This sample uses Docker images that are over 3 GB in size and can use additional increments of 3 GB or more of disk space to build images. If you try to run this sample on a disk that has 8 GB of free space or less, we've found that the Docker image might not build or the Docker container might not run. To check the instance's free disk space, you can run a command such as **`df -h`** (for "disk filesystem information in human-readable format") on the instance. To increase an existing instance's disk size, see [Modifying a Volume](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-modify-volume.html) in the *Amazon EC2 User Guide*.
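
To script the free-space check against the 8 GB guideline, you can compare the available space that **`df`** reports. The mount point and threshold below are assumptions; adjust them for your instance.

```
# Report available space on the root filesystem in KiB (-P avoids wrapped
# lines for long device names) and compare it to an 8 GiB threshold.
avail_kb="$(df -kP / | awk 'NR==2 {print $4}')"
if [ "$avail_kb" -ge $((8 * 1024 * 1024)) ]; then
  echo "at least 8 GiB available"
else
  echo "only ${avail_kb} KiB available"
fi
```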

## Step 1: Install and run Docker
<a name="sample-docker-install"></a>

In this step, you check if Docker is installed on the Amazon EC2 instance, and install Docker if it isn't already installed. After you install Docker, you run it on the instance.

1. Connect to the running Amazon EC2 instance by using an SSH client such as the **`ssh`** utility or PuTTY. To do this, see "Step 3: Connect to Your Instance" in [Launch a Linux Virtual Machine](https://aws.amazon.com/getting-started/tutorials/launch-a-virtual-machine/).

1. Check if Docker is installed on the instance. To do this, run the **`docker`** command on the instance with the **`--version`** option.

   ```
   docker --version
   ```

   If Docker is installed, the Docker version and build number are displayed. In this case, skip ahead to step 5 later in this procedure.

1. Install Docker. To do this, run the **`yum`** or **`apt`** command with the **`install`** action, specifying the **`docker`** or **`docker.io`** package to install.

   For Amazon Linux:

   ```
   sudo yum install -y docker
   ```

   For Ubuntu Server:

   ```
   sudo apt install -y docker.io
   ```

1. Confirm that Docker is installed. To do this, run the **`docker --version`** command again. The Docker version and build number are displayed.

1. Run Docker. To do this, run the **`service`** command with the **`docker`** service and the **`start`** action.

   ```
   sudo service docker start
   ```

1. Confirm Docker is running. To do this, run the **`docker`** command with the **`info`** action.

   ```
   sudo docker info
   ```

   If Docker is running, information about Docker is displayed.

## Step 2: Build the image
<a name="sample-docker-build"></a>

In this step, you use a Dockerfile to build a Docker image onto the instance. This sample uses an image that includes Node.js and a sample chat server application.

1. On the instance, create the Dockerfile. To do this, with the SSH client still connected to the instance, in the `/tmp` directory on the instance, create a file named `Dockerfile`. For example, run the **`touch`** command as follows.

   ```
   sudo touch /tmp/Dockerfile
   ```

1. Add the following contents to the `Dockerfile` file.

   ```
   # Build a Docker image based on the Amazon Linux 2 Docker image.
   FROM amazonlinux:2
   
   # install common tools
   RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
   RUN yum update -y
   RUN yum install -y sudo bash curl wget git man-db nano vim bash-completion tmux  gcc gcc-c++ make tar
   
   # Enable the Docker container to communicate with AWS Cloud9 by
   # installing SSH.
   RUN yum install -y openssh-server
   
   # Ensure that Node.js is installed.
   RUN yum install -y nodejs
   
   # Create user and enable root access
   RUN useradd --uid 1000 --shell /bin/bash -m --home-dir /home/ubuntu ubuntu && \
       sed -i 's/%wheel\s.*/%wheel ALL=NOPASSWD:ALL/' /etc/sudoers && \
       usermod -a -G wheel ubuntu
   
   # Add the AWS Cloud9 SSH public key to the Docker container.
   # This assumes a file named authorized_keys containing the
   # AWS Cloud9 SSH public key already exists in the same
   # directory as the Dockerfile.
   RUN mkdir -p /home/ubuntu/.ssh
   ADD ./authorized_keys /home/ubuntu/.ssh/authorized_keys
   RUN chown -R ubuntu /home/ubuntu/.ssh /home/ubuntu/.ssh/authorized_keys && \
   chmod 700 /home/ubuntu/.ssh && \
   chmod 600 /home/ubuntu/.ssh/authorized_keys
   
   # Update the password to a random one for the user ubuntu.
   RUN echo "ubuntu:$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)" | chpasswd
   
   # pre-install Cloud9 dependencies
   USER ubuntu
   RUN curl https://d2j6vhu5uywtq3.cloudfront.net/static/c9-install.sh | bash
   
   USER root
   # Start SSH in the Docker container.
   CMD ssh-keygen -A && /usr/sbin/sshd -D
   ```

   To add the preceding contents to the `Dockerfile` file, you could use the **`vi`** utility on the instance as follows.

   1. Use **`vi`** to open and edit the `/tmp/Dockerfile` file.

      ```
      sudo vi /tmp/Dockerfile
      ```

   1. Paste the preceding contents into the `Dockerfile` file. If you're not sure how to do this, see your SSH client's documentation.

   1. Switch to command mode. To do this, press the `Esc` key. (`-- INSERT --` disappears from the bottom of the window.)

   1. Type `:wq` (to write to the `/tmp/Dockerfile` file, save the file, and then exit **`vi`**), and then press `Enter`.
**Note**  
You can access a frequently updated list of Docker images from AWS CodeBuild. For more information, see [Docker images provided by CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-available.html) in the *AWS CodeBuild User Guide*.
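
   The `chpasswd` line in this Dockerfile builds a random 32-character alphanumeric password from `/dev/urandom`. You can run the pipeline on its own to confirm what it produces:

   ```
   # Generate a 32-character alphanumeric password, as the Dockerfile does.
   pw="$(tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)"
   echo "${#pw}"    # prints 32
   ```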

1. On the instance, create a file that contains the AWS Cloud9 SSH public key for the Docker container to use. To do this, in the same directory as the `Dockerfile` file, create a file named `authorized_keys`, for example, by running the **`touch`** command.

   ```
   sudo touch /tmp/authorized_keys
   ```

1. Add the AWS Cloud9 SSH public key to the `authorized_keys` file. To get the AWS Cloud9 SSH public key, do the following:

   1. Open the AWS Cloud9 console at [https://console.aws.amazon.com/cloud9/](https://console.aws.amazon.com/cloud9/).

   1. In the AWS navigation bar, in the AWS Region selector, choose the AWS Region where you'll want to create the AWS Cloud9 development environment later in this topic.

   1. If a welcome page is displayed, for **New AWS Cloud9 environment**, choose **Create environment**. Otherwise, choose **Create environment**.

   1. On the **Name environment** page, for **Name**, type a name for the environment. (The name doesn't matter here. You'll choose a different name later.)

   1. Choose **Next step**.

   1. For **Environment type**, choose **Connect and run in remote server (SSH)**.

   1. Expand **View public SSH key**.

   1. Choose **Copy key to clipboard**. (This is between **View public SSH key** and **Advanced settings**.)

   1. Choose **Cancel**.

   1. Paste the contents of the clipboard into the `authorized_keys` file, and then save the file. For example, you can use the **`vi`** utility, as described earlier in this step.

1. Build the image by running the **`docker`** command with the **`build`** action, adding the tag `cloud9-image:latest` to the image and specifying the path to the `Dockerfile` file to use.

   ```
   sudo docker build -t cloud9-image:latest /tmp
   ```

   If successful, the last two lines of the build output display `Successfully built` and `Successfully tagged`.

   To confirm that Docker successfully built the image, run the **`docker`** command with the **`image ls`** action.

   ```
   sudo docker image ls
   ```

   If successful, the output displays an entry where the `REPOSITORY` field is set to `cloud9-image` and the `TAG` field is set to `latest`.

1. Make a note of the Amazon EC2 instance's public IP address. You'll need it for [Step 4: Create the environment](#sample-docker-env). If you're not sure what the public IP address of the instance is, you can run the following command on the instance to get it.

   ```
   curl http://169.254.169.254/latest/meta-data/public-ipv4
   ```

## Step 3: Run the container
<a name="sample-docker-run"></a>

In this step, you run a Docker container on the instance. This container is based on the image you built in the previous step.

1. To run the Docker container, run the **`docker`** command on the instance with the **`run`** action and the following options.

   ```
   sudo docker run -d -it --expose 9090 -p 0.0.0.0:9090:22 --name cloud9 cloud9-image:latest
   ```
   +  `-d` runs the container in detached mode. The container stops whenever its root process (in this sample, the SSH server) exits.
   +  `-it` runs the container with an allocated pseudo-TTY and keeps STDIN open, even if the container is not attached.
   +  `--expose` makes the specified port (in this sample, port `9090`) available from the container.
   +  `-p` makes the specified container port available through the specified IP address and port on the Amazon EC2 instance. In this sample, port `22` on the container can be accessed through port `9090` on the Amazon EC2 instance.
   +  `--name` is a human-readable name for the container (in this sample, `cloud9`).
   +  `cloud9-image:latest` is the human-readable name of the built image to use to run the container.

   To confirm that Docker is successfully running the container, run the **`docker`** command with the **`container ls`** action.

   ```
   sudo docker container ls
   ```

   If successful, the output displays an entry where the `IMAGE` field is set to `cloud9-image:latest` and the `NAMES` field is set to `cloud9`.

1. Log in to the running container. To do this, run the **`docker`** command with the **`exec`** action and the following options.

   ```
   sudo docker exec -it cloud9 bash
   ```
   +  `-it` runs the container with an allocated pseudo-TTY and keeps STDIN open, even if the container isn't attached.
   +  `cloud9` is the human-readable name of the running container.
   +  `bash` starts the standard shell in the running container.

   If successful, the terminal prompt changes to display the logged-in user's name for the container and the ID of the container.
**Note**  
If you ever want to log out of the running container, run the **`exit`** command. The terminal prompt changes back to display the logged-in user's name for the instance and the private DNS of the instance. The container should still be running.

1. For the directory on the running container that you want AWS Cloud9 to start from after it logs in, set its access permissions to `rwxr-xr-x`. This means read-write-execute permissions for the owner, read-execute permissions for the group, and read-execute permissions for others. For example, if the directory's path is `~`, you can set these permissions on the directory by running the **`chmod`** command in the running container as follows.

   ```
   sudo chmod u=rwx,g=rx,o=rx ~
   ```
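
   The symbolic mode `u=rwx,g=rx,o=rx` is the same as octal `755`. As a quick sanity check on any Linux system with GNU **`stat`**:

   ```
   # Apply the same symbolic mode to a scratch directory and print it in octal.
   d="$(mktemp -d)"
   chmod u=rwx,g=rx,o=rx "$d"
   stat -c '%a' "$d"    # prints 755
   rmdir "$d"
   ```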

1. Make a note of the path to the directory on the running container that contains the Node.js binary, as you'll need it for [Step 4: Create the environment](#sample-docker-env). If you're not sure what this path is, run the following command on the running container to get it.

   ```
   which node
   ```

## Step 4: Create the environment
<a name="sample-docker-env"></a>

In this step, you use AWS Cloud9 to create an AWS Cloud9 SSH development environment and connect it to the running Docker container. After AWS Cloud9 creates the environment, it displays the AWS Cloud9 IDE so that you can start working with the files and code in the container.

You create an AWS Cloud9 SSH development environment with the AWS Cloud9 console. You can't create an SSH environment using the CLI.

### Prerequisites
<a name="prerequisites"></a>
+ Make sure that you completed the steps in [Setting up AWS Cloud9](setting-up.md) first. That way, you can sign in to the AWS Cloud9 console and create environments.
+ Identify an existing cloud compute instance (for example, an Amazon EC2 instance in your AWS account) or your own server that you want AWS Cloud9 to connect to the environment.
+ Make sure that the existing instance or your own server meets all of the [SSH host requirements](ssh-settings.md#ssh-settings-requirements). This includes having specific versions of Python, Node.js, and other components installed, setting specific permissions on the directory that you want AWS Cloud9 to start from after login, and setting up any associated Amazon Virtual Private Cloud.

### Create the SSH Environment
<a name="create-the-envsshtitle"></a>

1. Make sure that you completed the preceding prerequisites.

1. Connect to your existing instance or your own server by using an SSH client, if you aren't already connected to it. This ensures that you can add the necessary public SSH key value to the instance or server. This is described later in this procedure.
**Note**  
To connect to an existing AWS Cloud compute instance, see one or more of the following resources:  
For Amazon EC2, see [Connect to Your Linux Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-linux-instance.html) in the *Amazon EC2 User Guide*.
For Amazon Lightsail, see [Connect to your Linux/Unix-based Lightsail instance](https://lightsail.aws.amazon.com/ls/docs/how-to/article/lightsail-how-to-connect-to-your-instance-virtual-private-server) in the *Amazon Lightsail Documentation*.
For AWS Elastic Beanstalk, see [Listing and Connecting to Server Instances](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.ec2connect.html) in the *AWS Elastic Beanstalk Developer Guide*.
For AWS OpsWorks, see [Using SSH to Log In to a Linux Instance](https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-ssh.html) in the *AWS OpsWorks User Guide*.
For other AWS services, see the documentation for that specific service.
To connect to your own server, use SSH. SSH is already installed on the macOS and Linux operating systems. To connect to your server by using SSH on Windows, you must install [PuTTY](https://www.putty.org/).
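
   If you connect to the same host often, an entry in your local `~/.ssh/config` file can save typing. The values below are example placeholders based on this tutorial's setup (the instance's public IP address, the `ubuntu` user created in the Dockerfile, and host port `9090` mapped to the container's SSH port):

   ```
   Host cloud9-docker
       # Public IP address of the Amazon EC2 instance (example value)
       HostName 192.0.2.0
       # User created in the Dockerfile
       User ubuntu
       # Host port mapped to the container's SSH port (22)
       Port 9090
       # Path to your private key (example value)
       IdentityFile ~/.ssh/id_rsa
   ```

   With this entry in place, `ssh cloud9-docker` connects to the container.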

1. Sign in to the AWS Cloud9 console, at [https://console.aws.amazon.com/cloud9/](https://console.aws.amazon.com/cloud9/).

1. After you sign in to the AWS Cloud9 console, in the top navigation bar choose an AWS Region to create the environment in. For a list of available AWS Regions, see [AWS Cloud9](https://docs.aws.amazon.com/general/latest/gr/rande.html#cloud9_region) in the *AWS General Reference*.  
![\[Region selector in the AWS Cloud9 console\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/consolas_region_new_UX.png)

1. If this is the first time that you're creating a development environment, a welcome page is displayed. In the **New AWS Cloud9 environment** panel, choose **Create environment**.

   If you've previously created development environments, you can also expand the pane on the left of the screen. Choose **Your environments**, and then choose **Create environment**.

   In the **welcome** page:  
![\[Choose the Create environment button if the welcome page is displayed\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/create_welcome_env_new_UX.png)

   Or in the **Your environments** page:  
![\[Choose the Create environment button if the welcome page isn't displayed\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/console_create_env_new_UX.png)

1. On the **Create environment** page, enter a name for your environment.

1. For **Description**, enter something about your environment. For this tutorial, use `This environment is for the AWS Cloud9 tutorial.`

1. For **Environment type**, choose **Existing Compute** from the following options:
   + **New EC2 instance** – Launches an Amazon EC2 instance that AWS Cloud9 can connect to directly over SSH.
   + **Existing compute** – Uses an existing Amazon EC2 instance that doesn't require any open inbound ports. AWS Cloud9 connects to the instance through [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html).
     + If you select the **Existing compute** option, a service role and an IAM instance profile are created to allow Systems Manager to interact with the EC2 instance on your behalf. You can view the names of both in the **Service role and instance profile for Systems Manager access** section further down the interface. For more information, see [Accessing no-ingress EC2 instances with AWS Systems Manager](ec2-ssm.md). 
**Warning**  
Creating an EC2 instance for your environment might result in possible charges to your AWS account for Amazon EC2. There's no additional cost to use Systems Manager to manage connections to your EC2 instance.
**Warning**  
AWS Cloud9 uses an SSH public key to connect securely to your server. To establish the secure connection, add the public key to your `~/.ssh/authorized_keys` file and provide your login credentials in the following steps. Choose **Copy key to clipboard** to copy the SSH key, or **View public SSH key** to display it.

1. On the **Existing compute** panel, for **User**, enter the login name that you used to connect to the instance or server earlier in this procedure. For example, for an AWS Cloud compute instance, it might be `ec2-user`, `ubuntu`, or `root`. 
**Note**  
We recommend that the login name is associated with administrative permissions or an administrator user on the instance or server. More specifically, we recommend that this login name owns the Node.js installation on the instance or server. To check this, from the terminal of your instance or server, run the command **`ls -l $(which node)`** (or **`ls -l $(nvm which node)`** if you're using `nvm`). This command displays the owner name of the Node.js installation. It also displays the installation's permissions, group name, and location.

1. For **Host**, enter the public IP address (preferred) or the hostname of the instance or server.

1. For **Port**, enter the port that you want AWS Cloud9 to use to try to connect to the instance or server. Alternatively, keep the default port.

1. Choose **Additional details - optional** to display the environment path, the path to the Node.js binary, and the SSH jump host information.

1. For **Environment path**, enter the path to the directory on the instance or server that you want AWS Cloud9 to start from. You identified this earlier in the prerequisites to this procedure. If you leave this blank, AWS Cloud9 uses the directory that your instance or server typically starts with after login. This is usually a home or default directory.

1. For **Path to Node.js binary**, enter the path to the Node.js binary on the instance or server. To get the path, you can run the command **`which node`** (or **`nvm which node`** if you're using `nvm`) on your instance or server. For example, the path might be `/usr/bin/node`. If you leave this blank, AWS Cloud9 attempts to guess where the Node.js binary is when it tries to connect.

1. For **SSH jump host**, enter information about the jump host that the instance or server uses. Use the format `USER_NAME@HOSTNAME:PORT_NUMBER` (for example, `ec2-user@ip-192-0-2-0:22`).

   The jump host must meet the following requirements:
   + It must be reachable over the public internet using SSH.
   + It must allow inbound access by any IP address over the specified port.
   + The public SSH key value that was copied into the `~/.ssh/authorized_keys` file on the existing instance or server must also be copied into the `~/.ssh/authorized_keys` file on the jump host.
   + Netcat must be installed.
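
   As an illustration of the `USER_NAME@HOSTNAME:PORT_NUMBER` format, the hypothetical value below can be split into its parts with standard shell parameter expansion:

   ```
   # Hypothetical jump host value in USER_NAME@HOSTNAME:PORT_NUMBER format.
   jump="ec2-user@ip-192-0-2-0:22"
   user="${jump%%@*}"        # ec2-user
   hostport="${jump#*@}"
   host="${hostport%%:*}"    # ip-192-0-2-0
   port="${hostport##*:}"    # 22
   echo "$user $host $port"
   ```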

1. Add up to 50 tags by supplying a **Key** and a **Value** for each tag. Do so by selecting **Add new tag**. The tags are attached to the AWS Cloud9 environment as resource tags, and are propagated to the following underlying resources: the CloudFormation stack, the Amazon EC2 instance, and Amazon EC2 security groups. To learn more about tags, see [Control Access Using AWS Resource Tags](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html) in the *[IAM User Guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/)* and the [advanced information](tags.md) about tags in this guide.
**Warning**  
If you update these tags after you create them, the changes aren't propagated to the underlying resources. For more information, see [Propagating tag updates to underlying resources](tags.md#tags-propagate) in the advanced information about [tags](tags.md).

1. Choose **Create** to create your environment, and you're then redirected to the home page. When the environment is created successfully, a green flash bar appears at the top of the AWS Cloud9 console. You can select the new environment and choose **Open in Cloud9** to launch the IDE.   
![\[AWS Cloud9 IDE selector in the AWS Cloud9 console\]](http://docs.aws.amazon.com/cloud9/latest/user-guide/images/cloud9-ide-open.png)

   If the environment fails to create, a red flash bar appears at the top of the AWS Cloud9 console. Your environment might fail to create due to a problem with your web browser, your AWS access permissions, the instance, or the associated network. You can find information about possible fixes in the [AWS Cloud9 troubleshooting section](troubleshooting.md#troubleshooting-env-loading).

**Note**  
If your environment is using a proxy to access the internet, you must provide proxy details to AWS Cloud9 so it can install dependencies. For more information, see [Failed to install dependencies](troubleshooting.md#proxy-failed-dependencies).

## Step 5: Run the code
<a name="sample-docker-code"></a>

In this step, you use the AWS Cloud9 IDE to run a sample application inside the running Docker container.

1. With the AWS Cloud9 IDE displayed for the running container, start the sample chat server. To do this, in the **Environment** window, right-click the sample `workspace/server.js` file, and then choose **Run**.

1. Preview the sample application. To do this, in the **Environment** window, open the `workspace/client/index.html` file. Then, on the menu bar, choose **Tools, Preview, Preview Running Application**.

1. On the application preview tab, for **Your Name**, type your name. For **Message**, type a message. Then choose **Send**. The chat server adds your name and message to the list.

## Step 6: Clean up
<a name="sample-docker-clean-up"></a>

In this step, you delete the environment and remove AWS Cloud9 and Docker support files from the Amazon EC2 instance. Also, to prevent ongoing charges to your AWS account after you're done using this sample, you should terminate the Amazon EC2 instance that is running Docker.

### Step 6.1: Delete the environment
<a name="step-6-1-delete-the-envtitle"></a>

To delete the environment, see [Deleting an environment in AWS Cloud9](delete-environment.md).

### Step 6.2: Remove AWS Cloud9 support files from the container
<a name="step-6-2-remove-ac9-support-files-from-the-container"></a>

After you delete the environment, some AWS Cloud9 support files still remain in the container. If you want to keep using the container but no longer need these support files, delete the `.c9` folder from the directory on the container that you specified AWS Cloud9 to start from after it logs in. For example, if the directory is `~`, run the **`rm`** command with the **`-r`** option as follows.

```
sudo rm -r ~/.c9
```

### Step 6.3: Remove Docker support files from the instance
<a name="step-6-3-remove-docker-support-files-from-the-instance"></a>

If you no longer want to keep the Docker container, the Docker image, and Docker on the Amazon EC2 instance, but you want to keep the instance, you can remove these Docker support files as follows.

1. Remove the Docker container from the instance. To do this, run the **`docker`** command on the instance with the **`stop`** and **`rm`** actions and the human-readable name of the container.

   ```
   sudo docker stop cloud9
   sudo docker rm cloud9
   ```

1. Remove the Docker image from the instance. To do this, run the **`docker`** command on the instance with the **`image rm`** action and the image's tag.

   ```
   sudo docker image rm cloud9-image:latest
   ```

1. Remove any additional Docker support files that might still exist. To do this, run the **`docker`** command on the instance with the **`system prune`** action.

   ```
   sudo docker system prune -a
   ```

1. Uninstall Docker. To do this, run the **`yum`** or **`apt`** command on the instance with the **`remove`** action, specifying the **`docker`** or **`docker.io`** package to uninstall.

   For Amazon Linux:

   ```
   sudo yum -y remove docker
   ```

   For Ubuntu Server:

   ```
   sudo apt -y remove docker.io
   ```

   You can also remove the `Dockerfile` and `authorized_keys` files you created earlier. For example, run the **`rm`** command on the instance.

   ```
   sudo rm /tmp/Dockerfile
   sudo rm /tmp/authorized_keys
   ```

### Step 6.4: Terminate the instance
<a name="step-6-4-terminate-the-instance"></a>

To terminate the Amazon EC2 instance, see [Terminate Your Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html) in the *Amazon EC2 User Guide*.

## Related Tutorials
<a name="samples-additonal"></a>
+  [Getting Started with AWS RoboMaker](https://docs.aws.amazon.com/robomaker/latest/dg/getting-started.html) in the *AWS RoboMaker Developer Guide*. This tutorial uses AWS Cloud9 to modify, build, and bundle a sample robot application.