

# Action structure reference
<a name="action-reference"></a>

This section is a reference for action configuration only. For a conceptual overview of the pipeline structure, see [CodePipeline pipeline structure reference](reference-pipeline-structure.md).

Each action provider in CodePipeline uses a set of required and optional configuration fields in the pipeline structure. This section provides the following reference information by action provider:
+ Valid values for the `ActionType` fields included in the pipeline structure action block, such as `Owner` and `Provider`.
+ Descriptions and other reference information for the `Configuration` parameters (required and optional) included in the pipeline structure action section.
+ Valid example JSON and YAML action fields.

This section is updated periodically with more action providers. Reference information is currently available for the following action providers:

**Topics**
+ [Amazon EC2 action reference](action-reference-EC2Deploy.md)
+ [Amazon ECR source action reference](action-reference-ECR.md)
+ [`ECRBuildAndPublish` build action reference](action-reference-ECRBuildAndPublish.md)
+ [Amazon ECS and CodeDeploy blue-green deploy action reference](action-reference-ECSbluegreen.md)
+ [Amazon Elastic Container Service deploy action reference](action-reference-ECS.md)
+ [Amazon Elastic Kubernetes Service `EKS` deploy action reference](action-reference-EKS.md)
+ [AWS Lambda deploy action reference](action-reference-LambdaDeploy.md)
+ [Amazon S3 deploy action reference](action-reference-S3Deploy.md)
+ [Amazon S3 source action reference](action-reference-S3.md)
+ [AWS AppConfig deploy action reference](action-reference-AppConfig.md)
+ [CloudFormation deploy action reference](action-reference-CloudFormation.md)
+ [CloudFormation StackSets](action-reference-StackSets.md)
+ [AWS CodeBuild build and test action reference](action-reference-CodeBuild.md)
+ [AWS CodePipeline invoke action reference](action-reference-PipelineInvoke.md)
+ [AWS CodeCommit source action reference](action-reference-CodeCommit.md)
+ [AWS CodeDeploy deploy action reference](action-reference-CodeDeploy.md)
+ [CodeStarSourceConnection for Bitbucket Cloud, GitHub, GitHub Enterprise Server, GitLab.com, and GitLab self-managed actions](action-reference-CodestarConnectionSource.md)
+ [Commands action reference](action-reference-Commands.md)
+ [AWS Device Farm test action reference](action-reference-DeviceFarm.md)
+ [Elastic Beanstalk deploy action reference](action-reference-Beanstalk.md)
+ [Amazon Inspector `InspectorScan` invoke action reference](action-reference-InspectorScan.md)
+ [AWS Lambda invoke action reference](action-reference-Lambda.md)
+ [AWS OpsWorks deploy action reference](action-reference-OpsWorks.md)
+ [AWS Service Catalog deploy action reference](action-reference-ServiceCatalog.md)
+ [AWS Step Functions](action-reference-StepFunctions.md)

# Amazon EC2 action reference
<a name="action-reference-EC2Deploy"></a>

You use an Amazon EC2 `EC2` action to deploy application code to your deployment fleet. Your deployment fleet can consist of Amazon EC2 Linux instances or Linux SSM-managed nodes. Your instances must have the SSM agent installed.

**Note**  
This action supports Linux instance types only. The maximum fleet size supported is 500 instances.

The action chooses a number of instances to deploy to, up to the specified maximum. Instances that failed in a previous deployment are chosen first. The action skips the deployment on an instance that has already received a deployment of the same input artifact, such as when the action previously failed partway through the fleet.
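
The selection behavior described above can be sketched as follows. This is an illustrative sketch of the documented behavior only, not the actual CodePipeline implementation, and all names are hypothetical.

```python
# Illustrative sketch of the documented instance-selection behavior.
# Not the actual CodePipeline implementation; all names are hypothetical.

def select_batch(instances, failed_previously, already_deployed, max_batch):
    """Pick up to max_batch instances: previously failed instances first,
    skipping instances that already received the same input artifact."""
    # Skip instances that already have this artifact deployed.
    candidates = [i for i in instances if i not in already_deployed]
    # Stable sort moves previously failed instances to the front.
    candidates.sort(key=lambda i: i not in failed_previously)
    return candidates[:max_batch]

batch = select_batch(
    instances=["i-a", "i-b", "i-c", "i-d"],
    failed_previously={"i-c"},
    already_deployed={"i-a"},
    max_batch=2,
)
# batch == ["i-c", "i-b"]: the previously failed instance comes first,
# and the already-deployed instance is skipped.
```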

**Note**  
This action is only supported for V2 type pipelines.

**Topics**
+ [Action type](#action-reference-EC2Deploy-type)
+ [Configuration parameters](#action-reference-EC2Deploy-parameters)
+ [Input artifacts](#action-reference-EC2Deploy-input)
+ [Output artifacts](#action-reference-EC2Deploy-output)
+ [Service role policy permissions for the EC2 deploy action](#action-reference-EC2Deploy-permissions-action)
+ [Deploy spec file reference](#action-reference-EC2Deploy-spec-reference)
+ [Action declaration](#action-reference-EC2Deploy-example)
+ [Action declaration with Deploy spec example](#action-reference-EC2Deploy-example-spec)
+ [See also](#action-reference-EC2Deploy-links)

## Action type
<a name="action-reference-EC2Deploy-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `EC2`
+ Version: `1`

## Configuration parameters
<a name="action-reference-EC2Deploy-parameters"></a>

**InstanceTagKey**  
Required: Yes  
The tag key of the instances that you created in Amazon EC2, such as `Name`.

**InstanceTagValue**  
Required: No  
The tag value of the instances that you created in Amazon EC2, such as `my-instances`.  
When this value is not specified, all instances with **InstanceTagKey** will be matched.

**InstanceType**  
Required: Yes  
The type of instances or SSM nodes created in Amazon EC2. The valid values are `EC2` and `SSM_MANAGED_NODE`.  
You must have already created, tagged, and installed the SSM agent on all instances.  
When you create the instance, you create a new EC2 instance role or use an existing one. To avoid `Access Denied` errors, you must add Amazon S3 permissions to the instance role so that the instance can access the CodePipeline artifact bucket. Create a default role, or update your existing role, with the `s3:GetObject` permission scoped down to the artifact bucket for your pipeline's Region.
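
For example, an instance role policy statement granting read access to the pipeline's artifact bucket might look like the following, where the bucket name is a placeholder for your pipeline's artifact bucket:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::codepipeline-us-east-1-EXAMPLE-BUCKET/*"
        }
    ]
}
```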

**TargetDirectory**  
Required: Yes (If script is specified)  
The directory to be used on your Amazon EC2 instance to run scripts.

**DeploySpec**  
Required: Yes (If deploy spec is specified)  
The file to be used to configure deployment install and lifecycle events. For deploy spec field descriptions and information, see [Deploy spec file reference](#action-reference-EC2Deploy-spec-reference). To view an action configuration with the deploy spec file specified, see the example in [Action declaration with Deploy spec example](#action-reference-EC2Deploy-example-spec).

**MaxBatch**  
Required: No  
The maximum number of instances allowed to deploy in parallel.

**MaxError**  
Required: No  
The maximum number of instance errors allowed during deployment.

**TargetGroupNameList**  
Required: No  
The list of target group names for deployment. You must have already created the target groups.  
Target groups provide a set of instances to process specific requests. If the target group is specified, instances will be removed from the target group before deployment and added back to the target group after deployment.

**PreScript**  
Required: No  
The script to be run before the action Deploy phase.

**PostScript**  
Required: Yes  
The script to be run after the action Deploy phase.

The following image shows an example of the **Edit** page for the action where **Use action configurations** is chosen.

![\[The Edit action page for a new pipeline with the EC2Deploy action specifying using the action configuration\]](http://docs.aws.amazon.com/codepipeline/latest/userguide/images/ec2deploy-action.png)


The following image shows an example of the **Edit** page for the action where **Use a DeploySpec file** is chosen.

![\[The Edit action page for a new pipeline with the EC2Deploy action option to use a spec file\]](http://docs.aws.amazon.com/codepipeline/latest/userguide/images/ec2deploy-action-spec.png)


## Input artifacts
<a name="action-reference-EC2Deploy-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** The files provided, if any, to support the script actions during the deployment.

## Output artifacts
<a name="action-reference-EC2Deploy-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Service role policy permissions for the EC2 deploy action
<a name="action-reference-EC2Deploy-permissions-action"></a>

When CodePipeline runs the action, the CodePipeline service role requires the following permissions, scoped down appropriately for least-privilege access.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "StatementWithAllResource",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "elasticloadbalancing:DescribeTargetGroupAttributes",
                "elasticloadbalancing:DescribeTargetGroups",
                "elasticloadbalancing:DescribeTargetHealth",
                "ssm:CancelCommand",
                "ssm:DescribeInstanceInformation",
                "ssm:ListCommandInvocations"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "StatementForLogs",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:111122223333:log-group:/aws/codepipeline/{{pipelineName}}:*"
            ]
        },
        {
            "Sid": "StatementForElasticloadbalancing",
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:DeregisterTargets",
                "elasticloadbalancing:RegisterTargets"
            ],
            "Resource": [
                "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/[[targetGroupName]]/*"
            ]
        },
        {
            "Sid": "StatementForSsmOnTaggedInstances",
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:us-east-1:111122223333:instance/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/{{tagKey}}": "{{tagValue}}"
                }
            }
        },
        {
            "Sid": "StatementForSsmApprovedDocuments",
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1::document/AWS-RunPowerShellScript",
                "arn:aws:ssm:us-east-1::document/AWS-RunShellScript"
            ]
        }
    ]
}
```

------

### Log groups for your pipeline in CloudWatch Logs
<a name="action-reference-EC2Deploy-logs"></a>

When CodePipeline runs the action, CodePipeline creates a log group using the name of the pipeline as follows. This enables you to scope down permissions to log resources using the pipeline name.

```
/aws/codepipeline/MyPipelineName
```

The following permissions for logging are included in the above updates for the service role.
+ logs:CreateLogGroup
+ logs:CreateLogStream
+ logs:PutLogEvents

To view logs in the console using the action details dialog page, the permission to view logs must be added to the console role. For more information, see the console permissions policy example in [Permissions required to view compute logs in the console](security-iam-permissions-console-logs.md).

## Deploy spec file reference
<a name="action-reference-EC2Deploy-spec-reference"></a>

When CodePipeline runs the action, you can specify a spec file to configure deployment to your instances. The deploy spec file specifies what to install and which lifecycle event hooks to run in response to deployment lifecycle events. The deploy spec file is always YAML-formatted. The deploy spec file is used to:
+ Map the source files in your application revision to their destinations on the instance.
+ Specify custom permissions for deployed files.
+ Specify scripts to be run on each instance at various stages of the deployment process.

The deploy spec file supports a subset of the deployment configuration parameters available in the CodeDeploy AppSpec file. You can use your existing AppSpec file directly; any unsupported parameters are ignored. For more information about the AppSpec file in CodeDeploy, see the [AppSpec file reference](https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html) in the *AWS CodeDeploy User Guide*.

The file deployment parameters are specified as follows. 
+ `files` - The deploy spec file designates the `source:` and `destination:` for the deployment files. 
+ `scripts` - The scripted events for the deployment. Two events are supported: `BeforeDeploy` and `AfterDeploy`.
+ `hooks` - The lifecycle hooks for the event. The following hooks are supported: `ApplicationStop`, `BeforeInstall`, `AfterInstall`, `ApplicationStart`, and `ValidateService`.
**Note**  
The `hooks` parameter is available for AppSpec compatibility with CodeDeploy and is supported only in version 0.0 (the AppSpec format). For this format, CodePipeline performs a best-effort mapping of the events.

You must use correct YAML spacing in the spec file. If the locations or number of spaces in the deploy spec file are incorrect, an error is raised. For more information about spacing, see the [YAML](http://www.yaml.org/) specification.

The following is an example deploy spec file.

```
version: 0.1
files:
  - source: /index.html
    destination: /var/www/html/
scripts:
  BeforeDeploy:
    - location: scripts/install_dependencies
      timeout: 300
      runas: myuser
  AfterDeploy:
    - location: scripts/start_server
      timeout: 300
      runas: myuser
```

To view an action configuration with the deploy spec file specified, see the example in [Action declaration with Deploy spec example](#action-reference-EC2Deploy-example-spec).

## Action declaration
<a name="action-reference-EC2Deploy-example"></a>

------
#### [ YAML ]

```
name: DeployEC2
actions:
- name: EC2
  actionTypeId:
    category: Deploy
    owner: AWS
    provider: EC2
    version: '1'
  runOrder: 1
  configuration:
    InstanceTagKey: Name
    InstanceTagValue: my-instances
    InstanceType: EC2
    PostScript: "test/script.sh"
    TargetDirectory: "/home/ec2-user/deploy"
  outputArtifacts: []
  inputArtifacts:
  - name: SourceArtifact
  region: us-east-1
```

------
#### [ JSON ]

```
{
    "name": "DeployEC2",
    "actions": [
        {
            "name": "EC2Deploy",
            "actionTypeId": {
                "category": "Deploy",
                "owner": "AWS",
                "provider": "EC2",
                "version": "1"
            },
            "runOrder": 1,
            "configuration": {
                "InstanceTagKey": "Name",
                "InstanceTagValue": "my-instances",
                "InstanceType": "EC2",
                "PostScript": "test/script.sh",
                "TargetDirectory": "/home/ec2-user/deploy"
            },
            "outputArtifacts": [],
            "inputArtifacts": [
                {
                    "name": "SourceArtifact"
                }
            ],
            "region": "us-east-1"
        }
    ]
},
```

------

## Action declaration with Deploy spec example
<a name="action-reference-EC2Deploy-example-spec"></a>

------
#### [ YAML ]

```
name: DeployEC2
actions:
- name: EC2
  actionTypeId:
    category: Deploy
    owner: AWS
    provider: EC2
    version: '1'
  runOrder: 1
  configuration:
    DeploySpec: "deployspec.yaml"
    InstanceTagKey: Name
    InstanceTagValue: my-instances
    InstanceType: EC2
  outputArtifacts: []
  inputArtifacts:
  - name: SourceArtifact
  region: us-east-1
```

------
#### [ JSON ]

```
{
    "name": "DeployEC2",
    "actions": [
        {
            "name": "EC2Deploy",
            "actionTypeId": {
                "category": "Deploy",
                "owner": "AWS",
                "provider": "EC2",
                "version": "1"
            },
            "runOrder": 1,
            "configuration": {
                "DeploySpec": "deployspec.yaml",
                "InstanceTagKey": "Name",
                "InstanceTagValue": "my-instances",
                "InstanceType": "EC2"
            },
            "outputArtifacts": [],
            "inputArtifacts": [
                {
                    "name": "SourceArtifact"
                }
            ],
            "region": "us-east-1"
        }
    ]
},
```

------

## See also
<a name="action-reference-EC2Deploy-links"></a>

The following related resources can help you as you work with this action.
+ [Tutorial: Deploy to Amazon EC2 instances with CodePipeline](tutorials-ec2-deploy.md) – This tutorial walks you through creating the EC2 instances to which you deploy a script file, along with creating the pipeline that uses the EC2 action.
+ [EC2 Deploy action fails with an error message `No such file`](troubleshooting.md#troubleshooting-ec2-deploy) – This topic describes troubleshooting for file not found errors with the EC2 action.

# Amazon ECR source action reference
<a name="action-reference-ECR"></a>

Triggers the pipeline when a new image is pushed to the Amazon ECR repository. This action provides an image definitions file referencing the URI for the image that was pushed to Amazon ECR. This source action is often used in conjunction with another source action, such as CodeCommit, to provide a source location for all other source artifacts. For more information, see [Tutorial: Create a pipeline with an Amazon ECR source and ECS-to-CodeDeploy deployment](tutorials-ecs-ecr-codedeploy.md).

When you use the console to create or edit your pipeline, CodePipeline creates an EventBridge rule that starts your pipeline when a change occurs in the repository.

**Note**  
For Amazon ECR, Amazon S3, or CodeCommit sources, you can also create a source override by using an input transform entry to use the `revisionValue` in EventBridge for your pipeline event, where the `revisionValue` is derived from the source event variable for your object key, commit, or image ID. For more information, see the optional step for the input transform entry included in the procedures under [Amazon ECR source actions and EventBridge resources](create-cwe-ecr-source.md), [Connecting to Amazon S3 source actions with a source enabled for events](create-S3-source-events.md), or [CodeCommit source actions and EventBridge](triggering.md).

You must have already created an Amazon ECR repository and pushed an image before you connect the pipeline through an Amazon ECR action.
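
As a sketch, creating the repository and pushing an initial image with the AWS CLI and Docker might look like the following. The account ID, Region, repository name, and image name are placeholders.

```shell
# Create the repository (placeholder names throughout).
aws ecr create-repository --repository-name my-image-repo --region us-east-1

# Authenticate Docker to the registry.
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com

# Tag a locally built image and push it.
docker tag my-image:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-image-repo:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-image-repo:latest
```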

**Topics**
+ [Action type](#action-reference-ECR-type)
+ [Configuration parameters](#action-reference-ECR-config)
+ [Input artifacts](#action-reference-ECR-input)
+ [Output artifacts](#action-reference-ECR-output)
+ [Output variables](#action-reference-ECR-variables)
+ [Service role permissions: Amazon ECR action](#edit-role-ecr)
+ [Action declaration (Amazon ECR example)](#action-reference-ECR-example)
+ [See also](#action-reference-ECR-links)

## Action type
<a name="action-reference-ECR-type"></a>
+ Category: `Source`
+ Owner: `AWS`
+ Provider: `ECR`
+ Version: `1`

## Configuration parameters
<a name="action-reference-ECR-config"></a>

**RepositoryName**  
Required: Yes  
The name of the Amazon ECR repository where the image was pushed.

**ImageTag**  
Required: No  
The tag used for the image.  
If a value for `ImageTag` is not specified, the value defaults to `latest`.

## Input artifacts
<a name="action-reference-ECR-input"></a>
+ **Number of artifacts:** `0`
+ **Description:** Input artifacts do not apply for this action type.

## Output artifacts
<a name="action-reference-ECR-output"></a>
+ **Number of artifacts:** `1` 
+ **Description:** This action produces an artifact that contains an `imageDetail.json` file that contains the URI for the image that triggered the pipeline execution. For information about the `imageDetail.json` file, see [imageDetail.json file for Amazon ECS blue/green deployment actions](file-reference.md#file-reference-ecs-bluegreen).
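
The following is an abbreviated sketch of an `imageDetail.json` file with placeholder values; see the file reference linked above for the authoritative field list.

```json
{
    "ImageSizeInBytes": "44728918",
    "ImageDigest": "sha256:EXAMPLEdigest",
    "Version": "1.0",
    "ImagePushedAt": "Mon Jan 21 20:04:00 UTC 2019",
    "RegistryId": "111122223333",
    "RepositoryName": "my-image-repo",
    "ImageTags": [ "latest" ],
    "ImageURI": "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-image-repo@sha256:EXAMPLEdigest"
}
```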

## Output variables
<a name="action-reference-ECR-variables"></a>

When configured, this action produces variables that can be referenced by the action configuration of a downstream action in the pipeline. This action produces variables which can be viewed as output variables, even if the action doesn't have a namespace. You configure an action with a namespace to make those variables available to the configuration of downstream actions.

For more information, see [Variables reference](reference-variables.md).

**RegistryId**  
The AWS account ID associated with the registry that contains the repository.

**RepositoryName**  
The name of the Amazon ECR repository where the image was pushed.

**ImageTag**  
The tag used for the image.  
The `ImageTag` output variable is not emitted when the source revision is overridden.

**ImageDigest**  
The `sha256` digest of the image manifest.

**ImageURI**  
The URI for the image.
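
For example, if this source action is configured with the namespace `SourceVariables`, a downstream action can reference the image URI with the `#{namespace.variable}` syntax. The following fragment is illustrative; the namespace and the environment variable name are assumptions, and the fragment is shown for a downstream CodeBuild action configuration:

```json
"Configuration": {
    "EnvironmentVariables": "[{\"name\":\"IMAGE_URI\",\"value\":\"#{SourceVariables.ImageURI}\"}]"
}
```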

## Service role permissions: Amazon ECR action
<a name="edit-role-ecr"></a>

For Amazon ECR support, add the following to your policy statement:

```
{
    "Effect": "Allow",
    "Action": [
        "ecr:DescribeImages"
    ],
    "Resource": "resource_ARN"
},
```

For more information about this action, see [Amazon ECR source action reference](#action-reference-ECR).

## Action declaration (Amazon ECR example)
<a name="action-reference-ECR-example"></a>

------
#### [ YAML ]

```
Name: Source
Actions:
  - InputArtifacts: []
    ActionTypeId:
      Version: '1'
      Owner: AWS
      Category: Source
      Provider: ECR
    OutputArtifacts:
      - Name: SourceArtifact
    RunOrder: 1
    Configuration:
      ImageTag: latest
      RepositoryName: my-image-repo
    Name: ImageSource
```

------
#### [ JSON ]

```
{
    "Name": "Source",
    "Actions": [
        {
            "InputArtifacts": [],
            "ActionTypeId": {
                "Version": "1",
                "Owner": "AWS",
                "Category": "Source",
                "Provider": "ECR"
            },
            "OutputArtifacts": [
                {
                    "Name": "SourceArtifact"
                }
            ],
            "RunOrder": 1,
            "Configuration": {
                "ImageTag": "latest",
                "RepositoryName": "my-image-repo"
            },
            "Name": "ImageSource"
        }
    ]
},
```

------

## See also
<a name="action-reference-ECR-links"></a>

The following related resources can help you as you work with this action.
+ [Tutorial: Create a pipeline with an Amazon ECR source and ECS-to-CodeDeploy deployment](tutorials-ecs-ecr-codedeploy.md) – This tutorial provides a sample AppSpec file and a sample CodeDeploy application and deployment group to create a pipeline with a CodeCommit and Amazon ECR source that deploys to Amazon ECS instances.

# `ECRBuildAndPublish` build action reference
<a name="action-reference-ECRBuildAndPublish"></a>

This build action allows you to automate building and pushing a new image when a change occurs in your source. The action builds an image from the specified Dockerfile location and pushes the image. This build action is not the same as the Amazon ECR source action in CodePipeline, which triggers the pipeline when a change occurs in your Amazon ECR source repository. For information about that action, see [Amazon ECR source action reference](action-reference-ECR.md).

This is not a source action that will trigger the pipeline. This action builds an image and pushes it to your Amazon ECR image repository.

You must have already created an Amazon ECR repository and have added a Dockerfile to your source code repository, such as GitHub, before you add the action to your pipeline.
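
For example, a minimal Dockerfile at the root of your source repository might look like the following; the base image and file contents are illustrative only:

```dockerfile
# Illustrative Dockerfile for a trivial image build.
FROM public.ecr.aws/amazonlinux/amazonlinux:2023
COPY index.html /usr/share/app/index.html
CMD ["cat", "/usr/share/app/index.html"]
```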

**Important**  
This action uses CodePipeline managed CodeBuild compute to run commands in a build environment. Running this action incurs separate charges in AWS CodeBuild.

**Note**  
This action is only available for V2 type pipelines.

**Topics**
+ [Action type](#action-reference-ECRBuildAndPublish-type)
+ [Configuration parameters](#action-reference-ECRBuildAndPublish-config)
+ [Input artifacts](#action-reference-ECRBuildAndPublish-input)
+ [Output artifacts](#action-reference-ECRBuildAndPublish-output)
+ [Output variables](#action-reference-ECRBuildAndPublish-output-variables)
+ [Service role permissions: `ECRBuildAndPublish` action](#edit-role-ECRBuildAndPublish)
+ [Action declaration](#action-reference-ECRBuildAndPublish-example)
+ [See also](#action-reference-ECRBuildAndPublish-links)

## Action type
<a name="action-reference-ECRBuildAndPublish-type"></a>
+ Category: `Build`
+ Owner: `AWS`
+ Provider: `ECRBuildAndPublish`
+ Version: `1`

## Configuration parameters
<a name="action-reference-ECRBuildAndPublish-config"></a>

**ECRRepositoryName**  
Required: Yes  
The name of the Amazon ECR repository where the image is pushed.

**DockerFilePath**  
Required: No  
The location of the Dockerfile used to build the image. Optionally, you can provide an alternate Dockerfile location if the file is not at the root level.  
If a value for `DockerFilePath` is not specified, the value defaults to the source repository root level.

**ImageTags**  
Required: No  
The tags used for the image. You can enter multiple tags as a comma-delimited list of strings.  
If a value for `ImageTags` is not specified, the value defaults to `latest`.

**RegistryType**  
Required: No  
Specifies whether the repository is public or private. Valid values are `private | public`.  
If a value for `RegistryType` is not specified, the value defaults to `private`.

## Input artifacts
<a name="action-reference-ECRBuildAndPublish-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** The artifact produced by the source action that contains the Dockerfile needed to build the image.

## Output artifacts
<a name="action-reference-ECRBuildAndPublish-output"></a>
+ **Number of artifacts:** `0` 

## Output variables
<a name="action-reference-ECRBuildAndPublish-output-variables"></a>

When configured, this action produces variables that can be referenced by the action configuration of a downstream action in the pipeline. This action produces variables which can be viewed as output variables, even if the action doesn't have a namespace. You configure an action with a namespace to make those variables available to the configuration of downstream actions. 

For more information, see [Variables reference](reference-variables.md).

**ECRImageDigestId**  
The `sha256` digest of the image manifest.

**ECRRepositoryName**  
The name of the Amazon ECR repository where the image was pushed.

## Service role permissions: `ECRBuildAndPublish` action
<a name="edit-role-ECRBuildAndPublish"></a>

For `ECRBuildAndPublish` action support, add the following to your policy statement:

```
{
    "Statement": [
        {
            "Sid": "ECRRepositoryAllResourcePolicy",
            "Effect": "Allow",
            "Action": [
                "ecr:DescribeRepositories",
                "ecr:GetAuthorizationToken",
                "ecr-public:DescribeRepositories",
                "ecr-public:GetAuthorizationToken"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:PutImage",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchCheckLayerAvailability"
            ],
            "Resource": "PrivateECR_Resource_ARN"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr-public:GetAuthorizationToken",
                "ecr-public:DescribeRepositories",
                "ecr-public:InitiateLayerUpload",
                "ecr-public:UploadLayerPart",
                "ecr-public:CompleteLayerUpload",
                "ecr-public:PutImage",
                "ecr-public:BatchCheckLayerAvailability",
                "sts:GetServiceBearerToken"
            ],
            "Resource": "PublicECR_Resource_ARN"
        },
        {
            "Effect": "Allow",
            "Action": [
                "sts:GetServiceBearerToken"
            ],
            "Resource": "*"
        }
    ]
}
```

In addition, if not already added for the `Commands` action, add the following permissions to your service role in order to view CloudWatch logs.

```
{
    "Effect": "Allow",
    "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream", 
        "logs:PutLogEvents"
    ],
    "Resource": "resource_ARN"
},
```

**Note**  
Scope down the permissions to the pipeline resource level by using resource-level permissions in the service role policy statement.
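
For example, the logging permissions can be scoped down to the log group that CodePipeline creates for your pipeline, as in the following statement fragment:

```json
{
    "Effect": "Allow",
    "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
    ],
    "Resource": "arn:aws:logs:us-east-1:111122223333:log-group:/aws/codepipeline/MyPipelineName:*"
},
```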

For more information about this action, see [`ECRBuildAndPublish` build action reference](#action-reference-ECRBuildAndPublish).

## Action declaration
<a name="action-reference-ECRBuildAndPublish-example"></a>

------
#### [ YAML ]

```
name: ECRBuild
actionTypeId:
  category: Build
  owner: AWS
  provider: ECRBuildAndPublish
  version: '1'
runOrder: 1
configuration:
  ECRRepositoryName: actions/my-imagerepo
outputArtifacts: []
inputArtifacts:
- name: SourceArtifact
region: us-east-1
namespace: BuildVariables
```

------
#### [ JSON ]

```
{
    "name": "ECRBuild",
    "actionTypeId": {
        "category": "Build",
        "owner": "AWS",
        "provider": "ECRBuildAndPublish",
        "version": "1"
    },
    "runOrder": 1,
    "configuration": {
        "ECRRepositoryName": "actions/my-imagerepo"
    },
    "outputArtifacts": [],
    "inputArtifacts": [
        {
            "name": "SourceArtifact"
        }
    ],
    "region": "us-east-1",
    "namespace": "BuildVariables"
},
```

------

## See also
<a name="action-reference-ECRBuildAndPublish-links"></a>

The following related resources can help you as you work with this action.
+ [Tutorial: Build and push a Docker image to Amazon ECR with CodePipeline (V2 type)](tutorials-ecr-build-publish.md) – This tutorial provides a sample Dockerfile and instructions to create a pipeline that pushes your image to ECR on a change to your source repository and then deploys to Amazon ECS.

# Amazon Elastic Container Service and CodeDeploy blue-green deploy action reference
<a name="action-reference-ECSbluegreen"></a>

You can configure a pipeline in AWS CodePipeline that deploys container applications using a blue/green deployment. In a blue/green deployment, you can launch a new version of your application alongside the old version, and you can test the new version before you reroute traffic to it. You can also monitor the deployment process and rapidly roll back if there is an issue.

The completed pipeline detects changes to your images or task definition file and uses CodeDeploy to route and deploy traffic to an Amazon ECS cluster and load balancer. CodeDeploy creates a new listener on your load balancer, which can target your new task through a special port. You can also configure the pipeline to use a source location, such as a CodeCommit repository, where your Amazon ECS task definition is stored.

Before you create your pipeline, you must have already created the Amazon ECS resources, the CodeDeploy resources, and the load balancer and target group. You must have already tagged and stored the image in your image repository, and uploaded the task definition and AppSpec file to your file repository.

**Note**  
This topic describes the Amazon ECS to CodeDeploy blue/green deployment action for CodePipeline. For reference information about Amazon ECS standard deployment actions in CodePipeline, see [Amazon Elastic Container Service deploy action reference](action-reference-ECS.md).

**Topics**
+ [Action type](#action-reference-ECSbluegreen-type)
+ [Configuration parameters](#action-reference-ECSbluegreen-config)
+ [Input artifacts](#action-reference-ECSbluegreen-input)
+ [Output artifacts](#action-reference-ECSbluegreen-output)
+ [Service role permissions: `CodeDeployToECS` action](#edit-role-codedeploy-ecs)
+ [Action declaration](#action-reference-ECSbluegreen-example)
+ [See also](#action-reference-ECSbluegreen-links)

## Action type
<a name="action-reference-ECSbluegreen-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `CodeDeployToECS`
+ Version: `1`

## Configuration parameters
<a name="action-reference-ECSbluegreen-config"></a>

**ApplicationName**  
Required: Yes  
The name of the application in CodeDeploy. Before you create your pipeline, you must have already created the application in CodeDeploy.

**DeploymentGroupName**  
Required: Yes  
The deployment group specified for Amazon ECS task sets that you created for your CodeDeploy application. Before you create your pipeline, you must have already created the deployment group in CodeDeploy.

**TaskDefinitionTemplateArtifact**  
Required: Yes  
The name of the input artifact that provides the task definition file to the deployment action. This is generally the name of the output artifact from the source action. When you use the console, the default name for the source action output artifact is `SourceArtifact`.

**AppSpecTemplateArtifact**  
Required: Yes  
The name of the input artifact that provides the AppSpec file to the deployment action. This value is updated when your pipeline runs. This is generally the name of the output artifact from the source action. When you use the console, the default name for the source action output artifact is `SourceArtifact`. For `TaskDefinition` in AppSpec file, you can keep the `<TASK_DEFINITION>` placeholder text as shown [here](tutorials-ecs-ecr-codedeploy.md#tutorials-ecs-ecr-codedeploy-taskdefinition).

**AppSpecTemplatePath**  
Required: No  
The file name of the AppSpec file stored in the pipeline source file location, such as your pipeline's CodeCommit repository. The default file name is `appspec.yaml`. If your AppSpec file has the same name and is stored at the root level in your file repository, you do not need to provide the file name. If the path is not the default, enter the path and file name.

**TaskDefinitionTemplatePath**  
Required: No  
The file name of the task definition stored in the pipeline file source location, such as your pipeline's CodeCommit repository. The default file name is `taskdef.json`. If your task definition file has the same name and is stored at the root level in your file repository, you do not need to provide the file name. If the path is not the default, enter the path and file name.

**Image<Number>ArtifactName**  
Required: No  
The name of the input artifact that provides the image to the deployment action. This is generally the image repository's output artifact, such as output from the Amazon ECR source action.  
Available values for `<Number>` are 1 through 4.

**Image<Number>ContainerName**  
Required: No  
The name of the image available from the image repository, such as the Amazon ECR source repository.  
Available values for `<Number>` are 1 through 4.

## Input artifacts
<a name="action-reference-ECSbluegreen-input"></a>
+ **Number of Artifacts:** `1 to 5`
+ **Description:** The `CodeDeployToECS` action first looks for the task definition file and the AppSpec file in the source file repository, next looks for the image in the image repository, then dynamically generates a new revision of the task definition, and finally runs the AppSpec commands to deploy the task set and container to the cluster.

  The `CodeDeployToECS` action looks for an `imageDetail.json` file that maps the image URI to the image. When you commit a change to your Amazon ECR image repository, the pipeline ECR source action creates an `imageDetail.json` file for that commit. You can also manually add an `imageDetail.json` file for a pipeline where the action is not automated. For information about the `imageDetail.json` file, see [imageDetail.json file for Amazon ECS blue/green deployment actions](file-reference.md#file-reference-ecs-bluegreen).
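  As a sketch, an `imageDetail.json` file maps the image URI to the pushed image. The repository URI and digest below are placeholders:

  ```
  {
      "ImageURI": "111122223333.dkr.ecr.us-west-2.amazonaws.com/my-image-repo@sha256:EXAMPLE"
  }
  ```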

  The `CodeDeployToECS` action dynamically generates a new revision of the task definition. In this phase, the action replaces placeholders in the task definition file with the image URIs retrieved from the `imageDetail.json` files. For example, if you set `IMAGE1_NAME` as the `Image1ContainerName` parameter, specify the placeholder `<IMAGE1_NAME>` as the value of the `image` field in your task definition file. The `CodeDeployToECS` action then replaces the placeholder `<IMAGE1_NAME>` with the actual image URI retrieved from the `imageDetail.json` file in the artifact that you specify as `Image1ArtifactName`.
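  For example, a container definition in `taskdef.json` might use the placeholder as follows. This is an illustrative fragment only; the other required task definition fields are omitted:

  ```
  {
      "name": "sample-app",
      "image": "<IMAGE1_NAME>"
  }
  ```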

  For task definition updates, the CodeDeploy `AppSpec.yaml` file contains the `TaskDefinition` property. 

  ```
  TaskDefinition: <TASK_DEFINITION>
  ```

  This property will be updated by the `CodeDeployToECS` action after the new task definition is created.

  For the value of the `TaskDefinition` field, the placeholder text must be `<TASK_DEFINITION>`. The `CodeDeployToECS` action replaces this placeholder with the actual ARN of the dynamically generated task definition.

## Output artifacts
<a name="action-reference-ECSbluegreen-output"></a>
+ **Number of Artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Service role permissions: `CodeDeployToECS` action
<a name="edit-role-codedeploy-ecs"></a>

For the `CodeDeployToECS` action (blue/green deployments), the following are the minimum permissions needed to create pipelines with a CodeDeploy to Amazon ECS blue/green deployment action.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowCodeDeployDeploymentActions",
            "Action": [
                "codedeploy:CreateDeployment",
                "codedeploy:GetDeployment"
            ],
            "Resource": [
                "arn:aws:codedeploy:*:111122223333:deploymentgroup:[[ApplicationName]]/*"
            ],
            "Effect": "Allow"
        },
        {
            "Sid": "AllowCodeDeployApplicationActions",
            "Action": [
                "codedeploy:GetApplication",
                "codedeploy:GetApplicationRevision",
                "codedeploy:RegisterApplicationRevision"
            ],
            "Resource": [
                "arn:aws:codedeploy:*:111122223333:application:[[ApplicationName]]",
                "arn:aws:codedeploy:*:111122223333:application:[[ApplicationName]]/*"
            ],
            "Effect": "Allow"
        },
        {
            "Sid": "AllowCodeDeployDeploymentConfigAccess",
            "Action": [
                "codedeploy:GetDeploymentConfig"
            ],
            "Resource": [
                "arn:aws:codedeploy:*:111122223333:deploymentconfig:*"
            ],
            "Effect": "Allow"
        },
        {
            "Sid": "AllowECSRegisterTaskDefinition",
            "Action": [
                "ecs:RegisterTaskDefinition"
            ],
            "Resource": [
                "*"
            ],
            "Effect": "Allow"
        },
        {
            "Sid": "AllowPassRoleToECS",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": [
                "arn:aws:iam::111122223333:role/[[PassRoles]]"
            ],
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": [
                        "ecs.amazonaws.com",
                        "ecs-tasks.amazonaws.com"
                    ]
                }
            }
        }
    ]
}
```

------

You can opt in to using tagging authorization in Amazon ECS. By opting in, you must grant the following permissions: `ecs:TagResource`. For more information about how to opt in and to determine whether the permission is required and tag authorization is enforced, see [Tagging authorization timeline](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#tag-resources-timeline) in the Amazon Elastic Container Service Developer Guide.

You must also add the `iam:PassRole` permissions to use IAM roles for tasks. For more information, see [Amazon ECS task execution IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) and [IAM Roles for Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html).

You can also add `ecs-tasks.amazonaws.com` to the list of services under the `iam:PassedToService` condition, as shown in the above example.

## Action declaration
<a name="action-reference-ECSbluegreen-example"></a>

------
#### [ YAML ]

```
Name: Deploy
Actions:
  - Name: Deploy
    ActionTypeId:
      Category: Deploy
      Owner: AWS
      Provider: CodeDeployToECS
      Version: '1'
    RunOrder: 1
    Configuration:
      AppSpecTemplateArtifact: SourceArtifact
      ApplicationName: ecs-cd-application
      DeploymentGroupName: ecs-deployment-group
      Image1ArtifactName: MyImage
      Image1ContainerName: IMAGE1_NAME
      TaskDefinitionTemplatePath: taskdef.json
      AppSpecTemplatePath: appspec.yaml
      TaskDefinitionTemplateArtifact: SourceArtifact
    OutputArtifacts: []
    InputArtifacts:
      - Name: SourceArtifact
      - Name: MyImage
    Region: us-west-2
    Namespace: DeployVariables
```

------
#### [ JSON ]

```
{
    "Name": "Deploy",
    "Actions": [
        {
            "Name": "Deploy",
            "ActionTypeId": {
                "Category": "Deploy",
                "Owner": "AWS",
                "Provider": "CodeDeployToECS",
                "Version": "1"
            },
            "RunOrder": 1,
            "Configuration": {
                "AppSpecTemplateArtifact": "SourceArtifact",
                "ApplicationName": "ecs-cd-application",
                "DeploymentGroupName": "ecs-deployment-group",
                "Image1ArtifactName": "MyImage",
                "Image1ContainerName": "IMAGE1_NAME",
                "TaskDefinitionTemplatePath": "taskdef.json",
                "AppSpecTemplatePath": "appspec.yaml",
                "TaskDefinitionTemplateArtifact": "SourceArtifact"
            },
            "OutputArtifacts": [],
            "InputArtifacts": [
                {
                    "Name": "SourceArtifact"
                },
                {
                    "Name": "MyImage"
                }
            ],
            "Region": "us-west-2",
            "Namespace": "DeployVariables"
        }
    ]
}
```

------

## See also
<a name="action-reference-ECSbluegreen-links"></a>

The following related resources can help you as you work with this action.
+ [Tutorial: Create a pipeline with an Amazon ECR source and ECS-to-CodeDeploy deployment](tutorials-ecs-ecr-codedeploy.md) – This tutorial walks you through creation of the CodeDeploy and Amazon ECS resources you need for a blue/green deployment. The tutorial shows you how to push a Docker image to Amazon ECR and create an Amazon ECS task definition that lists your Docker image name, container name, Amazon ECS service name, and load balancer configuration. The tutorial then walks you through creating the AppSpec file and pipeline for your deployment.
**Note**  
This topic and tutorial describe the CodeDeploy/ECS blue/green action for CodePipeline. For information about ECS standard actions in CodePipeline, see [Tutorial: Continuous Deployment with CodePipeline](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cd-pipeline.html).
+ *AWS CodeDeploy User Guide* – For information about how to use the load balancer, production listener, target groups, and your Amazon ECS application in a blue/green deployment, see [Tutorial: Deploy an Amazon ECS Service](https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorial-ecs-deployment.html). This reference information in the *AWS CodeDeploy User Guide* provides an overview for blue/green deployments with Amazon ECS and AWS CodeDeploy.
+ *Amazon Elastic Container Service Developer Guide* – For information about working with Docker images and containers, ECS services and clusters, and ECS task sets, see [What Is Amazon ECS?](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/)

# Amazon Elastic Container Service deploy action reference
<a name="action-reference-ECS"></a>

You can use an Amazon ECS action to deploy an Amazon ECS service and task set. An Amazon ECS service is a container application that is deployed to an Amazon ECS cluster. An Amazon ECS cluster is a collection of instances that host your container application in the cloud. The deployment requires a task definition that you create in Amazon ECS and an image definitions file that CodePipeline uses to deploy the image.

**Important**  
The Amazon ECS standard deployment action for CodePipeline creates its own revision of the task definition based on the revision used by the Amazon ECS service. If you create new revisions for the task definition without updating the Amazon ECS service, the deployment action will ignore those revisions.

Before you create your pipeline, you must have already created the Amazon ECS resources, tagged and stored the image in your image repository, and uploaded the BuildSpec file to your file repository.

**Note**  
This reference topic describes the Amazon ECS standard deployment action for CodePipeline. For reference information about Amazon ECS to CodeDeploy blue/green deployment actions in CodePipeline, see [Amazon Elastic Container Service and CodeDeploy blue-green deploy action reference](action-reference-ECSbluegreen.md).

**Topics**
+ [Action type](#action-reference-ECS-type)
+ [Configuration parameters](#action-reference-ECS-config)
+ [Input artifacts](#action-reference-ECS-input)
+ [Output artifacts](#action-reference-ECS-output)
+ [Service role permissions: Amazon ECS standard action](#edit-role-ecs)
+ [Action declaration](#action-reference-ECS-example)
+ [See also](#action-reference-ECS-links)

## Action type
<a name="action-reference-ECS-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `ECS`
+ Version: `1`

## Configuration parameters
<a name="action-reference-ECS-config"></a>

**ClusterName**  
Required: Yes  
The name of the cluster that you created in Amazon ECS.

**ServiceName**  
Required: Yes  
The Amazon ECS service that you created in Amazon ECS.

**FileName**  
Required: No  
The name of your image definitions file, the JSON file that describes your service's container name and the image and tag. You use this file for ECS standard deployments. For more information, see [Input artifacts](#action-reference-ECS-input) and [imagedefinitions.json file for Amazon ECS standard deployment actions](file-reference.md#pipelines-create-image-definitions).

**DeploymentTimeout**  
Required: No  
The Amazon ECS deployment action timeout in minutes. The timeout is configurable up to the maximum default timeout for this action. For example:   

```
"DeploymentTimeout": "15"
```

## Input artifacts
<a name="action-reference-ECS-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** The action looks for an `imagedefinitions.json` file in the source file repository for the pipeline. An image definitions document is a JSON file that describes your Amazon ECS container name and the image and tag. CodePipeline uses the file to retrieve the image from your image repository such as Amazon ECR. You can manually add an `imagedefinitions.json` file for a pipeline where the action is not automated. For information about the `imagedefinitions.json` file, see [imagedefinitions.json file for Amazon ECS standard deployment actions](file-reference.md#pipelines-create-image-definitions).

  The action requires an existing image that has already been pushed to your image repository. Because the image mapping is provided by the `imagedefinitions.json` file, the action does not require that the Amazon ECR source be included as a source action in the pipeline.
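  As a sketch, an `imagedefinitions.json` file is a JSON array of container name and image URI pairs. The names and URI below are placeholders:

  ```
  [
      {
          "name": "sample-app",
          "imageUri": "111122223333.dkr.ecr.us-west-2.amazonaws.com/ecs-repo:latest"
      }
  ]
  ```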

## Output artifacts
<a name="action-reference-ECS-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Service role permissions: Amazon ECS standard action
<a name="edit-role-ecs"></a>

For Amazon ECS, the following are the minimum permissions needed to create pipelines with an Amazon ECS deploy action.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "TaskDefinitionPermissions",
            "Effect": "Allow",
            "Action": [
                "ecs:DescribeTaskDefinition",
                "ecs:RegisterTaskDefinition"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "ECSServicePermissions",
            "Effect": "Allow",
            "Action": [
                "ecs:DescribeServices",
                "ecs:UpdateService"
            ],
            "Resource": [
                "arn:aws:ecs:*:111122223333:service/[[clusters]]/*"
            ]
        },
        {
            "Sid": "ECSTagResource",
            "Effect": "Allow",
            "Action": [
                "ecs:TagResource"
            ],
            "Resource": [
                "arn:aws:ecs:*:111122223333:task-definition/[[taskDefinitions]]:*"
            ],
            "Condition": {
                "StringEquals": {
                    "ecs:CreateAction": [
                        "RegisterTaskDefinition"
                    ]
                }
            }
        },
        {
            "Sid": "IamPassRolePermissions",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": [
                "arn:aws:iam::111122223333:role/[[passRoles]]"
            ],
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": [
                        "ecs.amazonaws.com",
                        "ecs-tasks.amazonaws.com"
                    ]
                }
            }
        }
    ]
}
```

------

You can opt in to using tagging authorization in Amazon ECS. By opting in, you must grant the following permissions: `ecs:TagResource`. For more information about how to opt in and to determine whether the permission is required and tag authorization is enforced, see [Tagging authorization timeline](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-account-settings.html#tag-resources-timeline) in the Amazon Elastic Container Service Developer Guide.

You must add the `iam:PassRole` permissions to use IAM roles for tasks, as shown in the policy above. For more information, see [Amazon ECS task execution IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) and [IAM Roles for Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html).

## Action declaration
<a name="action-reference-ECS-example"></a>

------
#### [ YAML ]

```
Name: DeployECS
ActionTypeId:
  Category: Deploy
  Owner: AWS
  Provider: ECS
  Version: '1'
RunOrder: 2
Configuration:
  ClusterName: my-ecs-cluster
  ServiceName: sample-app-service
  FileName: imagedefinitions.json
  DeploymentTimeout: '15'
OutputArtifacts: []
InputArtifacts:
  - Name: my-image
```

------
#### [ JSON ]

```
{
    "Name": "DeployECS",
    "ActionTypeId": {
        "Category": "Deploy",
        "Owner": "AWS",
        "Provider": "ECS",
        "Version": "1"
    },
    "RunOrder": 2,
    "Configuration": {
        "ClusterName": "my-ecs-cluster",
        "ServiceName": "sample-app-service",
        "FileName": "imagedefinitions.json",
        "DeploymentTimeout": "15"
    },
    "OutputArtifacts": [],
    "InputArtifacts": [
        {
            "Name": "my-image"
        }
    ]
}
```

------

## See also
<a name="action-reference-ECS-links"></a>

The following related resources can help you as you work with this action.
+ See [Tutorial: Build and push a Docker image to Amazon ECR with CodePipeline (V2 type)](tutorials-ecr-build-publish.md) for a tutorial that shows you how to use the ECRBuildandPublish action to push an image and then use the ECS standard action to deploy to Amazon ECS.
+ [Tutorial: Continuous Deployment with CodePipeline](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cd-pipeline.html) – This tutorial shows you how to create a Dockerfile that you store in a source file repository such as CodeCommit. Next, the tutorial shows you how to incorporate a CodeBuild BuildSpec file that builds and pushes your Docker image to Amazon ECR and creates your imagedefinitions.json file. Finally, you create an Amazon ECS service and task definition, and then you create your pipeline with an Amazon ECS deployment action.
**Note**  
This topic and tutorial describe the Amazon ECS standard deployment action for CodePipeline. For information about Amazon ECS to CodeDeploy blue/green deployment actions in CodePipeline, see [Tutorial: Create a pipeline with an Amazon ECR source and ECS-to-CodeDeploy deployment](tutorials-ecs-ecr-codedeploy.md).
+ *Amazon Elastic Container Service Developer Guide* – For information about working with Docker images and containers, Amazon ECS services and clusters, and Amazon ECS task sets, see [What Is Amazon ECS?](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/)

# Amazon Elastic Kubernetes Service `EKS` deploy action reference
<a name="action-reference-EKS"></a>

You can use the `EKSDeploy` action to deploy an Amazon EKS service. The deployment requires a Kubernetes manifest file that CodePipeline uses to deploy the image.

Before you create your pipeline, you must have already created the Amazon EKS resources and have stored the image in your image repository. Optionally, you can provide VPC information for your cluster.

**Important**  
This action uses CodePipeline managed CodeBuild compute to run commands in a build environment. Running this action will incur separate charges in AWS CodeBuild.

**Note**  
The `EKS` deploy action is only available for V2 type pipelines.

The EKS action supports both public and private EKS clusters. Amazon EKS recommends private clusters; however, both types are supported.

The EKS action is supported for cross-account actions. To add a cross-account EKS action, add `actionRoleArn` from your target account in the action declaration.

**Topics**
+ [Action type](#action-reference-EKS-type)
+ [Configuration parameters](#action-reference-EKS-config)
+ [Input artifacts](#action-reference-EKS-input)
+ [Output artifacts](#action-reference-EKS-output)
+ [Environment variables](#action-reference-EKS-env-variables)
+ [Output variables](#action-reference-EKS-output-vars)
+ [Service role policy permissions](#action-reference-EKS-service-role)
+ [Action declaration](#action-reference-EKS-example)
+ [See also](#action-reference-EKS-links)

## Action type
<a name="action-reference-EKS-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `EKS`
+ Version: `1`

## Configuration parameters
<a name="action-reference-EKS-config"></a>

**ClusterName**  
Required: Yes  
The name of the cluster that you created in Amazon EKS.

**Options under Helm**  
The following are available options when **Helm** is the selected deployment tool.    
**HelmReleaseName**  
Required: Yes (Required only for **Helm** type)  
The release name for your deployment.  
**HelmChartLocation**  
Required: Yes (Required only for **Helm** type)  
The chart location for your deployment.  
**HelmValuesFiles**  
Required: No (Optional only for **Helm** type)  
To override Helm values files, enter a comma-separated list of the values files in the Helm chart location.

**Options under Kubectl**  
The following are available options when **Kubectl** is the selected deployment tool.    
**ManifestFiles**  
Required: Yes (Required only for **Kubectl** type)  
The name of your manifest file, the text file that describes your service's container name and the image and tag. You use this file to parameterize your image URI and other information, and you can use environment variables for this purpose.  
You store this file in the source repository for your pipeline.

**Namespace**  
Required: No  
The Kubernetes namespace to be used in `kubectl` or `helm` commands.

**Subnets**  
Required: No  
The subnets for the VPC for your cluster. These are part of the same VPC that is attached to your cluster. You can also specify subnets in that VPC that aren't already attached to your cluster.

**SecurityGroupIds**  
Required: No  
The security groups for the VPC for your cluster. These are part of the same VPC that is attached to your cluster. You can also specify security groups that aren't already attached to your cluster.

## Input artifacts
<a name="action-reference-EKS-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** The action looks for the Kubernetes manifest file or Helm chart in the source file repository for the pipeline. To use Helm charts in `.tgz` format stored in an S3 bucket, configure the S3 bucket and object key in your source action. For example, the object key provided would be `my-chart-0.1.0.tgz`.

## Output artifacts
<a name="action-reference-EKS-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Environment variables
<a name="action-reference-EKS-env-variables"></a>

Environment variables replace values, such as image repositories or image tags, in manifest files or Helm chart values files.

**Key**  
The key in a key-value environment variable pair, such as `$IMAGE_TAG`.

**Value**  
The value for the key-value pair, such as `v1.0`. The value can be parameterized with output variables from pipeline actions or with pipeline variables. For example, a pipeline can have an `ECRBuildAndPublish` action that creates an ECR image tagged with `${codepipeline.PipelineExecutionId}`, and the EKS action can deploy that image by using `${codepipeline.PipelineExecutionId}` as the value of the environment variable.
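For example, a deployment manifest can reference an environment variable in its image field, and the action substitutes the value at deployment time. This fragment is illustrative; the container name and repository URI are placeholders:

```
containers:
  - name: my-app
    image: 111122223333.dkr.ecr.us-west-2.amazonaws.com/my-repo:$IMAGE_TAG
```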

## Output variables
<a name="action-reference-EKS-output-vars"></a>

**EKSClusterName**  
The name of the Amazon EKS cluster.

## Service role policy permissions
<a name="action-reference-EKS-service-role"></a>

To run this action, the following permissions must be available in your pipeline's service role policy.
+ **EC2 actions:** When CodePipeline runs the action, EC2 instance permissions are required. Note that this is not the same as the EC2 instance role required when you create your EKS cluster.

  If you are using an existing service role, to use this action, you will need to add the following permissions for the service role.
  + ec2:CreateNetworkInterface
  + ec2:DescribeDhcpOptions
  + ec2:DescribeNetworkInterfaces
  + ec2:DeleteNetworkInterface
  + ec2:DescribeSubnets
  + ec2:DescribeSecurityGroups
  + ec2:DescribeVpcs
+ **EKS actions:** When CodePipeline runs the action, EKS cluster permissions are required. Note that this is not the same as the IAM EKS cluster role required when you create your EKS cluster.

  If you are using an existing service role, to use this action, you will need to add the following permission for the service role.
  + eks:DescribeCluster
+ **Log stream actions:** When CodePipeline runs the action, CodePipeline creates a log group using the name of the pipeline as follows. This enables you to scope down permissions to log resources using the pipeline name.

  ```
  /aws/codepipeline/MyPipelineName
  ```

  If you are using an existing service role, to use this action, you will need to add the following permissions for the service role.
  + logs:CreateLogGroup
  + logs:CreateLogStream
  + logs:PutLogEvents

In the service role policy statement, scope down the permissions to the resource level as shown in the following example.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "eks:DescribeCluster"
            ],
            "Resource": "arn:aws:eks:*:111122223333:cluster/YOUR_CLUSTER_NAME"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface",
                "ec2:CreateNetworkInterfacePermission",
                "ec2:DescribeDhcpOptions",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DeleteNetworkInterface",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeVpcs",
                "ec2:DescribeRouteTables"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:CreateLogGroup",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:111122223333:log-group:/aws/codepipeline/YOUR_PIPELINE_NAME",
                "arn:aws:logs:*:111122223333:log-group:/aws/codepipeline/YOUR_PIPELINE_NAME:*"
            ]
        }
    ]
}
```

------

To view logs in the console using the action details dialog page, the permission to view logs must be added to the console role. For more information, see the console permissions policy example in [Permissions required to view compute logs in the console](security-iam-permissions-console-logs.md).

### Adding the service role as an access entry for your cluster
<a name="action-reference-EKS-service-role-access"></a>

After the permissions are available in your pipeline's service role policy, you configure your cluster permissions by adding the CodePipeline service role as an access entry for your cluster.

You can also use an action role that has the updated permissions. For more information, see the tutorial example in [Step 4: Create an access entry for the CodePipeline service role](tutorials-eks-deploy.md#tutorials-eks-deploy-access-entry).
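As a sketch, you might create the access entry with the AWS CLI as follows. The cluster name, account ID, role name, and associated access policy are placeholders; choose an access policy scoped to your deployment needs:

```
aws eks create-access-entry \
    --cluster-name my-eks-cluster \
    --principal-arn arn:aws:iam::111122223333:role/MyCodePipelineServiceRole

aws eks associate-access-policy \
    --cluster-name my-eks-cluster \
    --principal-arn arn:aws:iam::111122223333:role/MyCodePipelineServiceRole \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
    --access-scope type=cluster
```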

## Action declaration
<a name="action-reference-EKS-example"></a>

------
#### [ YAML ]

```
Name: DeployEKS
ActionTypeId:
  Category: Deploy
  Owner: AWS
  Provider: EKS
  Version: '1'
RunOrder: 2
Configuration:
  ClusterName: my-eks-cluster
  ManifestFiles: ManifestFile.json
OutputArtifacts: []
InputArtifacts:
  - Name: SourceArtifact
```

------
#### [ JSON ]

```
{
    "Name": "DeployECS",
    "ActionTypeId": {
        "Category": "Deploy",
        "Owner": "AWS",
        "Provider": "EKS",
        "Version": "1"
    },
    "RunOrder": 2,
    "Configuration": {
        "ClusterName": "my-eks-cluster",
        "ManifestFiles": "ManifestFile.json"
    },
    "OutputArtifacts": [],
    "InputArtifacts": [
        {
            "Name": "SourceArtifact"
        }
    ]
}
```

------

## See also
<a name="action-reference-EKS-links"></a>

The following related resources can help you as you work with this action.
+ See [Tutorial: Deploy to Amazon EKS with CodePipeline](tutorials-eks-deploy.md) for a tutorial that demonstrates how to create an EKS cluster and Kubernetes manifest file to add the action to your pipeline.

# AWS Lambda deploy action reference
<a name="action-reference-LambdaDeploy"></a>

You use an AWS Lambda deploy action to deploy your application code for a serverless deployment. You can deploy a function version and shift traffic to it by using the following deployment strategies:
+ Canary and linear deployments for gradual traffic shifting
+ All-at-once deployments

**Note**  
This action is only supported for V2 type pipelines.

**Topics**
+ [Action type](#action-reference-LambdaDeploy-type)
+ [Configuration parameters](#action-reference-LambdaDeploy-parameters)
+ [Input artifacts](#action-reference-LambdaDeploy-input)
+ [Output artifacts](#action-reference-LambdaDeploy-output)
+ [Output variables](#action-reference-LambdaDeploy-output-variables)
+ [Service role policy permissions for the Lambda deploy action](#action-reference-LambdaDeploy-permissions-action)
+ [Action declaration](#action-reference-LambdaDeploy-example)
+ [See also](#action-reference-LambdaDeploy-links)

## Action type
<a name="action-reference-LambdaDeploy-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `Lambda`
+ Version: `1`

## Configuration parameters
<a name="action-reference-LambdaDeploy-parameters"></a>

**FunctionName**  
Required: Yes  
The name of the function that you created in Lambda, such as `MyLambdaFunction`.  
You must have already created a version.

**FunctionAlias**  
Required: No  
The alias of the Lambda function to deploy to, such as `live`. The alias must exist and must have one version behind it when the action execution starts. (That version is the rollback target.)  
If not provided, the action deploys the source artifact to `$LATEST` and creates a new version. In this use case, the deploy strategy and target version options are not available. 

**PublishedTargetVersion**  
Required: No  
The Lambda function version to deploy to **FunctionAlias**. The value can be a pipeline-level or action-level variable, such as `#{variables.lambdaTargetVersion}`. The version must already be published when the action execution starts.  
Required if no input artifact is provided.

**DeployStrategy**  
Required: No (Default is `AllAtOnce`)  
Determines the rate at which the Lambda deploy action shifts traffic from the original version of the Lambda function to the new version for **FunctionAlias**. Available deploy strategies are all-at-once, canary, and linear. Accepted formats:  
+ `AllAtOnce` - Shifts all traffic to the updated Lambda function at once. This is the default if no strategy is specified.
+ `Canary10Percent5Minutes` - Shifts 10 percent of traffic in the first increment. The remaining 90 percent is deployed five minutes later.

  The values for both percentage and minutes can be changed.
+ `Linear10PercentEvery1Minute` - Shifts 10 percent of traffic every minute until all traffic is shifted.

  The values for both percentage and minutes can be changed.
The following considerations apply for this field:  
+ Maximum total wait time is 2 days.
+ Only available when **FunctionAlias** is provided.
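
The strategy names encode a traffic-shift schedule. The following sketch illustrates, under the author's reading of the strategies above, what percentage of traffic points at the new version over time. It is an illustration only, not CodePipeline's implementation:

```
def traffic_schedule(strategy):
    """Return a list of (minutes_elapsed, percent_on_new_version) steps
    for the deploy strategies described above. Illustrative only."""
    if strategy == "AllAtOnce":
        return [(0, 100)]
    if strategy == "Canary10Percent5Minutes":
        # 10 percent immediately, the remaining 90 percent after 5 minutes.
        return [(0, 10), (5, 100)]
    if strategy == "Linear10PercentEvery1Minute":
        # 10 percent more each minute until all traffic is shifted.
        return [(minute, (minute + 1) * 10) for minute in range(10)]
    raise ValueError(f"Unknown strategy: {strategy}")
```

For example, `Canary10Percent5Minutes` resolves to two steps: 10 percent at minute 0, then 100 percent at minute 5.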


**Alarms**  
Required: No  
A comma-separated list of alarm names configured for the Lambda deployment. A maximum of 10 alarms can be added. The action fails when monitored alarms go to the ALARM state.
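
As a sketch of the value's shape, the comma-separated list and the 10-alarm limit described above can be validated like this. The helper is hypothetical, not part of CodePipeline:

```
def parse_alarms(alarms):
    """Split a comma-separated Alarms value and enforce the 10-alarm
    limit described above. Hypothetical helper for illustration."""
    names = [name.strip() for name in alarms.split(",") if name.strip()]
    if len(names) > 10:
        raise ValueError(f"A maximum of 10 alarms can be added; got {len(names)}")
    return names
```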

The following image shows an example of the Edit page for the action.

![\[The Edit action page for a new pipeline with the Lambda deploy action\]](http://docs.aws.amazon.com/codepipeline/latest/userguide/images/lambdadeploy-edit-screen.png)


## Input artifacts
<a name="action-reference-LambdaDeploy-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** The files provided, if any, to support the script actions during the deployment.

## Output artifacts
<a name="action-reference-LambdaDeploy-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Output variables
<a name="action-reference-LambdaDeploy-output-variables"></a>

When configured, this action produces variables that can be referenced by the action configuration of a downstream action in the pipeline. The action produces these variables even if it doesn't have a namespace; to make them available to the configuration of downstream actions, configure the action with a namespace. 

For more information, see [Variables reference](reference-variables.md).

**FunctionVersion**  
The new Lambda function version that was deployed.

## Service role policy permissions for the Lambda deploy action
<a name="action-reference-LambdaDeploy-permissions-action"></a>

When CodePipeline runs the action, the CodePipeline service role requires the following permissions, appropriately scoped down for access with least privilege.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "StatementForLambda",
            "Effect": "Allow",
            "Action": [
                "lambda:GetAlias",
                "lambda:GetFunctionConfiguration",
                "lambda:GetProvisionedConcurrencyConfig",
                "lambda:PublishVersion",
                "lambda:UpdateAlias",
                "lambda:UpdateFunctionCode"
            ],
            "Resource": [
                "arn:aws:lambda:us-east-1:111122223333:function:{{FunctionName}}",
                "arn:aws:lambda:us-east-1:111122223333:function:{{FunctionName}}:*"
            ]
        },
        {
            "Sid": "StatementForCloudWatch",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:DescribeAlarms"
            ],
            "Resource": [
                "arn:aws:cloudwatch:us-east-1:111122223333:alarm:{{AlarmNames}}"
            ]
        },
        {
            "Sid": "StatementForLogs1",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:111122223333:log-group:/us-east-1/codepipeline/{{pipelineName}}",
                "arn:aws:logs:us-east-1:111122223333:log-group:/us-east-1/codepipeline/{{pipelineName}}:*"
            ]
        },
        {
            "Sid": "StatementForLogs2",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:111122223333:log-group:/us-east-1/codepipeline/{{pipelineName}}:log-stream:*"
            ]
        }
    ]
}
```

------

## Action declaration
<a name="action-reference-LambdaDeploy-example"></a>

------
#### [ YAML ]

```
name: Deploy
actionTypeId:
  category: Deploy
  owner: AWS
  provider: Lambda
  version: '1'
runOrder: 1
configuration:
  DeployStrategy: Canary10Percent5Minutes
  FunctionAlias: aliasV1
  FunctionName: MyLambdaFunction
outputArtifacts: []
inputArtifacts:
- name: SourceArtifact
region: us-east-1
namespace: DeployVariables
```

------
#### [ JSON ]

```
{
    "name": "Deploy",
    "actionTypeId": {
        "category": "Deploy",
        "owner": "AWS",
        "provider": "Lambda",
        "version": "1"
    },
    "runOrder": 1,
    "configuration": {
        "DeployStrategy": "Canary10Percent5Minutes",
        "FunctionAlias": "aliasV1",
        "FunctionName": "MyLambdaFunction"
    },
    "outputArtifacts": [],
    "inputArtifacts": [
        {
            "name": "SourceArtifact"
        }
    ],
    "region": "us-east-1",
    "namespace": "DeployVariables"
},
```

------

## See also
<a name="action-reference-LambdaDeploy-links"></a>

The following related resources can help you as you work with this action.
+  [Tutorial: Lambda function deployments with CodePipeline](tutorials-lambda-deploy.md) – This tutorial walks you through the creation of a sample Lambda function where you will create an alias and version, add the zipped Lambda function to your source location, and run the Lambda action in your pipeline.

# Amazon S3 deploy action reference
<a name="action-reference-S3Deploy"></a>

You use an Amazon S3 deploy action to deploy files to an Amazon S3 bucket for static website hosting or archiving. You can specify whether to extract deployment files before uploading them to your bucket.

**Note**  
This reference topic describes the Amazon S3 deployment action for CodePipeline where the deployment platform is an Amazon S3 bucket configured for hosting. For reference information about the Amazon S3 source action in CodePipeline, see [Amazon S3 source action reference](action-reference-S3.md).

**Topics**
+ [Action type](#action-reference-S3Deploy-type)
+ [Configuration parameters](#action-reference-S3Deploy-config)
+ [Input artifacts](#action-reference-S3Deploy-input)
+ [Output artifacts](#action-reference-S3Deploy-output)
+ [Service role permissions: S3 deploy action](#edit-role-s3deploy)
+ [Example action configuration](#action-reference-S3Deploy-example)
+ [See also](#action-reference-S3Deploy-links)

## Action type
<a name="action-reference-S3Deploy-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `S3`
+ Version: `1`

## Configuration parameters
<a name="action-reference-S3Deploy-config"></a>

**BucketName**  
Required: Yes  
The name of the Amazon S3 bucket where files are to be deployed.

**Extract**  
Required: Yes  
If true, specifies that files are to be extracted before upload, such as when deploying files for a hosted static website. Otherwise, application files remain zipped for upload. If false, the `ObjectKey` parameter is required.

**ObjectKey**  
Required: Conditional (required if `Extract` is `false`)  
The name of the Amazon S3 object key that uniquely identifies the object in the S3 bucket.
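
The relationship between `Extract` and `ObjectKey` can be sketched as a validation step. This helper is illustrative only, not part of CodePipeline:

```
def validate_s3_deploy_config(config):
    """Check the S3 deploy action's Extract/ObjectKey relationship
    described above. Illustrative helper for a configuration dict."""
    if not config.get("BucketName"):
        raise ValueError("BucketName is required")
    extract = config.get("Extract")
    if extract not in ("true", "false"):
        raise ValueError("Extract is required and must be 'true' or 'false'")
    if extract == "false" and not config.get("ObjectKey"):
        raise ValueError("ObjectKey is required when Extract is 'false'")
```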

**KMSEncryptionKeyARN**  
Required: No  
The ARN of the AWS KMS encryption key for the host bucket. The `KMSEncryptionKeyARN` parameter encrypts uploaded artifacts with the provided AWS KMS key. For a KMS key, you can use the key ID, the key ARN, or the alias ARN.  
Aliases are recognized only in the account that created the KMS key. For cross-account actions, you can use only the key ID or key ARN to identify the key. Because cross-account actions use the role from the other account (AccountB), specifying the key ID uses the key from that account.
CodePipeline only supports symmetric KMS keys. Do not use an asymmetric KMS key to encrypt the data in your S3 bucket.

**CannedACL**  
Required: No  
The `CannedACL` parameter applies the specified [canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl) to objects deployed to Amazon S3. This overwrites any existing ACL that was applied to the object.

**CacheControl**  
Required: No  
The `CacheControl` parameter controls caching behavior for requests and responses for objects in the bucket. For a list of valid values, see the [`Cache-Control`](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9) header field for HTTP operations. To enter multiple values in `CacheControl`, use a comma between each value. You can add a space after each comma (optional), as shown in this example for the CLI:  

```
"CacheControl": "public, max-age=0, no-transform"
```

## Input artifacts
<a name="action-reference-S3Deploy-input"></a>
+ **Number of Artifacts:** `1`
+ **Description:** The files for deployment or archive are obtained from the source repository, zipped, and uploaded by CodePipeline.

## Output artifacts
<a name="action-reference-S3Deploy-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Service role permissions: S3 deploy action
<a name="edit-role-s3deploy"></a>

For S3 deploy action support, add the following to your policy statement:

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:PutObjectVersionAcl",
        "s3:GetBucketVersioning",
        "s3:GetBucketAcl",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::[[s3DeployBuckets]]",
        "arn:aws:s3:::[[s3DeployBuckets]]/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:ResourceAccount": "111122223333"
        }
      }
    }
  ]
}
```

------

For S3 deploy action support, if your S3 objects have tags, you must also add the following permissions to your policy statement:

```
"s3:GetObjectTagging",
"s3:GetObjectVersionTagging",
"s3:PutObjectTagging"
```

## Example action configuration
<a name="action-reference-S3Deploy-example"></a>

The following examples show the action configuration.

### Example configuration when `Extract` is set to `false`
<a name="action-reference-S3Deploy-extractfalse"></a>

The following example shows the default action configuration when the action is created with the `Extract` field set to `false`.

------
#### [ YAML ]

```
Name: Deploy
Actions:
  - Name: Deploy
    ActionTypeId:
      Category: Deploy
      Owner: AWS
      Provider: S3
      Version: '1'
    RunOrder: 1
    Configuration:
      BucketName: website-bucket
      Extract: 'false'
      ObjectKey: MyWebsite
    OutputArtifacts: []
    InputArtifacts:
      - Name: SourceArtifact
    Region: us-west-2
    Namespace: DeployVariables
```

------
#### [ JSON ]

```
{
    "Name": "Deploy",
    "Actions": [
        {
            "Name": "Deploy",
            "ActionTypeId": {
                "Category": "Deploy",
                "Owner": "AWS",
                "Provider": "S3",
                "Version": "1"
            },
            "RunOrder": 1,
            "Configuration": {
                "BucketName": "website-bucket",
                "Extract": "false",
                "ObjectKey": "MyWebsite"
                },
            "OutputArtifacts": [],
            "InputArtifacts": [
                {
                    "Name": "SourceArtifact"
                }
            ],
            "Region": "us-west-2",
            "Namespace": "DeployVariables"
        }
    ]
},
```

------

### Example configuration when `Extract` is set to `true`
<a name="action-reference-S3Deploy-extracttrue"></a>

The following example shows the default action configuration when the action is created with the `Extract` field set to `true`.

------
#### [ YAML ]

```
Name: Deploy
Actions:
  - Name: Deploy
    ActionTypeId:
      Category: Deploy
      Owner: AWS
      Provider: S3
      Version: '1'
    RunOrder: 1
    Configuration:
      BucketName: website-bucket
      Extract: 'true'
    OutputArtifacts: []
    InputArtifacts:
      - Name: SourceArtifact
    Region: us-west-2
    Namespace: DeployVariables
```

------
#### [ JSON ]

```
{
    "Name": "Deploy",
    "Actions": [
        {
            "Name": "Deploy",
            "ActionTypeId": {
                "Category": "Deploy",
                "Owner": "AWS",
                "Provider": "S3",
                "Version": "1"
            },
            "RunOrder": 1,
            "Configuration": {
                "BucketName": "website-bucket",
                "Extract": "true"
                },
            "OutputArtifacts": [],
            "InputArtifacts": [
                {
                    "Name": "SourceArtifact"
                }
            ],
            "Region": "us-west-2",
            "Namespace": "DeployVariables"
        }
    ]
},
```

------

## See also
<a name="action-reference-S3Deploy-links"></a>

The following related resources can help you as you work with this action.
+ [Tutorial: Create a pipeline that uses Amazon S3 as a deployment provider](tutorials-s3deploy.md) – This tutorial walks you through two examples for creating a pipeline with an S3 deploy action. You download sample files, upload the files to your CodeCommit repository, create your S3 bucket, and configure your bucket for hosting. Next, you use the CodePipeline console to create your pipeline and specify an Amazon S3 deployment configuration.
+ [Amazon S3 source action reference](action-reference-S3.md) – This action reference provides reference information and examples for Amazon S3 source actions in CodePipeline.

# Amazon S3 source action reference
<a name="action-reference-S3"></a>

Triggers the pipeline when a new object is uploaded to the configured bucket and object key.

**Note**  
This reference topic describes the Amazon S3 source action for CodePipeline where the source location is an Amazon S3 bucket configured for versioning. For reference information about the Amazon S3 deploy action in CodePipeline, see [Amazon S3 deploy action reference](action-reference-S3Deploy.md).

You can create an Amazon S3 bucket to use as the source location for your application files.

**Note**  
When you create your source bucket, make sure you enable versioning on the bucket. If you want to use an existing Amazon S3 bucket, see [Using versioning](http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) to enable versioning on an existing bucket.

If you use the console to create or edit your pipeline, CodePipeline creates an EventBridge rule that starts your pipeline when a change occurs in the S3 source bucket.

**Note**  
For Amazon ECR, Amazon S3, or CodeCommit sources, you can also create a source override using input transform entry to use the `revisionValue` in EventBridge for your pipeline event, where the `revisionValue` is derived from the source event variable for your object key, commit, or image ID. For more information, see the optional step for input transform entry included in the procedures under [Amazon ECR source actions and EventBridge resources](create-cwe-ecr-source.md), [Connecting to Amazon S3 source actions with a source enabled for events](create-S3-source-events.md), or [CodeCommit source actions and EventBridge](triggering.md).

You must have already created an Amazon S3 source bucket and uploaded the source files as a single ZIP file before you connect the pipeline through an Amazon S3 action.

**Note**  
When Amazon S3 is the source provider for your pipeline, you may zip your source file or files into a single .zip and upload the .zip to your source bucket. You may also upload a single unzipped file; however, downstream actions that expect a .zip file will fail.
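
As a sketch of the bundling step, the following helper zips the contents of a source directory into a single .zip file suitable for upload to the source bucket. Paths inside the archive are kept relative to the source directory:

```
import zipfile
from pathlib import Path

def zip_sources(source_dir, zip_path):
    """Bundle all files under source_dir into a single .zip for upload
    to the S3 source bucket. Archive paths are relative to source_dir."""
    root = Path(source_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(root.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(root))
```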

**Topics**
+ [Action type](#action-reference-S3-type)
+ [Configuration parameters](#action-reference-S3-config)
+ [Input artifacts](#action-reference-S3-input)
+ [Output artifacts](#action-reference-S3-output)
+ [Output variables](#action-reference-S3-variables)
+ [Service role permissions: S3 source action](#edit-role-s3source)
+ [Action declaration](#action-reference-S3-example)
+ [See also](#action-reference-S3-links)

## Action type
<a name="action-reference-S3-type"></a>
+ Category: `Source`
+ Owner: `AWS`
+ Provider: `S3`
+ Version: `1`

## Configuration parameters
<a name="action-reference-S3-config"></a>

**S3Bucket**  
Required: Yes  
The name of the Amazon S3 bucket where source changes are to be detected.

**S3ObjectKey**  
Required: Yes  
The name of the Amazon S3 object key where source changes are to be detected.

**AllowOverrideForS3ObjectKey**  
Required: No  
`AllowOverrideForS3ObjectKey` controls whether source overrides from `StartPipelineExecution` can override the already configured `S3ObjectKey` in the source action. For more information on source overrides with the S3 Object Key, see [Start a pipeline with a source revision override](pipelines-trigger-source-overrides.md).  
If you omit `AllowOverrideForS3ObjectKey`, CodePipeline sets this parameter to `false`, and the configured S3 object key cannot be overridden.
Valid values for this parameter:  
+ `true`: If set, the preconfigured S3 object key can be overridden by source revision overrides during a pipeline execution.
**Note**  
If you intend to allow all CodePipeline users the ability to override the preconfigured S3 object key while starting a new pipeline execution, you must set `AllowOverrideForS3ObjectKey` to `true`.
+ `false`: If set, CodePipeline does not allow the S3 object key to be overridden using source revision overrides. This is the default value for this parameter.

**PollForSourceChanges**  
Required: No  
`PollForSourceChanges` controls whether CodePipeline polls the Amazon S3 source bucket for source changes. We recommend that you use CloudWatch Events and CloudTrail to detect source changes instead. For more information about configuring CloudWatch Events, see [Migrate polling pipelines with an S3 source and CloudTrail trail (CLI)](update-change-detection.md#update-change-detection-cli-S3) or [Migrate polling pipelines with an S3 source and CloudTrail trail (CloudFormation template)](update-change-detection.md#update-change-detection-cfn-s3).  
If you intend to configure CloudWatch Events, you must set `PollForSourceChanges` to `false` to avoid duplicate pipeline executions.
Valid values for this parameter:  
+ `true`: If set, CodePipeline polls your source location for source changes.
**Note**  
If you omit `PollForSourceChanges`, CodePipeline defaults to polling your source location for source changes. This behavior is the same as if `PollForSourceChanges` is included and set to `true`.
+ `false`: If set, CodePipeline does not poll your source location for source changes. Use this setting if you intend to configure a CloudWatch Events rule to detect source changes.

## Input artifacts
<a name="action-reference-S3-input"></a>
+ **Number of Artifacts:** `0`
+ **Description:** Input artifacts do not apply for this action type.

## Output artifacts
<a name="action-reference-S3-output"></a>
+ **Number of artifacts:** `1` 
+ **Description:** Provides the artifacts that are available in the source bucket configured to connect to the pipeline. The artifacts generated from the bucket are the output artifacts for the Amazon S3 action. The Amazon S3 object metadata (ETag and version ID) is displayed in CodePipeline as the source revision for the triggered pipeline execution.

## Output variables
<a name="action-reference-S3-variables"></a>

When configured, this action produces variables that can be referenced by the action configuration of a downstream action in the pipeline. The action produces these variables even if it doesn't have a namespace; to make them available to the configuration of downstream actions, configure the action with a namespace.

For more information about variables in CodePipeline, see [Variables reference](reference-variables.md).

**BucketName**  
The name of the Amazon S3 bucket related to the source change that triggered the pipeline.

**ETag**  
The entity tag for the object related to the source change that triggered the pipeline. The ETag is an MD5 hash of the object. ETag reflects only changes to the contents of an object, not its metadata.

**ObjectKey**  
The name of the Amazon S3 object key related to the source change that triggered the pipeline.

**VersionId**  
The version ID for the version of the object related to the source change that triggered the pipeline.
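
For objects uploaded in a single part without KMS encryption, the S3 ETag is typically the hex MD5 digest of the object's contents (multipart uploads produce a different format), so the expected value of the `ETag` variable can be computed locally. This is a hedged sketch, not an S3 guarantee for every upload path:

```
import hashlib

def expected_etag(body):
    """MD5 hex digest of the object body. Matches the S3 ETag only for
    single-part, non-KMS-encrypted uploads."""
    return hashlib.md5(body).hexdigest()
```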

## Service role permissions: S3 source action
<a name="edit-role-s3source"></a>

For S3 source action support, add the following to your policy statement:

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketVersioning",
        "s3:GetBucketAcl",
        "s3:GetBucketLocation",
        "s3:GetObjectTagging",
        "s3:GetObjectVersionTagging"
      ],
      "Resource": [
        "arn:aws:s3:::[[S3Bucket]]",
        "arn:aws:s3:::[[S3Bucket]]/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:ResourceAccount": "111122223333"
        }
      }
    }
  ]
}
```

------

## Action declaration
<a name="action-reference-S3-example"></a>

------
#### [ YAML ]

```
Name: Source
Actions:
  - RunOrder: 1
    OutputArtifacts:
      - Name: SourceArtifact
    ActionTypeId:
      Provider: S3
      Owner: AWS
      Version: '1'
      Category: Source
    Region: us-west-2
    Name: Source
    Configuration:
      S3Bucket: amzn-s3-demo-source-bucket
      S3ObjectKey: my-application.zip
      PollForSourceChanges: 'false'
    InputArtifacts: []
```

------
#### [ JSON ]

```
{
    "Name": "Source",
    "Actions": [
        {
            "RunOrder": 1,
            "OutputArtifacts": [
                {
                    "Name": "SourceArtifact"
                }
            ],
            "ActionTypeId": {
                "Provider": "S3",
                "Owner": "AWS",
                "Version": "1",
                "Category": "Source"
            },
            "Region": "us-west-2",
            "Name": "Source",
            "Configuration": {
                "S3Bucket": "amzn-s3-demo-source-bucket",
                "S3ObjectKey": "my-application.zip",
                "PollForSourceChanges": "false"
            },
            "InputArtifacts": []
        }
    ]
},
```

------

## See also
<a name="action-reference-S3-links"></a>

The following related resources can help you as you work with this action.
+ [Tutorial: Create a simple pipeline (S3 bucket)](tutorials-simple-s3.md) – This tutorial provides a sample AppSpec file and sample CodeDeploy application and deployment group. Use this tutorial to create a pipeline with an Amazon S3 source that deploys to Amazon EC2 instances.

# AWS AppConfig deploy action reference
<a name="action-reference-AppConfig"></a>

AWS AppConfig is a capability of AWS Systems Manager. AppConfig supports controlled deployments to applications of any size and includes built-in validation checks and monitoring. You can use AppConfig with applications hosted on Amazon EC2 instances, AWS Lambda, containers, mobile applications, or IoT devices.

The `AppConfig` deploy action is an AWS CodePipeline action that deploys configurations stored in your pipeline source location to a specified AppConfig *application*, *environment*, and *configuration* profile. It uses the preferences defined in an AppConfig *deployment strategy*.

## Action type
<a name="action-reference-AppConfig-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `AppConfig`
+ Version: `1`

## Configuration parameters
<a name="action-reference-AppConfig-config"></a>

**Application**  
Required: Yes  
The ID of the AWS AppConfig application with the details for your configuration and deployment.

**Environment**  
Required: Yes  
The ID of the AWS AppConfig environment where the configuration is deployed.

**ConfigurationProfile**  
Required: Yes  
The ID of the AWS AppConfig configuration profile to deploy.

**InputArtifactConfigurationPath**  
Required: Yes  
The file path of the configuration data within the input artifact to deploy.

**DeploymentStrategy**  
Required: No  
The AWS AppConfig deployment strategy to use for deployment.

## Input artifacts
<a name="action-reference-AppConfig-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** The input artifact for the deploy action.

## Output artifacts
<a name="action-reference-AppConfig-output"></a>

Not applicable.

## Service role permissions: `AppConfig` action
<a name="edit-role-appconfig"></a>

When CodePipeline runs the action, the CodePipeline service role policy requires the following permissions, appropriately scoped down to the resource level in order to maintain access with least privilege.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Action": [
                "appconfig:StartDeployment",
                "appconfig:StopDeployment",
                "appconfig:GetDeployment"
            ],
            "Resource": [
                "arn:aws:appconfig:*:111122223333:application/[[Application]]",
                "arn:aws:appconfig:*:111122223333:application/[[Application]]/*",
                "arn:aws:appconfig:*:111122223333:deploymentstrategy/*"
            ],
            "Effect": "Allow"
        }
    ]
}
```

------

## Example action configuration
<a name="action-reference-AppConfig-example"></a>

------
#### [ YAML ]

```
name: Deploy
actions:
  - name: Deploy
    actionTypeId:
      category: Deploy
      owner: AWS
      provider: AppConfig
      version: '1'
    runOrder: 1
    configuration:
      Application: 2s2qv57
      ConfigurationProfile: PvjrpU
      DeploymentStrategy: frqt7ir
      Environment: 9tm27yd
      InputArtifactConfigurationPath: /
    outputArtifacts: []
    inputArtifacts:
      - name: SourceArtifact
    region: us-west-2
    namespace: DeployVariables
```

------
#### [ JSON ]

```
{
    "name": "Deploy",
    "actions": [
        {
            "name": "Deploy",
            "actionTypeId": {
                "category": "Deploy",
                "owner": "AWS",
                "provider": "AppConfig",
                "version": "1"
            },
            "runOrder": 1,
            "configuration": {
                "Application": "2s2qv57",
                "ConfigurationProfile": "PvjrpU",
                "DeploymentStrategy": "frqt7ir",
                "Environment": "9tm27yd",
                "InputArtifactConfigurationPath": "/"
            },
            "outputArtifacts": [],
            "inputArtifacts": [
                {
                    "name": "SourceArtifact"
                }
            ],
            "region": "us-west-2",
            "namespace": "DeployVariables"
        }
    ]
}
```

------

## See also
<a name="action-reference-StepFunctions-links"></a>

The following related resources can help you as you work with this action.
+ [AWS AppConfig](https://docs.aws.amazon.com/systems-manager/latest/userguide/appconfig.html) – For information about AWS AppConfig deployments, see the *AWS Systems Manager User Guide*.
+ [Tutorial: Create a pipeline that uses AWS AppConfig as a deployment provider](tutorials-AppConfig.md) – This tutorial gets you started setting up simple deployment configuration files and AppConfig resources, and shows you how to use the console to create a pipeline with an AWS AppConfig deployment action.

# CloudFormation deploy action reference
<a name="action-reference-CloudFormation"></a>

Executes an operation on a CloudFormation stack. A stack is a collection of AWS resources that you can manage as a single unit. The resources in a stack are defined by the stack's CloudFormation template. A change set creates a comparison that can be viewed without altering the original stack. For information about the types of CloudFormation actions that can be performed on stacks and change sets, see the `ActionMode` parameter.

To construct an error message for a CloudFormation action where a stack operation has failed, CodePipeline calls the CloudFormation `DescribeStackEvents` API. If the action's IAM role has permission to call that API, details about the first failed resource are included in the CodePipeline error message. Otherwise, CodePipeline skips the API call and shows a generic error message instead. To surface the resource details, add the `cloudformation:DescribeStackEvents` permission to the service role or to any other IAM role the pipeline action uses.

If you do not want the resource details surfaced in pipeline error messages, remove the `cloudformation:DescribeStackEvents` permission from the action's IAM role.
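
As a sketch of the lookup this error message relies on: `DescribeStackEvents` returns events in reverse chronological order, so the first resource to fail corresponds to the last failed event in the response list. The helper below assumes the boto3 response shape and is illustrative only; CodePipeline performs this lookup itself:

```
def first_failed_event(stack_events):
    """Pick the earliest failed resource event from a DescribeStackEvents
    response list (newest events first). Illustrative only."""
    failed = [
        event for event in stack_events
        if event.get("ResourceStatus", "").endswith("_FAILED")
    ]
    return failed[-1] if failed else None
```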

**Topics**
+ [Action type](#action-reference-CloudFormation-type)
+ [Configuration parameters](#action-reference-CloudFormation-config)
+ [Input artifacts](#action-reference-CloudFormation-input)
+ [Output artifacts](#action-reference-CloudFormation-output)
+ [Output variables](#action-reference-CloudFormation-variables)
+ [Service role permissions: CloudFormation action](#edit-role-cloudformation)
+ [Action declaration](#action-reference-CloudFormation-example)
+ [See also](#action-reference-CloudFormation-links)

## Action type
<a name="action-reference-CloudFormation-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `CloudFormation`
+ Version: `1`

## Configuration parameters
<a name="action-reference-CloudFormation-config"></a>

**ActionMode**  
Required: Yes  
`ActionMode` is the name of the action CloudFormation performs on a stack or change set. The following action modes are available:  
+ `CHANGE_SET_EXECUTE` executes a change set for the resource stack that is based on a set of specified resource updates. With this action, CloudFormation starts to alter the stack.
+ `CHANGE_SET_REPLACE` creates the change set, if it doesn't exist, based on the stack name and template that you submit. If the change set exists, CloudFormation deletes it, and then creates a new one. 
+ `CREATE_UPDATE` creates the stack if it doesn't exist. If the stack exists, CloudFormation updates the stack. Use this action to update existing stacks. Unlike `REPLACE_ON_FAILURE`, if the stack exists and is in a failed state, CodePipeline won't delete and replace the stack.
+ `DELETE_ONLY` deletes a stack. If you specify a stack that doesn't exist, the action is completed successfully without deleting a stack.
+ `REPLACE_ON_FAILURE` creates a stack, if it doesn't exist. If the stack exists and is in a failed state, CloudFormation deletes the stack, and then creates a new stack. If the stack isn't in a failed state, CloudFormation updates it. 

  The stack is in a failed state when any of the following status types are displayed in CloudFormation: 
  + `ROLLBACK_FAILED`
  + `CREATE_FAILED`
  + `DELETE_FAILED`
  + `UPDATE_ROLLBACK_FAILED`

  Use this action to automatically replace failed stacks without recovering or troubleshooting them.
**Important**  
We recommend that you use `REPLACE_ON_FAILURE` for testing purposes only because it might delete your stack.
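
As an illustration, a deploy action that creates or updates a stack might use a `Configuration` fragment along the following lines (a sketch only; the stack name, artifact name, and role ARN are placeholders):

```
"Configuration": {
    "ActionMode": "CREATE_UPDATE",
    "StackName": "my-stack",
    "TemplatePath": "SourceArtifact::template.yml",
    "RoleArn": "arn:aws:iam::111122223333:role/CloudFormationRole"
}
```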

**StackName**  
Required: Yes  
`StackName` is the name of an existing stack or a stack that you want to create.

**Capabilities**  
Required: Conditional  
`Capabilities` acknowledges that the template can create and update certain resources on its own, and that which capabilities are needed is determined by the types of resources in the template.  
This property is required if your stack template contains IAM resources or if you create a stack directly from a template containing macros. For the CloudFormation action to successfully operate in this way, you must explicitly acknowledge that you would like it to do so with one or more of the following capabilities:  
+ `CAPABILITY_IAM` 
+ `CAPABILITY_NAMED_IAM` 
+ `CAPABILITY_AUTO_EXPAND` 
You can specify more than one capability by using a comma (no space) between capabilities. The example in [Action declaration](#action-reference-CloudFormation-example) shows an entry with both the `CAPABILITY_NAMED_IAM` and `CAPABILITY_AUTO_EXPAND` capabilities.  
For more information about `Capabilities`, see the properties under [UpdateStack](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_UpdateStack.html) in the *AWS CloudFormation API Reference*.

**ChangeSetName**  
Required: Conditional  
`ChangeSetName` is the name of an existing change set or a new change set that you want to create for the specified stack.  
This property is required for the following action modes: `CHANGE_SET_REPLACE` and `CHANGE_SET_EXECUTE`. For all other action modes, this property is ignored.

**RoleArn**  
Required: Conditional  
The `RoleArn` is the ARN of the IAM service role that CloudFormation assumes when it operates on resources in the specified stack. `RoleArn` is not applied when executing a change set. If you do not use CodePipeline to create the change set, make sure that the change set or stack has an associated role.  
This role must be in the same account as the role for the action that is running, as configured in the action declaration `RoleArn`.
This property is required for the following action modes:  
+ `CREATE_UPDATE`
+ `REPLACE_ON_FAILURE`
+ `DELETE_ONLY`
+ `CHANGE_SET_REPLACE`
CloudFormation is given an S3-signed URL to the template; therefore, this `RoleArn` does not need permission to access the artifact bucket. However, the action `RoleArn` *does* need permission to access the artifact bucket, in order to generate the signed URL.

**TemplatePath**  
Required: Conditional  
`TemplatePath` represents the CloudFormation template file. You include the file in an input artifact to this action. The file name follows this format:  
`Artifactname::TemplateFileName`  
`Artifactname` is the input artifact name as it appears in CodePipeline. For example, a source stage with the artifact name of `SourceArtifact` and a `template-export.json` file name creates a `TemplatePath` name, as shown in this example:  

```
"TemplatePath": "SourceArtifact::template-export.json"
```
This property is required for the following action modes:   
+ `CREATE_UPDATE`
+ `REPLACE_ON_FAILURE`
+ `CHANGE_SET_REPLACE`
For all other action modes, this property is ignored.  
The CloudFormation template file containing the template body has a minimum length of 1 byte and a maximum length of 1 MB. For CloudFormation deployment actions in CodePipeline, the maximum input artifact size is always 256 MB. For more information, see [Quotas in AWS CodePipeline](limits.md) and [CloudFormation Limits](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-limits.html).

**OutputFileName**  
Required: No  
Use `OutputFileName` to specify an output file name, such as `CreateStackOutput.json`, that CodePipeline adds to the pipeline output artifact for this action. The JSON file contains the contents of the `Outputs` section from the CloudFormation stack.  
If you don't specify a name, CodePipeline doesn't generate an output file or artifact.
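
For example, if the stack template's `Outputs` section declares a `BucketName` output (a hypothetical output key used here for illustration), the generated output file contains a JSON map of output keys to values along these lines:

```
{
    "BucketName": "amzn-s3-demo-source-bucket"
}
```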

**ParameterOverrides**  
Required: No  
Parameters are defined in your stack template and allow you to provide values for them at the time of stack creation or update. You can use a JSON object to set parameter values in your template. (These values override those set in the template configuration file.) For more information about using parameter overrides, see [Configuration Properties (JSON Object)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-action-reference.html#w4363ab1c13c13b9).  
We recommend that you use the template configuration file for most of your parameter values. Use parameter overrides only for values that aren't known until the pipeline is running. For more information, see [Using Parameter Override Functions with CodePipeline Pipelines](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-parameter-override-functions.html) in the *AWS CloudFormation User Guide*.  
All parameter names must be present in the stack template.
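
For example, a `ParameterOverrides` JSON object can mix a literal value with the `Fn::GetParam` override function, which reads a value from a JSON file in an input artifact. In this sketch, the artifact, file, and key names are placeholders:

```
{
    "ProjectId": "my-project",
    "CommitId": { "Fn::GetParam": ["SourceArtifact", "params.json", "CommitId"] }
}
```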

**TemplateConfiguration**  
Required: No  
`TemplateConfiguration` is the template configuration file. You include the file in an input artifact to this action. It can contain template parameter values and a stack policy. For more information about the template configuration file format, see [AWS CloudFormation Artifacts](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-cfn-artifacts.html).   
The template configuration file name follows this format:   
`Artifactname::TemplateConfigurationFileName`  
`Artifactname` is the input artifact name as it appears in CodePipeline. For example, a source stage with the artifact name of `SourceArtifact` and a `test-configuration.json` file name creates a `TemplateConfiguration` name as shown in this example:  

```
"TemplateConfiguration": "SourceArtifact::test-configuration.json"
```

## Input artifacts
<a name="action-reference-CloudFormation-input"></a>
+ **Number of artifacts:** `0 to 10`
+ **Description:** As input, the CloudFormation action optionally accepts artifacts for these purposes:
  + To provide the stack template file to execute. (See the `TemplatePath` parameter.)
  + To provide the template configuration file to use. (See the `TemplateConfiguration` parameter.) For more information about the template configuration file format, see [AWS CloudFormation Artifacts](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-cfn-artifacts.html). 
  + To provide the artifact for a Lambda function to be deployed as part of the CloudFormation stack.

## Output artifacts
<a name="action-reference-CloudFormation-output"></a>
+ **Number of artifacts:** `0 to 1` 
+ **Description:** If the `OutputFileName` parameter is specified, there is an output artifact produced by this action that contains a JSON file with the specified name. The JSON file contains the contents of the Outputs section from the CloudFormation stack.

  For more information about the outputs section you can create for your CloudFormation action, see [Outputs](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html).

## Output variables
<a name="action-reference-CloudFormation-variables"></a>

When configured, this action produces variables that can be referenced by the action configuration of a downstream action in the pipeline. You configure an action with a namespace to make those variables available to the configuration of downstream actions.

For CloudFormation actions, variables are produced from any values designated in the `Outputs` section of a stack template. Note that the only CloudFormation action modes that generate outputs are those that result in creating or updating a stack, such as stack creation, stack updates, and change set execution. The corresponding action modes that generate variables are:
+ `CHANGE_SET_EXECUTE`
+ `CHANGE_SET_REPLACE`
+ `CREATE_UPDATE`
+ `REPLACE_ON_FAILURE`

For more information, see [Variables reference](reference-variables.md). For a tutorial that shows you how to create a pipeline with a CloudFormation deployment action in a pipeline that uses CloudFormation output variables, see [Tutorial: Create a pipeline that uses variables from AWS CloudFormation deployment actions](tutorials-cloudformation-action.md).
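
For example, if the CloudFormation action is configured with the namespace `DeployVariables` and the stack template declares a `BucketName` output (both names are placeholders for illustration), a downstream action can reference the value in its configuration with the variable syntax:

```
"Configuration": {
    "BucketName": "#{DeployVariables.BucketName}"
}
```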

## Service role permissions: CloudFormation action
<a name="edit-role-cloudformation"></a>

When CodePipeline runs the action, the CodePipeline service role policy requires the following permissions, appropriately scoped down to the pipeline resource ARN in order to maintain access with least privilege. For example, add the following to your policy statement:

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AllowCFNStackAccess",
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack",
                "cloudformation:DeleteStack",
                "cloudformation:DescribeStacks",
                "cloudformation:DescribeStackResources",
                "cloudformation:DescribeStackEvents",
                "cloudformation:GetTemplate",
                "cloudformation:DescribeChangeSet",
                "cloudformation:CreateChangeSet",
                "cloudformation:DeleteChangeSet",
                "cloudformation:ExecuteChangeSet"
            ],
            "Resource": [
                "arn:aws:cloudformation:*:111122223333:stack/[[cfnDeployStackNames]]/*"
            ]
        },
        {
            "Sid": "ValidateTemplate",
            "Effect": "Allow",
            "Action": [
                "cloudformation:ValidateTemplate"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowIAMPassRole",
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::111122223333:role/[[cfnExecutionRoles]]"
            ],
            "Condition": {
                "StringEqualsIfExists": {
                    "iam:PassedToService": [
                        "cloudformation.amazonaws.com"
                    ]
                }
            }
        }
    ]
}
```

------

Note that the `cloudformation:DescribeStackEvents` permission is optional. It allows the CloudFormation action to show a more detailed error message. This permission can be revoked from the IAM role if you don't want resource details surfaced in the pipeline error messages.

## Action declaration
<a name="action-reference-CloudFormation-example"></a>

------
#### [ YAML ]

```
Name: ExecuteChangeSet
ActionTypeId:
  Category: Deploy
  Owner: AWS
  Provider: CloudFormation
  Version: '1'
RunOrder: 2
Configuration:
  ActionMode: CHANGE_SET_EXECUTE
  Capabilities: CAPABILITY_NAMED_IAM,CAPABILITY_AUTO_EXPAND
  ChangeSetName: pipeline-changeset
  ParameterOverrides: '{"ProjectId": "my-project","CodeDeployRole": "CodeDeploy_Role_ARN"}'
  RoleArn: CloudFormation_Role_ARN
  StackName: my-project-lambda
  TemplateConfiguration: 'my-project-BuildArtifact::template-configuration.json'
  TemplatePath: 'my-project-BuildArtifact::template-export.yml'
OutputArtifacts: []
InputArtifacts:
  - Name: my-project-BuildArtifact
```

------
#### [ JSON ]

```
{
    "Name": "ExecuteChangeSet",
    "ActionTypeId": {
        "Category": "Deploy",
        "Owner": "AWS",
        "Provider": "CloudFormation",
        "Version": "1"
    },
    "RunOrder": 2,
    "Configuration": {
        "ActionMode": "CHANGE_SET_EXECUTE",
        "Capabilities": "CAPABILITY_NAMED_IAM,CAPABILITY_AUTO_EXPAND",
        "ChangeSetName": "pipeline-changeset",
        "ParameterOverrides": "{\"ProjectId\": \"my-project\",\"CodeDeployRole\": \"CodeDeploy_Role_ARN\"}",
        "RoleArn": "CloudFormation_Role_ARN",
        "StackName": "my-project--lambda",
        "TemplateConfiguration": "my-project--BuildArtifact::template-configuration.json",
        "TemplatePath": "my-project--BuildArtifact::template-export.yml"
    },
    "OutputArtifacts": [],
    "InputArtifacts": [
        {
             "Name": "my-project-BuildArtifact"
        }
    ]
}
```

------

## See also
<a name="action-reference-CloudFormation-links"></a>

The following related resources can help you as you work with this action.
+ [Configuration Properties Reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-action-reference.html) – This reference chapter in the *AWS CloudFormation User Guide* provides more descriptions and examples for these CodePipeline parameters.
+ [AWS CloudFormation API Reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/) – The [CreateStack](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html) parameter in the *AWS CloudFormation API Reference* describes stack parameters for CloudFormation templates.

# CloudFormation StackSets deploy action reference
<a name="action-reference-StackSets"></a>

CodePipeline offers the ability to perform CloudFormation StackSets operations as part of your CI/CD process. You use a stack set to create stacks in AWS accounts across AWS Regions by using a single CloudFormation template. All the resources included in each stack are defined by the stack set’s CloudFormation template. When you create the stack set, you specify the template to use, as well as any parameters and capabilities that the template requires.

For more information about concepts for CloudFormation StackSets, see [StackSets concepts](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-concepts.html) in the *AWS CloudFormation User Guide*.

You integrate your pipeline with CloudFormation StackSets through two distinct action types that you use together:
+ The `CloudFormationStackSet` action creates or updates a stack set or stack instances from the template stored in the pipeline source location. Each time a stack set is created or updated, it initiates a deployment of those changes to specified instances. In the console, you can choose the **CloudFormation Stack Set** action provider when you create or edit your pipeline.
+ The `CloudFormationStackInstances` action deploys changes from the `CloudFormationStackSet` action to specified instances, creates new stack instances, and defines parameter overrides to specified instances. In the console, you can choose the **CloudFormation Stack Instances** action provider when you edit an existing pipeline.

You can use these actions to deploy to target AWS accounts or target AWS Organizations organizational unit IDs.

**Note**  
To deploy to target AWS Organizations accounts or organizational unit IDs and use the service-managed permissions model, you must enable trusted access between AWS CloudFormation StackSets and AWS Organizations. For more information, see [Enabling trusted access with AWS CloudFormation Stacksets](https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-cloudformation.html#integrate-enable-ta-cloudformation).

**Topics**
+ [How CloudFormation StackSets actions work](#action-reference-StackSets-concepts)
+ [How to structure StackSets actions in a pipeline](#action-reference-StackSets-bestpractices)
+ [The `CloudFormationStackSet` action](#action-reference-StackSet)
+ [The `CloudFormationStackInstances` action](#action-reference-StackInstances)
+ [Service role permissions: `CloudFormationStackSet` action](#edit-role-cfn-stackset)
+ [Service role permissions: `CloudFormationStackInstances` action](#edit-role-cfn-stackinstances)
+ [Permissions models for stack set operations](#action-reference-StackSets-permissions)
+ [Template parameter data types](#action-reference-StackSets-datatypes)
+ [See also](#action-reference-CloudFormation-links)

## How CloudFormation StackSets actions work
<a name="action-reference-StackSets-concepts"></a>

A `CloudFormationStackSet` action creates or updates resources depending on whether the action is running for the first time.

The `CloudFormationStackSet` action *creates* or *updates* the stack set and deploys those changes to specified instances.

**Note**  
If you use this action to make an update that includes adding stack instances, the new instances are deployed first and the update is completed last. The new instances first receive the old version, and then the update is applied to all instances.
+ *Create*: When no instances are specified and the stack set does not exist, the **CloudFormationStackSet** action creates the stack set without creating any instances.
+ *Update*: When the **CloudFormationStackSet** action is run for a stack set that is already created, the action updates the stack set. If no instances are specified and the stack set already exists, all instances are updated. If this action is used to update specific instances, all remaining instances move to an OUTDATED status.

  You can use the **CloudFormationStackSet** action to update the stack set in the following ways. 
  + Update the template on some or all instances.
  + Update parameters on some or all instances.
  + Update the execution role for the stack set (this must match the execution role specified in the Administrator role).
  + Change the permissions model (only if no instances have been created).
  + Enable/Disable `AutoDeployment` if the stack set permissions model is `Service Managed`.
  + Act as a delegated administrator in a member account if the stack set permissions model is `Service Managed`.
  + Update the Administrator role.
  + Update the description on the stack set.
  + Add deployment targets to the stack set update to create new stack instances.

The `CloudFormationStackInstances` action creates new stack instances or updates outdated stack instances. An instance becomes outdated when a stack set is updated, but not all instances within it are updated.
+ *Create*: If a stack instance already exists, the `CloudFormationStackInstances` action only updates instances and does not create stack instances.
+ *Update*: After the `CloudFormationStackSet` action is performed, if the template or parameters have been updated in only some instances, the rest will be marked `OUTDATED`. In later pipeline stages, `CloudFormationStackInstances` updates the rest of the instances in the stack set in waves so that all instances are marked `CURRENT`. This action can also be used to add additional instances or override parameters on new or existing instances.

As part of an update, the `CloudFormationStackSet` and `CloudFormationStackInstances` actions can specify new deployment targets, which creates new stack instances.

As part of an update, the `CloudFormationStackSet` and `CloudFormationStackInstances` actions do not delete stack sets, instances, or resources. When the action updates a stack set but does not specify all instances to be updated, the unspecified instances are excluded from the update and set to a status of `OUTDATED`.

During a deployment, stack instances can also show a status of `OUTDATED` if the deployment to instances failed.

## How to structure StackSets actions in a pipeline
<a name="action-reference-StackSets-bestpractices"></a>

As a best practice, you should construct your pipeline so that the stack set is created and initially deploys to a subset or a single instance. After you test your deployment and view the generated stack set, then add the `CloudFormationStackInstances` action so that the remaining instances are created and updated.

Use the console or the CLI to create the recommended pipeline structure as follows:

1. Create a pipeline with a source action (required) and the `CloudFormationStackSet` action as the deploy action. Run your pipeline.

1. When your pipeline first runs, the `CloudFormationStackSet` action *creates* your stack set and at least one initial instance. Verify the stack set creation and review the deployment to your initial instance. For example, for initial stack set creation for account `Account-A` where `us-east-1` is the specified Region, the stack instance is created with the stack set:  

   | Stack instance | Region | Status | 
   | --- | --- | --- | 
   | StackInstanceID-1 | us-east-1 | CURRENT | 

1. Edit your pipeline to add `CloudFormationStackInstances` as the second deployment action to create or update stack instances for the targets you designate. For example, for stack instance creation for account `Account-A` where the `us-east-2` and `eu-central-1` Regions are specified, the remaining stack instances are created and the initial instance remains updated as follows:  

   | Stack instance | Region | Status | 
   | --- | --- | --- | 
   | StackInstanceID-1 | us-east-1 | CURRENT | 
   | StackInstanceID-2 | us-east-2 | CURRENT | 
   | StackInstanceID-3 | eu-central-1 | CURRENT | 

1. Run your pipeline as needed to update your stack set and update or create stack instances.

When you initiate a stack set update where you have removed deployment targets from the action configuration, the stack instances that were not designated for update are removed from the deployment and move into an `OUTDATED` status. For example, for a stack instance update for account `Account-A` where the `us-east-2` Region is removed from the action configuration, the remaining stack instances are updated and the removed instance is set to `OUTDATED` as follows:



| Stack instance | Region | Status | 
| --- | --- | --- | 
| StackInstanceID-1 | us-east-1 | CURRENT | 
| StackInstanceID-2 | us-east-2 | OUTDATED | 
| StackInstanceID-3 | eu-central-1 | CURRENT | 

For more information about best practices for deploying stack sets, see [Best practices](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-bestpractices.html) for StackSets in the *AWS CloudFormation User Guide*.

## The `CloudFormationStackSet` action
<a name="action-reference-StackSet"></a>

This action creates or updates a stack set from the template stored in the pipeline source location. 

After you define a stack set, you can create, update, or delete stacks in the target accounts and Regions specified in the configuration parameters. When creating, updating and deleting stacks, you can specify other preferences, such as the order of Regions for operations to be performed, the failure tolerance percentage beyond which stack operations stop, and the number of accounts in which operations are performed on stacks concurrently.

A stack set is a regional resource. If you create a stack set in one AWS Region, you cannot access it from other Regions.

When this action is used as an update action to the stack set, updates to the stack set are not allowed without a deployment to at least one stack instance.

**Topics**
+ [Action type](#action-reference-StackSet-type)
+ [Configuration parameters](#action-reference-StackSet-config)
+ [Input artifacts](#action-reference-StackSet-input)
+ [Output artifacts](#action-reference-StackSet-output)
+ [Output variables](#action-reference-StackSet-variables)
+ [Example **CloudFormationStackSet** action configuration](#action-reference-StackSet-example)

### Action type
<a name="action-reference-StackSet-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `CloudFormationStackSet`
+ Version: `1`

### Configuration parameters
<a name="action-reference-StackSet-config"></a>

**StackSetName**  
Required: Yes  
The name to associate with the stack set. This name must be unique in the Region where it is created.  
The name may only contain alphanumeric and hyphen characters. It must begin with an alphabetic character and be 128 characters or fewer.

**Description**  
Required: No  
A description of the stack set. You can use this to describe the stack set’s purpose or other relevant information.

**TemplatePath**  
Required: Yes  
The location of the template that defines the resources in the stack set. This must point to a template with a maximum size of 460,800 bytes.  
Enter the path to the source artifact name and template file in the format `"InputArtifactName::TemplateFileName"`, as shown in the following example.  

```
SourceArtifact::template.txt
```

**Parameters**  
Required: No  
A list of template parameters for your stack set that update during a deployment.  
You can provide parameters as a literal list or a file path:  
+ You can enter parameters in the following shorthand syntax format: `ParameterKey=string,ParameterValue=string,UsePreviousValue=boolean,ResolvedValue=string ParameterKey=string,ParameterValue=string,UsePreviousValue=boolean,ResolvedValue=string`. For more information about these data types, see [Template parameter data types](#action-reference-StackSets-datatypes).

  The following example shows a parameter named `BucketName` with the value `amzn-s3-demo-source-bucket`.

  ```
  ParameterKey=BucketName,ParameterValue=amzn-s3-demo-source-bucket
  ```

  The following example shows an entry with multiple parameters:

  ```
  ParameterKey=BucketName,ParameterValue=amzn-s3-demo-source-bucket
  ParameterKey=Asset1,ParameterValue=true
  ParameterKey=Asset2,ParameterValue=true
  ```
+ You can enter the location of the file containing a list of template parameter overrides entered in the format `"InputArtifactName::ParametersFileName"`, as shown in the following example.

  ```
  SourceArtifact::parameters.txt
  ```

  The following example shows the file contents for `parameters.txt`.

  ```
  [
      {
          "ParameterKey": "KeyName",
          "ParameterValue": "true"
      },
      {
          "ParameterKey": "KeyName",
          "ParameterValue": "true"
      }
  ]
  ```

**Capabilities**  
Required: No  
Indicates that the template can create and update resources, depending on the types of resources in the template.  
You must use this property if you have IAM resources in your stack template or you create a stack directly from a template containing macros. For the CloudFormation action to successfully operate in this way, you must use one of the following capabilities:  
+ `CAPABILITY_IAM` 
+ `CAPABILITY_NAMED_IAM` 
 You can specify more than one capability by using a comma and no spaces between capabilities. The example in [Example **CloudFormationStackSet** action configuration](#action-reference-StackSet-example) shows an entry with multiple capabilities.

**PermissionModel**  
Required: No  
Determines how IAM roles are created and managed. If the field is not specified, the default is used. For information, see [Permissions models for stack set operations](#action-reference-StackSets-permissions).  
Valid values are:   
+ `SELF_MANAGED` (default): You must create administrator and execution roles to deploy to target accounts.
+ `SERVICE_MANAGED`: CloudFormation StackSets automatically creates the IAM roles required to deploy to accounts managed by AWS Organizations. This requires an account to be a member of an Organization.
This parameter can only be changed when no stack instances exist in the stack set.
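
For example, a stack set that deploys through AWS Organizations might combine the permissions model with automatic deployment as follows (a sketch; the stack set name and artifact path are placeholders):

```
"Configuration": {
    "StackSetName": "my-stackset",
    "TemplatePath": "SourceArtifact::template.yml",
    "PermissionModel": "SERVICE_MANAGED",
    "OrganizationsAutoDeployment": "Enabled"
}
```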

**AdministrationRoleArn**  
Required: No  
Because CloudFormation StackSets performs operations across multiple accounts, you must define the necessary permissions in those accounts before you can create the stack set. This parameter is optional for the `SELF_MANAGED` permissions model and is not used for the `SERVICE_MANAGED` permissions model.  
The ARN of the IAM role in the administrator account used to perform stack set operations.  
The name may contain alphanumeric characters, any of the following characters: `_+=,.@-`, and no spaces. The name is not case sensitive. The role name must be a minimum of 20 characters and a maximum of 2048 characters. Role names must be unique within the account. The role name specified here must be an existing role name. If you do not specify a role name, it is set to `AWSCloudFormationStackSetAdministrationRole`. If you use the `SERVICE_MANAGED` permissions model, do not specify a role name.

**ExecutionRoleName**  
Required: No  
Because CloudFormation StackSets performs operations across multiple accounts, you must define the necessary permissions in those accounts before you can create the stack set. This parameter is optional for the `SELF_MANAGED` permissions model and is not used for the `SERVICE_MANAGED` permissions model.  
The name of the IAM role in the target accounts used to perform stack set operations. The name may contain alphanumeric characters, any of the following characters: `_+=,.@-`, and no spaces. The name is not case sensitive. The role name must be a minimum of 1 character and a maximum of 64 characters. Role names must be unique within the account. The role name specified here must be an existing role name. Do not specify this role if you are using customized execution roles. If you do not specify a role name, it is set to `AWSCloudFormationStackSetExecutionRole`. If you use the `SERVICE_MANAGED` permissions model, do not specify a role name.

**OrganizationsAutoDeployment**  
Required: No  
This parameter is optional for the `SERVICE_MANAGED` permissions model and is not used for the `SELF_MANAGED` permissions model.  
Describes whether CloudFormation StackSets automatically deploys to AWS Organizations accounts that are added to a target organization or organizational unit (OU). If `OrganizationsAutoDeployment` is specified, do not specify `DeploymentTargets` and `Regions`.  
If no input is provided for `OrganizationsAutoDeployment`, the default value is `Disabled`.  
Valid values are:  
+ `Enabled`

  StackSets automatically deploys additional stack instances to AWS Organizations accounts that are added to a target organization or organizational unit (OU) in the specified Regions. If an account is removed from a target organization or OU, CloudFormation StackSets deletes stack instances from the account in the specified Regions.
+ `Disabled`

  StackSets does not automatically deploy additional stack instances to AWS Organizations accounts that are added to a target organization or organizational unit (OU) in the specified Regions.
+ `EnabledWithStackRetention`

  Stack resources are retained when an account is removed from a target organization or OU.

**DeploymentTargets**  
Required: No  
For the SERVICE_MANAGED permissions model, you can provide either the organization root ID or organizational unit (OU) IDs as deployment targets. For the SELF_MANAGED permissions model, you can only provide account IDs.
When you specify this parameter, you must also specify **Regions**.
A list of AWS account IDs or organizational unit IDs where stack set instances are created or updated.  
+ **Accounts**:

  You can provide accounts as a literal list or a file path:
  + *Literal*: Enter account IDs in a comma-separated list in the format `account_ID,account_ID`, as shown in the following example.

    ```
    111111222222,333333444444
    ```
  + *File path*: The location of a file containing the list of AWS accounts where stack set instances should be created or updated, entered in the format `InputArtifactName::AccountsFileName`. If you use a file path to specify either **Accounts** or **OrganizationalUnitIds**, the file contents must be JSON, as shown in the following example.

    ```
    SourceArtifact::accounts.txt
    ```

    The following example shows the file contents for `accounts.txt`.

    ```
    [
        "111111222222"
    ]
    ```

    The following example shows the file contents for `accounts.txt` when listing more than one account:

    ```
    [
        "111111222222","333333444444"
    ]
    ```
+ **OrganizationalUnitIds**: 
**Note**  
This parameter is optional for the SERVICE_MANAGED permissions model and is not used for the SELF_MANAGED permissions model. Do not use this parameter if you select **OrganizationsAutoDeployment**.

  The AWS organizational units in which to update associated stack instances.

  You can provide organizational unit IDs as a literal list or a file path:
  + *Literal*: Enter an array of strings separated by commas, as shown in the following example.

    ```
    ou-examplerootid111-exampleouid111,ou-examplerootid222-exampleouid222
    ```
  + *File path*: The location of a file containing the list of organizational unit IDs in which to create or update stack set instances. If you use a file path to specify either **Accounts** or **OrganizationalUnitIds**, the file contents must be JSON, as shown in the following example.

    Enter a path to the file in the format `InputArtifactName::OrganizationalUnitIdsFileName`.

    ```
    SourceArtifact::OU-IDs.txt
    ```

    The following example shows the file contents for `OU-IDs.txt`:

    ```
    [
        "ou-examplerootid111-exampleouid111","ou-examplerootid222-exampleouid222"
    ]
    ```
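
The `InputArtifactName::FileName` form used above can be split and loaded in a few lines of code. A minimal sketch, assuming the input artifact has already been unpacked into a local directory; `parse_file_spec` and `load_targets` are illustrative helpers, not part of CodePipeline:

```python
import json
import os

def parse_file_spec(spec):
    """Split an 'InputArtifactName::FileName' deployment-targets spec."""
    artifact_name, _, file_name = spec.partition("::")
    if not file_name:
        raise ValueError(f"expected InputArtifactName::FileName, got {spec!r}")
    return artifact_name, file_name

def load_targets(spec, artifact_root):
    """Load the JSON array of account IDs or OU IDs named by the spec.

    Assumes each artifact is unpacked under artifact_root/<artifact name>/.
    """
    artifact_name, file_name = parse_file_spec(spec)
    path = os.path.join(artifact_root, artifact_name, file_name)
    with open(path) as f:
        targets = json.load(f)
    if not isinstance(targets, list):
        raise ValueError("targets file must contain a JSON array")
    return targets
```

Note that although the example files are named `accounts.txt` and `OU-IDs.txt`, their contents must be JSON, which is why the sketch parses them with `json.load`.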

**Regions**  
Required: No  
When you specify this parameter, you must also specify **DeploymentTargets**.
A list of AWS Regions where stack set instances are created or updated. Regions are updated in the order in which they are entered.  
Enter a list of valid AWS Regions in the format `Region1,Region2`, as shown in the following example.  

```
us-west-2,us-east-1
```

**FailureTolerancePercentage**  
Required: No  
The percentage of accounts per Region for which this stack operation can fail before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in subsequent Regions. When calculating the number of accounts based on the specified percentage, CloudFormation rounds *down* to the next whole number.

**MaxConcurrentPercentage**  
Required: No  
The maximum percentage of accounts in which to perform this operation at one time. When calculating the number of accounts based on the specified percentage, CloudFormation rounds *down* to the next whole number. If rounding down would result in zero, CloudFormation sets the number as one instead. Although you use this setting to specify the *maximum*, for large deployments the actual number of accounts acted upon concurrently may be lower due to service throttling.
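
The rounding rules for the two percentage settings can be sketched as follows (illustrative arithmetic only; CloudFormation performs the real calculation):

```python
import math

def max_failed_accounts(account_count, failure_tolerance_pct):
    """FailureTolerancePercentage: round *down* to a whole number of accounts."""
    return math.floor(account_count * failure_tolerance_pct / 100)

def max_concurrent_accounts(account_count, max_concurrent_pct):
    """MaxConcurrentPercentage: round down, but never below one account."""
    return max(1, math.floor(account_count * max_concurrent_pct / 100))
```

For example, with 10 target accounts, a FailureTolerancePercentage of 20, and a MaxConcurrentPercentage of 25, the operation tolerates 2 failed accounts per Region and acts on at most 2 accounts at a time.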

**RegionConcurrencyType**  
Required: No  
Specifies whether the stack set deploys across AWS Regions sequentially or in parallel. Deploying to multiple Regions in parallel can result in faster overall deployment times.  
+ *Parallel*: Stack set deployments will be conducted at the same time, as long as a Region's deployment failures don't exceed a specified failure tolerance.
+ *Sequential*: Stack set deployments will be conducted one at a time, as long as a Region's deployment failures don't exceed a specified failure tolerance. Sequential deployment is the default selection.

**ConcurrencyMode**  
Required: No  
The concurrency mode allows you to choose how the concurrency level behaves during stack set operations, whether with strict or soft failure tolerance. **Strict Failure Tolerance** lowers the deployment speed as stack set operation failures occur because concurrency decreases for each failure. **Soft Failure Tolerance** prioritizes deployment speed while still leveraging CloudFormation safety capabilities.   
+ `STRICT_FAILURE_TOLERANCE`: This option dynamically lowers the concurrency level to ensure the number of failed accounts never exceeds a particular failure tolerance. This is the default behavior.
+ `SOFT_FAILURE_TOLERANCE`: This option decouples failure tolerance from the actual concurrency. This allows stack set operations to run at a set concurrency level, regardless of the number of failures.
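
The difference between the two modes can be sketched with a simplified model of the behavior described above; this is not the service's actual algorithm:

```python
def concurrency_after_failures(mode, initial_concurrency, failures):
    """Sketch: STRICT_FAILURE_TOLERANCE lowers the concurrency level as
    failures accumulate so the failed-account count cannot overshoot the
    tolerance; SOFT_FAILURE_TOLERANCE keeps concurrency constant."""
    if mode == "STRICT_FAILURE_TOLERANCE":
        return max(1, initial_concurrency - failures)
    if mode == "SOFT_FAILURE_TOLERANCE":
        return initial_concurrency
    raise ValueError(f"unknown ConcurrencyMode: {mode}")
```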

**CallAs**  
Required: No  
This parameter is optional for the `SERVICE_MANAGED` permissions model and is not used for the `SELF_MANAGED` permissions model.
Specifies whether you are acting in the organization's management account or as a delegated administrator in a member account.  
If this parameter is set to `DELEGATED_ADMIN`, make sure that the pipeline IAM role has `organizations:ListDelegatedAdministrators` permission. Otherwise, the action will fail while running with an error similar to the following: `Account used is not a delegated administrator`.
+ `SELF`: Stack set deployment will use service-managed permissions while signed in to the management account.
+ `DELEGATED_ADMIN`: Stack set deployment will use service-managed permissions while signed in to a delegated administrator account.

### Input artifacts
<a name="action-reference-StackSet-input"></a>

You must include at least one input artifact that contains the template for the stack set in a `CloudFormationStackSet` action. You can include more input artifacts for lists of deployment targets, accounts, and parameters.
+ **Number of artifacts:** `1 to 3`
+ **Description:** You can include artifacts to provide:
  + The stack template file. (See the `TemplatePath` parameter.)
  + The parameters file. (See the `Parameters` parameter.)
  + The accounts file. (See the `DeploymentTargets` parameter.)

### Output artifacts
<a name="action-reference-StackSet-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

### Output variables
<a name="action-reference-StackSet-variables"></a>

If you configure this action, it produces variables that can be referenced by the action configuration of a downstream action in the pipeline. You configure an action with a namespace to make those variables available to the configuration of downstream actions.
+ **StackSetId**: The ID of the stack set.
+ **OperationId**: The ID of the stack set operation.

For more information, see [Variables reference](reference-variables.md).
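
For example, if the action is configured with the namespace `DeployVariables`, a downstream action can consume these variables in its configuration using the `#{namespace.variable}` syntax. The configuration field shown here is illustrative; any field that accepts variables can reference them:

```
Configuration:
  ParameterOverrides: 'ParameterKey=StackSetId,ParameterValue=#{DeployVariables.StackSetId}'
```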

### Example **CloudFormationStackSet** action configuration
<a name="action-reference-StackSet-example"></a>

The following examples show the action configuration for the **CloudFormationStackSet** action.

#### Example for the self-managed permissions model
<a name="action-reference-StackSet-example-selfmanaged"></a>

The following example shows a **CloudFormationStackSet** action where the deployment target entered is an AWS account ID.

------
#### [ YAML ]

```
Name: CreateStackSet
ActionTypeId:
  Category: Deploy
  Owner: AWS
  Provider: CloudFormationStackSet
  Version: '1'
RunOrder: 1
Configuration:
  DeploymentTargets: '111111222222'
  FailureTolerancePercentage: '20'
  MaxConcurrentPercentage: '25'
  PermissionModel: SELF_MANAGED
  Regions: us-east-1
  StackSetName: my-stackset
  TemplatePath: 'SourceArtifact::template.json'
OutputArtifacts: []
InputArtifacts:
  - Name: SourceArtifact
Region: us-west-2
Namespace: DeployVariables
```

------
#### [ JSON ]

```
{
    "Name": "CreateStackSet",
    "ActionTypeId": {
        "Category": "Deploy",
        "Owner": "AWS",
        "Provider": "CloudFormationStackSet",
        "Version": "1"
    },
    "RunOrder": 1,
    "Configuration": {
        "DeploymentTargets": "111111222222",
        "FailureTolerancePercentage": "20",
        "MaxConcurrentPercentage": "25",
        "PermissionModel": "SELF_MANAGED",
        "Regions": "us-east-1",
        "StackSetName": "my-stackset",
        "TemplatePath": "SourceArtifact::template.json"
    },
    "OutputArtifacts": [],
    "InputArtifacts": [
        {
            "Name": "SourceArtifact"
        }
    ],
    "Region": "us-west-2",
    "Namespace": "DeployVariables"
}
```

------

#### Example for the service-managed permissions model
<a name="action-reference-StackSet-example-servicemanaged"></a>

The following example shows a **CloudFormationStackSet** action for the service-managed permissions model where the option for auto deployment to AWS Organizations is enabled with stack retention.

------
#### [ YAML ]

```
Name: Deploy
ActionTypeId:
  Category: Deploy
  Owner: AWS
  Provider: CloudFormationStackSet
  Version: '1'
RunOrder: 1
Configuration:
  Capabilities: 'CAPABILITY_IAM,CAPABILITY_NAMED_IAM'
  OrganizationsAutoDeployment: EnabledWithStackRetention
  PermissionModel: SERVICE_MANAGED
  StackSetName: stacks-orgs
  TemplatePath: 'SourceArtifact::template.json'
OutputArtifacts: []
InputArtifacts:
  - Name: SourceArtifact
Region: eu-central-1
Namespace: DeployVariables
```

------
#### [ JSON ]

```
{
    "Name": "Deploy",
    "ActionTypeId": {
        "Category": "Deploy",
        "Owner": "AWS",
        "Provider": "CloudFormationStackSet",
        "Version": "1"
    },
    "RunOrder": 1,
    "Configuration": {
        "Capabilities": "CAPABILITY_IAM,CAPABILITY_NAMED_IAM",
        "OrganizationsAutoDeployment": "EnabledWithStackRetention",
        "PermissionModel": "SERVICE_MANAGED",
        "StackSetName": "stacks-orgs",
        "TemplatePath": "SourceArtifact::template.json"
    },
    "OutputArtifacts": [],
    "InputArtifacts": [
        {
            "Name": "SourceArtifact"
        }
    ],
    "Region": "eu-central-1",
    "Namespace": "DeployVariables"
}
```

------

## The CloudFormationStackInstances action
<a name="action-reference-StackInstances"></a>

This action creates new stack instances and deploys the stack set to the specified instances. A stack instance is a reference to a stack in a target account within a Region. A stack instance can exist without a stack; for example, if stack creation fails, the stack instance shows the reason for the failure. A stack instance is associated with only one stack set.

After the initial creation of a stack set, you can add new stack instances by using `CloudFormationStackInstances`. Template parameter values can be overridden at the stack instance level during create or update stack set instance operations.

Each stack set has one template and one set of template parameters. When you update the template or template parameters, you update them for the entire set. All instance statuses are then set to `OUTDATED` until the changes are deployed to that instance.

You can override parameter values on specific instances. For example, if the template contains a `stage` parameter with a value of `prod`, you can override the value of that parameter to be `beta` or `gamma` on a particular instance.

**Topics**
+ [Action type](#action-reference-StackInstances-type)
+ [Configuration parameters](#action-reference-StackInstances-config)
+ [Input artifacts](#action-reference-StackInstances-input)
+ [Output artifacts](#action-reference-StackInstances-output)
+ [Output variables](#action-reference-StackInstances-variables)
+ [Example action configuration](#action-reference-StackInstances-example)

### Action type
<a name="action-reference-StackInstances-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `CloudFormationStackInstances`
+ Version: `1`

### Configuration parameters
<a name="action-reference-StackInstances-config"></a>

**StackSetName**  
Required: Yes  
The name to associate with the stack set. This name must be unique in the Region where it is created.  
The name may only contain alphanumeric and hyphen characters. It must begin with an alphabetic character and be 128 characters or fewer.
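
Those naming rules correspond to a simple pattern check. A minimal sketch (illustrative only; CloudFormation performs its own validation):

```python
import re

# Stack set names: alphanumeric characters and hyphens only, must begin
# with an alphabetic character, and must be 128 characters or fewer.
_STACK_SET_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9-]{0,127}$")

def is_valid_stack_set_name(name):
    return bool(_STACK_SET_NAME.fullmatch(name))
```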

**DeploymentTargets**  
Required: No  
For the SERVICE_MANAGED permissions model, you can provide either the organization root ID or organizational unit (OU) IDs as deployment targets. For the SELF_MANAGED permissions model, you can only provide account IDs.
When you specify this parameter, you must also specify **Regions**.
A list of AWS account IDs or organizational unit IDs where stack set instances are created or updated.  
+ **Accounts**:

  You can provide accounts as a literal list or a file path:
  + *Literal*: Enter account IDs in a comma-separated list in the format `account_ID,account_ID`, as shown in the following example.

    ```
    111111222222,333333444444
    ```
  + *File path*: The location of a file containing the list of AWS accounts where stack set instances should be created or updated, entered in the format `InputArtifactName::AccountsFileName`. If you use a file path to specify either **Accounts** or **OrganizationalUnitIds**, the file contents must be JSON, as shown in the following example.

    ```
    SourceArtifact::accounts.txt
    ```

    The following example shows the file contents for `accounts.txt`:

    ```
    [
        "111111222222"
    ]
    ```

    The following example shows the file contents for `accounts.txt` when listing more than one account:

    ```
    [
        "111111222222","333333444444"
    ]
    ```
+ **OrganizationalUnitIds**: 
**Note**  
This parameter is optional for the SERVICE_MANAGED permissions model and is not used for the SELF_MANAGED permissions model. Do not use this parameter if you select **OrganizationsAutoDeployment**.

  The AWS organizational units in which to update associated stack instances.

  You can provide organizational unit IDs as a literal list or a file path.
  + *Literal*: Enter an array of strings separated by commas, as shown in the following example.

    ```
    ou-examplerootid111-exampleouid111,ou-examplerootid222-exampleouid222
    ```
  + *File path*: The location of a file containing the list of organizational unit IDs in which to create or update stack set instances. If you use a file path to specify either **Accounts** or **OrganizationalUnitIds**, the file contents must be JSON, as shown in the following example.

    Enter a path to the file in the format `InputArtifactName::OrganizationalUnitIdsFileName`.

    ```
    SourceArtifact::OU-IDs.txt
    ```

    The following example shows the file contents for `OU-IDs.txt`:

    ```
    [
        "ou-examplerootid111-exampleouid111","ou-examplerootid222-exampleouid222"
    ]
    ```

**Regions**  
Required: Yes  
When you specify this parameter, you must also specify **DeploymentTargets**.
A list of AWS Regions where stack set instances are created or updated. Regions are updated in the order in which they are entered.  
Enter a list of valid AWS Regions in the format `Region1,Region2`, as shown in the following example.  

```
us-west-2,us-east-1
```

**ParameterOverrides**  
Required: No  
A list of stack set parameters that you want to override in the selected stack instances. Overridden parameter values are applied to all stack instances in the specified accounts and Regions.  
You can provide parameters as a literal list or a file path:  
+ You can enter parameters in the following shorthand syntax format: `ParameterKey=string,ParameterValue=string,UsePreviousValue=boolean,ResolvedValue=string ParameterKey=string,ParameterValue=string,UsePreviousValue=boolean,ResolvedValue=string`. For more information about these data types, see [Template parameter data types](#action-reference-StackSets-datatypes).

  The following example shows a parameter named `BucketName` with the value `amzn-s3-demo-source-bucket`.

  ```
  ParameterKey=BucketName,ParameterValue=amzn-s3-demo-source-bucket
  ```

  The following example shows an entry with multiple parameters.

  ```
  ParameterKey=BucketName,ParameterValue=amzn-s3-demo-source-bucket
  ParameterKey=Asset1,ParameterValue=true
  ParameterKey=Asset2,ParameterValue=true
  ```
+ You can enter the location of a file containing the list of template parameter overrides, entered in the format `InputArtifactName::ParameterOverridesFileName`, as shown in the following example.

  ```
  SourceArtifact::parameter-overrides.txt
  ```

  The following example shows the file contents for `parameter-overrides.txt`.

  ```
  [
      {
          "ParameterKey": "BucketName",
          "ParameterValue": "amzn-s3-demo-source-bucket"
      },
      {
          "ParameterKey": "Asset1",
          "ParameterValue": "true"
      }
  ]
  ```
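
The shorthand form shown above can be converted into the JSON structure the file-based form uses. A minimal sketch of that parsing (an illustrative helper, not part of CodePipeline; values containing commas are not handled):

```python
_ALLOWED_FIELDS = ("ParameterKey", "ParameterValue",
                   "UsePreviousValue", "ResolvedValue")

def parse_parameter_shorthand(entry):
    """Parse one 'ParameterKey=...,ParameterValue=...' shorthand entry
    into a dict matching the JSON file format."""
    fields = {}
    for pair in entry.split(","):
        key, _, value = pair.partition("=")
        if key not in _ALLOWED_FIELDS:
            raise ValueError(f"unknown field: {key}")
        fields[key] = value
    return fields
```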

**FailureTolerancePercentage**  
Required: No  
The percentage of accounts per Region for which this stack operation can fail before CloudFormation stops the operation in that Region. If the operation is stopped in a Region, CloudFormation doesn't attempt the operation in subsequent Regions. When calculating the number of accounts based on the specified percentage, CloudFormation rounds *down* to the next whole number.

**MaxConcurrentPercentage**  
Required: No  
The maximum percentage of accounts on which to perform this operation at one time. When calculating the number of accounts based on the specified percentage, CloudFormation rounds *down* to the next whole number. If rounding down would result in zero, CloudFormation sets the number as one instead. Although you specify the *maximum*, for large deployments the actual number of accounts acted upon concurrently may be lower due to service throttling.

**RegionConcurrencyType**  
Required: No  
Specifies whether the stack set deploys across AWS Regions sequentially or in parallel. Deploying to multiple Regions in parallel can result in faster overall deployment times.  
+ *Parallel*: Stack set deployments will be conducted at the same time, as long as a Region's deployment failures don't exceed a specified failure tolerance.
+ *Sequential*: Stack set deployments will be conducted one at a time, as long as a Region's deployment failures don't exceed a specified failure tolerance. Sequential deployment is the default selection.

**ConcurrencyMode**  
Required: No  
The concurrency mode allows you to choose how the concurrency level behaves during stack set operations, whether with strict or soft failure tolerance. **Strict Failure Tolerance** lowers the deployment speed as stack set operation failures occur because concurrency decreases for each failure. **Soft Failure Tolerance** prioritizes deployment speed while still leveraging CloudFormation safety capabilities.   
+ `STRICT_FAILURE_TOLERANCE`: This option dynamically lowers the concurrency level to ensure the number of failed accounts never exceeds a particular failure tolerance. This is the default behavior.
+ `SOFT_FAILURE_TOLERANCE`: This option decouples failure tolerance from the actual concurrency. This allows stack set operations to run at a set concurrency level, regardless of the number of failures.

**CallAs**  
Required: No  
This parameter is optional for the `SERVICE_MANAGED` permissions model and is not used for the `SELF_MANAGED` permissions model.
Specifies whether you are acting in the organization's management account or as a delegated administrator in a member account.  
If this parameter is set to `DELEGATED_ADMIN`, make sure that the pipeline IAM role has `organizations:ListDelegatedAdministrators` permission. Otherwise, the action will fail while running with an error similar to the following: `Account used is not a delegated administrator`.
+ `SELF`: Stack set deployment will use service-managed permissions while signed in to the management account.
+ `DELEGATED_ADMIN`: Stack set deployment will use service-managed permissions while signed in to a delegated administrator account.

### Input artifacts
<a name="action-reference-StackInstances-input"></a>

The `CloudFormationStackInstances` action can include input artifacts that list deployment targets and parameters.
+ **Number of artifacts:** `0 to 2`
+ **Description:** As input, the stack set action optionally accepts artifacts for these purposes:
  + To provide the parameters file to use. (See the `ParameterOverrides` parameter.)
  + To provide the target accounts file to use. (See the `DeploymentTargets` parameter.)

### Output artifacts
<a name="action-reference-StackInstances-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

### Output variables
<a name="action-reference-StackInstances-variables"></a>

When configured, this action produces variables that can be referenced by the action configuration of a downstream action in the pipeline. You configure an action with a namespace to make those variables available to the configuration of downstream actions.
+ **StackSetId**: The ID of the stack set.
+ **OperationId**: The ID of the stack set operation.

For more information, see [Variables reference](reference-variables.md).

### Example action configuration
<a name="action-reference-StackInstances-example"></a>

The following examples show the action configuration for the **CloudFormationStackInstances** action.

#### Example for the self-managed permissions model
<a name="action-reference-StackInstances-example-selfmanaged"></a>

The following example shows a **CloudFormationStackInstances** action where the deployment target entered is an AWS account ID `111111222222`.

------
#### [ YAML ]

```
Name: my-instances
ActionTypeId:
  Category: Deploy
  Owner: AWS
  Provider: CloudFormationStackInstances
  Version: '1'
RunOrder: 2
Configuration:
  DeploymentTargets: '111111222222'
  Regions: 'us-east-1,us-east-2,us-west-1,us-west-2'
  StackSetName: my-stackset
OutputArtifacts: []
InputArtifacts:
  - Name: SourceArtifact
Region: us-west-2
```

------
#### [ JSON ]

```
{
    "Name": "my-instances",
    "ActionTypeId": {
        "Category": "Deploy",
        "Owner": "AWS",
        "Provider": "CloudFormationStackInstances",
        "Version": "1"
    },
    "RunOrder": 2,
    "Configuration": {
        "DeploymentTargets": "111111222222",
        "Regions": "us-east-1,us-east-2,us-west-1,us-west-2",
        "StackSetName": "my-stackset"
    },
    "OutputArtifacts": [],
    "InputArtifacts": [
        {
            "Name": "SourceArtifact"
        }
    ],
    "Region": "us-west-2"
}
```

------

#### Example for the service-managed permissions model
<a name="action-reference-StackInstances-example-servicemanaged"></a>

The following example shows a **CloudFormationStackInstances** action for the service-managed permissions model where the deployment target is an AWS Organizations organizational unit ID `ou-1111-1example`.

------
#### [ YAML ]

```
Name: Instances
ActionTypeId:
  Category: Deploy
  Owner: AWS
  Provider: CloudFormationStackInstances
  Version: '1'
RunOrder: 2
Configuration:
  DeploymentTargets: ou-1111-1example
  Regions: us-east-1
  StackSetName: my-stackset
OutputArtifacts: []
InputArtifacts:
  - Name: SourceArtifact
Region: eu-central-1
```

------
#### [ JSON ]

```
{
    "Name": "Instances",
    "ActionTypeId": {
        "Category": "Deploy",
        "Owner": "AWS",
        "Provider": "CloudFormationStackInstances",
        "Version": "1"
    },
    "RunOrder": 2,
    "Configuration": {
        "DeploymentTargets": "ou-1111-1example",
        "Regions": "us-east-1",
        "StackSetName": "my-stackset"
    },
    "OutputArtifacts": [],
    "InputArtifacts": [
        {
            "Name": "SourceArtifact"
        }
    ],
    "Region": "eu-central-1"
}
```

------

## Service role permissions: `CloudFormationStackSet` action
<a name="edit-role-cfn-stackset"></a>

For CloudFormation StackSets actions, the following minimum permissions are required.

For the `CloudFormationStackSet` action, add the following to your policy statement:

```
{
    "Effect": "Allow",
    "Action": [
        "cloudformation:CreateStackSet",
        "cloudformation:UpdateStackSet",
        "cloudformation:CreateStackInstances",
        "cloudformation:DescribeStackSetOperation",
        "cloudformation:DescribeStackSet",
        "cloudformation:ListStackInstances"
    ],
    "Resource": "resource_ARN"
},
```

## Service role permissions: `CloudFormationStackInstances` action
<a name="edit-role-cfn-stackinstances"></a>

For the `CloudFormationStackInstances` action, add the following to your policy statement:

```
{
    "Effect": "Allow",
    "Action": [
        "cloudformation:CreateStackInstances",
        "cloudformation:DescribeStackSetOperation"
    ],
    "Resource": "resource_ARN"
},
```
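
The `resource_ARN` placeholder in the statements above can be scoped to the stack sets the pipeline manages. The following sketch assumes the stack set ARN pattern `arn:aws:cloudformation:region:account-id:stackset/stack-set-name:id`; verify the pattern for your partition before using it:

```python
def stackset_policy_statement(actions, region, account_id, stack_set_name):
    """Build a policy statement scoped to one stack set.

    The ARN pattern here is an assumption for illustration; check it
    against the ARNs of your own stack set resources.
    """
    arn = (f"arn:aws:cloudformation:{region}:{account_id}"
           f":stackset/{stack_set_name}:*")
    return {"Effect": "Allow", "Action": list(actions), "Resource": arn}
```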

## Permissions models for stack set operations
<a name="action-reference-StackSets-permissions"></a>

Because CloudFormation StackSets performs operations across multiple accounts, you must define the necessary permissions in those accounts before you can create the stack set. You can define permissions through self-managed permissions or service-managed permissions.

With self-managed permissions, you create the two IAM roles that StackSets requires: an administrator role, such as `AWSCloudFormationStackSetAdministrationRole`, in the account where you define the stack set, and an execution role, such as `AWSCloudFormationStackSetExecutionRole`, in each account where you deploy stack set instances. Using this permissions model, StackSets can deploy to any AWS account in which the user has permissions to create an IAM role. For more information, see [Grant self-managed permissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-prereqs-self-managed.html) in the *AWS CloudFormation User Guide*.

With service-managed permissions, you can deploy stack instances to accounts managed by AWS Organizations. Using this permissions model, you don't have to create the necessary IAM roles because StackSets creates the IAM roles on your behalf. With this model, you can also enable automatic deployments to accounts that are added to the organization in the future. For more information, see [Enable trusted access with AWS Organizations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-enable-trusted-access.html) in the *AWS CloudFormation User Guide*.

## Template parameter data types
<a name="action-reference-StackSets-datatypes"></a>

The template parameters used in stack set operations include the following data types. For more information, see the [Parameter](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Parameter.html) data type in the *AWS CloudFormation API Reference*.

ParameterKey  
+ Description: The key associated with the parameter. If you don't specify a key and value for a particular parameter, AWS CloudFormation uses the default value that is specified in the template.
+ Example:

  ```
  "ParameterKey=BucketName,ParameterValue=amzn-s3-demo-source-bucket"
  ```

ParameterValue  
+ Description: The input value associated with the parameter.
+ Example:

  ```
  "ParameterKey=BucketName,ParameterValue=amzn-s3-demo-source-bucket"
  ```

UsePreviousValue  
+ During a stack update, use the existing parameter value that the stack is using for a given parameter key. If you specify `true`, do not specify a parameter value.
+ Example:

  ```
  "ParameterKey=Asset1,UsePreviousValue=true"
  ```

Each stack set has one template and one set of template parameters. When you update the template or template parameters, you update them for the entire set. All instance statuses are then set to `OUTDATED` until the changes are deployed to that instance.

You can override parameter values on specific instances. For example, if the template contains a `stage` parameter with a value of `prod`, you can override the value of that parameter to be `beta` or `gamma` on a particular instance.

## See also
<a name="action-reference-CloudFormation-links"></a>

The following related resources can help you as you work with this action.
+ [Parameter types](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html#parameters-section-structure-properties-type) – This reference chapter in the *AWS CloudFormation User Guide* provides more descriptions and examples for CloudFormation template parameters.
+ [Best practices for StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-bestpractices.html) – This topic in the *AWS CloudFormation User Guide* describes best practices for deploying stack sets.
+ [AWS CloudFormation API Reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/) – You can reference the following CloudFormation actions in the *AWS CloudFormation API Reference* for more information about the parameters used in stack set operations:
  + The [CreateStackSet](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStackSet.html) action creates a stack set.
  + The [UpdateStackSet](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_UpdateStackSet.html) action updates the stack set and associated stack instances in the specified accounts and Regions. Even if the stack set operation created by updating the stack set fails (completely or partially, below or above a specified failure tolerance), the stack set is updated with these changes. Subsequent CreateStackInstances calls on the specified stack set use the updated stack set.
  + The [CreateStackInstances](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStackInstances.html) action creates a stack instance for all specified Regions within all specified accounts on a self-managed permissions model, or within all specified deployment targets on a service-managed permissions model. You can override parameters for the instances created by this action. If the instances already exist, CreateStackInstances calls UpdateStackInstances with the same input parameters. When you use this action to create instances, it does not change the status of other stack instances.
  + The [UpdateStackInstances](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_UpdateStackInstances.html) action brings stack instances up to date with the stack set for all specified Regions within all specified accounts on a self-managed permissions model, or within all specified deployment targets on a service-managed permissions model. You can override parameters for the instances updated by this action. When you use this action to update a subset of instances, it does not change the status of other stack instances.
  + The [DescribeStackSetOperation](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_DescribeStackSetOperation.html) action returns the description of the specified stack set operation.
  + The [DescribeStackSet](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_DescribeStackSet.html) action returns the description of the specified stack set.

# AWS CodeBuild build and test action reference
<a name="action-reference-CodeBuild"></a>

Allows you to run builds and tests as part of your pipeline. When you run a CodeBuild build or test action, commands specified in the buildspec are run inside of a CodeBuild container. All artifacts that are specified as input artifacts to a CodeBuild action are available inside of the container running the commands. CodeBuild can provide either a build or test action. For more information, see the [AWS CodeBuild User Guide](https://docs.aws.amazon.com/codebuild/latest/userguide/).

When you use the CodePipeline wizard in the console to create a build project, the CodeBuild build project shows that the source provider is CodePipeline. When you create a build project in the CodeBuild console, you cannot specify CodePipeline as the source provider, but adding the build action to your pipeline adjusts the source in the CodeBuild console. For more information, see [ProjectSource](https://docs.aws.amazon.com/codebuild/latest/APIReference/API_ProjectSource.html) in the *AWS CodeBuild API Reference*.

**Topics**
+ [Action type](#action-reference-CodeBuild-type)
+ [Configuration parameters](#action-reference-CodeBuild-config)
+ [Input artifacts](#action-reference-CodeBuild-input)
+ [Output artifacts](#action-reference-CodeBuild-output)
+ [Output variables](#action-reference-CodeBuild-variables)
+ [Service role permissions: CodeBuild action](#edit-role-codebuild)
+ [Action declaration (CodeBuild example)](#action-reference-CodeBuild-example)
+ [See also](#action-reference-CodeBuild-links)

## Action type
<a name="action-reference-CodeBuild-type"></a>
+ Category: `Build` or `Test`
+ Owner: `AWS`
+ Provider: `CodeBuild`
+ Version: `1`

## Configuration parameters
<a name="action-reference-CodeBuild-config"></a>

**ProjectName**  
Required: Yes  
`ProjectName` is the name of the build project in CodeBuild.

**PrimarySource**  
Required: Conditional  
The value of the `PrimarySource` parameter must be the name of one of the input artifacts to the action. CodeBuild looks for the buildspec file and runs the buildspec commands in the directory that contains the unzipped version of this artifact.  
This parameter is required if multiple input artifacts are specified for a CodeBuild action. When there is only one source artifact for the action, the `PrimarySource` artifact defaults to that artifact.

**BatchEnabled**  
Required: No  
The Boolean value of the `BatchEnabled` parameter allows the action to run multiple builds in the same build execution.  
When this option is enabled, the `CombineArtifacts` option is available.  
For pipeline examples with batch builds enabled, see [CodePipeline integration with CodeBuild and batch builds](https://docs.aws.amazon.com/codebuild/latest/userguide/sample-pipeline-batch.html).

**BuildspecOverride**  
Required: No  
An inline buildspec definition or buildspec file declaration that overrides the latest one defined in the build project, for this build only. The buildspec defined on the project is not changed.  
If this value is set, it can be one of the following:  
+ An inline buildspec definition. For more information, see the syntax reference at [Buildspec syntax](https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax).
+ The path to an alternate buildspec file relative to the value of the built-in `CODEBUILD_SRC_DIR` environment variable or the path to an S3 bucket. The bucket must be in the same AWS Region as the build project. Specify the buildspec file using its ARN (for example, `arn:aws:s3:::my-codebuild-sample2/buildspec.yml`). If this value is not provided or is set to an empty string, the source code must contain a buildspec file in its root directory. For more information about adding a path, see [Buildspec File Name and Storage Location](https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-name-storage).
Because this property allows you to change the build commands that run in the container, an IAM principal with the ability to call this API and set this parameter can override the default settings. For that reason, we encourage you to use a trustworthy buildspec location, such as a file in your source repository or an Amazon S3 bucket.

**CombineArtifacts**  
Required: No  
The Boolean value of the `CombineArtifacts` parameter combines all build artifacts from a batch build into a single artifact file for the build action.  
To use this option, the `BatchEnabled` parameter must be enabled.

**EnvironmentVariables**  
Required: No  
The value of this parameter is used to set environment variables for the CodeBuild action in your pipeline. The value for the `EnvironmentVariables` parameter takes the form of a JSON array of environment variable objects. See the example parameter in [Action declaration (CodeBuild example)](#action-reference-CodeBuild-example).  
Each object has three parts, all of which are strings:  
+ `name`: The name or key of the environment variable. 
+ `value`: The value of the environment variable. When using the `PARAMETER_STORE` or `SECRETS_MANAGER` type, this value must be the name of a parameter you have already stored in AWS Systems Manager Parameter Store or a secret you have already stored in AWS Secrets Manager, respectively.
**Note**  
We strongly discourage the use of environment variables to store sensitive values, especially AWS credentials. When you use the CodeBuild console or AWS CLI, environment variables are displayed in plain text. For sensitive values, we recommend that you use the `SECRETS_MANAGER` type instead. 
+ `type`: (Optional) The type of environment variable. Valid values are `PARAMETER_STORE`, `SECRETS_MANAGER`, or `PLAINTEXT`. When not specified, this defaults to `PLAINTEXT`.
When you enter the `name`, `value`, and `type` for your environment variables configuration, especially if the environment variable contains CodePipeline output variable syntax, do not exceed the 1000-character limit for the configuration’s value field. A validation error is returned when this limit is exceeded.
For more information, see [ EnvironmentVariable](https://docs.aws.amazon.com/codebuild/latest/APIReference/API_EnvironmentVariable.html) in the AWS CodeBuild API Reference. For an example CodeBuild action with an environment variable that resolves to the GitHub branch name, see [Example: Use a BranchName variable with CodeBuild environment variables](actions-variables.md#actions-variables-examples-env-branchname).
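Because the `EnvironmentVariables` value is a serialized JSON string subject to the 1000-character limit described above, it can help to build and validate it programmatically. The following sketch serializes the array and enforces the limit; the variable names and values are hypothetical:

```python
import json

def build_environment_variables(variables):
    """Serialize environment variable objects for the EnvironmentVariables
    configuration value, checking the 1000-character limit that CodePipeline
    validates on the configuration's value field."""
    serialized = json.dumps(variables, separators=(",", ":"))
    if len(serialized) > 1000:
        raise ValueError(
            f"EnvironmentVariables value is {len(serialized)} characters; "
            "the limit is 1000."
        )
    return serialized

# Hypothetical variables: one plaintext value and one Secrets Manager reference.
# For SECRETS_MANAGER, the value is the name of the secret, not the secret itself.
value = build_environment_variables([
    {"name": "TEST_VARIABLE", "value": "TEST_VALUE", "type": "PLAINTEXT"},
    {"name": "DbPassword", "value": "MySecretName", "type": "SECRETS_MANAGER"},
])
```

The resulting string is what you would place in the `EnvironmentVariables` field of the action configuration, as in the declaration example later in this section.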

## Input artifacts
<a name="action-reference-CodeBuild-input"></a>
+ **Number of artifacts:** `1 to 5`
+ **Description:** CodeBuild looks for the buildspec file and runs the buildspec commands from the directory of the primary source artifact. When more than one input source is specified for the CodeBuild action, the primary artifact must be set with the `PrimarySource` action configuration parameter in CodePipeline. When a single input source is specified, that artifact is the primary artifact. 

  Each input artifact is extracted to its own directory, the locations of which are stored in environment variables. The directory for the primary source artifact is made available with `$CODEBUILD_SRC_DIR`. The directories for all other input artifacts are made available with `$CODEBUILD_SRC_DIR_yourInputArtifactName`.
**Note**  
The artifact configured in your CodeBuild project becomes the input artifact used by the CodeBuild action in your pipeline.
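As an illustration of the directory variables above, a build script can resolve each input artifact's location by name. The paths below are hypothetical placeholders for the directories CodeBuild actually assigns:

```python
def src_dir_for(environ, artifact_name=None):
    """Return the source directory for an input artifact.

    The primary artifact directory is CODEBUILD_SRC_DIR; every other
    input artifact is exposed as CODEBUILD_SRC_DIR_<artifactName>.
    """
    var = ("CODEBUILD_SRC_DIR" if artifact_name is None
           else f"CODEBUILD_SRC_DIR_{artifact_name}")
    return environ.get(var)

# Hypothetical environment for a build with MyApplicationSource1 (primary)
# and MyApplicationSource2 as input artifacts.
env = {
    "CODEBUILD_SRC_DIR": "/codebuild/output/src111/src",
    "CODEBUILD_SRC_DIR_MyApplicationSource2": "/codebuild/output/src222/src",
}
```

In an actual build you would pass `os.environ` instead of the hypothetical `env` mapping.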

## Output artifacts
<a name="action-reference-CodeBuild-output"></a>
+ **Number of artifacts:** `0 to 5` 
+ **Description:** These can be used to make the artifacts that are defined in the CodeBuild buildspec file available to subsequent actions in the pipeline. When only one output artifact is defined, this artifact can be defined directly under the `artifacts` section of the buildspec file. When more than one output artifact is specified, all artifacts referenced must be defined as secondary artifacts in the buildspec file. The names of the output artifacts in CodePipeline must match the artifact identifiers in the buildspec file.
**Note**  
The output artifact configured in your CodeBuild project becomes the CodePipeline output artifact in your pipeline action.

  If the `CombineArtifacts` parameter is selected for batch builds, the output artifact location contains the combined artifacts from multiple builds that were run in the same execution.

## Output variables
<a name="action-reference-CodeBuild-variables"></a>

This action produces as variables all environment variables that were exported as part of the build. For more details about how to export environment variables, see [ EnvironmentVariable](https://docs.aws.amazon.com/codebuild/latest/APIReference/API_EnvironmentVariable.html) in the *AWS CodeBuild API Reference*.

For more information about using CodeBuild environment variables in CodePipeline, see the examples in [CodeBuild action output variables](reference-variables.md#reference-variables-list-configured-codebuild). For a list of the environment variables you can use in CodeBuild, see [ Environment variables in build environments](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html) in the *AWS CodeBuild User Guide*.

## Service role permissions: CodeBuild action
<a name="edit-role-codebuild"></a>

For CodeBuild support, add the following to your policy statement:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Action": [
                "codebuild:BatchGetBuilds",
                "codebuild:StartBuild",
                "codebuild:BatchGetBuildBatches",
                "codebuild:StartBuildBatch"
            ],
            "Resource": [
                "arn:aws:codebuild:*:111122223333:project/[[ProjectName]]"
            ],
            "Effect": "Allow"
        }
    ]
}
```

------

## Action declaration (CodeBuild example)
<a name="action-reference-CodeBuild-example"></a>

------
#### [ YAML ]

```
Name: Build
Actions:
  - Name: PackageExport
    ActionTypeId:
      Category: Build
      Owner: AWS
      Provider: CodeBuild
      Version: '1'
    RunOrder: 1
    Configuration:
      BatchEnabled: 'true'
      CombineArtifacts: 'true'
      ProjectName: my-build-project
      PrimarySource: MyApplicationSource1
      EnvironmentVariables: '[{"name":"TEST_VARIABLE","value":"TEST_VALUE","type":"PLAINTEXT"},{"name":"ParamStoreTest","value":"PARAMETER_NAME","type":"PARAMETER_STORE"}]'
    OutputArtifacts:
      - Name: MyPipeline-BuildArtifact
    InputArtifacts:
      - Name: MyApplicationSource1
      - Name: MyApplicationSource2
```

------
#### [ JSON ]

```
{
    "Name": "Build",
    "Actions": [
        {
            "Name": "PackageExport",
            "ActionTypeId": {
                "Category": "Build",
                "Owner": "AWS",
                "Provider": "CodeBuild",
                "Version": "1"
            },
            "RunOrder": 1,
            "Configuration": {
                "BatchEnabled": "true",
                "CombineArtifacts": "true",
                "ProjectName": "my-build-project",
                "PrimarySource": "MyApplicationSource1",
                "EnvironmentVariables": "[{\"name\":\"TEST_VARIABLE\",\"value\":\"TEST_VALUE\",\"type\":\"PLAINTEXT\"},{\"name\":\"ParamStoreTest\",\"value\":\"PARAMETER_NAME\",\"type\":\"PARAMETER_STORE\"}]"
            },
            "OutputArtifacts": [
                {
                    "Name": "MyPipeline-BuildArtifact"
                }
            ],
            "InputArtifacts": [
                {
                    "Name": "MyApplicationSource1"
                },
                {
                    "Name": "MyApplicationSource2"
                }
            ]
        }
    ]
}
```

------

## See also
<a name="action-reference-CodeBuild-links"></a>

The following related resources can help you as you work with this action.
+ [AWS CodeBuild User Guide](https://docs.aws.amazon.com/codebuild/latest/userguide/) – For an example pipeline with a CodeBuild action, see [Use CodePipeline with CodeBuild to Test Code and Run Builds](https://docs.aws.amazon.com/codebuild/latest/userguide/how-to-create-pipeline.html). For examples of projects with multiple input and output CodeBuild artifacts, see [CodePipeline Integration with CodeBuild and Multiple Input Sources and Output Artifacts Sample](https://docs.aws.amazon.com/codebuild/latest/userguide/sample-pipeline-multi-input-output.html) and [Multiple Input Sources and Output Artifacts Sample](https://docs.aws.amazon.com/codebuild/latest/userguide/sample-multi-in-out.html).
+ [Tutorial: Create a pipeline that builds and tests your Android app with AWS Device Farm](tutorials-codebuild-devicefarm.md) – This tutorial provides a sample buildspec file and sample application to create a pipeline with a GitHub source that builds and tests an Android app with CodeBuild and AWS Device Farm.
+ [Build Specification Reference for CodeBuild ](https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html) – This reference topic provides definitions and examples for understanding CodeBuild buildspec files. For a list of the environment variables you can use in CodeBuild, see [ Environment variables in build environments](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html) in the *AWS CodeBuild User Guide*.

# AWS CodePipeline invoke action reference
<a name="action-reference-PipelineInvoke"></a>

You use a CodePipeline invoke action to simplify triggering downstream pipeline executions and passing pipeline variables and source revisions between pipelines.

**Note**  
This action is only supported for V2 type pipelines.

**Topics**
+ [Action type](#action-reference-PipelineInvoke-type)
+ [Configuration parameters](#action-reference-PipelineInvoke-parameters)
+ [Input artifacts](#action-reference-PipelineInvoke-input)
+ [Output artifacts](#action-reference-PipelineInvoke-output)
+ [Service role policy permissions for the CodePipeline invoke action](#action-reference-PipelineInvoke-permissions-action)
+ [Action declaration](#action-reference-PipelineInvoke-example)
+ [See also](#action-reference-PipelineInvoke-links)

## Action type
<a name="action-reference-PipelineInvoke-type"></a>
+ Category: `Invoke`
+ Owner: `AWS`
+ Provider: `CodePipeline`
+ Version: `1`

## Configuration parameters
<a name="action-reference-PipelineInvoke-parameters"></a>

**PipelineName**  
Required: Yes  
The name of the target pipeline to start when this action runs. You must have already created the target pipeline. When the invoking pipeline runs an execution, the invoke action starts the pipeline specified by `PipelineName`.

**SourceRevisions**  
Required: No  
The source revisions that you want the target pipeline to use when it is started by the invoking pipeline. For example, an S3 source action provides output variables such as the S3 Version ID and Object Key. You can specify a revision value to be used when the pipeline is invoked.   
For the CLI, you specify source revisions as a serialized JSON string. For more information about using source revision overrides, see [SourceRevisionOverride](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_SourceRevisionOverride.html) in the *AWS CodePipeline API Reference*.  
The mapping uses a string format as shown in the following example:  

```
[{"actionName":"Source","revisionType":"S3_OBJECT_VERSION_ID","revision
Value":"zq8mjNEXAMPLE"}]
```

**Variables**  
Required: No  
The names and values of variables that you want the action to support.  
For the CLI, you specify variables as a serialized JSON string. For more information about using pipeline variables, see [PipelineVariable](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_PipelineVariable.html) in the *AWS CodePipeline API Reference*.  
The mapping uses a string format as shown in the following example:  

```
[{"name":"VAR1","value":"VALUE1"}]
```
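When you assemble the action configuration for the CLI, both `SourceRevisions` and `Variables` must be embedded as serialized JSON strings. The following sketch produces them with `json.dumps`; the pipeline name and revision value are the placeholder values from the examples above:

```python
import json

# Placeholder values; actionName must match a source action in the target
# pipeline, and the revision value comes from your own S3 object version.
source_revisions = json.dumps(
    [{"actionName": "Source",
      "revisionType": "S3_OBJECT_VERSION_ID",
      "revisionValue": "zq8mjNEXAMPLE"}],
    separators=(",", ":"),
)
variables = json.dumps(
    [{"name": "VAR1", "value": "VALUE1"}],
    separators=(",", ":"),
)

configuration = {
    "PipelineName": "my-s3-pipeline",
    "SourceRevisions": source_revisions,
    "Variables": variables,
}
```

The `configuration` mapping above corresponds to the `configuration` block in the action declaration examples later in this section.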

The following image shows an example of the action added to a pipeline in the console. 

![\[A pipeline with an S3 source and a build stage that includes the pipeline invoke action\]](http://docs.aws.amazon.com/codepipeline/latest/userguide/images/example-pipeline-invoke-run.png)


The following image shows an example of the **Edit** page for the action. In the following example, the pipeline invoke action is configured as shown in the console. The action will start the `s3-pipeline-test` pipeline when the pipeline named `my-s3-pipeline` completes an execution. The example shows a source revision override of type `S3_OBJECT_VERSION_ID` with a specified revision value of `zq8mjNEXAMPLE`.

![\[The Edit action page for a new pipeline with the pipeline invoke action\]](http://docs.aws.amazon.com/codepipeline/latest/userguide/images/example-pipeline-invoke-edit.png)


## Input artifacts
<a name="action-reference-PipelineInvoke-input"></a>
+ **Number of artifacts:** `0`
+ **Description:** Input artifacts do not apply for this action type.

## Output artifacts
<a name="action-reference-PipelineInvoke-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Service role policy permissions for the CodePipeline invoke action
<a name="action-reference-PipelineInvoke-permissions-action"></a>

When CodePipeline runs the action, the CodePipeline service role policy requires the `codepipeline:StartPipelineExecution` permission, appropriately scoped down to the pipeline resource ARN in order to maintain access with least privilege.

```
{
    "Sid": "StatementForPipelineInvokeAction",
    "Effect": "Allow",
    "Action": "codepipeline:StartPipelineExecution",
    "Resource": [
        "arn:aws:codepipeline:{{region}}:{{AccountId}}:{{pipelineName}}"
    ]
}
```
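For example, the statement above can be generated per target pipeline so that the `Resource` element stays scoped to a single pipeline ARN. In this sketch, the Region, account ID, and pipeline name are placeholders you would substitute with your own values:

```python
def invoke_action_statement(region, account_id, pipeline_name):
    """Build a least-privilege policy statement for the invoke action,
    scoped to one target pipeline ARN (arn:aws:codepipeline:region:account:name)."""
    return {
        "Sid": "StatementForPipelineInvokeAction",
        "Effect": "Allow",
        "Action": "codepipeline:StartPipelineExecution",
        "Resource": [
            f"arn:aws:codepipeline:{region}:{account_id}:{pipeline_name}"
        ],
    }

statement = invoke_action_statement("us-west-2", "111122223333", "s3-pipeline-test")
```

You would then add the resulting statement to the `Statement` array of the CodePipeline service role policy.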

## Action declaration
<a name="action-reference-PipelineInvoke-example"></a>

------
#### [ YAML ]

```
name: Invoke-pipeline
actionTypeId:
  category: Invoke
  owner: AWS
  provider: CodePipeline
  version: '1'
runOrder: 2
configuration:
  PipelineName: my-s3-pipeline
  SourceRevisions: '[{"actionName":"Source","revisionType":"S3_OBJECT_VERSION_ID","revisionValue":"zq8mjNEXAMPLE"}]'
  Variables: '[{"name":"VAR1","value":"VALUE1"}]'
```

------
#### [ JSON ]

```
{
    "name": "Invoke-pipeline",
    "actionTypeId": {
        "category": "Invoke",
        "owner": "AWS",
        "provider": "CodePipeline",
        "version": "1"
    },
    "runOrder": 2,
    "configuration": {
        "PipelineName": "my-s3-pipeline",
        "SourceRevisions": "[{\"actionName\":\"Source\",\"revisionType\":\"S3_OBJECT_VERSION_ID\",\"revisionValue\":\"zq8mjNEXAMPLE"}]",
        "Variables": "[{\"name\":\"VAR1\",\"value\":\"VALUE1\"}]"
    }
}
```

------

## See also
<a name="action-reference-PipelineInvoke-links"></a>

The following related resources can help you as you work with this action.
+  [Start a pipeline with a source revision override](pipelines-trigger-source-overrides.md) – This section describes starting a pipeline with source revisions manually or through the EventBridge event input transformer.

# CodeCommit source action reference
<a name="action-reference-CodeCommit"></a>

Starts the pipeline when a new commit is made on the configured CodeCommit repository and branch.

If you use the console to create or edit the pipeline, CodePipeline creates an EventBridge rule that starts your pipeline when a change occurs in the repository.

**Note**  
For Amazon ECR, Amazon S3, or CodeCommit sources, you can also create a source override using input transform entry to use the `revisionValue` in EventBridge for your pipeline event, where the `revisionValue` is derived from the source event variable for your object key, commit, or image ID. For more information, see the optional step for input transform entry included in the procedures under [Amazon ECR source actions and EventBridge resources](create-cwe-ecr-source.md), [Connecting to Amazon S3 source actions with a source enabled for events](create-S3-source-events.md), or [CodeCommit source actions and EventBridge](triggering.md).

You must have already created a CodeCommit repository before you connect the pipeline through a CodeCommit action.

After a code change is detected, you have the following options for passing the code to subsequent actions:
+ **Default** – Configures the CodeCommit source action to output a ZIP file with a shallow copy of your commit.
+ **Full clone** – Configures the source action to output a Git URL reference to the repository for subsequent actions.

  Currently, the Git URL reference can only be used by downstream CodeBuild actions to clone the repo and associated Git metadata. Attempting to pass a Git URL reference to non-CodeBuild actions results in an error.

**Topics**
+ [Action type](#action-reference-CodeCommit-type)
+ [Configuration parameters](#action-reference-CodeCommit-config)
+ [Input artifacts](#action-reference-CodeCommit-input)
+ [Output artifacts](#action-reference-CodeCommit-output)
+ [Output variables](#action-reference-CodeCommit-variables)
+ [Service role permissions: CodeCommit action](#edit-role-codecommit)
+ [Example action configuration](#action-reference-CodeCommit-example)
+ [See also](#action-reference-CodeCommit-links)

## Action type
<a name="action-reference-CodeCommit-type"></a>
+ Category: `Source`
+ Owner: `AWS`
+ Provider: `CodeCommit`
+ Version: `1`

## Configuration parameters
<a name="action-reference-CodeCommit-config"></a>

**RepositoryName**  
Required: Yes  
The name of the repository where source changes are to be detected.

**BranchName**  
Required: Yes  
The name of the branch where source changes are to be detected.

**PollForSourceChanges**  
Required: No  
`PollForSourceChanges` controls whether CodePipeline polls the CodeCommit repository for source changes. We recommend that you use CloudWatch Events to detect source changes instead. For more information about configuring CloudWatch Events, see [Migrate polling pipelines (CodeCommit source) (CLI)](update-change-detection.md#update-change-detection-cli-codecommit) or [Migrate polling pipelines (CodeCommit source) (CloudFormation template)](update-change-detection.md#update-change-detection-cfn-codecommit).  
If you intend to configure a CloudWatch Events rule, you must set `PollForSourceChanges` to `false` to avoid duplicate pipeline executions.
Valid values for this parameter:  
+ `true`: If set, CodePipeline polls your repository for source changes.
**Note**  
If you omit `PollForSourceChanges`, CodePipeline defaults to polling your repository for source changes. This behavior is the same as if `PollForSourceChanges` is included and set to `true`.
+ `false`: If set, CodePipeline does not poll your repository for source changes. Use this setting if you intend to configure a CloudWatch Events rule to detect source changes.

**OutputArtifactFormat**  
Required: No  
The output artifact format. Values can be either `CODEBUILD_CLONE_REF` or `CODE_ZIP`. If unspecified, the default is `CODE_ZIP`.  
The `CODEBUILD_CLONE_REF` option can only be used by CodeBuild downstream actions.  
If you choose this option, you need to add the `codecommit:GitPull` permission to your CodeBuild service role as shown in [Add CodeBuild GitClone permissions for CodeCommit source actions](troubleshooting.md#codebuild-role-codecommitclone). You also need to add the `codecommit:GetRepository` permission to your CodePipeline service role as shown in [Add permissions to the CodePipeline service role](how-to-custom-role.md#how-to-update-role-new-services). For a tutorial that shows you how to use the **Full clone** option, see [Tutorial: Use full clone with a CodeCommit pipeline source](tutorials-codecommit-gitclone.md).

## Input artifacts
<a name="action-reference-CodeCommit-input"></a>
+ **Number of artifacts:** `0`
+ **Description:** Input artifacts do not apply for this action type.

## Output artifacts
<a name="action-reference-CodeCommit-output"></a>
+ **Number of artifacts:** `1` 
+ **Description:** The output artifact of this action is a ZIP file that contains the contents of the configured repository and branch at the commit specified as the source revision for the pipeline execution. The artifacts generated from the repository are the output artifacts for the CodeCommit action. The source code commit ID is displayed in CodePipeline as the source revision for the triggered pipeline execution.

## Output variables
<a name="action-reference-CodeCommit-variables"></a>

When configured, this action produces variables that can be referenced by the action configuration of a downstream action in the pipeline. These variables can be viewed as output variables even if the action doesn't have a namespace. You configure an action with a namespace to make those variables available to the configuration of downstream actions.

For more information, see [Variables reference](reference-variables.md).

**CommitId**  
The CodeCommit commit ID that triggered the pipeline execution. Commit IDs are the full SHA of the commit.

**CommitMessage**  
The description message, if any, associated with the commit that triggered the pipeline execution.

**RepositoryName**  
The name of the CodeCommit repository where the commit that triggered the pipeline was made.

**BranchName**  
The name of the branch for the CodeCommit repository where the source change was made.

**AuthorDate**  
The date when the commit was authored, in timestamp format.

**CommitterDate**  
The date when the commit was committed, in timestamp format.
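Downstream actions reference these variables with the `#{namespace.variable}` syntax described in [Variables reference](reference-variables.md). A small sketch of composing such a reference, using the `SourceVariables` namespace from the declaration examples below:

```python
def variable_reference(namespace, variable):
    """Build the syntax used to reference an output variable from a
    downstream action's configuration, e.g. #{SourceVariables.CommitId}."""
    return f"#{{{namespace}.{variable}}}"

ref = variable_reference("SourceVariables", "CommitId")
```

For example, a downstream CodeBuild action could set an environment variable value to `#{SourceVariables.CommitId}` to receive the commit ID that triggered the execution.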

## Service role permissions: CodeCommit action
<a name="edit-role-codecommit"></a>

When CodePipeline runs the action, the CodePipeline service role policy requires the following permissions, appropriately scoped down to the pipeline resource ARN in order to maintain access with least privilege. For example, add the following to your policy statement:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "codecommit:CancelUploadArchive",
                "codecommit:GetBranch",
                "codecommit:GetCommit",
                "codecommit:GetRepository",
                "codecommit:GetUploadArchiveStatus",
                "codecommit:UploadArchive"
            ],
            "Resource": [
                "arn:aws:codecommit:*:111122223333:[[codecommitRepostories]]"
            ]
        }
    ]
}
```

------



## Example action configuration
<a name="action-reference-CodeCommit-example"></a>

### Example for default output artifact format
<a name="w2aac56c49c29b3"></a>

------
#### [ YAML ]

```
name: Source
actionTypeId:
  category: Source
  owner: AWS
  provider: CodeCommit
  version: '1'
runOrder: 1
configuration:
  BranchName: main
  PollForSourceChanges: 'false'
  RepositoryName: MyWebsite
outputArtifacts:
  - name: Artifact_MyWebsiteStack
inputArtifacts: []
region: us-west-2
namespace: SourceVariables
```

------
#### [ JSON ]

```
{
    "name": "Source",
    "actionTypeId": {
        "category": "Source",
        "owner": "AWS",
        "provider": "CodeCommit",
        "version": "1"
    },
    "runOrder": 1,
    "configuration": {
        "BranchName": "main",
        "PollForSourceChanges": "false",
        "RepositoryName": "MyWebsite"
    },
    "outputArtifacts": [
        {
            "name": "Artifact_MyWebsiteStack"
        }
    ],
    "inputArtifacts": [],
    "region": "us-west-2",
    "namespace": "SourceVariables"
}
```

------

### Example for full clone output artifact format
<a name="w2aac56c49c29b5"></a>

------
#### [ YAML ]

```
name: Source
actionTypeId:
  category: Source
  owner: AWS
  provider: CodeCommit
  version: '1'
runOrder: 1
configuration:
  BranchName: main
  OutputArtifactFormat: CODEBUILD_CLONE_REF
  PollForSourceChanges: 'false'
  RepositoryName: MyWebsite
outputArtifacts:
  - name: SourceArtifact
inputArtifacts: []
region: us-west-2
namespace: SourceVariables
```

------
#### [ JSON ]

```
{
    "name": "Source",
    "actionTypeId": {
        "category": "Source",
        "owner": "AWS",
        "provider": "CodeCommit",
        "version": "1"
    },
    "runOrder": 1,
    "configuration": {
        "BranchName": "main",
        "OutputArtifactFormat": "CODEBUILD_CLONE_REF",
        "PollForSourceChanges": "false",
        "RepositoryName": "MyWebsite"
    },
    "outputArtifacts": [
        {
            "name": "SourceArtifact"
        }
    ],
    "inputArtifacts": [],
    "region": "us-west-2",
    "namespace": "SourceVariables"
}
```

------

## See also
<a name="action-reference-CodeCommit-links"></a>

The following related resources can help you as you work with this action.
+ [Tutorial: Create a simple pipeline (CodeCommit repository)](tutorials-simple-codecommit.md) – This tutorial provides a sample AppSpec file and sample CodeDeploy application and deployment group. Use this tutorial to create a pipeline with a CodeCommit source that deploys to Amazon EC2 instances.

# AWS CodeDeploy deploy action reference
<a name="action-reference-CodeDeploy"></a>

You use an AWS CodeDeploy action to deploy application code to your deployment fleet. Your deployment fleet can consist of Amazon EC2 instances, on-premises instances, or both.

**Note**  
This reference topic describes the CodeDeploy deployment action for CodePipeline where the deployment platform is Amazon EC2. For reference information about Amazon Elastic Container Service to CodeDeploy blue/green deployment actions in CodePipeline, see [Amazon Elastic Container Service and CodeDeploy blue-green deploy action reference](action-reference-ECSbluegreen.md).

**Topics**
+ [Action type](#action-reference-CodeDeploy-type)
+ [Configuration parameters](#action-reference-CodeDeploy-config)
+ [Input artifacts](#action-reference-CodeDeploy-input)
+ [Output artifacts](#action-reference-CodeDeploy-output)
+ [Service role permissions: AWS CodeDeploy action](#edit-role-codedeploy)
+ [Action declaration](#action-reference-CodeDeploy-example)
+ [See also](#action-reference-CodeDeploy-links)

## Action type
<a name="action-reference-CodeDeploy-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `CodeDeploy`
+ Version: `1`

## Configuration parameters
<a name="action-reference-CodeDeploy-config"></a>

**ApplicationName**  
Required: Yes  
The name of the application that you created in CodeDeploy.

**DeploymentGroupName**  
Required: Yes  
The deployment group that you created in CodeDeploy.

## Input artifacts
<a name="action-reference-CodeDeploy-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** The AppSpec file that CodeDeploy uses to determine:
  + What to install onto your instances from your application revision in Amazon S3 or GitHub.
  + Which lifecycle event hooks to run in response to deployment lifecycle events.

  For more information about the AppSpec file, see the [CodeDeploy AppSpec File Reference](https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html).
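For illustration only, a minimal AppSpec file for an EC2/on-premises deployment might look like the following sketch. The file paths and script name are placeholders; the authoritative format is described in the AppSpec file reference linked above.

```
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html
hooks:
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root
```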

  

## Output artifacts
<a name="action-reference-CodeDeploy-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Service role permissions: AWS CodeDeploy action
<a name="edit-role-codedeploy"></a>

For AWS CodeDeploy support, add the following to your policy statement:

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "codedeploy:CreateDeployment",
                "codedeploy:GetApplication",
                "codedeploy:GetDeployment",
                "codedeploy:RegisterApplicationRevision",
                "codedeploy:ListDeployments",
                "codedeploy:ListDeploymentGroups",
                "codedeploy:GetDeploymentGroup"
            ],
            "Resource": [
                "arn:aws:codedeploy:*:111122223333:application:[[codedeployApplications]]",
                "arn:aws:codedeploy:*:111122223333:deploymentgroup:[[codedeployApplications]]/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "codedeploy:GetDeploymentConfig"
            ],
            "Resource": [
                "arn:aws:codedeploy:*:111122223333:deploymentconfig:[[deploymentConfigs]]"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "codedeploy:ListDeploymentConfigs"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
```

------

## Action declaration
<a name="action-reference-CodeDeploy-example"></a>

------
#### [ YAML ]

```
Name: Deploy
Actions:
  - Name: Deploy
    ActionTypeId:
      Category: Deploy
      Owner: AWS
      Provider: CodeDeploy
      Version: '1'
    RunOrder: 1
    Configuration:
      ApplicationName: my-application
      DeploymentGroupName: my-deployment-group
    OutputArtifacts: []
    InputArtifacts:
      - Name: SourceArtifact
    Region: us-west-2
    Namespace: DeployVariables
```

------
#### [ JSON ]

```
{
    "Name": "Deploy",
    "Actions": [
        {
            "Name": "Deploy",
            "ActionTypeId": {
                "Category": "Deploy",
                "Owner": "AWS",
                "Provider": "CodeDeploy",
                "Version": "1"
            },
            "RunOrder": 1,
            "Configuration": {
                "ApplicationName": "my-application",
                "DeploymentGroupName": "my-deployment-group"
            },
            "OutputArtifacts": [],
            "InputArtifacts": [
                {
                    "Name": "SourceArtifact"
                }
            ],
            "Region": "us-west-2",
            "Namespace": "DeployVariables"
        }
    ]
},
```

------

## See also
<a name="action-reference-CodeDeploy-links"></a>

The following related resources can help you as you work with this action.
+ [Tutorial: Create a simple pipeline (S3 bucket)](tutorials-simple-s3.md) – This tutorial walks you through the creation of a source bucket, EC2 instances, and CodeDeploy resources to deploy a sample application. You then build your pipeline with a CodeDeploy deployment action that deploys code maintained in your S3 bucket to your Amazon EC2 instance.
+ [Tutorial: Create a simple pipeline (CodeCommit repository)](tutorials-simple-codecommit.md) – This tutorial walks you through the creation of your CodeCommit source repository, EC2 instances, and CodeDeploy resources to deploy a sample application. You then build your pipeline with a CodeDeploy deployment action that deploys code from your CodeCommit repository to your Amazon EC2 instance.
+ [CodeDeploy AppSpec File Reference](https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html) – This reference chapter in the *AWS CodeDeploy User Guide* provides reference information and examples for CodeDeploy AppSpec files.

# CodeStarSourceConnection for Bitbucket Cloud, GitHub, GitHub Enterprise Server, GitLab.com, and GitLab self-managed actions
<a name="action-reference-CodestarConnectionSource"></a>

Source actions for connections are supported by AWS CodeConnections, which allows you to create and manage connections between AWS resources and third-party repositories such as GitHub. The `CodeStarSourceConnection` action starts a pipeline when a new commit is made on a third-party source code repository. The source action retrieves code changes when a pipeline is run manually or when a webhook event is sent from the source provider. 

You can configure actions in your pipeline to use a Git configuration that allows you to start your pipeline with triggers. To configure pipeline triggers that filter on code push or pull request events, see [Add trigger with code push or pull request event types](pipelines-filter.md).
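As a sketch (the source action name and branch filter are illustrative, and the exact trigger schema is described in the linked topic), a V2 pipeline-level `triggers` block that starts the pipeline only on pushes to `main` might look like this:

```
"triggers": [
    {
        "providerType": "CodeStarSourceConnection",
        "gitConfiguration": {
            "sourceActionName": "ApplicationSource",
            "push": [
                {
                    "branches": {
                        "includes": [ "main" ]
                    }
                }
            ]
        }
    }
]
```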

**Note**  
This feature is not available in the Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Asia Pacific (Osaka), Africa (Cape Town), Middle East (Bahrain), Middle East (UAE), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), or AWS GovCloud (US-West) Regions. To reference other available actions, see [Product and service integrations with CodePipeline](integrations.md). For considerations with this action in the Europe (Milan) Region, see the note in [CodeStarSourceConnection for Bitbucket Cloud, GitHub, GitHub Enterprise Server, GitLab.com, and GitLab self-managed actions](#action-reference-CodestarConnectionSource).

Connections can associate your AWS resources with the following third-party repositories:
+ Bitbucket Cloud (through the **Bitbucket** provider option in the CodePipeline console or the `Bitbucket` provider in the CLI)
**Note**  
You can create connections to a Bitbucket Cloud repository. Installed Bitbucket provider types, such as Bitbucket Server, are not supported.  
If you are using a Bitbucket workspace, you must have administrator access to create the connection.
+ GitHub and GitHub Enterprise Cloud (through the **GitHub (via GitHub App)** provider option in the CodePipeline console or the `GitHub` provider in the CLI)
**Note**  
If your repository is in a GitHub organization, you must be the organization owner to create the connection. If you are using a repository that is not in an organization, you must be the repository owner.
+ GitHub Enterprise Server (through the **GitHub Enterprise Server** provider option in the CodePipeline console or the `GitHub Enterprise Server` provider in the CLI)
+ GitLab.com (through the **GitLab** provider option in the CodePipeline console or the `GitLab` provider in the CLI)
**Note**  
You can create connections to a repository where you have the **Owner** role in GitLab, and then the connection can be used with the repository with resources such as CodePipeline. For repositories in groups, you do not need to be the group owner.
+ Self-managed installation for GitLab (Enterprise Edition or Community Edition) (through the **GitLab self-managed** provider option in the CodePipeline console or the `GitLabSelfManaged` provider in the CLI)

**Note**  
Each connection supports all of the repositories you have with that provider. You only need to create a new connection for each provider type.

Connections allow your pipeline to detect source changes through the third-party provider's installation app. For example, webhooks are used to subscribe to GitHub event types and can be installed on an organization, a repository, or a GitHub App. Your connection installs a repository webhook on your GitHub App that subscribes to GitHub push type events.

After a code change is detected, you have the following options for passing the code to subsequent actions:
+ Default: Like other existing CodePipeline source actions, `CodeStarSourceConnection` can output a ZIP file with a shallow copy of your commit.
+ Full clone: `CodeStarSourceConnection` can also be configured to output a URL reference to the repo for subsequent actions.

  Currently, the Git URL reference can only be used by downstream CodeBuild actions to clone the repo and associated Git metadata. Attempting to pass a Git URL reference to non-CodeBuild actions results in an error.

CodePipeline prompts you to add the AWS Connector installation app to your third-party account when you create a connection. You must have already created your third-party provider account and repository before you can connect through the `CodeStarSourceConnection` action.

**Note**  
To create or attach a policy to your role with the permissions required to use AWS CodeStar connections, see [Connections permissions reference](https://docs.aws.amazon.com/dtconsole/latest/userguide/security-iam.html#permissions-reference-connections). Depending on when your CodePipeline service role was created, you might need to update its permissions to support AWS CodeStar connections. For instructions, see [Add permissions to the CodePipeline service role](how-to-custom-role.md#how-to-update-role-new-services).

**Note**  
To use connections in the Europe (Milan) AWS Region, you must:  
1. Install a Region-specific app.
1. Enable the Region.
This Region-specific app supports connections in the Europe (Milan) Region. It is published on the third-party provider site, and it is separate from the existing app supporting connections for other Regions. By installing this app, you authorize third-party providers to share your data with the service for this Region only, and you can revoke the permissions at any time by uninstalling the app.  
The service will not process or store your data unless you enable the Region. By enabling this Region, you grant our service permissions to process and store your data.  
Even if the Region is not enabled, third-party providers can still share your data with our service if the Region-specific app remains installed, so make sure to uninstall the app once you disable the Region. For more information, see [Enabling a Region](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html#rande-manage-enable).

**Topics**
+ [Action type](#action-reference-CodestarConnectionSource-type)
+ [Configuration parameters](#action-reference-CodestarConnectionSource-config)
+ [Input artifacts](#action-reference-CodestarConnectionSource-input)
+ [Output artifacts](#action-reference-CodestarConnectionSource-output)
+ [Output variables](#action-reference-CodestarConnectionSource-variables)
+ [Service role permissions: CodeConnections action](#edit-role-connections)
+ [Action declaration](#action-reference-CodestarConnectionSource-example)
+ [Installing the installation app and creating a connection](#action-reference-CodestarConnectionSource-auth)
+ [See also](#action-reference-CodestarConnectionSource-links)

## Action type
<a name="action-reference-CodestarConnectionSource-type"></a>
+ Category: `Source`
+ Owner: `AWS`
+ Provider: `CodeStarSourceConnection`
+ Version: `1`

## Configuration parameters
<a name="action-reference-CodestarConnectionSource-config"></a>

**ConnectionArn**  
Required: Yes  
The connection ARN that is configured and authenticated for the source provider.

**FullRepositoryId**  
Required: Yes  
The owner and name of the repository where source changes are to be detected.  
Example: `some-user/my-repo`  
You must maintain the correct case for the **FullRepositoryId** value. For example, if your user name is `some-user` and repo name is `My-Repo`, the recommended value of **FullRepositoryId** is `some-user/My-Repo`.

**BranchName**  
Required: Yes  
The name of the branch where source changes are to be detected.

**OutputArtifactFormat**  
Required: No  
Specifies the output artifact format. Can be either `CODEBUILD_CLONE_REF` or `CODE_ZIP`. If unspecified, the default is `CODE_ZIP`.  
The `CODEBUILD_CLONE_REF` option can only be used by CodeBuild downstream actions.  
If you choose this option, you will need to update the permissions for your CodeBuild project service role as shown in [Add CodeBuild GitClone permissions for connections to Bitbucket, GitHub, GitHub Enterprise Server, or GitLab.com](troubleshooting.md#codebuild-role-connections). For a tutorial that shows you how to use the **Full clone** option, see [Tutorial: Use full clone with a GitHub pipeline source](tutorials-github-gitclone.md).
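For example, a configuration fragment that selects the full clone format might look like the following sketch (the connection ARN and repository are placeholders):

```
Configuration:
  ConnectionArn: "arn:aws:codestar-connections:region:account-id:connection/connection-id"
  FullRepositoryId: "some-user/my-repo"
  BranchName: "main"
  OutputArtifactFormat: "CODEBUILD_CLONE_REF"
```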

**DetectChanges**  
 Required: No  
Controls automatically starting your pipeline when a new commit is made on the configured repository and branch. If unspecified, the default value is `true`, and the field does not display by default. Valid values for this parameter:  
+ `true`: CodePipeline automatically starts your pipeline on new commits.
+ `false`: CodePipeline does not start your pipeline on new commits.
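As a sketch with placeholder values, the following configuration fragment disables automatic starts so that the pipeline picks up changes only when it is run manually or by a trigger:

```
Configuration:
  ConnectionArn: "arn:aws:codestar-connections:region:account-id:connection/connection-id"
  FullRepositoryId: "some-user/my-repo"
  BranchName: "main"
  DetectChanges: "false"
```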

## Input artifacts
<a name="action-reference-CodestarConnectionSource-input"></a>
+ **Number of artifacts:** `0`
+ **Description:** Input artifacts do not apply for this action type.

## Output artifacts
<a name="action-reference-CodestarConnectionSource-output"></a>
+ **Number of artifacts:** `1` 
+ **Description:** The artifacts generated from the repository are the output artifacts for the `CodeStarSourceConnection` action. The source code commit ID is displayed in CodePipeline as the source revision for the triggered pipeline execution. You can configure the output artifact of this action as one of the following:
  + A ZIP file that contains the contents of the configured repository and branch at the commit specified as the source revision for the pipeline execution.
  + A JSON file that contains a URL reference to the repository so that downstream actions can perform Git commands directly.
**Important**  
This option can only be used by CodeBuild downstream actions.  
If you choose this option, you will need to update the permissions for your CodeBuild project service role as shown in [Troubleshooting CodePipeline](troubleshooting.md). For a tutorial that shows you how to use the **Full clone** option, see [Tutorial: Use full clone with a GitHub pipeline source](tutorials-github-gitclone.md).

## Output variables
<a name="action-reference-CodestarConnectionSource-variables"></a>

This action produces variables that can be referenced by the action configuration of a downstream action in the pipeline. The variables are generated even if the action doesn't have a namespace; you configure the action with a namespace to make those variables available to the configuration of downstream actions.

For more information, see [Variables reference](reference-variables.md).

AuthorDate  
The date when the commit was authored, in timestamp format.

BranchName  
The name of the branch for the repository where the source change was made.

CommitId  
The commit ID that triggered the pipeline execution.

CommitMessage  
The description message, if any, associated with the commit that triggered the pipeline execution.

ConnectionArn  
The connection ARN that is configured and authenticated for the source provider.

FullRepositoryName  
The name of the repository where the commit that triggered the pipeline was made.
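For example, if the source action is configured with `Namespace: SourceVariables`, a downstream CodeBuild action could surface the commit ID as a build environment variable. The following fragment is a sketch; the project name and variable name are illustrative:

```
"Configuration": {
    "ProjectName": "my-build-project",
    "EnvironmentVariables": "[{\"name\":\"COMMIT_ID\",\"value\":\"#{SourceVariables.CommitId}\",\"type\":\"PLAINTEXT\"}]"
}
```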

## Service role permissions: CodeConnections action
<a name="edit-role-connections"></a>

For CodeConnections, the following permission is required to create pipelines with a source that uses a connection, such as Bitbucket Cloud.

```
{
    "Effect": "Allow",
    "Action": [
        "codeconnections:UseConnection"
    ],
    "Resource": "resource_ARN"
},
```

For more information about the IAM permissions for connections, see [Connections permissions reference](https://docs.aws.amazon.com/dtconsole/latest/userguide/security-iam.html#permissions-reference-connections).

## Action declaration
<a name="action-reference-CodestarConnectionSource-example"></a>

In the following example, the output artifact is set to the default ZIP format of `CODE_ZIP` for the connection with ARN `arn:aws:codestar-connections:region:account-id:connection/connection-id`.

------
#### [ YAML ]

```
Name: Source
Actions:
  - InputArtifacts: []
    ActionTypeId:
      Version: '1'
      Owner: AWS
      Category: Source
      Provider: CodeStarSourceConnection
    OutputArtifacts:
      - Name: SourceArtifact
    RunOrder: 1
    Configuration:
      ConnectionArn: "arn:aws:codestar-connections:region:account-id:connection/connection-id"
      FullRepositoryId: "some-user/my-repo"
      BranchName: "main"
      OutputArtifactFormat: "CODE_ZIP"
    Name: ApplicationSource
```

------
#### [ JSON ]

```
{
    "Name": "Source",
    "Actions": [
        {
            "InputArtifacts": [],
            "ActionTypeId": {
                "Version": "1",
                "Owner": "AWS",
                "Category": "Source",
                "Provider": "CodeStarSourceConnection"
            },
            "OutputArtifacts": [
                {
                    "Name": "SourceArtifact"
                }
            ],
            "RunOrder": 1,
            "Configuration": {
                "ConnectionArn": "arn:aws:codestar-connections:region:account-id:connection/connection-id",
                "FullRepositoryId": "some-user/my-repo",
                "BranchName": "main",
                "OutputArtifactFormat": "CODE_ZIP"
            },
            "Name": "ApplicationSource"
        }
    ]
},
```

------

## Installing the installation app and creating a connection
<a name="action-reference-CodestarConnectionSource-auth"></a>

The first time you use the console to add a new connection to a third-party repository, you must authorize CodePipeline access to your repositories. You choose or create an installation app that helps you connect to the account where you have created your third-party code repository.

When you use the AWS CLI or a CloudFormation template, you must provide the connection ARN of a connection that has already gone through the installation handshake. Otherwise, the pipeline is not triggered.

**Note**  
For a `CodeStarSourceConnection` source action, you do not have to set up a webhook or default to polling. The connections action manages your source change detection for you.

## See also
<a name="action-reference-CodestarConnectionSource-links"></a>

The following related resources can help you as you work with this action.
+ [AWS::CodeStarConnections::Connection](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codestarconnections-connection.html) – The CloudFormation template reference for the AWS CodeStar Connections resource provides parameters and examples for connections in CloudFormation templates.
+ [AWS CodeStar Connections API Reference](https://docs.aws.amazon.com/codestar-connections/latest/APIReference/Welcome.html) – The *AWS CodeStar Connections API Reference* provides reference information for the available connections actions.
+ To view the steps for creating a pipeline with source actions supported by connections, see the following:
  + For Bitbucket Cloud, use the **Bitbucket** option in the console or the `CodeStarSourceConnection` action in the CLI. See [Bitbucket Cloud connections](connections-bitbucket.md).
  + For GitHub and GitHub Enterprise Cloud, use the **GitHub** provider option in the console or the `CodeStarSourceConnection` action in the CLI. See [GitHub connections](connections-github.md).
  + For GitHub Enterprise Server, use the **GitHub Enterprise Server** provider option in the console or the `CodeStarSourceConnection` action in the CLI. See [GitHub Enterprise Server connections](connections-ghes.md).
  + For GitLab.com, use the **GitLab** provider option in the console or the `CodeStarSourceConnection` action with the `GitLab` provider in the CLI. See [GitLab.com connections](connections-gitlab.md).
+ To view a Getting Started tutorial that creates a pipeline with a Bitbucket source and a CodeBuild action, see [Getting started with connections](https://docs.aws.amazon.com/dtconsole/latest/userguide/getting-started-connections.html).
+ For a tutorial that shows you how to connect to a GitHub repository and use the **Full clone** option with a downstream CodeBuild action, see [Tutorial: Use full clone with a GitHub pipeline source](tutorials-github-gitclone.md).

# Commands action reference
<a name="action-reference-Commands"></a>

The Commands action allows you to run shell commands in a virtual compute instance. When you run the action, the commands specified in the action configuration run in a separate container. All artifacts that are specified as input artifacts to the Commands action are available inside of the container running the commands. This action allows you to specify commands without first creating a CodeBuild project. For more information, see [ActionDeclaration](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ActionDeclaration.html) and [OutputArtifact](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_OutputArtifact.html) in the *AWS CodePipeline API Reference*.

**Important**  
This action uses CodePipeline managed CodeBuild compute to run commands in a build environment. Running the commands action will incur separate charges in AWS CodeBuild.

**Note**  
The Commands action is only available for V2 type pipelines.

**Topics**
+ [Considerations for the Commands action](#action-reference-Commands-considerations)
+ [Service role policy permissions](#action-reference-Commands-policy)
+ [Action type](#action-reference-Commands-type)
+ [Configuration parameters](#action-reference-Commands-config)
+ [Input artifacts](#action-reference-Commands-input)
+ [Output artifacts](#action-reference-Commands-output)
+ [Environment variables](#action-reference-Commands-envvars)
+ [Service role permissions: Commands action](#edit-role-Commands)
+ [Action declaration (example)](#action-reference-Commands-example)
+ [See also](#action-reference-Commands-links)

## Considerations for the Commands action
<a name="action-reference-Commands-considerations"></a>

The following considerations apply for the Commands action.
+ The Commands action uses CodeBuild resources similarly to the CodeBuild action, while allowing you to run shell commands in a virtual compute instance without the need to associate or create a build project.
**Note**  
Running the commands action will incur separate charges in AWS CodeBuild.
+ Because the Commands action in CodePipeline uses CodeBuild resources, the builds run by the action will be attributed to the build limits for your account in CodeBuild. Builds run by the Commands action will count toward the concurrent build limits as configured for that account.
+ The timeout for builds with the Commands action is 55 minutes, as based on CodeBuild builds.
+ The compute instance uses an isolated build environment in CodeBuild. 
**Note**  
Because the isolated build environment is used at the account level, an instance might be reused for another pipeline execution.
+ Multi-line command formats are not supported. You must use single-line format when entering commands.
+ The commands action is supported for cross-account actions. To add a cross-account commands action, add `actionRoleArn` from your target account in the action declaration.
+ For this action, CodePipeline will assume the pipeline service role and use that role to allow access to resources at runtime. It is recommended to configure the service role so that the permissions are scoped down to the action level.
+ The permissions added to the CodePipeline service role are detailed in [Add permissions to the CodePipeline service role](how-to-custom-role.md#how-to-update-role-new-services) .
+ The permission needed to view logs in the console is detailed in [Permissions required to view compute logs in the console](security-iam-permissions-console-logs.md) .
+ Unlike other actions in CodePipeline, some fields for this action, such as `commands`, are set at the action level rather than inside the action's `configuration` block.

## Service role policy permissions
<a name="action-reference-Commands-policy"></a>

When CodePipeline runs the action, CodePipeline creates a log group using the name of the pipeline as follows. This enables you to scope down permissions to log resources using the pipeline name.

```
/aws/codepipeline/MyPipelineName
```

If you are using an existing service role, to use the Commands action, you will need to add the following permissions for the service role.
+ logs:CreateLogGroup
+ logs:CreateLogStream
+ logs:PutLogEvents

In the service role policy statement, scope down the permissions to the pipeline level as shown in the following example.

```
{
    "Effect": "Allow",
    "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
    ],
    "Resource": [
        "arn:aws:logs:*:YOUR_AWS_ACCOUNT_ID:log-group:/aws/codepipeline/YOUR_PIPELINE_NAME",
        "arn:aws:logs:*:YOUR_AWS_ACCOUNT_ID:log-group:/aws/codepipeline/YOUR_PIPELINE_NAME:*"
   ]
}
```

To view logs in the console using the action details dialog page, the permission to view logs must be added to the console role. For more information, see the console permissions policy example in [Permissions required to view compute logs in the console](security-iam-permissions-console-logs.md).

## Action type
<a name="action-reference-Commands-type"></a>
+ Category: `Compute`
+ Owner: `AWS`
+ Provider: `Commands`
+ Version: `1`

## Configuration parameters
<a name="action-reference-Commands-config"></a>

**Commands**  
Required: Yes  
You can provide shell commands for the `Commands` action to run. In the console, commands are entered on separate lines. In the CLI, commands are entered as separate strings.  
Multi-line formats are not supported and result in an error message; you must use single-line format when entering commands in the **Commands** field.

The `EnvironmentType` and `ComputeType` values match those in CodeBuild; this action supports a subset of the available types. For more information, see [Build Environment Compute Types](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html).

**EnvironmentType**  
Required: No  
The OS image for the build environment that supports the Commands action. The following are valid values for build environments:  
+ `LINUX_CONTAINER`
+ `WINDOWS_SERVER_2022_CONTAINER`
The selection for **EnvironmentType** will then allow the compute type for that OS in the **ComputeType** field. For more information about the CodeBuild compute types available for this action, see the [Build environment compute modes and types](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html) reference in the CodeBuild User Guide.  
If not specified, the compute defaults to the following for the build environment:  
+ **Compute type:** `BUILD_GENERAL1_SMALL`
+ **Environment type:** `LINUX_CONTAINER`

**ComputeType**  
Required: No  
Based on the selection for EnvironmentType, the compute type can be provided. The following are available values for compute; however, note that the options available can vary by OS.  
+ `BUILD_GENERAL1_SMALL`
+ `BUILD_GENERAL1_MEDIUM`
+ `BUILD_GENERAL1_LARGE`
Some compute types are not compatible with certain environment types. For example, `WINDOWS_SERVER_2022_CONTAINER` is not compatible with `BUILD_GENERAL1_SMALL`. Using incompatible combinations causes the action to fail and generates a runtime error.

**outputVariables**  
Required: No  
Specify the names of the variables in your environment that you want to export. For a reference of CodeBuild environment variables, see [Environment variables in build environments](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-env-vars.html) in the *CodeBuild User Guide*. 

**Files**  
Required: No  
You can provide files that you want to export as output artifacts for the action.  
The supported format for files is the same as for CodeBuild file patterns. For example, enter `**/*` for all files. For more information, see [Build specification reference for CodeBuild](https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.artifacts.files) in the *CodeBuild User Guide*.  

![\[The Edit action page for a new pipeline with the Commands action\]](http://docs.aws.amazon.com/codepipeline/latest/userguide/images/commands-edit-screen.png)
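As a sketch (the variable and file names are illustrative, and the exact placement should be confirmed against your pipeline schema), the `outputVariables` and `files` fields appear at the action level alongside `commands`, rather than inside the `configuration` block:

```
"commands": [
    "echo artifact-contents > output.txt",
    "export BUILD_STAMP=$(date +%s)"
],
"outputVariables": [ "BUILD_STAMP" ],
"files": [ "output.txt" ]
```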


**VpcId**  
Required: No  
The VPC ID for your resources.

**Subnets**  
Required: No  
The subnets for the VPC. This field is needed when your commands need to connect to resources in a VPC.

**SecurityGroupIds**  
Required: No  
The security groups for the VPC. This field is needed when your commands need to connect to resources in a VPC.
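A sketch of the VPC-related fields in the action configuration follows; the IDs are placeholders, and the exact format for supplying multiple subnets or security groups should be confirmed against your pipeline schema:

```
"configuration": {
    "EnvironmentType": "LINUX_CONTAINER",
    "ComputeType": "BUILD_GENERAL1_SMALL",
    "VpcId": "vpc-0123456789abcdef0",
    "Subnets": "subnet-0123456789abcdef0",
    "SecurityGroupIds": "sg-0123456789abcdef0"
}
```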

The following is a JSON example of the action with configuration fields shown for environment and compute type, along with an example environment variable.

```
{
    "name": "Commands1",
    "actionTypeId": {
        "category": "Compute",
        "owner": "AWS",
        "provider": "Commands",
        "version": "1"
    },
    "inputArtifacts": [
        {
            "name": "SourceArtifact"
        }
    ],
    "commands": [
        "ls",
        "echo hello",
        "echo $BEDROCK_TOKEN"
    ],
    "configuration": {
        "EnvironmentType": "LINUX_CONTAINER",
        "ComputeType": "BUILD_GENERAL1_MEDIUM"
    },
    "environmentVariables": [
        {
            "name": "BEDROCK_TOKEN",
            "value": "apiTokens:bedrockToken",
            "type": "SECRETS_MANAGER"
        }
    ],
    "runOrder": 1
}
```

## Input artifacts
<a name="action-reference-Commands-input"></a>
+ **Number of artifacts:** `1 to 10`

## Output artifacts
<a name="action-reference-Commands-output"></a>
+ **Number of artifacts:** `0 to 1` 

## Environment variables
<a name="action-reference-Commands-envvars"></a>

**Key**  
The key in a key-value environment variable pair, such as `BEDROCK_TOKEN`.

**Value**  
The value for the key-value pair, such as `apiTokens:bedrockToken`. The value can be parameterized with output variables from pipeline actions or pipeline variables.  
When using the `SECRETS_MANAGER` type, this value must be the name of a secret you have already stored in AWS Secrets Manager.

**Type**  
Specifies the type of use for the environment variable value. The value can be either `PLAINTEXT` or `SECRETS_MANAGER`. If the value is `SECRETS_MANAGER`, provide the Secrets reference in the `EnvironmentVariable` value. When not specified, this defaults to `PLAINTEXT`.  
We strongly discourage the use of *plaintext* environment variables to store sensitive values, especially AWS credentials. When you use the CodeBuild console or AWS CLI, *plaintext* environment variables are displayed in plain text. For sensitive values, we recommend that you use the `SECRETS_MANAGER` type instead.

**Note**  
When you enter the `name`, `value`, and `type` for your environment variables configuration, do not exceed the 1,000-character limit for the configuration's value field, especially if the environment variable contains CodePipeline output variable syntax. A validation error is returned when this limit is exceeded.
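As a quick pre-flight check, you can measure the length of each variable's value before updating the pipeline. The following is a hypothetical helper, not part of CodePipeline; the constant reflects the 1,000-character configuration value limit described above:

```python
# Hypothetical pre-flight check for the 1000-character configuration value limit.
MAX_CONFIG_VALUE_LEN = 1000

def oversized_env_vars(env_vars):
    """Return the names of environment variables whose values exceed the limit."""
    return [v["name"] for v in env_vars if len(v["value"]) > MAX_CONFIG_VALUE_LEN]

env_vars = [
    {"name": "BEDROCK_TOKEN", "value": "apiTokens:bedrockToken", "type": "SECRETS_MANAGER"},
    {"name": "TOO_LONG", "value": "x" * 1500, "type": "PLAINTEXT"},
]
print(oversized_env_vars(env_vars))  # ['TOO_LONG']
```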

For an example action declaration showing an environment variable, see [Configuration parameters](#action-reference-Commands-config).

**Note**  
The `SECRETS_MANAGER` type is only supported for the Commands action.
Secrets referenced in the Commands action are redacted in the build logs, similar to CodeBuild. However, pipeline users who have **Edit** access to the pipeline can still potentially access these secret values by modifying the commands.
To use secrets stored in Secrets Manager, you must add the following permissions to your pipeline service role:  

  ```
  {
      "Effect": "Allow",
      "Action": [
          "secretsmanager:GetSecretValue"
      ],
      "Resource": [
          "SECRET_ARN"
      ]
  }
  ```

## Service role permissions: Commands action
<a name="edit-role-Commands"></a>

For Commands support, add the following to your policy statement:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:region:account_ID:log-group:/aws/codepipeline/pipeline-name",
                "arn:aws:logs:region:account_ID:log-group:/aws/codepipeline/pipeline-name:*"
            ]
        }
    ]
}
```

------

## Action declaration (example)
<a name="action-reference-Commands-example"></a>

------
#### [ YAML ]

```
name: Commands_action
actionTypeId:
  category: Compute
  owner: AWS
  provider: Commands
  version: '1'
runOrder: 1
configuration: {}
commands:
- ls
- echo hello
- 'echo pipeline Execution Id is #{codepipeline.PipelineExecutionId}'
outputArtifacts:
- name: BuildArtifact
  files:
  - '**/'
inputArtifacts:
- name: SourceArtifact
outputVariables:
- AWS_DEFAULT_REGION
region: us-east-1
namespace: compute
```

------
#### [ JSON ]

```
{
    "name": "Commands_action",
    "actionTypeId": {
        "category": "Compute",
        "owner": "AWS",
        "provider": "Commands",
        "version": "1"
    },
    "runOrder": 1,
    "configuration": {},
    "commands": [
        "ls",
        "echo hello",
        "echo pipeline Execution Id is #{codepipeline.PipelineExecutionId}"
    ],
    "outputArtifacts": [
        {
            "name": "BuildArtifact",
            "files": [
                "**/"
            ]
        }
    ],
    "inputArtifacts": [
        {
            "name": "SourceArtifact"
        }
    ],
    "outputVariables": [
        "AWS_DEFAULT_REGION"
    ],
    "region": "us-east-1",
    "namespace": "compute"
}
```

------

## See also
<a name="action-reference-Commands-links"></a>

The following related resources can help you as you work with this action.
+ [Tutorial: Create a pipeline that runs commands with compute (V2 type)](tutorials-commands.md) – This tutorial provides a sample pipeline with the Commands action.

# AWS Device Farm test action reference
<a name="action-reference-DeviceFarm"></a>

In your pipeline, you can configure a test action that uses AWS Device Farm to run and test your application on devices. Device Farm uses test pools of devices and testing frameworks to test applications on specific devices. For information about the types of testing frameworks supported by the Device Farm action, see [Working with Test Types in AWS Device Farm](https://docs.aws.amazon.com/devicefarm/latest/developerguide/test-types.html).

**Topics**
+ [Action type](#action-reference-DeviceFarm-type)
+ [Configuration parameters](#action-reference-DeviceFarm-config)
+ [Input artifacts](#action-reference-DeviceFarm-input)
+ [Output artifacts](#action-reference-DeviceFarm-output)
+ [Service role permissions: AWS Device Farm action](#edit-role-devicefarm)
+ [Action declaration](#action-reference-DeviceFarm-example)
+ [See also](#action-reference-DeviceFarm-links)

## Action type
<a name="action-reference-DeviceFarm-type"></a>
+ Category: `Test`
+ Owner: `AWS`
+ Provider: `DeviceFarm`
+ Version: `1`

## Configuration parameters
<a name="action-reference-DeviceFarm-config"></a>

**AppType**  
Required: Yes  
The OS and type of application you are testing. The following is a list of valid values:  
+ `iOS`
+ `Android`
+ `Web`

**ProjectId**  
Required: Yes  
The Device Farm project ID.   
To find your project ID, in the Device Farm console, choose your project, and then copy the project URL from the browser. The project ID is the value in the URL after `projects/`. In the following example, the project ID is `eec4905f-98f8-40aa-9afc-4c1cfexample`.  

```
https://<region-URL>/devicefarm/home?region=us-west-2#/projects/eec4905f-98f8-40aa-9afc-4c1cfexample/runs
```
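If you script pipeline setup, you can extract the project ID from the console URL programmatically. The following is a sketch; the URL below is a made-up example:

```python
import re

def project_id_from_url(url):
    """Extract the Device Farm project ID that follows 'projects/' in a console URL."""
    match = re.search(r"projects/([^/]+)", url)
    return match.group(1) if match else None

url = ("https://example.console.aws.amazon.com/devicefarm/home?region=us-west-2"
       "#/projects/eec4905f-98f8-40aa-9afc-4c1cfexample/runs")
print(project_id_from_url(url))  # eec4905f-98f8-40aa-9afc-4c1cfexample
```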

**App**  
Required: Yes  
The name and location of the application file in your input artifact. For example: `s3-ios-test-1.ipa`

**TestSpec**  
Required: Conditional  
The location of the test spec definition file in your input artifact. This file is required for tests that run in custom mode.

**DevicePoolArn**  
Required: Yes  
The Device Farm device pool ARN.   
To get the available device pool ARNs for the project, including the ARN for Top Devices, use the AWS CLI to enter the following command:   

```
aws devicefarm list-device-pools --arn arn:aws:devicefarm:us-west-2:account_ID:project:project_ID
```
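The command returns JSON. The following sketch picks out a pool ARN by name from that output; the response shown is abbreviated, hypothetical data:

```python
import json

# Abbreviated, hypothetical output from `aws devicefarm list-device-pools`.
response = json.loads("""
{
  "devicePools": [
    {"arn": "arn:aws:devicefarm:us-west-2::devicepool:082d10e5-EXAMPLE", "name": "Top Devices", "type": "CURATED"},
    {"arn": "arn:aws:devicefarm:us-west-2:111122223333:devicepool:0EXAMPLE", "name": "MyPrivatePool", "type": "PRIVATE"}
  ]
}
""")

def pool_arn(pools, name):
    """Return the ARN of the device pool with the given name, or None."""
    return next((p["arn"] for p in pools if p["name"] == name), None)

print(pool_arn(response["devicePools"], "Top Devices"))
```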

**TestType**  
Required: Yes  
Specifies the supported testing framework for your test. The following is a list of valid values for `TestType`:  
+ **APPIUM_JAVA_JUNIT**
+ **APPIUM_JAVA_TESTNG**
+ **APPIUM_NODE**
+ **APPIUM_RUBY**
+ **APPIUM_PYTHON**
+ **APPIUM_WEB_JAVA_JUNIT**
+ **APPIUM_WEB_JAVA_TESTNG**
+ **APPIUM_WEB_NODE**
+ **APPIUM_WEB_RUBY**
+ **APPIUM_WEB_PYTHON**
+ **BUILTIN_FUZZ**
+ **INSTRUMENTATION**
+ **XCTEST**
+ **XCTEST_UI**
The following test types are not supported by the action in CodePipeline: `WEB_PERFORMANCE_PROFILE`, `REMOTE_ACCESS_RECORD`, and `REMOTE_ACCESS_REPLAY`.
For information about Device Farm test types, see [Working with Test Types in AWS Device Farm](https://docs.aws.amazon.com/devicefarm/latest/developerguide/test-types.html).

**RadioBluetoothEnabled**  
Required: No  
A Boolean value that indicates whether to enable Bluetooth at the beginning of the test.

**RecordAppPerformanceData**  
Required: No  
A Boolean value that indicates whether to record device performance data such as CPU, FPS, and memory performance during the test.

**RecordVideo**  
Required: No  
A Boolean value that indicates whether to record video during the test.

**RadioWifiEnabled**  
Required: No  
A Boolean value that indicates whether to enable Wi-Fi at the beginning of the test.

**RadioNfcEnabled**  
Required: No  
A Boolean value that indicates whether to enable NFC at the beginning of the test.

**RadioGpsEnabled**  
Required: No  
A Boolean value that indicates whether to enable GPS at the beginning of the test.

**Test**  
Required: No  
The name and path of the test definition file in your source location. The path is relative to the root of the input artifact for your test.

**FuzzEventCount**  
Required: No  
The number of user interface events for the fuzz test to perform, between 1 and 10,000.

**FuzzEventThrottle**  
Required: No  
The number of milliseconds for the fuzz test to wait before performing the next user interface event, between 1 and 1,000.

**FuzzRandomizerSeed**  
Required: No  
A seed for the fuzz test to use for randomizing user interface events. Using the same number for subsequent fuzz tests results in identical event sequences.

**CustomHostMachineArtifacts**  
Required: No  
The location on the host machine where custom artifacts will be stored.

**CustomDeviceArtifacts**  
Required: No  
The location on the device where custom artifacts will be stored.  


**UnmeteredDevicesOnly**  
Required: No  
A Boolean value that indicates whether to only use your unmetered devices when running tests in this step.

**JobTimeoutMinutes**  
Required: No  
The number of minutes a test run will execute per device before it times out.

**Latitude**  
Required: No  
The latitude of the device expressed in geographic coordinate system degrees.

**Longitude**  
Required: No  
The longitude of the device expressed in geographic coordinate system degrees.

## Input artifacts
<a name="action-reference-DeviceFarm-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** The set of artifacts to be made available to the test action. Device Farm looks for the built application and test definitions to use.

## Output artifacts
<a name="action-reference-DeviceFarm-output"></a>
+ **Number of Artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Service role permissions: AWS Device Farm action
<a name="edit-role-devicefarm"></a>

When CodePipeline runs the action, the CodePipeline service role policy requires the following permissions, scoped down to the pipeline resource ARN to maintain least-privilege access. For example, add the following to your policy statement:

```
{
    "Effect": "Allow",
    "Action": [
        "devicefarm:ListProjects",
        "devicefarm:ListDevicePools",
        "devicefarm:GetRun",
        "devicefarm:GetUpload",
        "devicefarm:CreateUpload",
        "devicefarm:ScheduleRun"
    ],
    "Resource": "resource_ARN"
},
```

## Action declaration
<a name="action-reference-DeviceFarm-example"></a>

------
#### [ YAML ]

```
Name: Test
Actions:
  - Name: TestDeviceFarm
    ActionTypeId:
      Category: Test
      Owner: AWS
      Provider: DeviceFarm
      Version: '1'
    RunOrder: 1
    Configuration:
      App: s3-ios-test-1.ipa
      AppType: iOS
      DevicePoolArn: >-
        arn:aws:devicefarm:us-west-2::devicepool:0EXAMPLE-d7d7-48a5-ba5c-b33d66efa1f5
      ProjectId: eec4905f-98f8-40aa-9afc-4c1cfEXAMPLE
      TestType: APPIUM_PYTHON
      TestSpec: example-spec.yml
    OutputArtifacts: []
    InputArtifacts:
      - Name: SourceArtifact
    Region: us-west-2
```

------
#### [ JSON ]

```
{
    "Name": "Test",
    "Actions": [
        {
            "Name": "TestDeviceFarm",
            "ActionTypeId": {
                "Category": "Test",
                "Owner": "AWS",
                "Provider": "DeviceFarm",
                "Version": "1"
            },
            "RunOrder": 1,
            "Configuration": {
                "App": "s3-ios-test-1.ipa",
                "AppType": "iOS",
                "DevicePoolArn": "arn:aws:devicefarm:us-west-2::devicepool:0EXAMPLE-d7d7-48a5-ba5c-b33d66efa1f5",
                "ProjectId": "eec4905f-98f8-40aa-9afc-4c1cfEXAMPLE",
                "TestType": "APPIUM_PYTHON",
                "TestSpec": "example-spec.yml"
            },
            "OutputArtifacts": [],
            "InputArtifacts": [
                {
                    "Name": "SourceArtifact"
                }
            ],
            "Region": "us-west-2"
        }
    ]
},
```

------

## See also
<a name="action-reference-DeviceFarm-links"></a>

The following related resources can help you as you work with this action.
+ [Working with Test Types in Device Farm](https://docs.aws.amazon.com/devicefarm/latest/developerguide/test-types.html) – This reference chapter in the *Device Farm Developer Guide* provides more description about the Android, iOS, and Web Application testing frameworks supported by Device Farm.
+ [Actions in Device Farm](https://docs.aws.amazon.com/devicefarm/latest/APIReference/Welcome.html) – The API calls and parameters in the *Device Farm API Reference* can help you work with Device Farm projects.
+ [Tutorial: Create a pipeline that builds and tests your Android app with AWS Device Farm](tutorials-codebuild-devicefarm.md) – This tutorial provides a sample build spec file and sample application to create a pipeline with a GitHub source that builds and tests an Android app with CodeBuild and Device Farm.
+ [Tutorial: Create a pipeline that tests your iOS app with AWS Device Farm](tutorials-codebuild-devicefarm-S3.md) – This tutorial provides a sample application to create a pipeline with an Amazon S3 source that tests a built iOS app with Device Farm.

# Elastic Beanstalk deploy action reference
<a name="action-reference-Beanstalk"></a>

Elastic Beanstalk is a platform within AWS that is used for deploying and scaling web applications. You use an Elastic Beanstalk action to deploy application code to your deployment environment.

**Topics**
+ [Action type](#action-reference-Beanstalk-type)
+ [Configuration parameters](#action-reference-Beanstalk-config)
+ [Input artifacts](#action-reference-Beanstalk-input)
+ [Output artifacts](#action-reference-Beanstalk-output)
+ [Service role permissions: `ElasticBeanstalk` deploy action](#edit-role-beanstalk)
+ [Action declaration](#action-reference-Beanstalk-example)
+ [See also](#action-reference-Beanstalk-links)

## Action type
<a name="action-reference-Beanstalk-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `ElasticBeanstalk`
+ Version: `1`

## Configuration parameters
<a name="action-reference-Beanstalk-config"></a>

**ApplicationName**  
Required: Yes  
The name of the application that you created in Elastic Beanstalk. 

**EnvironmentName**  
Required: Yes  
The name of the environment that you created in Elastic Beanstalk. An environment is a collection of AWS resources running an application version. Each environment runs only one application version at a time; however, you can run the same application version or different application versions in many environments simultaneously.

## Input artifacts
<a name="action-reference-Beanstalk-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** The input artifact for the action.

## Output artifacts
<a name="action-reference-Beanstalk-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Service role permissions: `ElasticBeanstalk` deploy action
<a name="edit-role-beanstalk"></a>

For Elastic Beanstalk, the following are the minimum permissions needed to create pipelines with an `ElasticBeanstalk` deploy action.

```
{
    "Effect": "Allow",
    "Action": [
        "elasticbeanstalk:*",
        "ec2:*",
        "elasticloadbalancing:*",
        "autoscaling:*",
        "cloudwatch:*",
        "s3:*",
        "sns:*",
        "cloudformation:*",
        "rds:*",
        "sqs:*",
        "ecs:*"
    ],
    "Resource": "resource_ARN"
},
```

**Note**  
You should replace wildcards in the resource policy with the resources for the account you want to limit access to. For more information about creating a policy that grants least-privilege access, see [https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege).

## Action declaration
<a name="action-reference-Beanstalk-example"></a>

------
#### [ YAML ]

```
Name: Deploy
Actions:
  - Name: Deploy
    ActionTypeId:
      Category: Deploy
      Owner: AWS
      Provider: ElasticBeanstalk
      Version: '1'
    RunOrder: 1
    Configuration:
      ApplicationName: my-application
      EnvironmentName: my-environment
    OutputArtifacts: []
    InputArtifacts:
      - Name: SourceArtifact
    Region: us-west-2
    Namespace: DeployVariables
```

------
#### [ JSON ]

```
{
    "Name": "Deploy",
    "Actions": [
        {
            "Name": "Deploy",
            "ActionTypeId": {
                "Category": "Deploy",
                "Owner": "AWS",
                "Provider": "ElasticBeanstalk",
                "Version": "1"
            },
            "RunOrder": 1,
            "Configuration": {
                "ApplicationName": "my-application",
                "EnvironmentName": "my-environment"
            },
            "OutputArtifacts": [],
            "InputArtifacts": [
                {
                    "Name": "SourceArtifact"
                }
            ],
            "Region": "us-west-2",
            "Namespace": "DeployVariables"
        }
    ]
},
```

------

## See also
<a name="action-reference-Beanstalk-links"></a>

The following related resources can help you as you work with this action.
+ [Deploying a Flask application to Elastic Beanstalk](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-flask.html) – This tutorial walks you through the creation of your application and environment resources in Elastic Beanstalk using a sample Flask application. You can then build your pipeline with an Elastic Beanstalk deployment action that deploys your application from your source repository to your Elastic Beanstalk environment.

# Amazon Inspector `InspectorScan` invoke action reference
<a name="action-reference-InspectorScan"></a>

Amazon Inspector is a vulnerability management service that automatically discovers workloads and continually scans them for software vulnerabilities and unintended network exposure. The `InspectorScan` action in CodePipeline automates detecting and fixing security vulnerabilities in your open source code. The action is a managed compute action with security scanning capabilities. You can use InspectorScan with application source code in your third-party repository, such as GitHub or Bitbucket Cloud, or with images for container applications. The action scans your source or image and reports on vulnerabilities according to the severity thresholds that you configure. 

**Important**  
This action uses CodePipeline managed CodeBuild compute to run commands in a build environment. Running the action will incur separate charges in AWS CodeBuild.

**Topics**
+ [Action type ID](#action-reference-InspectorScan-type)
+ [Configuration parameters](#action-reference-InspectorScan-parameters)
+ [Input artifacts](#action-reference-InspectorScan-input)
+ [Output artifacts](#action-reference-InspectorScan-output)
+ [Output variables](#w2aac56c62c19)
+ [Service role permissions: `InspectorScan` action](#edit-role-InspectorScan)
+ [Action declaration](#w2aac56c62c23)
+ [See also](#action-reference-InspectorScan-links)

## Action type ID
<a name="action-reference-InspectorScan-type"></a>
+ Category: `Invoke`
+ Owner: `AWS`
+ Provider: `InspectorScan`
+ Version: `1`

Example:

```
{
    "Category": "Invoke",
    "Owner": "AWS",
    "Provider": "InspectorScan",
    "Version": "1"
},
```

## Configuration parameters
<a name="action-reference-InspectorScan-parameters"></a>

**InspectorRunMode**  
(Required) The string that indicates the mode of the scan. Valid values are `SourceCodeScan | ECRImageScan`.

**ECRRepositoryName**  
The name of the Amazon ECR repository where the image was pushed.

**ImageTag**  
The tag used for the image.

The remaining parameters set the vulnerability thresholds for the scan. The following threshold levels are available:

**CriticalThreshold**  
The number of critical severity vulnerabilities found in your source beyond which CodePipeline should fail the action.

**HighThreshold**  
The number of high severity vulnerabilities found in your source beyond which CodePipeline should fail the action.

**MediumThreshold**  
The number of medium severity vulnerabilities found in your source beyond which CodePipeline should fail the action.

**LowThreshold**  
The number of low severity vulnerabilities found in your source beyond which CodePipeline should fail the action.
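Conceptually, the action fails when any severity count reported by the scan exceeds its configured threshold. The following is a sketch of that decision logic (a hypothetical helper, not the service's actual implementation):

```python
def scan_fails(severity_counts, thresholds):
    """Return True if any severity count exceeds its configured threshold."""
    return any(
        severity_counts.get(severity, 0) > limit
        for severity, limit in thresholds.items()
    )

thresholds = {"critical": 0, "high": 5, "medium": 10, "low": 20}
print(scan_fails({"critical": 1}, thresholds))           # True: 1 critical > 0
print(scan_fails({"high": 3, "medium": 4}, thresholds))  # False: all within limits
```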


## Input artifacts
<a name="action-reference-InspectorScan-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** The source code to scan for vulnerabilities. If the scan is for an ECR repository, this input artifact is not needed.

## Output artifacts
<a name="action-reference-InspectorScan-output"></a>
+ **Number of artifacts:** `1`
+ **Description:** Vulnerability details of your source in the form of a Software Bill of Materials (SBOM) file.

## Output variables
<a name="w2aac56c62c19"></a>

When configured with a namespace, this action produces variables that can be referenced by the action configuration of a downstream action in the pipeline. The action produces these variables even if it doesn't have a namespace; you configure a namespace on the action to make the variables available to the configuration of downstream actions.

For more information, see [Variables reference](reference-variables.md).

**HighestScannedSeverity**  
The highest severity output from the scan. Valid values are `medium | high | critical`.
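For example, if you set a namespace such as `ScanVariables` on this action (a hypothetical name), a downstream action could consume the variable with the standard `#{namespace.variable}` syntax:

```
"configuration": {
    "UserParameters": "#{ScanVariables.HighestScannedSeverity}"
}
```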

## Service role permissions: `InspectorScan` action
<a name="edit-role-InspectorScan"></a>

For the `InspectorScan` action support, add the following to your policy statement:

```
{
    "Effect": "Allow",
    "Action": "inspector-scan:ScanSbom",
    "Resource": "*"
},
{
    "Effect": "Allow",
    "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
    ],
    "Resource": "resource_ARN"
},
```

In addition, if not already added for the Commands action, add the following permissions to your service role in order to view CloudWatch logs.

```
{
    "Effect": "Allow",
    "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream", 
        "logs:PutLogEvents"
    ],
    "Resource": "resource_ARN"
},
```

**Note**  
Scope down the permissions to the pipeline resource level by using resource-based permissions in the service role policy statement.

## Action declaration
<a name="w2aac56c62c23"></a>

------
#### [ YAML ]

```
name: Scan
actionTypeId:
  category: Invoke
  owner: AWS
  provider: InspectorScan
  version: '1'
runOrder: 1
configuration:
  InspectorRunMode: SourceCodeScan
outputArtifacts:
- name: output
inputArtifacts:
- name: SourceArtifact
region: us-east-1
```

------
#### [ JSON ]

```
{
                        "name": "Scan",
                        "actionTypeId": {
                            "category": "Invoke",
                            "owner": "AWS",
                            "provider": "InspectorScan",
                            "version": "1"
                        },
                        "runOrder": 1,
                        "configuration": {
                            "InspectorRunMode": "SourceCodeScan"
                        },
                        "outputArtifacts": [
                            {
                                "name": "output"
                            }
                        ],
                        "inputArtifacts": [
                            {
                                "name": "SourceArtifact"
                            }
                        ],
                        "region": "us-east-1"
                    },
```

------

## See also
<a name="action-reference-InspectorScan-links"></a>

The following related resources can help you as you work with this action.
+ For more information about Amazon Inspector, see the [Amazon Inspector](http://aws.amazon.com/inspector/) User Guide.

# AWS Lambda invoke action reference
<a name="action-reference-Lambda"></a>

Allows you to execute a Lambda function as an action in your pipeline. Using the event object that is an input to this function, the function has access to the action configuration, input artifact locations, output artifact locations, and other information required to access the artifacts. For an example event passed to a Lambda invoke function, see [Example JSON event](#action-reference-Lambda-event). As part of the implementation of the Lambda function, there must be a call to either the [PutJobSuccessResult](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_PutJobSuccessResult.html) API or the [PutJobFailureResult](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_PutJobFailureResult.html) API. Otherwise, the execution of this action hangs until the action times out. If you specify output artifacts for the action, they must be uploaded to the S3 bucket as part of the function implementation.

**Important**  
Do not log the JSON event that CodePipeline sends to Lambda because this can result in user credentials being logged in CloudWatch Logs. The CodePipeline role uses a JSON event to pass temporary credentials to Lambda in the `artifactCredentials` field. For an example event, see [Example JSON event](actions-invoke-lambda-function.md#actions-invoke-lambda-function-json-event-example).

## Action type
<a name="action-reference-Lambda-type"></a>
+ Category: `Invoke`
+ Owner: `AWS`
+ Provider: `Lambda`
+ Version: `1`

## Configuration parameters
<a name="action-reference-Lambda-config"></a>

**FunctionName**  
Required: Yes  
`FunctionName` is the name of the function created in Lambda.

**UserParameters**  
Required: No  
A string that can be processed as input by the Lambda function.

## Input artifacts
<a name="action-reference-Lambda-input"></a>
+ **Number of Artifacts:** `0 to 5`
+ **Description:** The set of artifacts to be made available to the Lambda function.

## Output artifacts
<a name="action-reference-Lambda-output"></a>
+ **Number of Artifacts:** `0 to 5` 
+ **Description:** The set of artifacts produced as output by the Lambda function.

## Output variables
<a name="action-reference-Lambda-variables"></a>

This action will produce as variables all key-value pairs that are included in the `outputVariables` section of the [PutJobSuccessResult API](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_PutJobSuccessResult.html) request.

For more information about variables in CodePipeline, see [Variables reference](reference-variables.md).

## Example action configuration
<a name="action-reference-Lambda-example"></a>

------
#### [ YAML ]

```
Name: Lambda
Actions:
  - Name: Lambda
    ActionTypeId:
      Category: Invoke
      Owner: AWS
      Provider: Lambda
      Version: '1'
    RunOrder: 1
    Configuration:
      FunctionName: myLambdaFunction
      UserParameters: 'http://192.0.2.4'
    OutputArtifacts: []
    InputArtifacts: []
    Region: us-west-2
```

------
#### [ JSON ]

```
{
    "Name": "Lambda",
    "Actions": [
        {
            "Name": "Lambda",
            "ActionTypeId": {
                "Category": "Invoke",
                "Owner": "AWS",
                "Provider": "Lambda",
                "Version": "1"
            },
            "RunOrder": 1,
            "Configuration": {
                "FunctionName": "myLambdaFunction",
                "UserParameters": "http://192.0.2.4"
            },
            "OutputArtifacts": [],
            "InputArtifacts": [],
            "Region": "us-west-2"
        }
    ]
},
```

------

## Example JSON event
<a name="action-reference-Lambda-event"></a>

The Lambda action sends a JSON event that contains the job ID, the pipeline action configuration, input and output artifact locations, and any encryption information for the artifacts. The job worker accesses these details to complete the Lambda action. For more information, see [job details](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_JobDetails.html). The following is an example event.

```
{
    "CodePipeline.job": {
        "id": "11111111-abcd-1111-abcd-111111abcdef",
        "accountId": "111111111111",
        "data": {
            "actionConfiguration": {
                "configuration": {
                    "FunctionName": "MyLambdaFunction",
                    "UserParameters": "input_parameter"
                }
            },
            "inputArtifacts": [
                {
                    "location": {
                        "s3Location": {
                            "bucketName": "bucket_name",
                            "objectKey": "filename"
                        },
                        "type": "S3"
                    },
                    "revision": null,
                    "name": "ArtifactName"
                }
            ],
            "outputArtifacts": [],
            "artifactCredentials": {
                "secretAccessKey": "secret_key",
                "sessionToken": "session_token",
                "accessKeyId": "access_key_ID"
            },
            "continuationToken": "token_ID",
            "encryptionKey": { 
              "id": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
              "type": "KMS"
            }
        }
    }
}
```

The JSON event provides the following job details for the Lambda action in CodePipeline:
+ `id`: The unique system-generated ID of the job.
+ `accountId`: The AWS account ID associated with the job.
+ `data`: Other information required for a job worker to complete the job. 
  + `actionConfiguration`: The action parameters for the Lambda action. For definitions, see [Configuration parameters](#action-reference-Lambda-config).
  + `inputArtifacts`: The artifact supplied to the action.
    + `location`: The artifact store location.
      + `s3Location`: The input artifact location information for the action.
        + `bucketName`: The name of the pipeline artifact store for the action (for example, an Amazon S3 bucket named codepipeline-us-east-2-1234567890).
        + `objectKey`: The name of the application (for example, `CodePipelineDemoApplication.zip`).
      + `type`: The type of artifact in the location. Currently, `S3` is the only valid artifact type.
    + `revision`: The artifact's revision ID. Depending on the type of object, this can be a commit ID (GitHub) or a revision ID (Amazon Simple Storage Service). For more information, see [ArtifactRevision](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ArtifactRevision.html).
    + `name`: The name of the artifact to be worked on, such as `MyApp`.
  + `outputArtifacts`: The output of the action.
    + `location`: The artifact store location.
      + `s3Location`: The output artifact location information for the action.
        + `bucketName`: The name of the pipeline artifact store for the action (for example, an Amazon S3 bucket named codepipeline-us-east-2-1234567890).
        + `objectKey`: The name of the application (for example, `CodePipelineDemoApplication.zip`).
      + `type`: The type of artifact in the location. Currently, `S3` is the only valid artifact type.
    + `revision`: The artifact's revision ID. Depending on the type of object, this can be a commit ID (GitHub) or a revision ID (Amazon Simple Storage Service). For more information, see [ArtifactRevision](https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ArtifactRevision.html).
    + `name`: The name of the output of an artifact, such as `MyApp`.
  + `artifactCredentials`: The AWS session credentials used to access input and output artifacts in the Amazon S3 bucket. These credentials are temporary credentials that are issued by AWS Security Token Service (AWS STS).
    + `secretAccessKey`: The secret access key for the session.
    + `sessionToken`: The token for the session.
    + `accessKeyId`: The access key ID for the session.
  + `continuationToken`: A token generated by the action. Future actions use this token to identify the running instance of the action. When the action is complete, no continuation token should be supplied.
  + `encryptionKey`: The encryption key used to encrypt the data in the artifact store, such as an AWS KMS key. If this is undefined, the default key for Amazon Simple Storage Service is used. 
    + `id`: The ID used to identify the key. For an AWS KMS key, you can use the key ID, the key ARN, or the alias ARN. 
    + `type`: The type of encryption key, such as an AWS KMS key.
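
For a Lambda function invoked by a pipeline, these fields arrive in the `CodePipeline.job` event. The following is a minimal Python sketch, not a complete handler; the helper names are hypothetical, but the job-data keys are the ones documented above. It shows how the temporary `artifactCredentials` and the `s3Location` of the first input artifact combine to download the artifact:

```python
def artifact_s3_ref(job_data):
    """Extract (bucket, key) of the first input artifact from the job data."""
    loc = job_data["inputArtifacts"][0]["location"]["s3Location"]
    return loc["bucketName"], loc["objectKey"]

def credential_kwargs(job_data):
    """Map the job's artifactCredentials onto boto3 client keyword arguments."""
    creds = job_data["artifactCredentials"]
    return {
        "aws_access_key_id": creds["accessKeyId"],
        "aws_secret_access_key": creds["secretAccessKey"],
        "aws_session_token": creds["sessionToken"],
    }

def read_input_artifact(job_data):
    """Download the artifact zip bytes using the temporary session credentials."""
    import boto3  # assumed available in the Lambda runtime
    s3 = boto3.client("s3", **credential_kwargs(job_data))
    bucket, key = artifact_s3_ref(job_data)
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
```

Because the credentials are issued by AWS STS for the job, they grant access only to the pipeline's artifact store and expire when the job ends.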

## See also
<a name="action-reference-Lambda-links"></a>

The following related resources can help you as you work with this action.
+ [AWS CloudFormation User Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/) – For more information about Lambda actions and CloudFormation artifacts for pipelines, see [Using Parameter Override Functions with CodePipeline Pipelines](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-parameter-override-functions.html), [Automating Deployment of Lambda-based Applications](https://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html), and [AWS CloudFormation Artifacts](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-cfn-artifacts.html).
+ [Invoke an AWS Lambda function in a pipeline in CodePipeline](actions-invoke-lambda-function.md) – This procedure provides a sample Lambda function and shows you how to use the console to create a pipeline with a Lambda invoke action.

# AWS OpsWorks deploy action reference
<a name="action-reference-OpsWorks"></a>

You use an AWS OpsWorks action to deploy with OpsWorks using your pipeline.

## Action type
<a name="action-reference-OpsWorks-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `OpsWorks`
+ Version: `1`

## Configuration parameters
<a name="action-reference-OpsWorks-config"></a>

**Stack**  
Required: Yes  
The OpsWorks stack. A stack is a container for your application infrastructure.

**App**  
Required: Yes  
The OpsWorks app. The app represents the code you want to deploy and run.

**Layer**  
Required: No  
The OpsWorks layer. A layer specifies the configuration and resources for a set of instances.

## Input artifacts
<a name="action-reference-OpsWorks-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** This is the input artifact for your action.

## Output artifacts
<a name="action-reference-OpsWorks-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Service role permissions: AWS OpsWorks action
<a name="edit-role-opsworks"></a>

For AWS OpsWorks support, add the following to your policy statement:

```
{
    "Effect": "Allow",
    "Action": [
        "opsworks:CreateDeployment",
        "opsworks:DescribeApps",
        "opsworks:DescribeCommands",
        "opsworks:DescribeDeployments",
        "opsworks:DescribeInstances",
        "opsworks:DescribeStacks",
        "opsworks:UpdateApp",
        "opsworks:UpdateStack"
    ],
    "Resource": "resource_ARN"
},
```

## Example action configuration
<a name="action-reference-OpsWorks-example"></a>

------
#### [ YAML ]

```
Name: ActionName
ActionTypeId:
  Category: Deploy
  Owner: AWS
  Version: 1
  Provider: OpsWorks
InputArtifacts:
  - Name: myInputArtifact
Configuration:
  Stack: my-stack
  App: my-app
```

------
#### [ JSON ]

```
{
    "Name": "ActionName",
    "ActionTypeId": {
        "Category": "Deploy",
        "Owner": "AWS",
        "Version": 1,
        "Provider": "OpsWorks"
    },
    "InputArtifacts": [
        {
            "Name": "myInputArtifact"
        }
    ],
    "Configuration": {
        "Stack": "my-stack",
        "App": "my-app"
    }
}
```

------

## See also
<a name="action-reference-OpsWorks-links"></a>

The following related resources can help you as you work with this action.
+ [AWS OpsWorks User Guide](https://docs.aws.amazon.com/opsworks/latest/userguide/) – For information about deploying with AWS OpsWorks, see the *AWS OpsWorks User Guide*.

# AWS Service Catalog deploy action reference
<a name="action-reference-ServiceCatalog"></a>

You use an AWS Service Catalog action to deploy templates using your pipeline. These are resource templates that you have created in Service Catalog.

## Action type
<a name="action-reference-ServiceCatalog-type"></a>
+ Category: `Deploy`
+ Owner: `AWS`
+ Provider: `ServiceCatalog`
+ Version: `1`

## Configuration parameters
<a name="action-reference-ServiceCatalog-config"></a>

**TemplateFilePath**  
Required: Yes  
The file path for your resource template in your source location.

**ProductVersionName**  
Required: Yes  
The product version in Service Catalog.

**ProductType**  
Required: Yes  
The product type in Service Catalog.

**ProductId**  
Required: Yes  
The product ID in Service Catalog.

**ProductVersionDescription**  
Required: No  
The product version description in Service Catalog.

## Input artifacts
<a name="action-reference-ServiceCatalog-input"></a>
+ **Number of artifacts:** `1`
+ **Description:** This is the input artifact for your action.

## Output artifacts
<a name="action-reference-ServiceCatalog-output"></a>
+ **Number of artifacts:** `0` 
+ **Description:** Output artifacts do not apply for this action type.

## Service role permissions: Service Catalog action
<a name="edit-role-servicecatalog"></a>

For Service Catalog support, add the following to your policy statement:

```
{
    "Effect": "Allow",
    "Action": [
        "servicecatalog:ListProvisioningArtifacts",
        "servicecatalog:CreateProvisioningArtifact",
        "servicecatalog:DescribeProvisioningArtifact",
        "servicecatalog:DeleteProvisioningArtifact",
        "servicecatalog:UpdateProduct"
    ],
    "Resource": "resource_ARN"
},
{
    "Effect": "Allow",
    "Action": [
        "cloudformation:ValidateTemplate"
    ],
    "Resource": "resource_ARN"
}
```

## Example action configurations by type of configuration file
<a name="action-reference-ServiceCatalog-example"></a>

The following example shows a valid configuration for a deploy action that uses Service Catalog, for a pipeline that was created in the console without a separate configuration file:

```
"configuration": {
  "TemplateFilePath": "S3_template.json",
  "ProductVersionName": "devops S3 v2",
  "ProductType": "CLOUD_FORMATION_TEMPLATE",
  "ProductVersionDescription": "Product version description",
  "ProductId": "prod-example123456"
}
```

The following example shows a valid configuration for a deploy action that uses Service Catalog, for a pipeline that was created in the console with a separate `sample_config.json` configuration file:

```
"configuration": {
  "ConfigurationFilePath": "sample_config.json",
  "ProductId": "prod-example123456"
}
```

### Example action configuration
<a name="action-reference-ServiceCatalog-example-default"></a>

------
#### [ YAML ]

```
Name: ActionName
ActionTypeId:
  Category: Deploy
  Owner: AWS
  Version: 1
  Provider: ServiceCatalog
OutputArtifacts:
  - Name: myOutputArtifact
Configuration:
  TemplateFilePath: S3_template.json
  ProductVersionName: devops S3 v2
  ProductType: CLOUD_FORMATION_TEMPLATE
  ProductVersionDescription: Product version description
  ProductId: prod-example123456
```

------
#### [ JSON ]

```
{
    "Name": "ActionName",
    "ActionTypeId": {
        "Category": "Deploy",
        "Owner": "AWS",
        "Version": 1,
        "Provider": "ServiceCatalog"
    },
    "OutputArtifacts": [
        {
            "Name": "myOutputArtifact"
        }
    ],
    "Configuration": {
        "TemplateFilePath": "S3_template.json",
        "ProductVersionName": "devops S3 v2",
        "ProductType": "CLOUD_FORMATION_TEMPLATE",
        "ProductVersionDescription": "Product version description",
        "ProductId": "prod-example123456"
    }
}
```

------

## See also
<a name="action-reference-ServiceCatalog-links"></a>

The following related resources can help you as you work with this action.
+ [Service Catalog User Guide](https://docs.aws.amazon.com/servicecatalog/latest/userguide/) – For information about resources and templates in Service Catalog, see the *Service Catalog User Guide*.
+ [Tutorial: Create a pipeline that deploys to Service Catalog](tutorials-S3-servicecatalog.md) – This tutorial shows you how to create and configure a pipeline to deploy your product template to Service Catalog and deliver changes you have made in your source repository.

# AWS Step Functions invoke action reference
<a name="action-reference-StepFunctions"></a>

An AWS CodePipeline action that does the following:
+ Starts an AWS Step Functions state machine execution from your pipeline.
+ Provides an initial state to the state machine through either a property in the action configuration or a file located in a pipeline artifact to be passed as input.
+ Optionally sets an execution ID prefix for identifying executions originating from the action.
+ Supports [Standard and Express](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-standard-vs-express.html) state machines.

**Note**  
The Step Functions action runs on Lambda, and it therefore has artifact size quotas that are the same as the artifact size quotas for Lambda functions. For more information, see [Lambda quotas](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html) in the Lambda Developer Guide.

## Action type
<a name="action-reference-StepFunctions-type"></a>
+ Category: `Invoke`
+ Owner: `AWS`
+ Provider: `StepFunctions`
+ Version: `1`

## Configuration parameters
<a name="action-reference-StepFunctions-config"></a>

**StateMachineArn**  
Required: Yes  
The Amazon Resource Name (ARN) for the state machine to be invoked.

**ExecutionNamePrefix**  
Required: No  
By default, the action execution ID is used as the state machine execution name. If a prefix is provided, it is prepended to the action execution ID with a hyphen and together used as the state machine execution name.  

```
myPrefix-1624a1d1-3699-43f0-8e1e-6bafd7fde791
```
For an express state machine, the name should only contain 0-9, A-Z, a-z, - and \_.

**InputType**  
Required: No  
+ **Literal** (default): When specified, the value in the **Input** field is passed directly to the state machine input.

  Example entry for the **Input** field when **Literal** is selected:

  ```
  {"action": "test"}
  ```
+ **FilePath**: The contents of a file in the input artifact specified by the **Input** field are used as the input for the state machine execution. An input artifact is required when **InputType** is set to **FilePath**.

  Example entry for the **Input** field when **FilePath** is selected:

  ```
  assets/input.json
  ```

**Input**  
Required: Conditional  
+ **Literal**: When **InputType** is set to **Literal** (default), this field is optional. 

  If provided, the **Input** field is used directly as the input for the state machine execution. Otherwise, the state machine is invoked with an empty JSON object `{}`.
+ **FilePath**: When **InputType** is set to **FilePath**, this field is required.

  An input artifact is also required when **InputType** is set to **FilePath**.

  The contents of the file in the input artifact specified are used as the input for the state machine execution.

## Input artifacts
<a name="action-reference-StepFunctions-input"></a>
+ **Number of artifacts:** `0 to 1`
+ **Description:** If **InputType** is set to **FilePath**, this artifact is required and is used to source the input for the state machine execution.

## Output artifacts
<a name="action-reference-StepFunctions-output"></a>
+ **Number of artifacts:** `0 to 1` 
+ **Description:**
  + **Standard State Machines**: If provided, the output artifact is populated with the output of the state machine. This is obtained from the `output` property of the [Step Functions DescribeExecution API](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html) response after the state machine execution completes successfully.
  + **Express State Machines**: Not supported.

## Output variables
<a name="action-reference-StepFunctions-variables"></a>

This action produces output variables that can be referenced by the action configuration of a downstream action in the pipeline.

For more information, see [Variables reference](reference-variables.md).

**StateMachineArn**  
The ARN of the state machine.

**ExecutionArn**  
The ARN of the execution of the state machine. Standard state machines only.

## Service role permissions: `StepFunctions` action
<a name="edit-role-stepfunctions"></a>

For the `StepFunctions` action, the following are the minimum permissions needed to create pipelines with a Step Functions invoke action.

```
{
    "Effect": "Allow",
    "Action": [
        "states:DescribeStateMachine",
        "states:DescribeExecution",
        "states:StartExecution"
    ],
    "Resource": "resource_ARN"
},
```

## Example action configuration
<a name="action-reference-StepFunctions-example"></a>

### Example for default input
<a name="action-reference-StepFunctions-example-default"></a>

------
#### [ YAML ]

```
Name: ActionName
ActionTypeId:
  Category: Invoke
  Owner: AWS
  Version: 1
  Provider: StepFunctions
OutputArtifacts:
  - Name: myOutputArtifact
Configuration:
  StateMachineArn: arn:aws:states:us-east-1:111122223333:stateMachine:HelloWorld-StateMachine
  ExecutionNamePrefix: my-prefix
```

------
#### [ JSON ]

```
{
    "Name": "ActionName",
    "ActionTypeId": {
        "Category": "Invoke",
        "Owner": "AWS",
        "Version": 1,
        "Provider": "StepFunctions"
    },
    "OutputArtifacts": [
        {
            "Name": "myOutputArtifact"
        }
    ],
    "Configuration": {
        "StateMachineArn": "arn:aws:states:us-east-1:111122223333:stateMachine:HelloWorld-StateMachine",
        "ExecutionNamePrefix": "my-prefix"
    }
}
```

------

### Example for literal input
<a name="action-reference-StepFunctions-example-literal"></a>

------
#### [ YAML ]

```
Name: ActionName
ActionTypeId:
  Category: Invoke
  Owner: AWS
  Version: 1
  Provider: StepFunctions
OutputArtifacts:
  - Name: myOutputArtifact
Configuration:
  StateMachineArn: arn:aws:states:us-east-1:111122223333:stateMachine:HelloWorld-StateMachine
  ExecutionNamePrefix: my-prefix
  Input: '{"action": "test"}'
```

------
#### [ JSON ]

```
{
    "Name": "ActionName",
    "ActionTypeId": {
        "Category": "Invoke",
        "Owner": "AWS",
        "Version": 1,
        "Provider": "StepFunctions"
    },
    "OutputArtifacts": [
        {
            "Name": "myOutputArtifact"
        }
    ],
    "Configuration": {
        "StateMachineArn": "arn:aws:states:us-east-1:111122223333:stateMachine:HelloWorld-StateMachine",
        "ExecutionNamePrefix": "my-prefix",
        "Input": "{\"action\": \"test\"}"
    }
}
```

------

### Example for input file
<a name="action-reference-StepFunctions-example-filepath"></a>

------
#### [ YAML ]

```
Name: ActionName
InputArtifacts:
  - Name: myInputArtifact
ActionTypeId:
  Category: Invoke
  Owner: AWS
  Version: 1
  Provider: StepFunctions
OutputArtifacts:
  - Name: myOutputArtifact
Configuration:
  StateMachineArn: 'arn:aws:states:us-east-1:111122223333:stateMachine:HelloWorld-StateMachine'
  ExecutionNamePrefix: my-prefix
  InputType: FilePath
  Input: assets/input.json
```

------
#### [ JSON ]

```
{
    "Name": "ActionName",
    "InputArtifacts": [
        {
            "Name": "myInputArtifact"
        }
    ],
    "ActionTypeId": {
        "Category": "Invoke",
        "Owner": "AWS",
        "Version": 1,
        "Provider": "StepFunctions"
    },
    "OutputArtifacts": [
        {
            "Name": "myOutputArtifact"
        }
    ],
    "Configuration": {
        "StateMachineArn": "arn:aws:states:us-east-1:111122223333:stateMachine:HelloWorld-StateMachine",
        "ExecutionNamePrefix": "my-prefix",
        "InputType": "FilePath",
        "Input": "assets/input.json"
    }
}
```

------

## Behavior
<a name="action-reference-StepFunctions-types"></a>

During a release, CodePipeline executes the configured state machine using the input as specified in the action configuration.

When **InputType** is set to **Literal**, the content of the **Input** action configuration field is used as the input for the state machine. When literal input is not provided, the state machine execution uses an empty JSON object `{}`. For more information about running a state machine execution without input, see the [Step Functions StartExecution API](https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html).

When **InputType** is set to **FilePath**, the action unzips the input artifact and uses the content of the file specified in the **Input** action configuration field as the input for the state machine. When **FilePath** is specified, the **Input** field is required and an input artifact must exist; otherwise, the action fails.
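
The input-resolution and naming rules above can be sketched in Python. This is an illustration of the documented behavior, not CodePipeline's actual implementation; `build_execution_request` and `read_artifact_file` are hypothetical names, and the returned dictionary mirrors the parameters of the Step Functions `StartExecution` API:

```python
def build_execution_request(config, action_execution_id, read_artifact_file=None):
    """Assemble a StartExecution request following the documented action behavior."""
    input_type = config.get("InputType", "Literal")
    if input_type == "FilePath":
        # The Input field names a file inside the unzipped input artifact.
        state_input = read_artifact_file(config["Input"])
    else:
        # Literal: use Input directly; fall back to an empty JSON object.
        state_input = config.get("Input") or "{}"
    # The action execution ID is the execution name; a configured prefix
    # is prepended with a hyphen.
    name = action_execution_id
    if config.get("ExecutionNamePrefix"):
        name = f"{config['ExecutionNamePrefix']}-{action_execution_id}"
    return {
        "stateMachineArn": config["StateMachineArn"],
        "name": name,
        "input": state_input,
    }
```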

After the execution starts successfully, behavior diverges for the two state machine types, *standard* and *express*.

### Standard state machines
<a name="action-reference-StepFunctions-types-standard"></a>

If the standard state machine execution was successfully started, CodePipeline polls the `DescribeExecution` API until the execution reaches a terminal status. If the execution completes successfully, the action succeeds; otherwise, it fails.

If an output artifact is configured, the artifact will contain the return value of the state machine. This is obtained from the `output` property of the [Step Functions DescribeExecution API](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html) response after the state machine execution completes successfully. Note that there are output length constraints enforced on this API.
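
The polling behavior can be approximated with a short sketch; again, this illustrates the documented behavior rather than the service's actual implementation, and `wait_for_execution` is a hypothetical helper around a boto3 Step Functions client:

```python
import time

# Terminal statuses from the Step Functions DescribeExecution API.
TERMINAL_STATUSES = {"SUCCEEDED", "FAILED", "TIMED_OUT", "ABORTED"}

def wait_for_execution(sfn, execution_arn, poll_seconds=10):
    """Poll DescribeExecution until the execution reaches a terminal status.

    Returns (status, output); 'output' is present only when the
    execution succeeded.
    """
    while True:
        resp = sfn.describe_execution(executionArn=execution_arn)
        if resp["status"] in TERMINAL_STATUSES:
            return resp["status"], resp.get("output")
        time.sleep(poll_seconds)
```

An action configured with an output artifact would then store the returned `output` value; any terminal status other than `SUCCEEDED` fails the action.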

#### Error handling
<a name="action-reference-StepFunctions-types-standard-handling"></a>
+ If the action fails to start a state machine execution, the action execution fails.
+ If the state machine execution fails to reach a terminal status before the CodePipeline Step Functions action reaches its timeout (default of 7 days), the action execution fails. The state machine might continue despite this failure. For more information about state machine execution timeouts in Step Functions, see [Standard vs. Express Workflows](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-standard-vs-express.html).
**Note**  
You can request a quota increase for the invoke action timeout for the account with the action. However, the quota increase applies to all actions of this type in all Regions for that account.
+ If the state machine execution reaches a terminal status of FAILED, TIMED\_OUT, or ABORTED, the action execution fails.

### Express state machines
<a name="action-reference-StepFunctions-types-express"></a>

If the express state machine execution was successfully started, the invoke action execution completes successfully.

Considerations for actions configured for express state machines:
+ You cannot designate an output artifact.
+ The action does not wait for the state machine execution to complete.
+ After the action execution is started in CodePipeline, the action execution succeeds even if the state machine execution fails.

#### Error handling
<a name="action-reference-StepFunctions-types-express-handling"></a>
+ If CodePipeline fails to start a state machine execution, the action execution fails. Otherwise, the action succeeds immediately. The action succeeds in CodePipeline regardless of how long the state machine execution takes to complete or its outcome.

## See also
<a name="action-reference-StepFunctions-links"></a>

The following related resources can help you as you work with this action.
+ [AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/) – For information about state machines, executions, and inputs for state machines, see the *AWS Step Functions Developer Guide*.
+ [Tutorial: Use an AWS Step Functions invoke action in a pipeline](tutorials-step-functions.md) – This tutorial gets you started with a sample standard state machine and shows you how to use the console to update a pipeline by adding a Step Functions invoke action.