

AWS Migration Hub is no longer open to new customers as of November 7, 2025. For capabilities similar to AWS Migration Hub, explore [AWS Transform](https://aws.amazon.com/transform).

# Migration Hub Orchestrator templates

Migration Hub Orchestrator offers the following templates to configure your migration workflows:
+ **Migrate SAP NetWeaver applications to AWS** – A template to migrate SAP NetWeaver-based applications (S/4HANA, BW4HANA, and ECC on HANA) running on SAP HANA database to AWS.
+ **Rehost applications on Amazon EC2** – A template to rehost applications on Amazon EC2 using AWS Application Migration Service (AWS MGN).
+ **Rehost SQL Server on Amazon EC2** – A template to rehost SQL Server on Amazon EC2 using automated SQL Server backup and restore.
+ **Replatform SQL Server on Amazon Relational Database Service (Amazon RDS)** – A template to replatform SQL Server on Amazon RDS using native SQL Server backup and restore.
+ **Replatform applications to Amazon ECS** – A template to replatform applications to containers on Amazon Elastic Container Service (Amazon ECS).
+ **Import virtual machine images to AWS** – A template to import virtual machine (VM) images to AWS as an Amazon Machine Image (AMI) for Amazon EC2.
+ **Custom templates** – A template that you have created by modifying an existing AWS managed template and saving the changes.

Some templates require the Migration Hub Orchestrator plugin to be configured on premises before you can run the workflow. The plugin communicates with the source and target environments to orchestrate and automate migrations. To download and set up the Migration Hub Orchestrator plugin, see [Configure Migration Hub Orchestrator plugin](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/configure-plugin.html). The following table indicates which templates require the plugin setup.


| Template | Plugin setup required | 
| --- | --- | 
| [Migrate SAP NetWeaver applications to AWS](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/migrate-sap.html) | Yes | 
| [Rehost applications on Amazon EC2](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/rehost-on-ec2.html) | Yes | 
| [Rehost SQL Server on Amazon EC2](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/rehost-sql-ec2.html) | Yes | 
| [Replatform SQL Server on Amazon RDS](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/replatform-sql-rds.html) | Yes | 
| [Import virtual machine images to AWS](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/import-vm-images.html) | Optional | 
| [Replatform applications to Amazon ECS](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/replatform-to-ecs.html) | No | 

# Migrate SAP NetWeaver-based applications and SAP HANA databases to AWS

With this template, you can automate the migration of your SAP NetWeaver-based applications together with their SAP HANA databases, or of SAP HANA databases only, to AWS.

**Topics**
+ [Migration types](#sap-migration-types)
+ [Prerequisites](#prerequisites-migrate-sap)
+ [Target environment setup](#target-env-setup-migrate-sap)
+ [Create a migration workflow](#create-workflow-migrate-sap)
+ [Details](#details-migrate-sap)
+ [Application](#applications-migrate-sap)
+ [Source environment configuration](#source-configurations-migrate-sap)
+ [Migration steps](#sap-migration-steps)

## Migration types


The template offers the following migration types.
+ SAP NetWeaver on SAP HANA – central system installation
+ SAP NetWeaver on SAP HANA – distributed system installation
+ SAP NetWeaver on SAP HANA – high availability installation
+ SAP NetWeaver on SAP HANA – scale-out
+ SAP HANA database – single node
+ SAP HANA database – high availability
+ SAP HANA database – scale-out

## Prerequisites


You must meet the following requirements to create a migration workflow using this template.
+ Verify that your servers and applications are on a supported operating system. For more information, see [Version support for SAP deployments](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap-versions.html).
+ Enable network connectivity between the source and target servers by opening the required ports on both servers.
+ Provide the credentials of the SAP HANA database instance running on your source server. The Migration Hub Orchestrator plugin uses these credentials to communicate with the source server.

  1. Sign in to [https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/).

  1. On the AWS Secrets Manager page, select **Store a new secret**.

  1. For Secret type, select **Other type of secret** and create the following key-value pairs.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/migrate-sap.html)
**Note**  
 The `hana_systemdb_username` and `hana_saptenantdb_username` users must have admin permissions to enable SAP HANA System Replication and perform database backups.

  1. Select **Next** and, for Secret name, enter a name that begins with `migrationhub-orchestrator-` (for example, `migrationhub-orchestrator-secretname123`).
**Important**  
The Secret ID must begin with the prefix `migrationhub-orchestrator-` and must only be followed by an alphanumeric value.

  1. Select **Next** and then, select **Store**.
+ The following parameters must be the same on the source and target environments.
  + SAP SID
  + SAP HANA SID
  + PAS instance number
  + ASCS instance number
  + SAP HANA instance number
  + SAP HANA database password
+ You must disable SAP HANA system replication before migrating SAP environments with high availability setup.
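The Secrets Manager naming rule above (the `migrationhub-orchestrator-` prefix followed only by an alphanumeric value) can be checked locally before you create the secret. The helper name below is hypothetical, and the alphanumeric-only check is an interpretation of the stated rule:

```shell
# Hypothetical helper: returns success (0) when a candidate name starts with
# "migrationhub-orchestrator-" and the remainder is non-empty and alphanumeric.
is_valid_orchestrator_name() {
  case "$1" in
    migrationhub-orchestrator-*) rest="${1#migrationhub-orchestrator-}" ;;
    *) return 1 ;;
  esac
  # rest must be non-empty and contain only [:alnum:] characters
  [ -n "$rest" ] && [ -z "$(printf '%s' "$rest" | tr -d '[:alnum:]')" ]
}

is_valid_orchestrator_name "migrationhub-orchestrator-secretname123" && echo "name is valid"
```

The same check applies to the key pair and secret names used later in this template.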

## Target environment setup

AWS Migration Hub Orchestrator guides you to create the target environment in AWS to host your SAP NetWeaver application using AWS Launch Wizard for SAP. For more information, see [Get started with AWS Launch Wizard for SAP](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap-getting-started.html).

Create an SAP deployment using AWS Launch Wizard for SAP. For more information, see [Deploy an SAP application with AWS Launch Wizard for SAP](https://docs.aws.amazon.com/launchwizard/latest/userguide/launch-wizard-sap-deploying.html#deploy-console-launch-wizard-sap).

**Note**  
Migration Hub Orchestrator supports single-node and multi-node SAP NetWeaver stack deployments for the target. You must choose to deploy the SAP NetWeaver software as part of the target environment setup with Launch Wizard.
+ Create a private key in the Amazon EC2 console and store it in AWS Secrets Manager. The plugin uses this private key, associated with the target instance, to perform migration tasks.

  **See the following steps to create a private key.**

  1. Sign in to the Amazon EC2 console.

  1. In the left navigation pane, under Network & Security, select **Key Pairs**.

  1. Select **Create key pair**.

  1. Enter a name for the key pair that begins with `migrationhub-orchestrator-` (for example, `migrationhub-orchestrator-keyname123`).
**Important**  
The Key Pair must begin with the prefix `migrationhub-orchestrator-` and must only be followed by an alphanumeric value.

  1. Select **RSA** as the Key pair type.

  1. Select **.pem** as the Private key file format.

  1. Select **Create key pair** and save the file.

  **See the following steps to store the private key.**

  1. Sign in to [https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/).

  1. On the AWS Secrets Manager page, select **Store a new secret**.

  1. For Secret type, select **Other type of secret** and select **Plaintext** below.

  1. Copy and paste the private key that you created in the Amazon EC2 console, and then select **Next**.

  1. In Secret name, enter the same name (`migrationhub-orchestrator-keyname123`) that you used for creating the key pair.

  1. Select **Next** and then, **Store**.
+ To establish a connection between your source and target environments, we recommend creating a new security group with your source IP address while creating an SAP deployment with Launch Wizard.

  1. Under **Infrastructure - SAP landscape**, go to **Security groups**.

  1. Select **Create new security groups**.

  1. In Connection type, select **IP Address/CIDR**.

  1. In Value, enter your source IP address.
+ Launch Wizard attaches the `AmazonEC2RoleForLaunchWizard` instance role by default when creating the target environment. After creating the target instance with Launch Wizard, attach the `AWSMigrationHubOrchestratorInstanceRolePolicy` managed policy to `AmazonEC2RoleForLaunchWizard`. For more information, see [AWS managed policies for Migration Hub Orchestrator](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AWSMigrationHubOrchestratorInstanceRolePolicy).
+ Migration Hub Orchestrator uses the same secret to connect to databases on source and target servers for validation. For your target server, ensure that you provide the same SAP HANA database sign-in credentials that you stored in AWS Secrets Manager following the steps in [Prerequisites](#prerequisites-migrate-sap).
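If you prefer the AWS CLI for two of the setup actions above (storing the private key as a secret, and attaching the managed policy to the instance role), the following sketch composes the commands without running them. The `file://` path matches the example key name, and the policy ARN path is an assumption to confirm in the IAM console:

```shell
KEY_SECRET_NAME="migrationhub-orchestrator-keyname123"    # must match the key pair name
KEY_FILE="migrationhub-orchestrator-keyname123.pem"       # private key saved from the EC2 console
ROLE_NAME="AmazonEC2RoleForLaunchWizard"
POLICY_ARN="arn:aws:iam::aws:policy/AWSMigrationHubOrchestratorInstanceRolePolicy"  # assumed ARN path

STORE_KEY_CMD="aws secretsmanager create-secret --name $KEY_SECRET_NAME --secret-string file://$KEY_FILE"
ATTACH_CMD="aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn $POLICY_ARN"

# Review before running; execute with: eval "$STORE_KEY_CMD" && eval "$ATTACH_CMD"
echo "$STORE_KEY_CMD"
echo "$ATTACH_CMD"
```

Printing the commands first lets you verify the secret name and ARN before anything is created in your account.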

## Create a migration workflow


1. Go to [https://console.aws.amazon.com/migrationhub/orchestrator/](https://console.aws.amazon.com/migrationhub/orchestrator/), and select **Create migration workflow**.

1. On the **Choose a workflow template** page, select the **Migrate SAP NetWeaver on HANA applications** template.

1. Configure and submit your workflow to begin migration.
   + [Details](#details-migrate-sap)
   + [Application](#applications-migrate-sap)
   + [Source environment configuration](#source-configurations-migrate-sap)

**Note**  
You can customize the migration workflow once it has been created. For more information, see [Migration workflows for Migration Hub Orchestrator](migration-workflows.md).

## Details


Enter a name for your workflow. Optionally, you can enter a description and add tags. If you intend to run multiple migrations, we recommend adding tags to enhance searchability. For more information, see [Tagging AWS resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html).

## Application


Select the application you want to migrate. If you do not see the application in the list, you must define it in [AWS Application Discovery Service](https://console.aws.amazon.com/discovery/home).

### Define applications


Define applications by adding a data source and grouping the servers as applications.

**Topics**
+ [Add data source](#add-data-source)
+ [Group servers](#group-servers)

#### Add data source


Get metadata about the source servers and applications that you want to migrate to AWS. You can use one of the following methods to collect the data.
+ **Migration Hub import** – Import information about your on-premises servers and applications into Migration Hub. For more information, see [Migration Hub Import](https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-import.html) in the *Application Discovery Service User Guide*. 
+ **AWS Agentless Discovery Connector** – The Discovery Connector is a VMware appliance that collects information about VMware virtual machines (VMs). For more information, see [AWS Agentless Discovery Connector](https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-connector.html) in the *Application Discovery Service User Guide*.
+ **AWS Application Discovery Agent** – The Discovery Agent is AWS software that you install on your on-premises servers and VMs to capture system information, as well as information about the network connections between systems. For more information, see [AWS Application Discovery Agent](https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-agent.html) in the *Application Discovery Service User Guide*.

#### Group servers


To use Migration Hub Orchestrator, you must group servers as applications.

1. In the AWS Migration Hub console, select **Discover**, **Servers**.

1. In the servers list, select each server that you want to group into a new or existing application.

1. To create your application, or add to an existing one, choose **Group as application**.

1. In the **Group as application** dialog box, choose **Group as a new application** or **Add to an existing application**.

1. Select **Group**.

To view and edit your applications in the AWS Migration Hub console, go to **Discover** > **Servers**.

## Source environment configuration


Enter the details of the SAP source environment that you want to migrate with Migration Hub Orchestrator.

**SAP application server configuration**
+ SAPSID: Enter the system ID of the SAP application that you want to migrate.
+ SAP application hostname: Enter the hostname of the source SAP application. 
+ AWS Application Discovery Service server ID for SAP application server: Select the server ID where the central instance of your source SAP application is running. The IDs in the list are available based on the application configurations made in AWS Application Discovery Service. For more information, see [Define applications](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/orchestrate-migrations.html#define-applications).

**SAP HANA database configuration**
+ SAP HANA replication mode: Select from *synchronous* or *asynchronous* mode for database replication.
+ HANASID: Enter the system ID of your source SAP HANA database.
+ Instance number: Enter the instance number of your source SAP HANA database.
+ Database hostname: Enter the hostname of your source SAP HANA database. To find the hostname, run the `hostname` command on your database.
+ AWS Application Discovery Service server ID for SAP HANA database: Select the server ID where your SAP HANA database is running. The IDs in the list are available based on the application configurations made in AWS Application Discovery Service. For more information, see [Define applications](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/orchestrate-migrations.html#define-applications).
+ Credentials: Select the credentials you created for your source HANA database in [Prerequisites](#prerequisites-migrate-sap).
+ Version: Migration Hub Orchestrator only supports migrations for SAP HANA database 2.0 versions. Verify that the version of your SAP HANA database is 2.0 or higher with the `HDB version` command.
+ Backup location: Enter the backup location of your SAP HANA database.
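To check the 2.0 requirement from a script, you can parse the version line from the `HDB version` output. The sample line below is an assumed format; substitute the real output from your HANA host:

```shell
# Sample line standing in for real `HDB version` output (format is an assumption).
sample_line="version: 2.00.059.04"

# Extract the major version number before the first dot.
major=$(printf '%s\n' "$sample_line" | sed -n 's/^version:[[:space:]]*\([0-9][0-9]*\)\..*/\1/p')

if [ -n "$major" ] && [ "$major" -ge 2 ]; then
  echo "SAP HANA ${major}.x: supported"
else
  echo "could not confirm a supported version" >&2
fi
```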

**SSL encryption**
+ If you do not want to use SSL encryption for database replication, select the box next to *I want to disable SSL encryption for database replication*.
+ If you leave the box unchecked to use SSL encryption for database replication, you must complete the manual step *Enable SSL on source for replication* in step group 4 before your migration workflow can proceed.

  1. Open the `global.ini` file on your source SAP HANA system.

  1. Set the replication property as follows.

     ```
     [system_replication_communication]
     enable_ssl=on
     ```

  1. Restart the database.
**Note**  
SSL encryption is required for the SAP NetWeaver on SAP HANA – scale-out and SAP HANA database – scale-out migration types.

For more information, see SAP help portal – [Configure Secure Communication (TLS/SSL) Between Primary and Secondary Sites](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/ec50b815f5b740d7a9777d80f7104a2c.html).
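After editing `global.ini`, it is worth confirming that the flag is actually present before restarting the database. This sketch writes a stand-in file to a temporary path; on a real system, point `INI_FILE` at your actual `global.ini` instead:

```shell
INI_FILE="$(mktemp)"   # stand-in for the real global.ini path
cat > "$INI_FILE" <<'EOF'
[system_replication_communication]
enable_ssl=on
EOF

flag_set=no
if grep -q '^enable_ssl=on$' "$INI_FILE"; then
  flag_set=yes
  echo "SSL replication flag is set"
fi
rm -f "$INI_FILE"
```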

## Migration steps


Migration Hub Orchestrator automates the migration process after you create the migration workflow. Some tasks require additional inputs and user interactions.
+ By default, Launch Wizard deploys the target SAP HANA database with baseline HANA components. If the source application that is being migrated has components that have been deployed after the initial installation, check and deploy those components on the target instance.
+ An SAP HANA system has several configuration (`*.ini`) files that contain properties for configuring the system as a whole, as well as individual tenant databases, hosts, and services. These files contain parameters for global system configuration (`global.ini`) and for each service in the system (for example, `indexserver.ini`). If any of these configuration files have been adjusted on the source to meet your application requirements, update them on the newly deployed target system before cutover.
+ Before beginning cutover, verify that your source application has been migrated properly. Step group 7 of the **Migrate SAP NetWeaver to AWS** template guides you through the necessary steps.
  + **Stop source SAP production system**: Ensure that there are no end users logged in or accessing the application before stopping the source application.
  + **Stop source HANA production system**: Verify that the HANA System Replication has completed copying data to target and gracefully stopped the source HANA database.
  + **Cutover & Start SAP application**: Start the migrated SAP application servers on the target.
  + **Verify database records**: Verify database records to validate that the application has been migrated properly.
  + **Manual post processing**: Perform any manual post-migration tasks, such as attaching interface file systems or updating end-user `SAPGUI` configuration to connect to the newly migrated applications on AWS.

# Rehost applications on Amazon EC2 template

You can rehost your custom Windows and Linux applications on Amazon EC2 using the *Rehost applications on Amazon EC2* template.

## Prerequisites


You must meet the following requirements to create a migration workflow using this template.
+ Verify that your applications are on a supported operating system. For more information, see [Supported operating systems](https://docs.aws.amazon.com/mgn/latest/ug/Supported-Operating-Systems.html).
+ AWS Application Migration Service must be initialized by the IAM admin of the AWS account. For more information, see [Application Migration Service initialization and permissions](https://docs.aws.amazon.com/mgn/latest/ug/mandatory-setup.html).
+ Complete the replication settings for AWS Application Migration Service. For more information, see [Replication settings](https://docs.aws.amazon.com/mgn/latest/ug/replication-settings-template.html).
+ Users must have the permissions granted by the [AWSApplicationMigrationAgentPolicy](https://docs.aws.amazon.com/mgn/latest/ug/security-iam-awsmanpol-AWSApplicationMigrationAgentPolicy.html) policy.
+ Provide credentials in the AWS Secrets Manager to install the AWS Replication Agent on your remote server.

  1. Sign in to [https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/).

  1. On the AWS Secrets Manager page, select **Store a new secret**.

  1. For Secret type, select **Other type of secret** and enter the following keys.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/rehost-on-ec2.html)

  1. Select **Next** and, for Secret name, enter a name that begins with `migrationhub-orchestrator-` (for example, `migrationhub-orchestrator-secretname123`).
**Important**  
The Secret ID must begin with the prefix `migrationhub-orchestrator-` and must only be followed by an alphanumeric value.

  1. Select **Next** and then, select **Store**.
+ Create an IAM role with the Amazon EC2 use case to run test scripts on migrated instances. Attach the [AWSMigrationHubOrchestratorInstanceRolePolicy](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AWSMigrationHubOrchestratorInstanceRolePolicy) and AmazonSSMManagedInstanceCore policies to this role. Once the role is created, update the trust policy to include SSM (`ssm.amazonaws.com`). For more information on updating a trust policy, see [Modifying a role trust policy (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/roles-managingrole-editing-console.html#roles-managingrole_edit-trust-policy).
+ The IAM user running AWS Application Migration Service must have permissions to perform the `StartTest` and `StartCutover` tasks. Create an IAM user and attach the **AWSApplicationMigrationFullAccess**, **AWSApplicationMigrationEC2Access**, and **AmazonEC2FullAccess** policies along with the following inline policy.

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "mgn:StartCutover",
                  "mgn:StartTest"
              ],
              "Resource": "*"
          },
          {
              "Effect": "Allow",
              "Action": "iam:PassRole",
              "Resource": "*",
              "Condition": {
                  "StringEquals": {
                      "iam:PassedToService": "ec2.amazonaws.com"
                  }
              }
          }
      ]
  }
  ```
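Malformed policy JSON is easy to catch locally before pasting it into the IAM console. This sketch writes a complete policy document containing the two statements to a temporary file and runs it through Python's JSON parser; any JSON validator works equally well:

```shell
POLICY_FILE="$(mktemp)"
cat > "$POLICY_FILE" <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["mgn:StartCutover", "mgn:StartTest"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {"iam:PassedToService": "ec2.amazonaws.com"}
            }
        }
    ]
}
EOF

parse_ok=no
if python3 -m json.tool "$POLICY_FILE" > /dev/null; then
  parse_ok=yes
  echo "policy JSON parses cleanly"
fi
rm -f "$POLICY_FILE"
```

Note that this only checks the JSON syntax; IAM validates the policy semantics when you attach it.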

## Create a migration workflow


1. Go to [https://console.aws.amazon.com/migrationhub/orchestrator/](https://console.aws.amazon.com/migrationhub/orchestrator/), and select **Create migration workflow**.

1. On the **Choose a workflow template** page, select the **Rehost on Amazon EC2 using AWS Application Migration Service** template.

1. Configure and submit your workflow to begin migration.
   + [Details](#details-rehost-on-ec2)
   + [Application](#applications-rehost-on-ec2)
   + [Target environment configuration](#target-env-config-rehost-on-ec2)

**Note**  
You can customize the migration workflow once it has been created. For more information, see [Migration workflows for Migration Hub Orchestrator](migration-workflows.md).

## Details


Enter a name for your workflow. Optionally, you can enter a description and add tags. If you intend to run multiple migrations, we recommend adding tags to enhance searchability. For more information, see [Tagging AWS resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html).

## Application


Select the application you want to migrate. If you do not see the application in the list, you must define it in [AWS Application Discovery Service](https://console.aws.amazon.com/discovery/home).

### Define applications


Define applications by adding a data source and grouping the servers as applications.

**Topics**
+ [Add data source](#add-data-source)
+ [Group servers](#group-servers)

#### Add data source


Get metadata about the source servers and applications that you want to migrate to AWS. You can use one of the following methods to collect the data.
+ **Migration Hub import** – Import information about your on-premises servers and applications into Migration Hub. For more information, see [Migration Hub Import](https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-import.html) in the *Application Discovery Service User Guide*. 
+ **AWS Agentless Discovery Connector** – The Discovery Connector is a VMware appliance that collects information about VMware virtual machines (VMs). For more information, see [AWS Agentless Discovery Connector](https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-connector.html) in the *Application Discovery Service User Guide*.
+ **AWS Application Discovery Agent** – The Discovery Agent is AWS software that you install on your on-premises servers and VMs to capture system information, as well as information about the network connections between systems. For more information, see [AWS Application Discovery Agent](https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-agent.html) in the *Application Discovery Service User Guide*.

#### Group servers


To use Migration Hub Orchestrator, you must group servers as applications.

1. In the AWS Migration Hub console, select **Discover**, **Servers**.

1. In the servers list, select each server that you want to group into a new or existing application.

1. To create your application, or add to an existing one, choose **Group as application**.

1. In the **Group as application** dialog box, choose **Group as a new application** or **Add to an existing application**.

1. Select **Group**.

To view and edit your applications in the AWS Migration Hub console, go to **Discover** > **Servers**.

## Target environment configuration


If you want to run test scripts on migrated instances, select the check box for *I want to run test scripts on the migrated instances*.

**Note**  
We recommend having separate workflows for Linux and Windows servers if you want to run validation tests on migrated instances.
+ Test script location: Specify the Amazon S3 bucket that contains your test script. For more information, see [Getting started with Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html).
+ IAM role: Choose the IAM role you created in [Prerequisites](#prerequisites-rehost-on-ec2).
+ Script run command: Enter the **run** command for your script.

Credentials to install AWS Replication Agent: Select the credentials you created in [Prerequisites](#prerequisites-rehost-on-ec2).
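The test script itself is up to you. As a minimal, hypothetical example of the kind of script the workflow can run on a migrated Linux instance (the checks and the name are illustrative, not part of the template):

```shell
#!/bin/sh
# Hypothetical post-migration smoke test: the instance must report a hostname
# and have a writable /tmp; exit nonzero so the workflow can flag a failure.
host="$(hostname)"
if [ -n "$host" ] && [ -w /tmp ]; then
  echo "smoke test passed on $host"
else
  echo "smoke test failed" >&2
  exit 1
fi
```

If this were saved to your S3 bucket as, say, `smoke-test.sh` (a hypothetical name), the script run command would be along the lines of `sh smoke-test.sh`.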

# Rehost SQL Server on Amazon EC2 template

With the **Rehost SQL Server on Amazon EC2** template, you can rehost the SQL Server databases on an instance to Amazon EC2 using automated SQL Server backup and restore. You can also migrate databases that are encrypted with transparent data encryption (TDE). This template migrates the user databases, certificates, logins, and agent jobs that are associated with your SQL Server.

**Topics**
+ [Prerequisites](#rehost-sql-ec2)
+ [Creating the migration workflow](#w2aac16c15b9)
+ [Running the migration workflow](#w2aac16c15c11)
+ [FAQ](#w2aac16c15c13)

## Prerequisites


You must set up the source environment before creating a migration workflow.

**Topics**
+ [Source environment setup](#w2aac16c15b7b7)

### Source environment setup

+ Ensure that PowerShell is enabled on the server that contains your SQL Server instance.
+ Install the AWS.Tools installer module on the server that contains your SQL Server instance, with the following command.

```
Install-Module -Name AWS.Tools.Installer
```
+ Install the `dbatools` module on your Windows machine, with the following command.

```
Install-Module dbatools
```

## Creating the migration workflow


1. Go to [https://console.aws.amazon.com/migrationhub/orchestrator/](https://console.aws.amazon.com/migrationhub/orchestrator/)

1. Select **Create migration workflow**.

1. On the **Choose a workflow template** page, select the **Rehost SQL Server on Amazon EC2** template.

1. Configure and submit your workflow to begin migration.

**Note**  
You can customize the migration workflow once it has been created. For more information, see [Migration workflows for Migration Hub Orchestrator](migration-workflows.md).

**Topics**
+ [Application](#w2aac16c15b9b7b3)
+ [ServerId](#w2aac16c15b9b7b5)
+ [Source Environment Configuration](#w2aac16c15b9b7b7)
+ [Target Environment Configuration](#w2aac16c15b9b7b9)

### Application


Select the application you want to migrate. If you do not see the application in the list, you must define it in [AWS Application Discovery Service](https://console.aws.amazon.com/discovery/home). An application in this context is a group of servers; it does not refer to applications running on top of your SQL Server.

### ServerId


Within the application you defined in [AWS Application Discovery Service](https://console.aws.amazon.com/discovery/home), select the server ID of the server that hosts your SQL Server instance.

### Source Environment Configuration


The details here identify your source SQL Server.

1. **TDE** - Select this check box if you have transparent data encryption (TDE) enabled on your databases. If you select this option, your certificates are migrated to the target server.

1. **Migration Mode** - This template offers three distinct migration modes, depending on your use case.

   1. *Use only Full backup* - The template creates only a full backup of your databases and restores it on your target.

   1. *Use Full backup and Differential backup for Cutover* - A full backup of your databases is created and restored on the target, after which you can mark the databases read-only, and a differential backup and restore migrates the remainder of the data.

   1. *Use Full backup, Differential backup for pre-cutover and T-Log backup for cutover* - A full backup of your databases is created and restored on the target. When you are getting ready for cutover, a differential backup and restore migrates the remainder of the data. Lastly, after you mark your databases read-only, tail-log backups migrate the remaining data.

1. **Allow Migration Without Direct Connect** - This template uploads backup files from your source instance to Amazon S3 using the AWS CLI. The database files are transmitted to Amazon S3 over HTTPS. If you do not want the backup files to travel over the public internet, we recommend using AWS Direct Connect with a public VIF setup. Otherwise, select this check box. The migration workflow cannot be created unless you select this check box or have AWS Direct Connect configured.

1. **Source SQL Server database names** - The names of the SQL databases that you would like to migrate.

1. **AWS ADS server ID for your application** - See the ServerId section above.

1. **Source SQL Server instance name** - The name of your SQL Server instance.

1. **Backup location** - As part of the migration, this template takes backups of your SQL Server. The path specified here is where the backup files are stored. Ensure that this is an absolute path with enough space for a full and a differential backup of your databases.

### Target Environment Configuration


The details here configure how your databases are restored on your target server.

1. **Restore Logins** - Select this check box if you would like to migrate your SQL Server logins to your target instance.

1. **Restore Agent Jobs** - Select this check box if you would like to migrate your SQL Server Agent jobs to your target instance.

## Running the migration workflow

+ When configuring the Migration Hub Orchestrator plugin, ensure that the username provided to connect to your Windows machine has the `sysadmin` role on the source SQL Server instance.

### Create AWS Profile on Source Server

+ Create an IAM policy with the following permissions.

------
#### [ JSON ]

  ```
  {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "kms:GenerateDataKey",
                    "kms:CreateKey"
                ],
                "Resource": "*"
            }
        ]
  }
  ```

------
+ Create an IAM user with the above policy attached.
+ Configure a named profile for the AWS Command Line Interface that uses the preceding IAM user. For more information, see [Using AWS credentials](https://docs.aws.amazon.com/powershell/latest/userguide/specifying-your-aws-credentials.html). The credentials stored in the profile are used to upload your backups to an S3 bucket located in your account. You will need the name of this profile when creating the workflow.
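
In practice you can create the named profile with `aws configure --profile <name>` and paste in the access keys of the IAM user created above. As a rough sketch of what that command stores, the shared credentials file is a plain INI file with one section per profile. The profile name `mho-migration` and the key values below are placeholder examples, and the sketch writes to a temporary file rather than `~/.aws/credentials`:

```python
# Sketch of the shared credentials file that `aws configure --profile` writes.
# Profile name and key values are placeholders for illustration only.
import configparser
import tempfile

profile_name = "mho-migration"  # hypothetical profile name
config = configparser.ConfigParser()
config[profile_name] = {
    "aws_access_key_id": "AKIAIOSFODNN7EXAMPLE",
    "aws_secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
}

# In practice this content lives at ~/.aws/credentials; a temp file is used here.
with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    config.write(f)
    creds_path = f.name

# Reading the file back shows the profile section the AWS CLI would look up.
check = configparser.ConfigParser()
check.read(creds_path)
print(check.has_section(profile_name))
```

When creating the workflow, you would then supply the profile name you chose (here, `mho-migration`).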

### Create Target EC2 Instance


This template does not create your EC2 instance for you. To create this instance based on your requirements, we recommend one of the following:
+ (*Optional*) If you want to use BYOL for SQL Server, use AWS VM Import/Export to import your VM image.
+ (*Optional*) Use AWS Launch Wizard to deploy your target SQL Server.
  + Launch Wizard attaches the `AmazonEC2RoleForLaunchWizard` instance role by default when creating the target environment.
  + After creating the target environment with Launch Wizard, attach the `AWSMigrationHubOrchestratorInstanceRolePolicy` managed policy to `AmazonEC2RoleForLaunchWizard`. For more information, see [AWS managed policies for Migration Hub Orchestrator](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/security-iam-awsmanpol.html#security-iam-awsmanpol-AWSMigrationHubOrchestratorInstanceRolePolicy).
+ Connect to the target EC2 instance and note the following:
  + Name of the SQL Server
  + Path to store data for the SQL Server
  + Path to store logs for the SQL Server
  + Path to store the downloaded backup files for the restore procedure. Ensure this location is large enough to hold the backup files of your database.

### Configure Target Permissions


Once your EC2 instance is configured and your target SQL server is deployed, follow these steps:
+ If you are not using Launch Wizard to create your target environment, attach the `AWSMigrationHubOrchestratorInstanceRolePolicy` managed policy to your instance role.
+ Add the following permissions to your instance role.

------
#### [ JSON ]

```
{
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "VisualEditor0",
              "Effect": "Allow",
              "Action": [
                  "s3:GetObject",
                  "kms:Decrypt",
                  "s3:ListAllMyBuckets",
                  "s3:ListBucket"
              ],
              "Resource": "*"
          }
      ]
}
```

------

### Create Target SQL Server User

+ Create a username in your target SQL server with `SYSAdmin` permission.
+ Provide credentials in AWS Secrets Manager for the username created in your target SQL server.

  1. Sign in to [https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/)

  1. On the AWS Secrets Manager page, select **Store a new secret** .

  1. For Secret type, select **Other type of secret** and enter the following keys.

     1. `username` - enter your username

     1. `password` - enter your password

  1. Select **Next** and enter a name for the secret beginning with the prefix `migrationhub-orchestrator-`, for example `migrationhub-orchestrator-secretname123`.

     1. The Secret ID must begin with the prefix `migrationhub-orchestrator-` and must only be followed by an alphanumeric value.

  1. Select **Next** and then, select **Store** .

  1. Copy the name of this secret and put the value into the manual step in the workflow.
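
The secret-naming rule above is easy to get wrong, so it can help to validate a candidate name before storing the secret. A minimal sketch of the check, assuming the rule is exactly "the `migrationhub-orchestrator-` prefix followed by an alphanumeric value" (the helper name and example names are illustrative):

```python
import re

# Assumed rule: the required prefix followed only by alphanumeric characters.
SECRET_NAME_PATTERN = re.compile(r"^migrationhub-orchestrator-[A-Za-z0-9]+$")

def is_valid_secret_name(name: str) -> bool:
    """Return True if the name satisfies the naming rule described above."""
    return bool(SECRET_NAME_PATTERN.match(name))

print(is_valid_secret_name("migrationhub-orchestrator-secretname123"))  # True
print(is_valid_secret_name("my-secret"))  # False
```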

## FAQ


Q. **What does this template do?**

A. This template migrates User Database Items, Certificates, Agent Jobs and Logins from a source SQL Server to a target SQL Server environment on Amazon EC2.

Q. **Do I need to create the target SQL Server?**

A. Yes. This template focuses on data migration. You need to set up the target SQL Server before using this template. Based on your requirements, we recommend using AWS Launch Wizard or AWS VM Import/Export Service to accomplish this.

Q. **What kind of backups do you use for migration?**

A. Based on your input, we use only a full backup, a combination of full and differential backups, or a combination of full, differential, and tail-log backups for migration.

Q. **When do I need to put my databases in ‘readonly’ mode?**

A. Based on the type of migration selected there are different points to do this -

1. For full-backup-only migrations, set the databases to readonly before beginning the migration workflow.

1. For full and differential backup migrations, set the databases to read only when instructed to do so on Step 4.1 in the workflow.

1. For full, differential and tail-log backups, set the databases to read only when instructed to do so on Step 4.4 in the workflow.

These different configurations help us ensure we capture all the changes in your SQL server when migrating, to create parity between your source and target.

Q. **What security measures do you take while migrating data?**

A. Our goal is to handle data securely at all times. Backup files are transferred over an HTTPS connection to Amazon S3, and then to your target EC2 instance. Certificate files are handled specially: the workflow creates a KMS key in your account, which is used to encrypt the certificate before transport and decrypt the certificate in your target environment.

Q. **What are the limitations of this template?**

A. This template will not do the following:

1. This template does not migrate System Databases or SQL Server properties.

1. This template can only migrate SQL logins. Any Windows-level logins are not guaranteed to be migrated.

1. This template expects that while the workflow is running, you will not initiate a full-backup of the database yourself. If a full-backup is taken, it breaks the chain of backups used to restore your databases on the target server.

Q. **I ran into an error during a database connection step. What do I do?**

A. An error here indicates a problem with connecting to your SQL Server.

1. If this occurs on the source, ensure the user that was given to the plugin to connect to the machine has SYSADMIN permissions on your source SQL server.

1. If this happens on the target, ensure that the EC2 Instance ID provided during workflow creation is correct, and ensure the SQL credentials stored in your Secrets Manager Secret are correct.

Q. **I ran into an error during a database validation step. What do I do?**

A. An error here indicates an incompatibility between the inputs provided during workflow creation and the target environment. Look at the step logs located inside the S3 bucket shown in the error message to diagnose the issue and re-create the workflow with the appropriate inputs.

Q. **I ran into an error while a backup step was running. What do I do?**

A. If you run into an error during the backup steps, look at the step logs located inside the S3 bucket shown in the error message. Once you diagnose and fix the issue, please clean the appropriate backup directory on the source machine before re-trying this step.

Q. **I ran into an error while a restore step was running. What do I do?**

A. If you run into an error during the restore steps, look at the step logs located inside the S3 bucket shown in the error message. If you have taken a full backup after the workflow started, the backup chain is broken and hence this workflow cannot be recovered. You will have to delete the workflow, wipe your target SQL server and re-create the workflow.

# Replatform SQL on Amazon RDS template
Replatform SQL on Amazon RDS

 With the **Replatform SQL Server on Amazon RDS** template, you can replatform your SQL Server databases on an instance to Amazon RDS using native backup and restore. You can also migrate databases that are encrypted with transparent data encryption. This template migrates User database items, Certificates, Logins and Agent Jobs that are associated with your SQL Server.

**Topics**
+ [Prerequisites](#w2aac16c17b7)
+ [Creating the migration workflow](#w2aac16c17b9)
+ [Running the migration workflow](#w2aac16c17c11)
+ [FAQ](#w2aac16c17c13)

## Prerequisites


 You must set up the source environment before creating a migration workflow. 

**Topics**
+ [Source environment setup](#w2aac16c17b7b7)

### Source environment setup

+ Ensure that PowerShell is enabled on the server that contains your SQL Server instance.
+ Install AWS.Tools on the server that contains your SQL Server instance, with the following command.

```
Install-Module -Name AWS.Tools.Installer
```
+ Install the `dbatools` module on your Windows machine, with the following command.

```
Install-Module dbatools
```

 

## Creating the migration workflow

+  Go to [https://console.aws.amazon.com/migrationhub/orchestrator/](https://console.aws.amazon.com/migrationhub/orchestrator/) 
+  Select **Create migration workflow**. 
+ On the Choose a workflow template page, select the **Replatform SQL Server on Amazon RDS** template.
+  Configure and submit your workflow to begin migration. 

**Note**  
You can customize the migration workflow once it has been created. For more information, see [Migration workflows for Migration Hub Orchestrator](migration-workflows.md).

**Topics**
+ [Application](#w2aac16c17b9b9)
+ [ServerId](#w2aac16c17b9c11)
+ [Source Environment Configuration](#w2aac16c17b9c13)

### Application


 Select the application you want to migrate. If you do not see the application in the list, you must define it in [AWS Application Discovery Service](https://console.aws.amazon.com/discovery/home). An Application in this context is considered the unit of migration, and does not refer to applications running on top of your SQL server. 

### ServerId


 Within the Application you defined in the [AWS Application Discovery Service](https://console.aws.amazon.com/discovery/home), select the serverId of the server that hosts your SQL Server instance.

### Source Environment Configuration


 The details here identify your source SQL Server.
+ **TDE** - Select this checkbox if you have TDE enabled on your databases. If you select this option, your certificates will be migrated to the target server.
+ **Migration Mode** - This template offers three distinct migration modes, depending on your use case.
  + “*Use only Full backup*” - The template will only create a full backup of your databases and restore it on your target.
  + “*Use Full backup and Differential backup for Cutover*” - A full backup of your databases will be created and restored on the target, after which you can mark the databases readonly, and a differential backup and restore will be used to migrate the remainder of the data.
  + “*Use Full backup, Differential backup for pre-cutover and T-Log backup for cutover*” - A full backup of your databases will be created and restored on the target. When you are getting ready for cutover, a differential backup and restore will be used to migrate most of the remaining data. Lastly, after you mark your databases readonly, Tail-Log backups will be used to capture the final changes.
+ **Allow Migration Without Direct Connect** - This template uploads backup files from your source instance to Amazon S3 using the AWS CLI. The database files are transmitted over an HTTPS connection to Amazon S3. However, if you are not comfortable with the backup files traveling over the public internet, we recommend using AWS Direct Connect with a Public VIF setup. If you are comfortable with this, select this checkbox. The migration workflow cannot be created unless you select this checkbox or have the setup mentioned above.
+ **Source SQL Server database names** - The names of the SQL Server databases that you would like to migrate.
+ **AWS ADS server ID for your application** - See the “ServerId” section above.
+ **Source SQL Server instance name** - The name of your SQL Server instance.
+ **Backup location** - As a part of the migration, this template needs to take backups of your SQL Server. The path specified here is where the backup files will be stored. Ensure this is an absolute path and has enough space for a Full and Differential backup of your databases.

 

## Running the migration workflow

+ When configuring the Migration Hub Orchestrator plugin, ensure that the username that is provided to connect to your Windows machine has the `SYSAdmin` permission on the source SQL Server instance.

### Create AWS Profile on Source Server

+  Create an IAM policy with the following permissions. 

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "kms:GenerateDataKey",
                "kms:CreateKey"
            ],
            "Resource": "*"
        }
    ]
}
```

------
+  Create an IAM user with the above policy attached. 
+ Configure a named profile for the AWS Command Line Interface that uses the preceding IAM user. For more information, see [Using AWS credentials](https://docs.aws.amazon.com/powershell/latest/userguide/specifying-your-aws-credentials.html). The credentials stored in the profile are used to upload your backups to an S3 bucket located in your account. Note the name of this profile and enter it into the step when prompted.

 

### Create your RDS Database


 This template does not create your RDS instance for you.  
+  Deploy an Amazon RDS SQL server with the same version as the source SQL server. 
+  Configure the target Amazon RDS SQL server with the same parameter groups as the source SQL server. 
+  Configure the option group for backup/restore and transparent data encryption in Amazon RDS, and attach the following policies to the created IAM role. 

------
#### [ JSON ]

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "VisualEditor0",
              "Effect": "Allow",
              "Action": [
                  "kms:Decrypt",
                  "s3:ListAllMyBuckets",
                  "kms:DescribeKey"
              ],
              "Resource": "*"
          },
          {
              "Sid": "VisualEditor1",
              "Effect": "Allow",
              "Action": [
                  "s3:ListBucket",
                  "s3:GetBucketAcl",
                  "s3:GetBucketLocation"
              ],
              "Resource": [
                  "*"
              ]
          },
          {
              "Sid": "VisualEditor2",
              "Effect": "Allow",
              "Action": [
                  "s3:PutObject",
                  "s3:GetObject",
                  "s3:AbortMultipartUpload",
                  "s3:ListMultipartUploadParts"
              ],
              "Resource": [
                  "*"
              ]
          }
      ]
  }
  ```

------
+  The trust policy for this role should be: 

------
#### [ JSON ]

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Principal": {
                "Service": "rds.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
          }
      ]
  }
  ```

------
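
Before attaching the role to your RDS option group, it can be worth confirming that the trust policy really allows RDS to assume the role. A small sketch of that check on the policy document above (the `trusts_rds` helper is illustrative, not part of any AWS SDK, and assumes the single-statement shape shown):

```python
import json

# The trust policy from the document above, as a JSON string.
trust_policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Principal": {"Service": "rds.amazonaws.com"},
     "Action": "sts:AssumeRole"}
  ]
}
""")

def trusts_rds(policy: dict) -> bool:
    """Return True if some Allow statement lets rds.amazonaws.com assume the role."""
    return any(
        stmt.get("Effect") == "Allow"
        and stmt.get("Action") == "sts:AssumeRole"
        and stmt.get("Principal", {}).get("Service") == "rds.amazonaws.com"
        for stmt in policy.get("Statement", [])
    )

print(trusts_rds(trust_policy))  # True
```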

 

### Create attached EC2 Instance

+ Deploy an Amazon EC2 instance and create an instance role.
  + Attach the `AWSMigrationHubOrchestratorInstanceRolePolicy` and `AmazonSSMManagedInstanceCore` managed policies to this role.
  + Add the following permissions to this role.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::migrationhub-orchestrator-*",
                "arn:aws:s3:::aws-migrationhub-orchestrator-*/*"
            ]
        }
    ]
}
```

------
+  Ensure that your Amazon RDS instance can be reached from the created Amazon EC2 instance. 
+  This instance is used to connect to your RDS instance and run restore procedures. 

 

### Create Target SQL Server User 

+  Provide credentials in AWS Secrets Manager for the username and password for the admin user for your RDS Server. 

  1.  Sign in to [https://console.aws.amazon.com/secretsmanager/](https://console.aws.amazon.com/secretsmanager/) 

  1.  On the AWS Secrets Manager page, select **Store a new secret**. 

  1. For Secret type, select **Other type of secret** and enter the following keys.
     + `username` - enter your username
     + `password` - enter your password

  1. Select **Next** and enter a name for the secret beginning with the prefix `migrationhub-orchestrator-`, for example `migrationhub-orchestrator-secretname123`.
     + The Secret ID must begin with the prefix `migrationhub-orchestrator-` and must only be followed by an alphanumeric value.

  1.  Select **Next** and then, select **Store**. 

  1.  Copy the name of this secret and provide it to the workflow step when prompted. 
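
For reference, the secret value that the steps above create is a small JSON object containing exactly the two keys the workflow reads. A sketch of the expected shape (the values below are placeholders):

```python
# Sketch: the secret stores "username" and "password" as a JSON object.
# Values are placeholders for illustration only.
import json

secret_string = json.dumps({
    "username": "rds-admin",        # placeholder admin user name
    "password": "example-Passw0rd"  # placeholder password
})

# The workflow reads both keys back from the stored secret.
parsed = json.loads(secret_string)
print(sorted(parsed))  # ['password', 'username']
```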

## FAQ


Q. **What does this template do?**

A. This template migrates User Database Items, Certificates, Agent Jobs and Logins from a source SQL server to a target SQL Server hosted on RDS.

Q. **Do I need to create the target SQL Server?**

A. Yes. This template focuses on data migration. You need to set up the target SQL Server before using this template. 

Q. **What kind of backups do you use for migration?**

A. Based on your input, we use only a full backup, a combination of full and differential backups, or a combination of full, differential, and tail-log backups for migration.

Q. **When do I need to put my databases in ‘readonly’ mode?**

A. Based on the type of migration selected there are different points to do this -
+ For full-backup-only migrations, set the databases to readonly before beginning the migration workflow.
+  For full and differential backup migrations, set the databases to read only when instructed to do so on Step 4.1 in the workflow. 
+  For full, differential and tail-log backups, set the databases to read only when instructed to do so on Step 4.4 in the workflow. 

These different configurations help us ensure we capture all the changes in your SQL server when migrating, to create parity between your source and target.

Q. **What security measures do you take while migrating data?**

A. Our goal is to handle data securely at all times. Backup files are transferred over an HTTPS connection to Amazon S3, and then to your EC2 instance, before being restored to RDS. Certificate files are handled specially: the workflow creates a KMS key in your account, which is used to encrypt the certificate before transport and decrypt the certificate in your target environment.

Q. **Why do I need to create an EC2 instance?**

A. The EC2 instance you create is used to run the restore procedures on your RDS endpoint. It is designed this way so that your RDS endpoint does not need to be exposed to the public internet for the restore procedure.

Q. **What are the limitations of this template?**

A. This template will not do the following:
+  This template does not migrate System Databases or SQL Server properties. 
+ This template can only migrate SQL logins. Any Windows-level logins are not guaranteed to be migrated.
+  This template expects that while the workflow is running, you will not initiate a full-backup of the database yourself. If a full-backup is taken, it breaks the chain of backups used to restore your databases on the target server. 
+ This template can only migrate databases that have the “DBO” set as a SQL user or an AD user. If the database owner is a Windows-level user that is not available in the RDS environment, the database will be inaccessible when restored on RDS.

Q. **I ran into an error during a database connection step. What do I do?**

A. An error here indicates a problem with connecting to your SQL Server.
+  If this occurs on the source, ensure the user that was given to the plugin to connect to the machine has SYSADMIN permissions on your source SQL server. 
+  If this happens on the target, ensure that the EC2 Instance ID provided during workflow creation has connectivity to your RDS Endpoint, and ensure the SQL credentials stored in your Secrets Manager Secret are correct. 

Q. **I ran into an error during a database validation step. What do I do?**

A. An error here indicates an incompatibility between the inputs provided during workflow creation and the target environment. Look at the step logs located inside the S3 bucket shown in the error message to diagnose the issue and re-create the workflow with the appropriate inputs.

Q. **I ran into an error while a backup step was running. What do I do?**

A. If you run into an error during the backup steps, look at the step logs located inside the S3 bucket shown in the error message. Once you diagnose and fix the issue, please clean the appropriate backup directory on the source machine before re-trying this step.

Q. **I ran into an error while a restore step was running. What do I do?**

A. If you run into an error during the restore steps, look at the step logs located inside the S3 bucket shown in the error message. If you have taken a full backup after the workflow started, the backup chain is broken and hence this workflow cannot be recovered. You will have to delete the workflow, wipe your target SQL server and re-create the workflow.

# Replatform applications to Amazon ECS template
Replatform applications to Amazon ECS

You can use the *Replatform applications to Amazon ECS* template in Migration Hub Orchestrator to replatform your .NET and Java applications to containers. The applications can be sourced from EC2 instances or application artifacts that are uploaded to Amazon S3. You can deploy containerized applications on Amazon Elastic Container Service (Amazon ECS) on AWS Fargate using one application per container or with multiple applications in a single container.

**Topics**
+ [Prerequisites](#replatform-to-ecs-prerequisites)
+ [Configuring a workflow](#replatform-to-ecs-configure-workflow)
+ [Running a workflow](#replatform-to-ecs-run-workflow)
+ [Combining multiple applications in one container](replatform-to-ecs-combining-applications.md)
+ [Completing the required steps](#replatform-to-ecs-complete-steps)

## Prerequisites


The prerequisites required to use this template depend on the source type that you will specify in the workflow. Your application source can be one or more Amazon EC2 instances or application artifacts that you uploaded to Amazon S3.

The following prerequisites must be met to successfully replatform your applications with this template.

### Source type of Amazon EC2


The following prerequisites apply when you specify the source type of Amazon EC2 while using this template.

**Topics**
+ [Application support and compatibility](#replatform-to-ecs-prerequisites-setup)
+ [SSM agent](#replatform-to-ecs-prerequisites-ssm-agent)
+ [IAM instance profile for EC2 instances](#replatform-to-ecs-prerequisites-permissions-instances)

#### Application support and compatibility


Before using this template on Amazon EC2 instances, ensure that your servers and applications are supported for App2Container. For more information, see [App2Container compatibility](https://docs.aws.amazon.com/app2container/latest/UserGuide/compatibility-a2c.html) and [Applications you can containerize using AWS App2Container](https://docs.aws.amazon.com/app2container/latest/UserGuide/supported-applications.html) in the *AWS App2Container User Guide*.

**Note**  
You don't need to install Docker on your application server to use this template.

#### SSM agent


To use this template with Amazon EC2 instances, they must be managed nodes in AWS Systems Manager (Systems Manager). The SSM agent is required for your instances to become managed nodes. Some AMIs have the SSM agent preinstalled, while others require manual installation. For more information on verifying if the SSM agent is installed, and how to manually install it if required, see [Amazon Machine Images (AMIs) with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ami-preinstalled-agent.html) in the *AWS Systems Manager User Guide*.

#### IAM instance profile for EC2 instances


This template requires that your EC2 instances have an instance profile role with the necessary permissions attached. The permissions provided by an instance profile are used by your EC2 instances. You can create a new IAM instance profile with the required permissions, or add them to an existing role used by the instance. An instance profile can only contain one IAM role. The IAM role can contain one or more policies. For more information, see [Instance profiles](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#ec2-instance-profile) and [Work with IAM roles](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#working-with-iam-roles) in the *Amazon Elastic Compute Cloud User Guide*.

To configure the required Systems Manager core functionality for your EC2 instances, you can attach the AWS managed policy `AmazonSSMManagedInstanceCore` to your instance profile. For more information about instance permissions for Systems Manager, see [Step 1: Configure instance permissions for Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-instance-permissions.html) in the *AWS Systems Manager User Guide*.

The following permissions must also be added to the IAM role used by your instance profile. You can create a new policy with the following JSON policy document and then attach the policy to your instance profile role. For more information, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *AWS Identity and Access Management User Guide*.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3BucketAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        },
        {
            "Sid": "S3ObjectAccess",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::*/application-transformation*"
            ]
        },
        {
            "Sid": "KmsAccess",
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:*:*:key/*"
            ],
            "Condition": {
                "StringLike": {
                    "kms:ViaService": [
                        "s3.*.amazonaws.com"
                    ]
                }
            }
        },
        {
            "Sid": "TelemetryAccess",
            "Effect": "Allow",
            "Action": [
                "application-transformation:PutMetricData",
                "application-transformation:PutLogData"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
```

------

### Source type of Amazon S3


The following prerequisites apply when you specify the source type of Amazon S3 while using this template.

**Topics**
+ [Amazon S3 buckets](#replatform-to-ecs-prerequisites-s3-bucket)
+ [Application artifacts](#replatform-to-ecs-prerequisites-application-artifacts)

#### Amazon S3 buckets


This template requires that you have an Amazon S3 bucket for the S3 input path and the Amazon S3 output path. You can create different buckets for the input and output S3 locations. The workflow requires that the application artifacts be uploaded to a path in your Amazon S3 bucket beginning with the following prefix:

```
s3://bucket-name/application-transformation
```

For more information on creating an Amazon S3 bucket, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon Simple Storage Service User Guide*.
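
A quick way to confirm that an input path follows the required prefix rule before creating the workflow is to parse the S3 URI. A minimal sketch, assuming the rule is exactly as stated above (the helper name and bucket name are illustrative):

```python
# Sketch: check that an S3 URI points inside the required
# "application-transformation" prefix of a bucket.
from urllib.parse import urlparse

def has_required_prefix(s3_uri: str) -> bool:
    """Return True if the URI is s3:// and starts with the required prefix."""
    parsed = urlparse(s3_uri)
    return (parsed.scheme == "s3"
            and parsed.path.lstrip("/").startswith("application-transformation"))

print(has_required_prefix("s3://bucket-name/application-transformation/app1/"))  # True
print(has_required_prefix("s3://bucket-name/other-prefix/app1/"))  # False
```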

#### Application artifacts


This template requires that you have application artifacts available in an Amazon S3 bucket in the bucket prefix mentioned previously in order to replatform the application. App2Container has the `AWSApp2Container-ReplatformApplications` AWS Systems Manager Automation runbook for use on Amazon EC2 instances which generates the required application artifacts. For more information, see [App2Container Automation runbook](https://docs.aws.amazon.com/app2container/latest/UserGuide/automation-runbook.html) in the *AWS App2Container User Guide*.

When using Amazon S3 as the source type, you must upload these artifacts to the S3 bucket you created with the required application artifact files. The following files are required:
+ `replatform-definition.json`
+ `analysis.json`
+ `ContainerFiles.tar` or `ContainerFiles.zip`

The `replatform-definition.json` file should resemble the following:

```
{
    "version": "1.0",
    "workloads": [
        {
            "containers": [
                {
                    "applications": [
                        {
                            "applicationOverrideS3Uri": "s3://bucket-name/application-transformation/path-to-application-artifacts/"
                        }
                    ]
                }
            ]
        }
    ]
}
```
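
Before uploading, you can confirm that an artifact folder contains the required files described above. A minimal sketch (the helper name is illustrative, and the demo folder is created only for this example):

```python
# Sketch: verify an artifact folder contains both required JSON files and
# at least one ContainerFiles archive before uploading it to S3.
import pathlib
import tempfile

REQUIRED_FILES = ["replatform-definition.json", "analysis.json"]
ARCHIVE_FILES = ["ContainerFiles.tar", "ContainerFiles.zip"]

def artifacts_complete(folder: str) -> bool:
    """Return True if both JSON files and at least one archive are present."""
    p = pathlib.Path(folder)
    has_json = all((p / name).is_file() for name in REQUIRED_FILES)
    has_archive = any((p / name).is_file() for name in ARCHIVE_FILES)
    return has_json and has_archive

# Demo: create empty placeholder files in a temporary folder and check them.
with tempfile.TemporaryDirectory() as tmp:
    for name in ["replatform-definition.json", "analysis.json", "ContainerFiles.tar"]:
        (pathlib.Path(tmp) / name).touch()
    demo_result = artifacts_complete(tmp)

print(demo_result)  # True
```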

### Required IAM resources


Multiple resources must have the required permissions in order to use this template. Ensure that you have the following required policies and roles created.

**Topics**
+ [IAM policy for users and roles](#replatform-to-ecs-prerequisites-permissions-users-roles)
+ [IAM policies and roles for Amazon ECS](#replatform-to-ecs-prerequisites-permissions-ecs)
+ [(Optional) KMS key policy](#replatform-to-ecs-prerequisites-permissions-kms)

#### IAM policy for users and roles


Your user or role must have the required permissions to use this template. You can add this policy inline, or create and add this policy to your user, group, or role. For more information, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) and [Choosing between managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies-choosing-managed-or-inline.html) in the *AWS Identity and Access Management User Guide*.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ApplicationTransformationAccess",
            "Effect": "Allow",
            "Action": [
                "application-transformation:StartRuntimeAssessment",
                "application-transformation:GetRuntimeAssessment",
                "application-transformation:PutLogData",
                "application-transformation:PutMetricData",
                "application-transformation:StartContainerization",
                "application-transformation:GetContainerization",
                "application-transformation:StartDeployment",
                "application-transformation:GetDeployment"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AssessmentEc2ReadAccess",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AssessmentIAMRoleAccess",
            "Effect": "Allow",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:GetInstanceProfile",
                "iam:GetRole"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AssessmentSSMSendCommandAccess",
            "Effect": "Allow",
            "Action": [
                "ssm:SendCommand"
            ],
            "Resource": [
                "arn:aws:ec2:*:*:instance/*",
                "arn:aws:ssm:*::document/AWS-RunRemoteScript"
            ]
        },
        {
            "Sid": "AssessmentSSMDescribeAccess",
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeInstanceInformation",
                "ssm:ListCommandInvocations",
                "ssm:GetCommandInvocation"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:*"
            ]
        },
        {
            "Sid": "S3ObjectAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::*/application-transformation*"
            ]
        },
        {
            "Sid": "S3ListAccess",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Sid": "KmsAccess",
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:DescribeKey",
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:kms:*:*:key/*"
        },
        {
            "Sid": "EcrAccess",
            "Effect": "Allow",
            "Action": [
                "ecr:CreateRepository",
                "ecr:GetLifecyclePolicy",
                "ecr:GetRepositoryPolicy",
                "ecr:ListImages",
                "ecr:ListTagsForResource",
                "ecr:TagResource",
                "ecr:UntagResource"
            ],
            "Resource": "arn:*:ecr:*:*:repository/*"
        },
        {
            "Sid": "EcrPushAccess",
            "Effect": "Allow",
            "Action": [
                "ecr:InitiateLayerUpload",
                "ecr:PutImage",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer"
            ],
            "Resource": "arn:*:ecr:*:*:repository/*"
        },
        {
            "Sid": "EcrAuthAccess",
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken"
            ],
            "Resource": "*"
        },
        {
            "Sid": "ContainerizeKmsCreateGrantAccess",
            "Effect": "Allow",
            "Action": [
                "kms:CreateGrant"
            ],
            "Resource": "arn:aws:kms:*:*:key/*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": true
                }
            }
        },
        {
            "Sid": "CloudformationExecutionAccess",
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack"
            ],
            "Resource": [
                "arn:*:cloudformation:*:*:stack/application-transformation-*"
            ]
        },
        {
            "Sid": "GetECSSLR",
            "Effect": "Allow",
            "Action": "iam:GetRole",
            "Resource": "arn:aws:iam::*:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS"
        },
        {
            "Sid": "CreateEcsServiceLinkedRoleAccess",
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "arn:aws:iam::*:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS",
            "Condition": {
                "StringLike": {
                    "iam:AWSServiceName": "ecs.amazonaws.com"
                }
            }
        },
         {
            "Sid": "CreateElbServiceLinkedRoleAccess",
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "arn:aws:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing",
            "Condition": {
                "StringLike": {
                    "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
                }
            }
        },
        {
            "Sid": "CreateSecurityGroupAccess",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateSecurityGroup"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Ec2CreateAccess",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateInternetGateway",
                "ec2:CreateKeyPair",
                "ec2:CreateRoute",
                "ec2:CreateRouteTable",
                "ec2:CreateSubnet",
                "ec2:CreateTags",
                "ec2:CreateVpc"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Ec2ModifyAccess",
            "Effect": "Allow",
            "Action": [
                "ec2:AssociateRouteTable",
                "ec2:AttachInternetGateway",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:DeleteTags",
                "ec2:ModifySubnetAttribute",
                "ec2:ModifyVpcAttribute",
                "ec2:RevokeSecurityGroupIngress"
            ],
            "Resource": "*"
        },
        {
            "Sid": "IAMPassRoleAccess",
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::123456789012:role/my-role"
        },
        {
            "Sid": "EcsCreateAccess",
            "Effect": "Allow",
            "Action": [
                "ecs:CreateCluster",
                "ecs:CreateService",
                "ecs:RegisterTaskDefinition"
            ],
            "Resource": "*"
        },
        {
            "Sid": "EcsModifyAccess",
            "Effect": "Allow",
            "Action": [
                "ecs:TagResource",
                "ecs:UntagResource",
                "ecs:UpdateService"
            ],
            "Resource": "*"
        },
        {
            "Sid": "EcsReadTaskDefinitionAccess",
            "Effect": "Allow",
            "Action": [
                "ecs:DescribeTaskDefinition"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": "cloudformation.amazonaws.com"
                }
            }
        },
        {
            "Sid": "CloudwatchCreateAccess",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:TagResource",
                "logs:PutRetentionPolicy"
            ],
            "Resource": [
                "arn:aws:logs:*:*:log-group:/aws/ecs/containerinsights/*:*",
                "arn:aws:logs:*:*:log-group:/aws/ecs/container-logs/*:*"
            ]
        },
        {
            "Sid": "CloudwatchGetAccess",
            "Effect": "Allow",
            "Action": [
                "logs:GetLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:log-group:/aws/ecs/containerinsights/*:*",
                "arn:aws:logs:*:*:log-group:/aws/ecs/container-logs/*:*"
            ]
        },
        {
            "Sid": "ReadOnlyAccess",
            "Effect": "Allow",
            "Action": [
                "cloudformation:DescribeStacks",
                "cloudformation:ListStacks",
                "clouddirectory:ListDirectories",
                "ds:DescribeDirectories",
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeImages",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeKeyPairs",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeRouteTables",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcs",
                "ecr:DescribeImages",
                "ecr:DescribeRepositories",
                "ecs:DescribeClusters",
                "ecs:DescribeServices",
                "ecs:DescribeTasks",
                "ecs:ListTagsForResource",
                "ecs:ListTasks",
                "iam:ListRoles",
                "s3:GetBucketLocation",
                "s3:GetBucketVersioning",
                "s3:ListAllMyBuckets",
                "secretsmanager:ListSecrets",
                "acm:DescribeCertificate",
                "acm:GetCertificate",
                "ssm:GetParameters"
            ],
            "Resource": "*"
        },
        {
            "Sid": "ElasticLoadBalancingCreateAccess",
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:CreateListener",
                "elasticloadbalancing:CreateLoadBalancer",
                "elasticloadbalancing:CreateTargetGroup",
                "elasticloadbalancing:CreateRule"
            ],
            "Resource": "*"
        },
        {
            "Sid": "ElasticLoadBalancingModifyAccess",
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:AddTags",
                "elasticloadbalancing:ModifyTargetGroup",
                "elasticloadbalancing:ModifyTargetGroupAttributes"
            ],
            "Resource": "*"
        },
        {
            "Sid": "ElasticLoadBalancingGetAccess",
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:DescribeLoadBalancerAttributes",
                "elasticloadbalancing:DescribeTags",
                "elasticloadbalancing:DescribeTargetGroups",
                "elasticloadbalancing:DescribeRules",
                "elasticloadbalancing:DescribeListeners",
                "elasticloadbalancing:DescribeLoadBalancers"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Route53CreateAccess",
            "Effect": "Allow",
            "Action": [
                "route53:CreateHostedZone"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Route53ModifyAccess",
            "Effect": "Allow",
            "Action": [
                "route53:ChangeTagsForResource",
                "route53:ChangeResourceRecordSets",
                "route53:GetChange",
                "route53:GetHostedZone",
                "route53:ListResourceRecordSets",
                "route53:CreateHostedZone",
                "route53:ListHostedZonesByVPC"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SsmMessagesAccess",
            "Effect": "Allow",
            "Action": [
                "ssm:DescribeSessions",
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        },
        {
            "Sid": "ServiceDiscoveryCreateAccess",
            "Effect": "Allow",
            "Action": [
                "servicediscovery:CreateService",
                "servicediscovery:CreatePrivateDnsNamespace",
                "servicediscovery:UpdatePrivateDnsNamespace",
                "servicediscovery:TagResource"
            ],
            "Resource": "*"
        },
        {
            "Sid": "ServiceDiscoveryGetAccess",
            "Effect": "Allow",
            "Action": [
                "servicediscovery:GetNamespace",
                "servicediscovery:GetOperation",
                "servicediscovery:GetService",
                "servicediscovery:ListTagsForResource"
            ],
            "Resource": "*"
        }
    ]
}
```

------
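One way to apply this policy is to attach it inline with the AWS SDK for Python (Boto3). The sketch below builds an abbreviated policy document (a stand-in for the full JSON above; the role and policy names are hypothetical placeholders) and shows the IAM call commented out so it runs only when you supply real credentials:

```
import json

# Abbreviated stand-in for the full policy document shown above
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ApplicationTransformationAccess",
            "Effect": "Allow",
            "Action": ["application-transformation:GetContainerization"],
            "Resource": "*",
        }
    ],
}

policy_json = json.dumps(policy_document)

# With AWS credentials configured, attach the policy inline to a role
# (the role and policy names below are placeholders):
# import boto3
# iam = boto3.client("iam")
# iam.put_role_policy(
#     RoleName="MigrationHubOrchestratorRole",
#     PolicyName="ReplatformToEcsPolicy",
#     PolicyDocument=policy_json,
# )
```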

#### IAM policies and roles for Amazon ECS


To deploy your containerized applications on Amazon ECS, you must create IAM policies and roles in your Amazon ECS tasks. For more information about these IAM resources for Amazon ECS and how to create them, see [Task execution IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html) and [Task IAM role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) in the *Amazon Elastic Container Service Developer Guide*.
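Both the task execution role and the task role must trust the Amazon ECS tasks service so that ECS can assume them on your behalf. As a reference sketch (see the linked guide for the authoritative version), the trust policy on these roles typically resembles the following:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```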

#### (Optional) KMS key policy


You can use AWS KMS to encrypt resources used by this template. If you create a KMS key to use with this template, we recommend that you use the following least-privilege permissions for your key policy. For more information, see [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the *AWS Key Management Service Developer Guide*.

```
{
    "Sid": "KmsAccess",
    "Effect": "Allow",
    "Action": [
        "kms:GenerateDataKey",
        "kms:Decrypt"
    ],
    "Resource": [
        "arn:aws:kms:*:*:key/*"
    ],
    "Condition": {
        "StringLike": {
            "kms:ViaService": [
                "s3.*.amazonaws.com"
            ]
        }
    }
}
```
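When you create the key, you can include this statement in the key policy you pass to the key-creation call. The sketch below assembles a key policy in Python; the account ID and role name are illustrative placeholders, the root-access statement is included because a key policy must also grant administrative access, and the AWS call is commented out:

```
import json

ACCOUNT_ID = "123456789012"  # placeholder account ID

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Administrative access for the account, so the key
            # does not become unmanageable
            "Sid": "EnableRootAccess",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # Least-privilege statement from above, scoped to use via Amazon S3
            "Sid": "KmsAccess",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:role/my-role"},
            "Action": ["kms:GenerateDataKey", "kms:Decrypt"],
            "Resource": "*",
            "Condition": {
                "StringLike": {"kms:ViaService": ["s3.*.amazonaws.com"]}
            },
        },
    ],
}

# With AWS credentials configured:
# import boto3
# kms = boto3.client("kms")
# kms.create_key(Policy=json.dumps(key_policy))
```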

## Configuring a workflow


To replatform your application, you must first configure a workflow from the template.

**To create a workflow using the template**

1. Access the Migration Hub Orchestrator console at [https://console.aws.amazon.com/migrationhub/orchestrator/](https://console.aws.amazon.com/migrationhub/orchestrator/).

1. In the left navigation pane, under **Orchestrate**, choose **Create workflow**. 

1. On the **Choose a workflow template** page, choose the **Replatform applications to Amazon ECS** template.

1. On the **Configure your workflow** page, enter values for the following:

   1. For **Workflow details**, enter values for the following:

      1. For **Name**, enter a name for your migration workflow.

      1. (Optional) For **Description**, enter a description for the workflow you are creating.

   1. For **Source environment configuration**, specify the following:

      1. For **Source Region**, choose the Region that contains the EC2 instances hosting the applications you want to replatform, or the S3 bucket containing your application artifacts.

      1. For **Source type**, choose **EC2 instances** if the applications you want to replatform run on EC2 instances, or **S3 location** if your application artifacts are stored in an S3 bucket.

         1. If you chose **EC2 instances**, under **Select from EC2 instances**, select the instances that host the applications you want to replatform.

         1. If you chose **S3 location**, under **Specify input path in *Region***, enter the path to your `replatform-definition.json` file in the S3 bucket. Your other required application artifacts should also be in this bucket. You can also choose **Browse S3** to specify the path by navigating to it in the console. The path should resemble the following:

            ```
            s3://bucket-name/application-transformation/replatform-definition.json
            ```

   1. For **Specify S3 output path**, enter the path of your S3 bucket using `s3://` syntax. You can also choose **Browse S3** to specify the path by navigating to it in the console. The path should resemble the following example:

      ```
      s3://bucket-name/application-transformation
      ```

   1. (Optional) For **Tags**, choose **Add new tag** and enter any desired key-value pairs for your resources that are created by this workflow.

   1. Choose **Next**.

   1. On the **Review and submit** page, verify that the details for the workflow are correct, and then choose **Create**.

Creating a migration workflow doesn't take action on your resources. You will need to run the workflow as detailed in the following section.

**Note**  
You can customize the migration workflow once it has been created. For more information, see [Migration workflows for Migration Hub Orchestrator](migration-workflows.md).

## Running a workflow


With the workflow created, you can now run it to replatform your applications.

**To run a workflow**

1. Access the Migration Hub Orchestrator console at [https://console.aws.amazon.com/migrationhub/orchestrator/](https://console.aws.amazon.com/migrationhub/orchestrator/).

1. In the left navigation pane, under **Orchestrate**, choose **Workflows**.

1. On the **Workflows** page, choose your workflow and then choose **View details**.

1. Choose **Run** to run the workflow.
**Important**  
Some steps might require additional action to complete. All steps must be completed in order to replatform your application. The following section details this process.

# Combining multiple applications in one container
Combining applications

If you are combining multiple applications from your source server into one container, there are additional requirements for the workflow. You specify this option by choosing **Combine applications in one container** when completing the workflow steps for the template, as described in [Completing the required steps](replatform-to-ecs.md#replatform-to-ecs-complete-steps).

**Note**  
If you are replatforming a single application to one container, the following process is not required.

**Python script**  
You can use the following content to create a Python script on your application server. The script helps you create the configuration file required to containerize multiple applications into one container.
+ This script only supports applications running on Linux.
+ This script only supports Regions that are enabled by default.

```
import boto3
import json
import tarfile
import os
import subprocess
import shutil
from pathlib import Path
from argparse import ArgumentParser
from urllib.parse import urlparse


ANALYSIS_INFO_JSON = "analysis.json"
CONTAINER_FILES_TAR = "ContainerFiles.tar"
COMBINED_APPLICATION = "CombinedApplication"
TAR_BINARY_PATH = "/usr/bin/tar"

def get_bucket(s3path):
    o = urlparse(s3path, allow_fragments=False)
    return o.netloc

def get_key(s3path):
    o = urlparse(s3path, allow_fragments=False)
    key = o.path

    if key.startswith('/'):
        key = key[1:]

    if not key.endswith('/'):
        key += '/'

    return key

def format_path(path):
    if not path.endswith('/'):
        path += '/'
    return path

def upload_to_s3(s3_output_path, workflow_id, analysis_file, container_file):
    s3 = boto3.client('s3')

    bucket = get_bucket(s3_output_path)
    key = get_key(s3_output_path)

    analysis_object = key + workflow_id + "/" + COMBINED_APPLICATION +  "/" + ANALYSIS_INFO_JSON
    container_object = key + workflow_id + "/" +  COMBINED_APPLICATION + "/" + CONTAINER_FILES_TAR

    s3.upload_file(analysis_file, bucket, analysis_object) 
    s3.upload_file(container_file, bucket, container_object) 

def download_from_s3(region, s3_paths_list, workspace_s3_download_path):
    
    s3 = boto3.client('s3')

    dir_number=1
    workspace_s3_download_path = format_path(workspace_s3_download_path)
    
    for s3_path in s3_paths_list:
        download_path = workspace_s3_download_path + 'd' + str(dir_number)
        dir_number += 1
        Path(download_path).mkdir(parents=True, exist_ok=True)

        bucket = get_bucket(s3_path)
        key = get_key(s3_path)

        analysis_key = key + ANALYSIS_INFO_JSON
        container_files_key = key + CONTAINER_FILES_TAR

        download_analysis_path = download_path + '/' + ANALYSIS_INFO_JSON
        download_container_files_path = download_path + '/' + CONTAINER_FILES_TAR
        
        s3.download_file(bucket, analysis_key, download_analysis_path)
        s3.download_file(bucket, container_files_key, download_container_files_path)

def get_analysis_data(analysis_json):
    # The file is closed automatically when the with block exits
    with open(analysis_json) as json_data:
        return json.load(json_data)

def combine_container_files(workspace_path, count, output_path):
    if not workspace_path.endswith('/'):
        workspace_path += '/'

    for dir_number in range(1, count+1):
        container_files_path = workspace_path + 'd' + str(dir_number)
        container_file_tar = container_files_path + '/' + CONTAINER_FILES_TAR
        
        extract_tar(container_file_tar, output_path)
        
def tar_container_files(workspace_path, tar_dir):
    os.chdir(workspace_path)
    subprocess.call([TAR_BINARY_PATH, 'czf', "ContainerFiles.tar", "-C", tar_dir, "."])

def combine_analysis(workspace_path, count, analysis_output_path, script_output_path):
    if not workspace_path.endswith('/'):
        workspace_path += '/'
   
    #First analysis file is used as a template
    download_path = workspace_path + 'd' + str(1)
    analysis_json = download_path + '/' + ANALYSIS_INFO_JSON
    first_data = get_analysis_data(analysis_json)
    
    cmd_list = []
    ports_list = []

    for dir_number in range(1, count+1):
        download_path = workspace_path + 'd' + str(dir_number)
        analysis_json = download_path + '/' + ANALYSIS_INFO_JSON
        data = get_analysis_data(analysis_json)
        
        cmd = data['analysisInfo']['cmdline']
        cmd = " ".join(cmd)
        cmd_list.append(cmd)

        ports = data['analysisInfo']['ports']
        ports_list += ports

    start_script_path = create_startup_script(cmd_list, script_output_path)
    os.chmod(start_script_path, 0o754)
   
    start_script_filename = '/' + Path(start_script_path).name
    cmd_line_list = [start_script_filename]

    first_data['analysisInfo']['cmdline'] = cmd_line_list
    first_data['analysisInfo']['ports'] = ports_list 

    analysis_output_path = format_path(analysis_output_path)
    analysis_output_file = analysis_output_path + ANALYSIS_INFO_JSON
    write_analysis_json_data(first_data, analysis_output_file)

def write_analysis_json_data(data, output_path):
    with open(output_path, 'w') as f:
        json.dump(data, f)

def create_startup_script(cmd_list, output_path):
    start_script_path = output_path + '/start_script.sh'
    with open(start_script_path, 'w') as rsh:
        rsh.write('#! /bin/bash\n')
        for cmd in cmd_list:
            rsh.write('nohup ' + cmd + ' >> /dev/null 2>&1 &\n')
    return start_script_path

def extract_tar(tarFilePath, extractTo):
    os.chdir(extractTo)
    subprocess.call([TAR_BINARY_PATH, 'xvf', tarFilePath])

def validate_args(args):
    MIN_COUNT = 2
    MAX_COUNT = 5
    s3_paths_count = len(args.s3_input_path)
    if (s3_paths_count < MIN_COUNT):
        print("ERROR: s3_input_path needs at least " + str(MIN_COUNT) + " S3 paths")
        exit(1)

    if (s3_paths_count > MAX_COUNT):
        print("ERROR: s3_input_path accepts at most " + str(MAX_COUNT) + " S3 paths")
        exit(1)


def cleanup_workspace(temp_workspace):
    yes = "YES"
    ack = input("Preparing workspace. Deleting dir and its contents '" + temp_workspace + "'. Please confirm with 'yes' to proceed.\n")
    if (ack.casefold() == yes.casefold()): 
        if (os.path.exists(temp_workspace) and os.path.isdir(temp_workspace)):
            shutil.rmtree(temp_workspace)
    else:
        print("Please confirm with 'yes' to continue. Exiting.")
        exit(0)

def main():
    parser = ArgumentParser()
    parser.add_argument('--region', help='Region selected during A2C workflow creation', required=True)
    parser.add_argument('--workflow_id', help='Migration Hub Orchestrator workflowId', required=True)
    parser.add_argument('--s3_output_path', help='S3 output path given while creating the workflow', required=True)
    parser.add_argument('--s3_input_path', nargs='+', help='S3 paths which has application artifacts to combine', required=True)
    parser.add_argument('--temp_workspace', nargs='?', default='/tmp', type=str, help='Temp path for file downloads')
    args = parser.parse_args()

    validate_args(args)

    #prepare workspace
    temp_workspace = format_path(args.temp_workspace)
    temp_workspace += 'mho_workspace'    
    
    #cleanup tmp workspace
    cleanup_workspace(temp_workspace)

    #create workspace directories
    Path(temp_workspace).mkdir(parents=True, exist_ok=True)
    apps_count = len(args.s3_input_path)
    temp_output_container_files = temp_workspace + '/outputs/containerfiles'
    os.makedirs(temp_output_container_files, exist_ok=True)
    temp_workspace_output = temp_workspace + "/outputs"
    
    #download files
    download_from_s3(args.region, args.s3_input_path, temp_workspace)
    
    #combine files
    combine_container_files(temp_workspace, apps_count, temp_output_container_files)
    combine_analysis(temp_workspace, apps_count, temp_workspace_output, temp_output_container_files)
    tar_container_files(temp_workspace_output, temp_output_container_files)

    #prepare upload
    analysis_json_file_to_upload = temp_workspace_output + "/" + ANALYSIS_INFO_JSON
    container_files_to_upload = temp_workspace_output + "/" + CONTAINER_FILES_TAR
    upload_to_s3(args.s3_output_path, args.workflow_id, analysis_json_file_to_upload, container_files_to_upload)

if __name__=="__main__":
    main()
```

**To run the Python script**

1. Install Python 3.8 or later on your application server. For information on how to get the latest version of Python, see the official [Python documentation](https://www.python.org/downloads).

1. Install AWS SDK for Python (Boto3). For more information, see [AWS SDK for Python (Boto3)](https://aws.amazon.com/sdk-for-python).

1. Configure Boto3 credentials. For more information, see [Credentials](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html).

1. Run the `combine_applications.py` script while specifying values for the following parameters:

   1. **region** – The Region where your Amazon S3 bucket is located.

   1. **workflow_id** – The Migration Hub Orchestrator workflow ID.

   1. **s3_input_path** – The S3 paths that contain the uploaded application artifacts to combine.

   1. **s3_output_path** – The S3 output path given when creating the workflow.

   1. **temp_workspace** – The workspace directory to use. The default is `/tmp/`.

The following example demonstrates running the script with the required parameters:

```
python3 combine_applications.py --region us-west-2 \
    --workflow_id mw-abc123 \
    --s3_output_path s3://bucket-name/application-transformation/mw-abc123/CombinedApplications \
    --s3_input_path s3://bucket-name/application-transformation/appname1/ s3://bucket-name/application-transformation/appname2/
```

Once the script has completed, the application artifacts will be uploaded to Amazon S3 with a path similar to the following:

```
s3://bucket-name/application-transformation/mw-abc123/CombinedApplications
```
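Before continuing the workflow, you can verify that both artifacts landed under the output prefix. The following sketch checks a list of S3 keys for the expected file names; the Boto3 listing call is commented out and uses placeholder bucket and prefix values:

```
EXPECTED_FILES = ("analysis.json", "ContainerFiles.tar")

def missing_artifacts(keys):
    """Return expected artifact names not found among the listed S3 keys."""
    return [name for name in EXPECTED_FILES
            if not any(key.endswith(name) for key in keys)]

# With AWS credentials configured, list the output prefix
# (bucket and prefix below are placeholders):
# import boto3
# s3 = boto3.client("s3")
# resp = s3.list_objects_v2(
#     Bucket="bucket-name",
#     Prefix="application-transformation/mw-abc123/CombinedApplications/",
# )
# keys = [obj["Key"] for obj in resp.get("Contents", [])]
# print(missing_artifacts(keys))  # an empty list means both artifacts are present
```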

## Completing the required steps


Certain steps in the workflow require additional input in order to complete. The workflow might take some time to reach the **User attention required** status before you can take action on those steps.

**To complete steps for a workflow**

1. Access the Migration Hub Orchestrator console at [https://console.aws.amazon.com/migrationhub/orchestrator/](https://console.aws.amazon.com/migrationhub/orchestrator/).

1. In the left navigation pane, under **Orchestrate**, choose **Workflows**.

1. On the **Workflows** page, choose your workflow and then choose **View details**.

1. In the **Steps** tab, choose **Expand all**. Steps with a **Status** of **User attention required** need additional input to complete the step.

1. Choose the step that requires further input, choose **Actions**, **Change status**, and then choose **Completed**.

   1. The **Analyze** step requires the following input:

      1. For **Applications**, from the dropdown list, select the applications that you want to replatform.

      1. For **Containerization options**, choose either **One application per container** to provision one application per container, or **Combine applications in one container** to provision all applications in one container. For more information on the requirements to combine applications in one container, see [Combining multiple applications in one container](replatform-to-ecs-combining-applications.md).

      1. Choose **Confirm** to complete the step.

   1. The **Deploy** step requires the following input:

      1. For **VPC ID**, enter the ID of the VPC to use for deployment.

      1. For **ECS task execution IAM role ARN**, choose the ARN of the ECS task execution IAM role used to make AWS API calls on your behalf.

      1. (Optional) For **Task role ARN**, choose the ARN of the role to be assumed by Amazon ECS tasks.

      1. (Optional) For **Cluster name**, enter a name to use for the ECS cluster.

      1. (Optional) For **CPU**, choose the number of CPU units that the Amazon ECS container agent reserves for the container.

      1. (Optional) For **Memory**, enter the amount of memory to allocate to the container, specified in GB.

   1. Choose **Confirm** to complete the step.

1. On the **Workflows** page, under **Migration workflows**, verify that the overall status of the workflow is **Complete**.

# Import virtual machine images to AWS template
Import virtual machine images

You can use the *Import virtual machine images to AWS* template to convert existing virtual machine images to Amazon Machine Images (AMIs) for Amazon EC2.

## Prerequisites


You must meet the following requirements to create a VM import workflow using this template.

**AWS Identity and Access Management (IAM) requirements**  
You need to create resources in IAM for both Migration Hub Orchestrator and VM Import/Export:
+ Create an IAM user and attach the required policies to use Migration Hub Orchestrator. For more information, see [Create an IAM user](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/setting-up.html#setting-up-create-iam-user).
+ Create an IAM user and a service role, and attach required policies to use VM Import/Export. For more information, see [Required permissions](https://docs.aws.amazon.com/vm-import/latest/userguide/required-permissions.html).

**VM Import/Export requirements**  
You might need to perform additional tasks to prepare your AWS environment before importing your image. For more information, see [VM Import/Export Requirements](https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html).

**Upload images to Amazon S3**  
Create an Amazon S3 bucket, and add the image files you want to import into the bucket. For more information about creating an Amazon S3 bucket, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html). 

The following considerations and limitations apply:
+ The Amazon S3 bucket must be in the same AWS Region in which you are using Migration Hub Orchestrator.
+ In your S3 bucket, you must use a separate folder for each image file format that you upload.
+ Migration Hub Orchestrator supports importing the following image file formats:
  + OVA
  + RAW
  + VHD
  + VHDX
  + VMDK

Each image file type has additional requirements for the S3 bucket, file name, and workflow.

------
#### [ OVA files ]

The following considerations apply when you import OVA files:
+ The folder must be named with the prefix `migrationhub-orchestrator-vmie-` and must contain only one OVA file.
+ The S3 object must end with `.ova`.
+ Only one OVA file can be added in one import task.
+ You can add up to five import tasks in the workflow.

------
#### [ RAW files ]

The following considerations apply when you import RAW files:
+ The folder must be named with the prefix `migrationhub-orchestrator-vmie-` and must contain only one RAW file.
+ The S3 object must end with `.raw`.
+ Only one RAW file can be added in one import task.
+ You can add up to five import tasks in the workflow.

------
#### [ VMDK files ]

The following considerations apply when you import VMDK files:
+ The folder must be named with the prefix `migrationhub-orchestrator-vmie-`.
+ The S3 object must end with `.vmdk`.
+ The folder must only contain VMDK files.
+ The folder can contain up to 21 VMDK files.

------
#### [ VHD files ]

The following considerations apply when you import VHD files:
+ The folder must be named with the prefix `migrationhub-orchestrator-vmie-`.
+ The S3 object must end with `.vhd`.
+ The folder must only contain VHD files.
+ The folder can contain up to 21 VHD files.

------
#### [ VHDX files ]

The following considerations apply when you import VHDX files:
+ The folder must be named with the prefix `migrationhub-orchestrator-vmie-`.
+ The S3 object must end with `.vhdx`.
+ The folder must only contain VHDX files.
+ The folder can contain up to 21 VHDX files.

------
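The per-format rules above can be summarized in a small validation sketch. This is not an official tool — the prefix, supported extensions, and per-folder file limits are copied from the tabs above, and the function name is ours:

```python
# Sketch (not an official tool): validate a disk-container folder layout
# against the naming and file-format rules described above.

REQUIRED_PREFIX = "migrationhub-orchestrator-vmie-"
SUPPORTED_EXTENSIONS = {".ova", ".raw", ".vhd", ".vhdx", ".vmdk"}
# OVA and RAW folders may hold one file; VHD, VHDX, and VMDK folders up to 21.
MAX_FILES = {".ova": 1, ".raw": 1, ".vhd": 21, ".vhdx": 21, ".vmdk": 21}

def validate_disk_container(folder_name: str, file_names: list[str]) -> list[str]:
    """Return a list of problems; an empty list means the layout looks valid."""
    problems = []
    if not folder_name.startswith(REQUIRED_PREFIX):
        problems.append(f"folder must start with {REQUIRED_PREFIX!r}")
    extensions = {name[name.rfind("."):].lower() for name in file_names}
    if len(extensions) != 1:
        problems.append("folder must contain exactly one image file format")
        return problems
    ext = extensions.pop()
    if ext not in SUPPORTED_EXTENSIONS:
        problems.append(f"unsupported image format: {ext}")
    elif len(file_names) > MAX_FILES[ext]:
        problems.append(f"too many {ext} files: {len(file_names)} > {MAX_FILES[ext]}")
    return problems
```

For example, a folder named `migrationhub-orchestrator-vmie-demo` holding two VMDK files passes, while the same folder holding two OVA files does not.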

## Create a workflow


1. Open the Migration Hub Orchestrator console at [https://console.aws.amazon.com/migrationhub/orchestrator/](https://console.aws.amazon.com/migrationhub/orchestrator/), and then choose **Create migration workflow**.

1. On the **Choose a workflow template** page, choose the **Import virtual machine images to AWS** template.

1. Configure and submit your workflow to begin the VM import.
   + [Details](#details-import-vm-images)
   + [Source environment configuration](#source-env-config-import-vm-images)
   + [Target environment configuration](#target-env-config-import-vm-images)

**Note**  
You can customize the migration workflow once it has been created. For more information, see [Migration workflows for Migration Hub Orchestrator](migration-workflows.md).

## Details


Enter a name for your workflow. Optionally, you can enter a description and add tags. If you intend to import multiple VM images, we recommend adding tags to enhance searchability. For more information, see [Tagging AWS resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html).

## Source environment configuration


You need to specify the following parameters to configure your workflow.
+ **Server IP** – An optional parameter for the IP address of the on-premises server to be migrated. If you provide an IP address, you must set up the Migration Hub Orchestrator plugin. This enables Migration Hub Orchestrator to run a validation and detect failure scenarios before the import.
+ **Disk container** – You must specify the Amazon S3 path to your images that you set up in [Prerequisites](#prerequisites-import-vm-images). See the following examples for more details.

------
#### [ OVA files ]

  You can use either of the following path style examples for the disk container parameter.

  s3://*bucket-name*/migrationhub-orchestrator-vmie-*folder-name*

  s3://*bucket-name*/migrationhub-orchestrator-vmie-*folder-name*/*file-name*.ova

------
#### [ RAW files ]

  You can use either of the following path style examples for the disk container parameter.

  s3://*bucket-name*/migrationhub-orchestrator-vmie-*folder-name*

  s3://*bucket-name*/migrationhub-orchestrator-vmie-*folder-name*/*file-name*.raw

------
#### [ VMDK files ]

  You can use either of the following path style examples for the disk container parameter.

  s3://*bucket-name*/migrationhub-orchestrator-vmie-*folder-name*

  s3://*bucket-name*/migrationhub-orchestrator-vmie-*folder-name*/*file-name*.vmdk

------
#### [ VHD files ]

  You can use either of the following path style examples for the disk container parameter.

  s3://*bucket-name*/migrationhub-orchestrator-vmie-*folder-name*

  s3://*bucket-name*/migrationhub-orchestrator-vmie-*folder-name*/*file-name*.vhd

------
#### [ VHDX files ]

  You can use either of the following path style examples for the disk container parameter.

  s3://*bucket-name*/migrationhub-orchestrator-vmie-*folder-name*

  s3://*bucket-name*/migrationhub-orchestrator-vmie-*folder-name*/*file-name*.vhdx

------

  When more than one disk container is added, Migration Hub Orchestrator runs the workflow sequentially. If the first disk container fails, you must recover the failed container or create a new workflow.
+ **Add new item** – You can add up to five import tasks to the workflow.
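To make the sequential behavior concrete, here is a minimal sketch; `import_image` is a hypothetical callable standing in for the service-side import of one disk container and is not part of any Migration Hub Orchestrator API:

```python
# Sketch of the sequential semantics described above: disk containers are
# processed in order, and the first failure halts the workflow there.

def run_import_tasks(disk_containers, import_image):
    """Return (completed_containers, failed_container_or_None)."""
    completed = []
    for container in disk_containers:
        if not import_image(container):
            # Remaining containers are not attempted; you must recover this
            # container or create a new workflow, as noted above.
            return completed, container
        completed.append(container)
    return completed, None
```

For example, with three containers where the second fails, only the first completes and the workflow stops at the second.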

## Target environment configuration


This section of the *Import virtual machine images to AWS* template workflow has optional parameters for licensing. For more information, see the following documentation.
+ [Licensing options](https://docs.aws.amazon.com/vm-import/latest/userguide/licensing.html)
+ [Boot modes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ami-boot.html)

# Custom templates for Migration Hub Orchestrator
Custom templates

You can customize the templates provided by AWS Migration Hub Orchestrator for your use case and save them for reuse. You must first use a template provided by Migration Hub Orchestrator to create a migration workflow. Once the migration workflow is created, you can proceed with making customizations to the workflow.

To help ensure that the custom template works as expected, we recommend that you run the customized workflow after making your changes. Once the updated migration workflow completes successfully, you can save your changes as a new custom template.

**Topics**
+ [Prerequisites](#custom-templates-prerequisites)
+ [Creating custom templates](#creating-custom-templates)
+ [Running custom templates](#running-custom-templates)
+ [Updating custom templates](#updating-custom-templates)

## Prerequisites


The following prerequisites must be met to create custom templates:
+ If the AWS managed template that you will customize requires the Migration Hub Orchestrator plugin, you must configure it before running the workflow. For more information, see [Run the workflow](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/how-mho-works.html#run-workflow).
+ To create a migration workflow, you must have the resources required by your desired AWS managed template available.
+ If you are going to create steps in the workflow of the **Automated** type, ensure that the scripts are accessible using one of the following methods:
  + If the scripts for your custom template will be sourced from an Amazon Simple Storage Service (Amazon S3) location, the script files must be uploaded to an Amazon S3 bucket with the prefix `migrationhub-orchestrator-`. For more information, see [Organizing objects using prefixes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html) in the *Amazon Simple Storage Service User Guide*.
  + If the scripts for your template will be uploaded to the AWS Management Console, you must be able to upload a copy of the script from the device that you are using.
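As a quick sanity check before creating **Automated** steps, you can verify the object key prefix locally. This sketch assumes the `migrationhub-orchestrator-` requirement applies to the S3 object key prefix, as the linked prefixes topic suggests:

```python
# Minimal sketch: verify that a script's S3 object key starts with the
# required prefix before referencing it from an Automated workflow step.
# The key layout shown in the test values is an assumption for illustration.

REQUIRED_SCRIPT_PREFIX = "migrationhub-orchestrator-"

def script_key_ok(object_key: str) -> bool:
    """Return True if the object key uses the required prefix."""
    return object_key.startswith(REQUIRED_SCRIPT_PREFIX)
```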

## Creating custom templates


You can use the following steps to create a custom template using the Migration Hub Orchestrator console or the AWS CLI.

------
#### [ Console ]

**To create a custom template using the Migration Hub Orchestrator console**

1. Sign in to the AWS Management Console and open the Migration Hub Orchestrator console at [https://console.aws.amazon.com/migrationhub/orchestrator/](https://console.aws.amazon.com/migrationhub/orchestrator/).

1. Choose the template from the list that you want to customize. For more information about the available templates, see [Templates](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/templates.html).

1. Choose **Create workflow**.

1. Customize the workflow steps and step groups of the template as necessary. For more information about how to modify a workflow, see [Migration workflows](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/migration-workflows.html).

1. (Recommended) Once you have finished modifying the workflow, choose **Run** to start the workflow and ensure it completes successfully. For more information, see [Running workflows](migration-workflows.md#running-migration-workflows).

1. Choose **Save as a template**.

1. For **Name**, enter a name for the custom template.

1. (Optional) For **Description**, enter a description for the custom template.

1. (Optional) Add tags to your custom template:

   1. Choose **Add new tag** for each tag that you'd like to associate with the custom template.

   1. Enter values in the **Key** and **Value** fields as necessary.

1. Choose **Save**.

Your custom template will now be available in the workflow templates list.

------
#### [ AWS CLI ]

You can use the [CreateTemplate](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/APIReference/API_CreateTemplate.html) Migration Hub Orchestrator API operation to create a custom template using the AWS CLI.
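As a sketch, a CreateTemplate request built from a completed workflow might look like the following. The field names follow the CreateTemplate API reference, but the name, description, and workflow ID values are placeholders — confirm the exact parameters against the API reference before use:

```python
# Hedged sketch of a CreateTemplate request body; all values are placeholders.
create_template_request = {
    "templateName": "my-custom-template",
    "templateDescription": "Custom template saved from a completed workflow",
    # templateSource points at the workflow whose steps become the template.
    "templateSource": {"workflowId": "example-workflow-id"},
}
# With boto3, you would pass this to the migrationhuborchestrator client:
#   boto3.client("migrationhuborchestrator").create_template(**create_template_request)
```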

------

## Running custom templates


You can use the Migration Hub Orchestrator console or the AWS CLI to run custom templates.

**Tip**  
Each template might have prerequisites and manual steps to run the workflow successfully. Refer to the documentation for the AWS managed template that your custom template is based on. For more information, see [Templates](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/templates.html).

------
#### [ Console ]

**To run a custom template using the Migration Hub Orchestrator console**

1. Sign in to the AWS Management Console and open the Migration Hub Orchestrator console at [https://console.aws.amazon.com/migrationhub/orchestrator/](https://console.aws.amazon.com/migrationhub/orchestrator/).

1. Choose the custom template from the list that you want to run.

1. Choose **Create workflow**.

1. Choose **Run**.

1. In the **Steps** tab, choose **Expand all**. Steps with a **Status** of **User attention required** need additional input to complete the step.

1. Choose the step that requires further input, and then choose **Actions**, **Change status**, **Completed**.

1. Take action on any steps that require your input for the workflow to proceed to completion.

------
#### [ AWS CLI ]

You can use the [CreateWorkflow](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/APIReference/API_CreateWorkflow.html) Migration Hub Orchestrator API operation to create a workflow from a custom template using the AWS CLI.
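Similarly, a hedged sketch of a CreateWorkflow request that starts from a saved custom template follows. The name, template ID, and input parameters are placeholders, and the exact required fields should be confirmed against the CreateWorkflow API reference:

```python
# Hedged sketch of a CreateWorkflow request body; all values are placeholders.
create_workflow_request = {
    "name": "run-my-custom-template",
    "templateId": "example-template-id",
    # Template-specific inputs (for example, source and target settings).
    "inputParameters": {},
}
```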

------

## Updating custom templates


You can't directly update a custom template. Instead, create a workflow from the custom template and make your updates to that migration workflow. Once you have made your updates, you can save them as a new custom template.

------
#### [ Console ]

**To update a custom template and save it as a new template using the Migration Hub Orchestrator console**

1. Sign in to the AWS Management Console and open the Migration Hub Orchestrator console at [https://console.aws.amazon.com/migrationhub/orchestrator/](https://console.aws.amazon.com/migrationhub/orchestrator/).

1. Choose the custom template from the list that you want to update.

1. Choose **Create workflow**.

1. Customize the workflow steps and step groups of the template as necessary. For more information about how to modify a workflow, see [Migration workflows](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/userguide/migration-workflows.html).

1. (Recommended) Once you have finished modifying the workflow, choose **Run** to start the workflow. For more information, see [Running custom templates](#running-custom-templates).

1. Choose **Save as a template**.

1. For **Name**, enter a name for the custom template.

1. (Optional) For **Description**, enter a description for the custom template.

1. (Optional) Add tags to your custom template:

   1. Choose **Add new tag** for each tag that you'd like to associate with the custom template.

   1. Enter values in the **Key** and **Value** fields as necessary.

1. Choose **Save**.

Your new custom template will be available in the workflow templates list.

------
#### [ AWS CLI ]

You can use the [UpdateTemplate](https://docs.aws.amazon.com/migrationhub-orchestrator/latest/APIReference/API_UpdateTemplate.html) Migration Hub Orchestrator API operation to update a custom template using the AWS CLI.

------