

# Change management modes
<a name="using-change-management"></a>

AWS Managed Services (AMS) uses change management modes to guardrail changes in AMS Advanced. These modes help you maintain high operational standards for your environment, control risk, and prevent adverse impact. Each mode provides a different balance of control and risk. All modes, except Customer Managed mode, are managed by AMS. The following are the available change management modes:
+ RFC mode (formerly Standard CM mode): Provides a "request for change" (RFC) system and AMS-custom change types (CTs) 
+ Direct Change mode: Same as RFC mode plus use of AWS APIs and consoles to create AMS-managed resources
+ AWS Service Catalog on AMS: Similar to Direct Change mode, but instead of using the AMS change management system (RFCs), you use AWS Service Catalog to create resources that AMS then manages.
+ Developer mode: Same as Direct Change mode only the resources you create with AWS APIs and consoles are not AMS-managed—you are responsible for their management
+ Self Service Provisioning (SSP) mode: Same as Developer mode except there is no access to the AMS change management system (no RFCs)
+ Customer Managed mode: AMS provides you with a multi-account landing zone, but all resource management is your responsibility

The AWS Managed Services (AMS) change management system, using the change management (CM) API, provides operations to create and manage requests for change (RFCs) for both multi-account landing zone (MALZ) and single-account landing zone (SALZ) accounts. 

A request for change (RFC) is a request created by either you or AMS through the AMS interface to make a change to your managed environment and includes a change type (CT) ID for a particular operation.

With the CM API, you can create, update, submit, approve, reject, and cancel RFCs. AMS operators can do the same, and can also mark RFCs as closed.
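The operations listed above imply a simple lifecycle for an RFC. The following sketch models that lifecycle as a state machine; the state names and transitions here are an illustrative simplification, not the official AMS RFC status codes.

```python
# Illustrative sketch of how the CM API operations move an RFC through its
# lifecycle. These states are a simplification for illustration only; see
# the AMS documentation on RFC status codes for the authoritative list.

ALLOWED = {
    "Editing":         {"update", "submit", "cancel"},
    "PendingApproval": {"approve", "reject", "cancel"},
    "Approved":        {"cancel"},   # the change then runs and AMS records the result
    "Rejected":        set(),
    "Canceled":        set(),
}

NEXT_STATE = {
    "update":  "Editing",
    "submit":  "PendingApproval",
    "approve": "Approved",
    "reject":  "Rejected",
    "cancel":  "Canceled",
}

def apply(state: str, operation: str) -> str:
    """Return the next state, or raise if the operation is not allowed."""
    if operation not in ALLOWED[state]:
        raise ValueError(f"cannot {operation} an RFC in state {state}")
    return NEXT_STATE[operation]

# A newly created RFC starts out editable:
state = "Editing"
state = apply(state, "update")   # refine parameters
state = apply(state, "submit")   # hand off for validation and approval
state = apply(state, "approve")  # AMS (or automation) approves
print(state)                     # → Approved
```

The point of the sketch is that update, submit, approve, reject, and cancel are not interchangeable: each is valid only at a particular point in the RFC's life.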

For a list of AMS reserved prefixes not to be used in tag or other names, see [Reserved prefixes](https://docs.aws.amazon.com/managedservices/latest/userguide/ams-reserved-prefixes.html).

For information on each change type, including schemas and examples, see the [AMS Change Type Reference](https://docs.aws.amazon.com/managedservices/latest/ctref/index.html).

**Note**  
All change management API calls are recorded in AWS CloudTrail. For more information, see [Accessing your logs](https://docs.aws.amazon.com/managedservices/latest/userguide/access-to-logs.html).

# Modes overview
<a name="ams-modes-ug"></a>

Use this information to help you select the appropriate AWS Managed Services (AMS) mode for hosting your applications, based on your desired combination of flexibility and prescriptive governance to achieve your business outcomes.

The intended audience for this information is:
+ Customer teams responsible for the strategy and governance of their landing zone. This information will help the team lay out the foundation of an AMS-managed landing zone, with the AMS modes they’d like to offer to their internal and external customers.
+ Business and application owners tasked with migrating their applications to AMS. This information helps with planning application migration and choosing the appropriate AMS mode to migrate and host each application. Note that the same application can be hosted in more than one AMS mode during different phases of its Software Development Life Cycle (SDLC).
+ AMS partners tasked with guiding customers on the different options to build and migrate to AMS.

This information is most useful during the foundation phase of setting up your AMS-managed platform, and when you are transitioning from the foundation to the migration phase of your cloud adoption journey, just after onboarding to AMS is complete and you're focusing on application governance and operations.

# Types of modes and accounts in AMS
<a name="ams-modes-types"></a>

AWS Managed Services (AMS) modes can be defined as the ways of interacting with the AMS service under the governance framework specific to each mode. Differences between the multi-account landing zone (MALZ) and single-account landing zone (SALZ) are noted where they apply.

**Note**  
For details about application deployment and choosing the right AMS mode, see [AMS modes and applications or workloads](https://docs.aws.amazon.com/managedservices/latest/userguide/ams-modes-and-apps-ug.html).  
For real-world use cases of the different modes, see [Real world use cases for AMS modes](https://docs.aws.amazon.com/managedservices/latest/userguide/ams-modes-use-cases.html).

The following table provides descriptions of the modes per AMS service.


| AMS feature | RFC mode (formerly Standard CM mode) / OOD\* | Direct Change mode | AWS Service Catalog | Self-service provisioning / Developer mode | Customer Managed | 
| --- | --- | --- | --- | --- | --- | 
| Landing Zone Configuration | MALZ and SALZ | MALZ and SALZ | MALZ and SALZ | 
| Change Management | Change scheduling, review of manual changes, and change record | Same as RFC mode for high-risk changes like IAM or security groups | None | 
| Logging, Monitoring, Guardrails, and Event Management | Yes (supported resources) | No | 
| Continuity management | Yes (supported resources) | Not applicable / No | No | 
| Security management | Instance level security controls and account level controls | Account level controls | AWS Org level controls | 
| Patch management | Yes | Not applicable / No | No | 
| Incident and problem management | Response and resolution SLA for AMS supported resources | Response SLA for resulting resources | No | 
| Reporting | Yes | No | 
| Service request management | Yes | Support requests only | No | 

\*Operations On Demand (OOD) has an offering for customers using RFC mode to manage their changes through dedicated resourcing. For more details, see the [Operations on Demand catalog of offerings](https://docs.aws.amazon.com/managedservices/latest/userguide/ood-catalog.html) and talk to your cloud service delivery manager (CSDM).

**Note**  
[Self-Service Provisioning mode in AMS](self-service-provisioning-section.md) and [AMS Advanced Developer mode](developer-mode-section.md) may both appear to be a suitable fit for an application that has a complex architecture rooted in native AWS services. When architecting workloads, you make trade-offs between operational excellence and agility based on your business context, and that is a good way to think about selecting SSP mode or Developer mode for your application. The selection may also change based on the SDLC phase of the application. For example, when the application is production-ready, SSP mode may be the more appropriate option due to the stricter AMS guardrails in this mode. The guardrails are enforced in the form of preventative controls, such as RFC-based change control for IAM updates and SCPs at the application OU level. These business decisions can drive your engineering priorities. For example, you might optimize for flexibility for application owners in the pre-production phase at the expense of governance and operational support.

## MALZ architecture and associated AMS modes
<a name="ams-modes-and-malz"></a>

AMS multi-account landing zone (MALZ) gives you the option to automatically provision application accounts (or resource accounts) under the default Organizational Units (OU): Customer Managed OU, Managed OU, or Development OU. The infrastructure provisioned in the application accounts created under each of these OUs is subject to the specific AMS mode offered by those foundational OUs. It is common to find a mix of two or more modes in the same application account. For example: RFC mode and SSP mode can coexist in an AMS managed account that hosts pipeline architecture consisting of API Gateway and Lambda for trigger functions, and EC2, S3, and SQS for ingestion and orchestration. In this case, SSP mode would apply to Lambda and API Gateway.

Figure 1 presents how different modes are offered through the foundational OUs in AMS. When requesting a new application account in AMS, you must select the OU for the account.


![Diagram showing AWS account structure with Management, Shared Services, Network, Security, and Log Archive accounts.](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/MALZ-high-level-(Mar2021).png)


AMS leverages the foundational OUs based on AWS best practices as a way to logically manage accounts using Service Control Policies (SCPs). This serves as a way to enforce the governance framework with each AMS mode. Any governance and security guardrails (in the form of SCPs) applied to the foundational OUs also get applied to the custom/child OUs automatically. Additional SCPs can be requested for the child OUs. It is important to understand that application accounts are not the same as modes. Modes are applied to the infrastructure provisioned within the accounts and define the operational responsibilities between AMS and customers.

Figure 1: MALZ architecture and associated AMS modes

![Table comparing AMS modes, default governance controls, and support for customer-added controls.](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/ams-modes-guardrails-dcm.png)


**Note**  
"Restrictive" means that you can request custom policies for these OUs; AMS approves them on a case-by-case basis to ensure that they don't interfere with AMS's ability to provide operational excellence. For a detailed list of AMS guardrails, see [AMS Guardrails](https://docs.aws.amazon.com/managedservices/latest/userguide/security-mgmt.html#detective-rules) in the user guide.

# AMS modes and applications or workloads
<a name="ams-modes-and-apps-ug"></a>

Consider operational and governance requirements for your applications when selecting the right mode, either by requesting a new application account or hosting the application in an existing application account. The selection of the appropriate AMS mode for each application or workload depends on the following factors:
+ The type of SDLC function that the environment will provide (for example, a sandbox with unmoderated changes, UAT with frequent changes, or production with minimal, highly regulated changes)
+ The governance policies needed (enforced through SCPs at the OU level)
+ The operational model (whether you want to own operational responsibility or outsource it to AMS)
+ The desired business outcomes, such as time to operate in the cloud and cost of operations

**Note**  
For descriptions of the mode types per AMS service, see [Types of modes and accounts in AMS](https://docs.aws.amazon.com/managedservices/latest/userguide/ams-modes-types.html).  
For real-world use cases of the different modes, see [Real world use cases for AMS modes](https://docs.aws.amazon.com/managedservices/latest/userguide/ams-modes-use-cases.html).

The following table outlines key considerations to help application owners decide on the most suitable AMS mode. Application owners should include an assessment phase ahead of application migration to fully understand which mode applies to their specific application. For example, for applications based on cloud-native services or serverless architecture, the best option could be to start building and iterating in Developer mode and then deploy the final infrastructure as code using AMS-managed SSP mode. In this case, light refactoring may be required to ensure that any CloudFormation templates created for automated deployment meet the ingestion guidelines laid out by AMS. Additionally, any IAM permissions need to be approved by AMS Security to ensure that they follow the least-privilege model.

The AMS mode selected to host the application can help you build toward your desired cloud operating model.

**Note**  
More than one cloud operating model can exist in a single AMS-managed landing zone, based on the different AMS modes selected to host the applications.

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/ams-modes-and-apps-ug.html)

\*Operations On Demand (OOD) has an offering for customers using RFC mode (formerly Standard CM mode) to manage their changes through dedicated resourcing. For more details, see the [Operations on Demand catalog of offerings](https://docs.aws.amazon.com/managedservices/latest/userguide/ood-catalog.html) and talk to your cloud service delivery manager (CSDM).

**Note**  
The price comparison between SSP mode and Developer mode assumes that the same AWS services are provisioned.

Comparing AMS Modes against business and IT objectives

![Comparison of AMS modes showing governance and flexibility against time to operationalize.](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/ams-modes-choosing-dcm.png)


As shown, if you are looking for a highly controlled and standardized governance model for your applications, then the AMS-managed RFC (formerly Standard CM), AWS Service Catalog, or Direct Change modes are the best fit. If you require a bespoke governance model with a focus on application innovation, without the need for operational readiness, select Customer Managed mode. With Customer Managed mode, it could take you longer to operationalize your applications because you bear the responsibility for establishing the people, processes, and tools to support operational capabilities such as incident management, configuration management, provisioning management, security management, and patch management.

# Real world use cases for AMS modes
<a name="ams-modes-use-cases"></a>

Examine these use cases to help determine how to use AMS modes.
+ **Use Case 1, business imperative to lower costs with a time-sensitive data center exit**: An enterprise with a compelling business event, like a data center exit, is interested in re-hosting their on-premises applications on the cloud. Most of the on-premises inventory consists of Windows and Linux servers with a mix of operating system versions. In doing so, the customer also wants to take advantage of the cost savings that moving to the cloud offers and to improve the technical and security posture of their applications. The customer wants to move fast but does not yet have in-house cloud operations expertise. The customer has to find a balance in refactoring: too much refactoring can be risky against a tight timeline, but with some refactoring, like updating OS versions and optimizing databases, applications can achieve the next level of performance. In this example, the customer can select AMS-managed RFC mode to re-host most of their applications. AMS provides infrastructure operations, while also guiding the customer's operations teams on best practices for securely operating in the cloud.

  AMS-managed AWS Service Catalog and AMS-managed Direct Change mode give the customer extra flexibility while achieving the same business outcomes and objectives. In addition, the customer can use the AMS Operations On Demand (OOD) offering to have dedicated AMS operations engineers prioritize the execution of requests for change (RFCs).

  While offloading the undifferentiated infrastructure operational tasks (patching, backups, account management, and so on) to AMS, the customer can continue to focus on optimizing their applications and ramping up their internal teams on cloud operations. AMS provides monthly reports to the customer on cost savings and makes recommendations on resource optimizations. In this use case, if there were end-of-life applications hosted on legacy OS versions, like Windows Server 2003 and 2008, that the customer decided not to refactor, those can also be migrated to AMS and hosted in an account that uses Customer Managed mode.
+ **Use Case 2, building a data lake with Lambda, Glue, and Athena within the secure AMS boundary**: An enterprise is looking to set up a data lake to meet the reporting needs of multiple applications in AMS. The customer wants to use S3 buckets for the storage of datasets and Amazon Athena to query against the dataset for each report. S3 and Athena will be deployed in separate AMS-managed accounts. The account with S3 also has other services, like Glue, Lambda, and Step Functions, to build a data ingestion pipeline. Glue, Lambda, Athena, and Step Functions are considered Self-Service Provisioning (SSP) services in this case. The customer also deployed an EC2 instance in the account that acts as an ad hoc tooling/scripting server. The customer starts by requesting that AMS enable the SSP services in their AMS-managed account. AMS provisions an IAM role for each service that the customer can assume after the role is onboarded to the customer's federation solution. For ease of management, the customer can also combine the policies for the separate IAM roles into one custom role, alleviating the need to switch roles when working between the AWS services. After the role is enabled in the account, the customer is able to configure the services per their requirements. However, the customer must work with the AMS change management system to request additional permissions, depending on their use case.

  For example, Glue needs additional permissions for access to Glue crawlers, and additional permissions are needed to create event sources for Lambda. The customer works with AMS to update IAM roles to allow cross-account access for Athena to query S3 buckets. Updates to service roles or service-linked roles are also needed, through AMS change management, for Lambda to call the Step Functions service and for Glue to read and write to all S3 buckets. AMS works with customers to ensure that the least-privilege access model is followed and that the requested IAM changes are not overly permissive, opening up the environment to unnecessary risk. The customer's data lake team spends time planning for all IAM permissions needed for the services specific to the customer's architecture and requests that AMS enable them. This is because all IAM changes are processed manually and undergo review from the AMS Security team. Time to process these requests should be accounted for in the application deployment schedule.

  After the SSP services are operational in the account, the customer can request support and report issues through AMS incident management and service requests. However, AMS does not actively monitor performance and concurrency metrics for Lambda, or job metrics for Glue. It is the customer's responsibility to ensure that appropriate logging and monitoring is enabled for SSP services. The EC2 instance and S3 bucket in the account are fully managed by AMS.
+ **Use Case 3, quick and flexible setup of a CI/CD deployment pipeline in AMS**: A customer is looking to set up a Jenkins-based CI/CD pipeline to deploy code to all application accounts in AMS. The customer may find it most suitable to host this pipeline in AMS-managed Direct Change mode (DCM) or AMS-managed Developer mode, because these modes give them the flexibility to set up the Jenkins server with the required custom configuration on EC2, with the desired IAM permissions to access CloudFormation and the S3 buckets that host the artifact repository. While this can also be done in AMS-managed RFC mode, the customer team would need to create multiple manual RFCs for IAM roles to iterate on the least permissive set of approved permissions, which are manually reviewed by AMS. This would take time, as well as education on the customer's part to ramp up on AMS processes and tools. Working with Developer mode, the customer can start with a "developer role" to provision infrastructure using native AWS APIs. The quickest and most flexible way to set up this pipeline is AMS-managed Developer mode, at the cost of operational integration, while DCM is less flexible but provides the same level of operational support as RFC mode.
+ **Use Case 4, bespoke operating model within the AMS foundation**: A customer is facing a deadline-driven data center exit, and one of their enterprise applications is fully managed by a third-party MSP, including application operations and infrastructure operations. Assuming that the customer does not have time in the schedule to refactor this application so that it can be operated by AMS, Customer Managed mode is a suitable option. The customer can take advantage of the automated and quick setup of the AMS-managed landing zone. They can leverage the centralized account management that controls account vending and connectivity through the centralized networking account. It also simplifies their billing by consolidating charges for all Customer Managed accounts through the AMS payer account. The customer has the flexibility to set up a bespoke access management model with the MSP, separate from the standard access management used for AMS-managed accounts. This way, using Customer Managed mode, they can set up an AMS-managed environment while meeting their business requirement of vacating their on-premises environment. In this case, if the customer also has Windows-based applications that they are migrating to the cloud, and they choose to move them to a Customer Managed account, the customer is responsible for creating a cloud operating model. This can be complex, expensive, and time-consuming, depending on the customer's ability to transform traditional IT processes and train people. The customer can save time and cost by "lift and shift" of such workloads to an AMS-managed account, offloading infrastructure operations to AMS.
**Note**  
Customers may sometimes feel the need to move application accounts between the governance frameworks of RFC or SSP mode and Developer mode. For example, customers may host an application in AMS-managed mode as part of an initial lift-and-shift migration, but over time want to re-write the application to optimize it for cloud-native AWS services. They could change the mode of the pre-production account from AMS-managed RFC mode to AMS-managed Developer mode, giving them flexibility and agility in provisioning infrastructure. However, once infrastructure provisioning changes have been made using the "developer role", the same infrastructure cannot be moved back to AMS-managed RFC mode. This is because AMS cannot guarantee operations of infrastructure that was provisioned outside of the AMS change management system. Customers may need to create a new application account that offers AMS-managed RFC mode and then re-deploy the "optimized" infrastructure configuration through CloudFormation templates or custom AMIs ingested into an AMS-managed account. This is a clean way to deploy a production-ready configuration. Once deployed, the application is under prescriptive AMS governance and operations. The same applies to switching modes between Customer Managed mode and AMS-managed modes.
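The kind of scoped-down IAM change requested in Use Case 2 can be sketched as a policy document. The bucket name and statement contents below are hypothetical placeholders, not an AMS-approved policy; AMS Security reviews every requested IAM change against the least-privilege model.

```python
import json

# Hypothetical, illustrative policy for the Use Case 2 scenario: allowing a
# Glue/Lambda pipeline role to read and write one specific data lake bucket.
# This is NOT an AMS-approved policy; the bucket name is a placeholder.
bucket = "example-data-lake-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DataLakeObjectAccess",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",   # objects in the bucket
        },
        {
            "Sid": "DataLakeList",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{bucket}",     # the bucket itself
        },
    ],
}

# Serialized, a document like this would accompany the change request that
# asks AMS to update the pipeline role.
print(json.dumps(policy, indent=2))
```

Keeping each statement scoped to a single bucket, with separate object-level and bucket-level statements, is the shape of request that is least likely to be flagged as overly permissive during AMS review.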

# RFC mode
<a name="rfc-mode"></a>

RFC mode is the default mode for AMS Advanced operations plan customers. It includes a change management system with requests for change (RFCs) and a catalog of change types that you use to request the additions or changes that you need in your accounts. This change management system provides a level of security by limiting who can make changes to your accounts.

For details on AMS Advanced change types, see [What Are AMS Change Types?](https://docs.aws.amazon.com/managedservices/latest/ctref/index.html).

For details about onboarding to AMS Advanced, see [AWS Managed Services Onboarding Introduction](https://docs.aws.amazon.com/managedservices/latest/onboardingguide/index.html).

For change type example walkthroughs, see the "Additional Information" section for the relevant change type in the *AMS Advanced Change Type Reference* [Change Types by Classification](https://docs.aws.amazon.com/managedservices/latest/ctref/classifications.html) section.

**Note**  
RFC mode was previously called "Change Management mode" or "Standard CM mode."

**Topics**
+ [Learn about RFCs](ex-rfc-works.md)
+ [What are change types?](understanding-cts.md)
+ [Troubleshooting RFC errors in AMS](rfc-troubleshoot.md)

# Learn about RFCs
<a name="ex-rfc-works"></a>

Requests for change, or RFCs, work in a two-fold manner. First, there are parameters required for the RFC itself. These are the options in the `CreateRfc` API. And second, there are parameters required for the action of the RFC (the execution parameters). To learn about the `CreateRfc` options, see the [CreateRfc](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_CreateRfc.html) section of the *AMS API Reference*. These options typically appear in the **Additional configurations** area of the Create RFC pages.

You can create and submit an RFC with the `CreateRfc` API, `aws amscm create-rfc` CLI, or using the AMS console Create RFC pages. For a tutorial on creating an RFC, see [Create an RFC](ex-rfc-create-col.md).

**Topics**
+ [What are RFCs?](what-r-rfcs.md)
+ [Authenticate when using the AMS API/CLI](ex-rfc-authentication.md)
+ [Understand RFC security reviews](rfc-security.md)
+ [Understand RFC change type classifications](ex-rfc-csio.md)
+ [Understand RFC action and activity states](ex-rfc-action-state.md)
+ [Understand RFC status codes](ex-rfc-status-codes.md)
+ [Understand RFC update CTs and CloudFormation template drift detection](ex-rfc-updates-and-dd.md)
+ [Schedule RFCs](ex-rfc-scheduling.md)
+ [Approve or reject RFCs](ex-rfc-approvals.md)
+ [Request RFC restricted run periods](ex-rfc-restrict-execute.md)
+ [Create, clone, update, find, and cancel RFCs](ex-rfc-use-examples.md)
+ [Use the AMS console with RFCs](ex-rfc-gui.md)
+ [Learn about common RFC parameters](rfc-common-params.md)
+ [Sign up for the RFC daily email](rfc-digest.md)

# What are RFCs?
<a name="what-r-rfcs"></a>

A request for change, or RFC, is how you make a change in your AMS-managed environment, or ask AMS to make a change on your behalf. To create an RFC, you choose from AMS change types, choose RFC parameters (such as schedule), and then submit the request using either the AMS console or the API commands [CreateRfc](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_CreateRfc.html) and [SubmitRfc](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_SubmitRfc.html).

An RFC contains two specifications: one for the RFC itself, and one for the change type (CT) parameters. At the command line, you can use an inline RFC command, or a standard CreateRfc template in JSON format that you fill out and submit along with the CT JSON schema file that you create (based on the CT parameters). The CT name is an informal description of the CT; a CSIO (category, subcategory, item, operation) is a more formal description of a CT. Only the CT ID must be specified when creating an RFC.
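The two specifications can be sketched locally before anything is submitted. In this sketch, the CT ID, version, and execution-parameter names are hypothetical placeholders; the real parameter names for a given CT come from its schema in the AMS Change Type Reference.

```python
import json

# Sketch of the two specifications in an RFC. The CT ID and the
# execution-parameter names below are hypothetical placeholders; consult the
# schema for your change type in the AMS Change Type Reference.

# 1) Parameters for the RFC itself (the CreateRfc options):
rfc_spec = {
    "ChangeTypeId": "ct-0000000000000",  # placeholder CT ID
    "ChangeTypeVersion": "1.0",
    "Title": "Example RFC title",
}

# 2) Parameters for the action of the RFC (the execution parameters),
#    a separate JSON document shaped by the CT's schema:
execution_parameters = {
    "Description": "Example change",
    "VpcId": "vpc-0123456789abcdef0",    # placeholder value
}

# At the CLI, the execution parameters travel as a JSON string, for example
# in the --execution-parameters argument of `aws amscm create-rfc`:
cli_argument = json.dumps(execution_parameters)
print(cli_argument)
```

Keeping the two documents separate mirrors how the console presents them: the RFC options on one page area and the CT's own parameters on another.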

RFCs go through two key stages: Validation and Execution.

1. In the Validation stage, AMS reviews the RFC request for completeness and correctness. AMS also evaluates the request for security in accordance with our [security technical standards](rfc-security.md#rfc-security.title), and validates that the requested change is valid and executable.

1. In the Execution stage, AMS attempts the requested changes in your account.

AMS handles both stages through an automated process, manual process, or a combination of both. The manual process is handled by the AMS Operations team. For more information, see [Automated and manual CTs](ug-automated-or-manual.md).

AMS provides three execution modes for handling requests:
+ **(AMS Recommended) Execution mode: Automated**. These CTs use automation for RFC validations and executions, which is the quickest way to achieve your business outcomes.
+ **(AMS Suggested) Execution mode: Manual and Designation: Managed Automation**. These CTs use a combination of automated and manual processes for RFC validations and executions. If automation cannot execute your requested change, then the RFC is transferred (by either automated routing or by the creation of a replacement RFC) to the AMS Operations team for manual handling. Submission of these CTs allows for a more structured intake of your request, supplemented by AMS automation to improve the handling and execution time frame.
+ **Execution mode: Manual and Designation: Review Required**. Changes requested through [ct-1e1xtak34nx76 Management \| Other \| Other \| Update (review required)](https://docs.aws.amazon.com/managedservices/latest/ctref/management-other-other-update-review-required.html) or [ct-0xdawir96cy7k Management \| Other \| Other \| Create (review required)](https://docs.aws.amazon.com/managedservices/latest/ctref/management-other-other-create-review-required.html). These CTs rely on manual handling for validations and executions, and are dependent on manual interpretation of the change request.

AMS notifies you when the change has completed successfully (Success) or unsuccessfully (Failure).

**Note**  
For information about troubleshooting RFC failures, see [Troubleshooting RFC errors in AMS](rfc-troubleshoot.md).

The following graphic depicts the workflow of an RFC submitted by you.

![The workflow of a customer-submitted RFC.](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/requestForChange-v5g.png)


# Authenticate when using the AMS API/CLI
<a name="ex-rfc-authentication"></a>

When you use the AMS API/CLI, you must authenticate with temporary credentials. To request temporary security credentials for federated users, call the [GetFederationToken](https://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingFedTokens.html), [AssumeRole](https://docs.aws.amazon.com/STS/latest/UsingSTS/sts_delegate.html), [AssumeRoleWithSAML](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_assumerolewithsaml), or [AssumeRoleWithWebIdentity](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_assumerolewithwebidentity) AWS Security Token Service (STS) APIs.

A common choice is SAML. After setup, you add an argument to each operation that you call. For example: `aws --profile saml amscm list-change-type-categories`.

A shortcut for SAML 2.0 profiles is to set the profile variable at the start of each API/CLI session with `set AWS_DEFAULT_PROFILE=saml` (Windows) or `export AWS_DEFAULT_PROFILE=saml` (Linux and macOS). For information about setting CLI environment variables, see [Configuring the AWS Command Line Interface, Environment Variables](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-environment).
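As a sketch of how temporary credentials flow into a CLI session: an STS response carries an access key, secret key, and session token, which map onto the standard AWS environment variables. The credential values below are fake placeholders.

```python
import os

# Sketch: mapping a temporary-credentials response (as returned by the STS
# AssumeRole family of APIs) onto the standard AWS CLI environment
# variables. The credential values here are fake placeholders.
credentials = {
    "AccessKeyId": "ASIAEXAMPLEEXAMPLE",
    "SecretAccessKey": "examplesecretkey/examplesecretkey",
    "SessionToken": "example-session-token",
}

env = {
    "AWS_ACCESS_KEY_ID": credentials["AccessKeyId"],
    "AWS_SECRET_ACCESS_KEY": credentials["SecretAccessKey"],
    "AWS_SESSION_TOKEN": credentials["SessionToken"],
}
os.environ.update(env)

# With these set, subsequent CLI calls (for example,
# `aws amscm list-change-type-categories`) use the temporary credentials.
print(sorted(env))
```

Note that temporary credentials expire, so a federation script typically refreshes these variables before each working session.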

# Understand RFC security reviews
<a name="rfc-security"></a>

The AWS Managed Services (AMS) change management approval process ensures that we perform a security review of changes we make in your accounts.

AMS evaluates all requests for change (RFCs) against AMS technical standards. Any change that might lower your account's security posture by deviating from the technical standards goes through a security review. During the security review, AMS highlights the relevant risk and, in cases of high or very high security risk, your authorized security personnel accept or reject the RFC. All changes are also evaluated to assess for adverse impact on AMS's ability to operate. If potential adverse impacts are found, then additional reviews and approvals are required within AMS.

## AMS technical standards
<a name="rfc-sec-tech-standards"></a>

AMS Technical Standards define the minimum security criteria, configurations, and processes to establish the baseline security of your accounts. These standards must be followed by both AMS and you.

Any change that could potentially lower the security posture of your account by deviating from the technical standards goes through a Risk Acceptance process, where AMS highlights the relevant risk and your authorized security personnel accept or reject it. All such changes are also evaluated to assess whether there would be any adverse impact on AMS's ability to operate the account; if so, additional reviews and approvals are required within AMS.

## RFC customer security risk management (CSRM) process
<a name="rfc-sec-risk"></a>

When someone from your organization requests a change to your managed environment, AMS reviews the change to determine whether the request might deteriorate the security posture of your account by falling outside the technical standards. If the request lowers the security posture of the account, AMS notifies your security team contact of the relevant risk and runs the change. If the change introduces high or very high security risk in the environment, AMS seeks explicit approval from your security team contact in the form of risk acceptance (explained next). The AMS Customer Risk Acceptance process is designed to:
+ Ensure risks are clearly identified and communicated to the right owners
+ Minimize identified risks to your environment
+ Obtain and document approval from the designated security contacts who understand your organization's risk profile
+ Reduce ongoing operational overhead for identified risks

## How to access technical standards and high or very high risks
<a name="rfc-sec-tech-standards-access"></a>

We have made the AMS Technical Standards documentation available for your reference as a report in [AWS Artifact](https://console.aws.amazon.com/artifact/). Use the AMS Technical Standards documentation to determine whether a change requires risk acceptance from your authorized security contact before you submit a request for change (RFC).

Find the Technical Standards report by searching on "AWS Managed Services (AMS) Technical Standards" in the AWS Artifact **Reports** tab search bar after logging in with the default **AWSManagedServicesChangeManagementRole**.

**Note**  
The AMS technical standard document is accessible to the Customer_ReadOnly_Role in single-account landing zone. In multi-account landing zone, the AWSManagedServicesAdminRole (used by security admins) and the AWSManagedServicesChangeManagementRole (used by application teams) can be used to access the document. If your team uses a custom role, create an Other | Other RFC to request access and we will update the specified custom role.

# Understand RFC change type classifications
<a name="ex-rfc-csio"></a>

The change types that you use when submitting an RFC are divided into two broad categories:
+ **Deployment**: This classification is for creating resources.
+ **Management**: This classification is for updating or deleting resources. The **Management** category also contains change types for accessing instances, encrypting or sharing AMIs, and starting, stopping, rebooting, or deleting stacks.

# Understand RFC action and activity states
<a name="ex-rfc-action-state"></a>

`RfcActionState` (API) / **Activity State** (console) help you understand the status of human intervention, or action, on an RFC. Used primarily for manual RFCs, the `RfcActionState` helps you understand when there is action needed by either you or AMS operations, and helps you see when AMS Operations is actively working on your RFC. This provides increased transparency into the actions being taken on an RFC during its lifecycle.

`RfcActionState` (API) / **Activity State** (console) definitions:
+ **AwsOperatorAssigned**: An AWS operator is actively working on your RFC.
+ **AwsActionPending**: A response or action from AWS is expected.
+ **CustomerActionPending**: A response or action from the customer is expected.
+ **NoActionPending**: No action is required from either AWS or the customer.
+ **NotApplicable**: This state can't be set by AWS operators or customers, and is used only for RFCs that were created prior to this functionality being released.

RFC action states differ depending on whether the change type submitted requires manual review and has scheduling set to **ASAP** or not.
+ RFC **ActionState** changes during the review, approval, and start of a manual change type with deferred scheduling:
  + After you submit a manual, scheduled, RFC, the **ActionState** automatically changes to **AwsActionPending** to indicate that an operator needs to review and approve the RFC.
  + When an operator begins actively reviewing your RFC, the **ActionState** changes to **AwsOperatorAssigned**.
  + When the operator approves your RFC, the RFC Status changes to Scheduled, and the **ActionState** automatically changes to **NoActionPending**.
  + When the scheduled start time of the RFC is reached, the RFC Status changes to **InProgress**, and the **ActionState** automatically changes to **AwsActionPending** to indicate that an operator needs to be assigned for review of the RFC.
  + When an operator begins actively running the RFC, they change the **ActionState** to **AwsOperatorAssigned**.
  + Once completed, the Operator closes the RFC. This automatically changes the **ActionState** to **NoActionPending**.  
![\[RFC ActionState changes during the review, approval, and start of a manual change type with deferred scheduling\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/actionStateRfc.png)

**Important**  
Action states can't be set by you. They are either set automatically based on changes in the RFC, or set manually by AMS operators.
If you add correspondence to an RFC, the **ActionState** is automatically set to **AwsActionPending**.
When an RFC is created, the **ActionState** is automatically set to **NoActionPending**.
When an RFC is submitted, the **ActionState** is automatically set to **AwsActionPending**.
When an RFC is Rejected, Canceled, or completed with a status of Success or Failure, the **ActionState** is automatically reset to **NoActionPending**.
Action states are enabled for both automated and manual RFCs, but they matter mostly for manual RFCs because those types of RFCs often require communication.

# Review RFC action states use case examples
<a name="ex-rfc-action-state-examples"></a>

**Use case: Visibility on manual RFC process**
+ Once you submit a manual RFC, the RFC action state automatically changes to `AwsActionPending` to indicate that an operator needs to review and approve the RFC. When an operator begins actively reviewing your RFC, the RFC action state changes to `AwsOperatorAssigned`.
+ Consider a manual RFC that has been approved and scheduled and is ready to begin running. Once the RFC status changes to `InProgress`, the RFC action state automatically changes to `AwsActionPending`. It changes again to `AwsOperatorAssigned` once an operator starts actively running the RFC.
+ When a manual RFC is completed (closed as "Success" or "Failure"), the RFC Action state changes to `NoActionPending` to indicate that no further actions are necessary from either the customer or operator.

**Use case: RFC correspondence**
+ When a manual RFC is `Pending Approval`, an AMS Operator might need further information from you. Operators will post a correspondence to the RFC and change the RFC action state to `CustomerActionPending`. When you respond by adding a new RFC correspondence, the RFC action state automatically changes to `AwsActionPending`.
+ When an automated or manual RFC has failed, you can add a correspondence to the RFC details, asking the AMS Operator why the RFC failed. When your correspondence is added, the RFC action state is automatically set to `AwsActionPending`. When the AMS operator picks up the RFC to view your correspondence, the RFC action state changes to `AwsOperatorAssigned`. When the operator responds by adding a new RFC correspondence, the RFC action state may be set to `CustomerActionPending`, indicating that there is another response from the customer expected, or to `NoActionPending`, indicating that no response from the customer is needed or expected.

# Understand RFC status codes
<a name="ex-rfc-status-codes"></a>

RFC status codes help you track your requests. You can observe these status codes during an RFC run in the CLI output, or by refreshing the RFC list page in the console.

You can also see the codes for an RFC on the details page for that RFC, which might look like this:

![\[RFC status codes.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/guiRfcStatusCodes.png)


You might see an RFC in your list that you didn't submit. When AMS operators use an internal-only CT, they submit it in an RFC and it displays in your RFC list. For more information, see [Internal-only change types](ct-internals.md).

**Important**  
You can request notifications of RFC state changes. For details, see [RFC State Change Notifications](https://docs.aws.amazon.com/managedservices/latest/userguide/rfc-state-change-notices.html).


**RFC status codes**  

**Success**
+ `Editing`: The RFC has been created but not submitted.
+ `PendingApproval` / `Submitted`: The RFC has been submitted and the system is determining whether it requires approval, and obtaining that approval if required.
+ `Approved by AWS` / `Approved by customer`: The RFC has been approved. Automated RFCs are approved by AWS; manual RFCs are approved by operators and, sometimes, customers.
+ `Scheduled`: The RFC has passed syntax and requirement checks and is scheduled to run.
+ `InProgress`: The RFC is being run. RFCs that provision multiple resources, or that have long-running UserData, take longer to run.
+ `Executed`: The RFC has been run.
+ `Success` / `Succeeded`: The RFC has completed successfully.

**Failure**
+ `Rejected`: RFCs are typically rejected because they fail validation; for example, an unusable resource, such as a subnet, is specified.
+ `Canceled`: RFCs are typically canceled because they do not pass validation before the configured start time has passed.
+ `Failure`: The RFC has failed; see the `StatusReason` in the output for failure reasons. AMS operations automatically creates a trouble ticket and communicates with you as needed.

**Note**  
Canceled or rejected RFCs can be re-submitted using [UpdateRfc](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_UpdateRfc.html); see also [Update RFCs](ex-update-rfcs.md).

If the RFC passes all the necessary conditions (for example, all required parameters are specified), the status changes to `PendingApproval` (even automated CTs require approval, which happens automatically if syntax and parameter checks pass). If it does not pass, the status changes to `Rejected`. The `StatusReason` provides information about rejections; the `ExecutionOutput` fields provide information about approval and completion. Error codes include:
+ InvalidRfcStateException: The RFC is in a status that doesn't allow the operation that was called. For example, if the RFC has moved to the Submitted state, it can no longer be modified.
+ InvalidRfcScheduleException: The StartTime, EndTime, or TimeoutInMinutes parameters were breached.
+ InternalServerError: A difficulty with the system was encountered.
+ InvalidArgumentException: A parameter is incorrectly specified; for example, an unacceptable value is used.
+ ResourceNotFoundException: A value, such as the stack ID, cannot be found.
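To see where an RFC sits in this flow, and why it was rejected or failed, you can query the RFC directly. The following is a minimal sketch; the `--query` path assumes the GetRfc response exposes the status name and `StatusReason` as described above. Replace RFC_ID with the appropriate RFC ID.

```shell
# Show an RFC's current status and the reason for a rejection or failure
aws amscm get-rfc --rfc-id RFC_ID \
  --query "Rfc.{Status:Status.Name,Reason:StatusReason}"
```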

If the scheduled requested start and end times (also known as the change run window) occur before the change is approved, the RFC status changes to `Canceled`. If the change is approved, the RFC status changes to `Scheduled`. The change run window for ASAP RFCs is the submitted time plus the `ExpectedExecutionDuration` value for the CT.

At any time before the arrival of the change run window, a scheduled change (submitted with a `RequestedStartTime` in the CLI) can be modified or canceled. If the scheduled change is modified, it must then be re-submitted.

When the change start time arrives (scheduled or ASAP) and after approvals are complete, the status changes to `InProgress` and no modifications can be made. If the change is completed within the specified change run window, the status changes to `Success`. If any part of the change fails, or if the change is still in progress when the change run window ends, the status changes to `Failure`.

**Note**  
During the `InProgress`, `Success`, or `Failure` change states, the RFC cannot be modified or canceled.

The following diagram illustrates the RFC statuses from the CreateRFC call through to resolution.

![\[The RFC statuses from the CreateRFC call through to resolution.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/RfcStateFlow2.png)


# Understand RFC update CTs and CloudFormation template drift detection
<a name="ex-rfc-updates-and-dd"></a>

Resources provisioned in AMS use a modified CloudFormation template. If a resource has a parameter changed directly through a service's AWS Management Console, then the CloudFormation creation record of that resource becomes out of sync. If this happens and you attempt to use an AMS update change type to update the resource, then AMS references the original resource configuration and potentially resets the changed parameters. Because this reset might be damaging, AMS disallows RFCs with update change types if any extra-AMS configuration changes (drift) are detected.

For a list of update change types, use the console filter.

## Drift remediation FAQs
<a name="drift-remeditate-faqs"></a>

Questions and answers on AMS drift remediation. There are two change types that you can use to initiate drift remediation: one has execution mode=manual (also called "managed automation"), the other has execution mode=automated. 

### Drift remediation supported resources (ct-3kinq0u4l33zf)
<a name="drift-remeditate-faqs-sr"></a>

The following resources are supported by the drift remediation change type (ct-3kinq0u4l33zf). For remediation of any other resource, use the "managed automation" (ct-34sxfo53yuzah) change type instead.

```
AWS::EC2::Instance
AWS::EC2::SecurityGroup
AWS::EC2::VPC
AWS::EC2::Subnet
AWS::EC2::NetworkInterface
AWS::EC2::EIP
AWS::EC2::InternetGateway
AWS::EC2::NatGateway
AWS::EC2::NetworkAcl
AWS::EC2::RouteTable
AWS::EC2::Volume
AWS::AutoScaling::AutoScalingGroup
AWS::AutoScaling::LaunchConfiguration
AWS::AutoScaling::LifecycleHook
AWS::AutoScaling::ScalingPolicy
AWS::AutoScaling::ScheduledAction
AWS::ElasticLoadBalancing::LoadBalancer
AWS::ElasticLoadBalancingV2::Listener
AWS::ElasticLoadBalancingV2::ListenerRule
AWS::ElasticLoadBalancingV2::LoadBalancer
AWS::CloudWatch::Alarm
```

### Drift remediation change types
<a name="drift-remeditate-faqs-cts"></a>

Questions and answers on using the AMS drift remediation change types.

For a list of supported resources for the drift remediation feature, see [Drift remediation supported resources (ct-3kinq0u4l33zf)](#drift-remeditate-faqs-sr).

**Important**  
Drift remediation modifies the stack template and/or parameters, so you must update your local template repositories, and any automation that updates these stacks, to use the latest stack template and parameters. Using an old template and/or parameters without syncing can cause damaging changes to the underlying resources.  
The no managed automation, automated, CT (ct-3kinq0u4l33zf) supports remediating only 10 resources per RFC. To remediate the remaining resources, create new RFCs in batches of 10 until all resources are remediated.

Which drift remediation change type should I use?  
We recommend using the **no managed automation**, automated CT (ct-3kinq0u4l33zf) when:  
+ You attempt to perform an update to an existing stack resource using an automated CT and the RFC gets rejected as the stack is `DRIFTED`.
+ You used an Update CT in the past and it failed because the stack was `DRIFTED`. You do not need to attempt the update again; you can use the no managed automation, automated, CT instead.
We recommend using the **managed automation**, manual CT (ct-34sxfo53yuzah) only when drifted resource types are not supported by the drift remediation no managed automation, automated, CT (ct-3kinq0u4l33zf), or when the drift remediation no managed automation, automated, CT fails.

What changes are performed to the stack during remediation?  
Remediation requires updates to the stack template and/or parameters depending on the properties that are drifted. Remediation also updates the stack policy of the stack during remediation and restores the stack policy to its previous value once remediation is completed.

How can we see the changes performed to the stack template and/or parameters?  
In the response to the RFC, a change summary is provided with the following information:  
+ `ChangeSummaryJson`: Contains a change summary of the stack template and/or parameters as part of drift remediation. Remediation is performed in multiple phases, and this change summary consists of the changes for the individual phases. If remediation is successful, check the changes of the last phase. See `ExecutionPlan` in the JSON for the phases executed, in order. For example, the `RestoreReferences` section, when present, is always executed at the end and contains JSON for post-remediation changes. If remediation is run in `DryRun` mode, none of these changes are applied to the stack.
+ `PreRemediationStackTemplateAndConfigurationJson`: Contains a configuration snapshot of the CloudFormation stack, including the template, parameters, outputs, and stack policy body, taken before remediation was triggered on the stack.

What do I need to do once remediation is performed?  
You need to update your local template repositories, and any automation that updates the remediated stack, with the latest template and parameters provided in the RFC summary. This is important because using the old template and/or parameters can cause further destructive changes to the stack resources.

Will my application be affected during this remediation?  
Remediation is an offline process that is performed only on the CloudFormation stack configuration. No updates are performed on the underlying resource.

Can I continue using Management | Other | Other RFCs to perform updates to resources after remediation?  
We recommend that you always perform updates to stack resources using the available automated Update CTs. When the available Update CTs do not support your use case, use Management | Other | Other requests.

Does remediation create any new resources in the stack?  
Remediation does not create any new resources in the stack. However, remediation creates new outputs and updates the stack template [metadata](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html) section to store the remediation summary for your reference.

Will remediation always be successful?  
Remediation requires careful analysis and validation of the template configuration to determine if it can be performed. In scenarios where these validations fail, the remediation process is stopped and no changes are performed to the stack template or parameters. Also, remediation can only be performed on supported resource types.

How can I perform updates to stack resources if remediation is not successful?  
You can use the Management | Other | Other | Update CT (ct-0xdawir96cy7k) to request changes. AMS monitors such scenarios and works towards improving the remediation solution.

Can I remediate stacks that have both supported and unsupported resource types?  
Yes. However, remediation is performed only if the supported resource types are found DRIFTED in the stack. If any unsupported resource types are DRIFTED, remediation does not continue.

Can I request remediation for stacks created through non-CFN Ingest CTs?  
Yes. Remediation can be performed on stacks irrespective of the change type used for creating the stack.

Can I know the changes that would be performed to the stack before remediation?  
Yes. Both change types provide a **DryRun** option that you can use to preview the changes that would be performed if the stack were remediated. However, the final remediation changes may differ depending on the drift present on the stack at the time of remediation.

# Schedule RFCs
<a name="ex-rfc-scheduling"></a>

The **Scheduling** feature allows you to choose a start time for RFCs. The following options are available in the **Scheduling** feature:
+ **Execute this change ASAP**: AMS runs the RFC as soon as it's approved. Most CTs are automatically approved. Use this option if you don't want the RFC to start at a specific time.
+ **Schedule this change**: Set a day, time, and time zone for the RFC to run. For automated change types, it's a best practice to request a start time that's at least 10 minutes after you plan to submit the RFC. For managed automation change types, it's required that you request a start time that's at least 24 hours after you plan to submit the RFC. If the RFC isn't approved by the configured start time, then the RFC is rejected.

## Set an RFC schedule
<a name="ex-rfc-scheduling-schedule"></a>

To schedule an RFC, use one of the following methods:

**Execute this change ASAP**:
+ Console: Do nothing. This uses the default RFC schedule.
+ API or CLI: Remove the `RequestedStartTime` and `RequestedEndTime` options in the Create RFC operation.

**ASAP** "managed automation" RFCs are auto-rejected if they are not approved within thirty days of submission.

**Schedule this change**:
+ Console: Select the **Schedule this change** radio button. A **Start time** area opens. Manually type in a day or use the calendar widget to pick one. Enter a time, in UTC, expressed in ISO 8601 format, and use the drop-down list to pick a location. By default, AMS accepts either of the ISO 8601 formats YYYYMMDDThhmmssZ and YYYY-MM-DDThh:mm:ssZ.
**Note**  
The **Default End Time** is 4 hours from the **Start time** that you enter. To set the **End Time** of your scheduled change beyond 4 hours, use the API or CLI to run the change.
+ API or CLI: Submit values for the `RequestedStartTime` and `RequestedEndTime` parameters in the Create RFC operation. Passing a configured `RequestedEndTime` doesn't stop the run for an automated change type that has already started. For a "managed automation" change type, if the `RequestedEndTime` is reached while AMS Operations research is still ongoing, and you're in communication with AMS, then you can request an extension, or you might be asked to re-submit the RFC. 
**Tip**  
For an example of a UTC time readout, see [UTC](https://time.is/UTC) on the Time-is website. Example ISO 8601 format for a date/time value of 2016-12-05 at 2:20pm: **2016-12-05T14:20:00Z** or **20161205T142000Z**.

If you provide...
+ only a `RequestedStartTime`, the RFC is considered scheduled and the `RequestedEndTime` is populated using the `ExpectedExecutionDurationInMinutes` value.
+ only a `RequestedEndTime`, we throw an `InvalidArgumentException`.
+ both `RequestedStartTime` and `RequestedEndTime`, we overwrite the `RequestedEndTime` with the specified start time plus the `ExpectedExecutionDurationInMinutes` value.
+ neither `RequestedStartTime` nor `RequestedEndTime`, we keep those values as null and the RFC is treated as an ASAP RFC.
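For example, a scheduled Create RFC call from the CLI might look like the following sketch. The `--requested-start-time` and `--requested-end-time` flag names are assumed to mirror the `RequestedStartTime`/`RequestedEndTime` API parameters (verify with `aws amscm create-rfc help`), and CT_ID, VERSION, and the execution parameters are placeholders.

```shell
# Generate ISO 8601 UTC timestamps (GNU date; on macOS use `date -v+2H`)
START_TIME=$(date -u -d "+2 hours" +"%Y-%m-%dT%H:%M:%SZ")
END_TIME=$(date -u -d "+6 hours" +"%Y-%m-%dT%H:%M:%SZ")

# Create a scheduled RFC with placeholder change type and parameters
aws amscm create-rfc --change-type-id "CT_ID" --change-type-version "VERSION" \
  --title "Scheduled example change" \
  --requested-start-time "$START_TIME" --requested-end-time "$END_TIME" \
  --execution-parameters "{\"Description\": \"example\"}"
```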

**Note**  
For all scheduled RFCs, an unspecified end time is set to the specified `RequestedStartTime` plus the `ExpectedExecutionDurationInMinutes` attribute of the submitted change type. For example, if the `ExpectedExecutionDurationInMinutes` is "60" (minutes), and the specified `RequestedStartTime` is `2016-12-05T14:20:00Z` (December 5, 2016 at 2:20 PM), the actual end time would be set to December 5, 2016 at 3:20 PM. To find the `ExpectedExecutionDurationInMinutes` for a specific change type, run this command:  

```
aws amscm --profile saml get-change-type-version --change-type-id CHANGE_TYPE_ID --query "ChangeTypeVersion.{ExpectedDuration:ExpectedExecutionDurationInMinutes}"
```

## Use the RFC Priority option
<a name="ex-rfc-priority"></a>

Use the **Priority** option in `execution mode = manual` change types to alert AMS Operations to the urgency of the request.

**Priority** option in `execution mode = manual`:

Specify the priority of a manual RFC as **High**, **Medium**, or **Low**. RFCs classified as **High** are reviewed and approved prior to RFCs classified as **Medium**, subject to RFC service level objectives (SLOs) and their submission times. RFCs with **Low** priority or no priority specified are processed in the order they are submitted. 

# Approve or reject RFCs
<a name="ex-rfc-approvals"></a>

RFCs submitted with approval-required (manual) CTs must be approved by you or AMS. Pre-approved CTs are automatically processed. For more information, see [CT approval requirements](constrained-unconstrained-ctis.md).

**Note**  
When using manual CTs, AMS recommends that you use the ASAP **Scheduling** option (choose **ASAP** in the console, leave start and end time blank in the API/CLI) as these CTs require an AMS operator to examine the RFC, and possibly communicate with you before it can be approved and run. If you schedule these RFCs, be sure to allow at least 24 hours. If approval does not happen before the scheduled start time, the RFC is rejected automatically.

If an approval-required RFC is submitted by AMS, then it must be explicitly approved by you. Conversely, if you submit an approval-required RFC, then it must be approved by AMS. If you're required to approve an RFC that AMS submitted, then an email or other predetermined communication is sent to you requesting the approval. The communication includes the RFC ID. After the communication is sent, do one of the following:
+ Console Approve or Reject: Use the RFC details page for the relevant RFC:  
![\[RFC details page.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/AMS_Console-App-Rej.png)
+ API / CLI Approve: [ApproveRfc](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_ApproveRfc.html) marks a change as approved. The action must be taken by both the owner and operator, if both are required. The following is an example CLI approve command. In the following example, replace RFC_ID with the appropriate RFC ID.

  ```
  aws amscm approve-rfc --rfc-id RFC_ID
  ```
+ API / CLI Reject: [RejectRfc](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_RejectRfc.html) marks a change as rejected. The following is an example CLI reject command. In the following example, replace RFC_ID with the appropriate RFC ID.

  ```
  aws amscm reject-rfc --rfc-id RFC_ID --reason "no longer relevant"
  ```

# Request RFC restricted run periods
<a name="ex-rfc-restrict-execute"></a>

You can request to restrict certain time periods, formerly known as blackout days. No changes can be run during those times.

To set a restricted run period, use the [UpdateRestrictedExecutionTimes](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_UpdateRestrictedExecutionTimes.html) API operation and set a specific time period, in UTC. The period that you specify overrides any previous periods that were specified. If you submit an RFC during the specified restricted run time, submission fails with the error Invalid RFC Schedule. You can specify up to 200 restricted time periods. By default, no restricted period is set. The following is an example request command (with SAML authentication configured):

```
aws amscm  --profile saml update-restricted-execution-times --restricted-execution-times="[{\"TimeRange\":{\"StartTime\":\"2018-01-01T12:00:00Z\",\"EndTime\":\"2018-01-01T12:00:01Z\"}}]"
```

You can also view your current RestrictedExecutionTimes setting by running the [ListRestrictedExecutionTimes](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_ListRestrictedExecutionTimes.html) API operation. Example:

```
aws amscm  --profile saml list-restricted-execution-times
```

If you want to submit an RFC during a specified restricted execution time, then add the **RestrictedExecutionTimesOverrideId** with the value of **OverrideRestrictedTimeRanges**, and then submit the RFC as you normally would. It's a best practice to only use this method for a critical or emergency RFC. For more information, see the API reference for [SubmitRfc](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_SubmitRfc.html).
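A minimal sketch of such an override submission follows; the CLI flag name is assumed to mirror the `RestrictedExecutionTimesOverrideId` API parameter (verify with `aws amscm submit-rfc help`). Replace RFC_ID with the appropriate RFC ID.

```shell
# Submit an RFC during a restricted run period (emergency use only)
aws amscm submit-rfc --rfc-id RFC_ID \
  --restricted-execution-times-override-id "OverrideRestrictedTimeRanges"
```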

# Create, clone, update, find, and cancel RFCs
<a name="ex-rfc-use-examples"></a>

The following examples walk you through various RFC operations.

**Topics**
+ [Create an RFC](ex-rfc-create-col.md)
+ [Clone RFCs (re-create) with the AMS console](ex-clone-rfcs.md)
+ [Update RFCs](ex-update-rfcs.md)
+ [Find RFCs](ex-rfc-find-col.md)
+ [Cancel RFCs](ex-cancel-rfcs.md)

# Create an RFC
<a name="ex-rfc-create-col"></a>

## Creating an RFC with the console
<a name="ex-rfc-create-con"></a>

The following is the first page of the RFC Create process in the AMS console, with **Quick cards** open and **Browse change types** active:

![\[Quick create section with options for common AWS stack operations and access management.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/quickCreate1.png)


The following is the first page of the RFC Create process in the AMS console, with **Select by category** active:

![\[Create RFC page with change type categorization options for managed services environment.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/guiRfcCreate1-2.png)


How it works:

1. Navigate to the **Create RFC** page: In the left navigation pane of the AMS console click **RFCs** to open the RFCs list page, and then click **Create RFC**.

1. Choose a popular change type (CT) in the default **Browse change types** view, or select a CT in the **Choose by category** view.
   + **Browse by change type**: You can click on a popular CT in the **Quick create** area to immediately open the **Run RFC** page. Note that you cannot choose an older CT version with quick create.

     To sort CTs, use the **All change types** area in either the **Card** or **Table** view. In either view, select a CT and then click **Create RFC** to open the **Run RFC** page. If applicable, a **Create with older version** option appears next to the **Create RFC** button.
   + **Choose by category**: Select a category, subcategory, item, and operation and the CT details box opens with an option to **Create with older version** if applicable. Click **Create RFC** to open the **Run RFC** page.

1. On the **Run RFC** page, open the CT name area to see the CT details box. A **Subject** is required (this is filled in for you if you choose your CT in the **Browse change types** view). Open the **Additional configuration** area to add information about the RFC.

   In the **Execution configuration** area, use available drop-down lists or enter values for the required parameters. To configure optional execution parameters, open the **Additional configuration** area.

1. When finished, click **Run**. If there are no errors, the **RFC successfully created** page displays with the submitted RFC details, and the initial **Run output**. 

1. Open the **Run parameters** area to see the configurations you submitted. Refresh the page to update the RFC execution status. Optionally, cancel the RFC or create a copy of it with the options at the top of the page.

## Creating an RFC with the CLI
<a name="ex-rfc-create-cli"></a>

How it works:

1. Use either Inline Create (you issue a `create-rfc` command with all RFC and execution parameters included) or Template Create (you create two JSON files, one for the RFC parameters and one for the execution parameters, and issue the `create-rfc` command with the two files as input). Both methods are described here.

1. Submit the RFC with the `aws amscm submit-rfc --rfc-id ID` command, using the returned RFC ID.

   Monitor the RFC with the `aws amscm get-rfc --rfc-id ID` command.

To check the change type version, use this command:

```
aws amscm list-change-type-version-summaries --filter Attribute=ChangeTypeId,Value=CT_ID
```
**Note**  
You can use any `CreateRfc` parameters with any RFC whether or not they are part of the schema for the change type. For example, to get notifications when the RFC status changes, add this line, `--notification "{\"Email\": {\"EmailRecipients\" : [\"email@example.com\"]}}"` to the RFC parameters part of the request (not the execution parameters). For a list of all CreateRfc parameters, see the [AMS Change Management API Reference](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_CreateRfc.html).

*INLINE CREATE*:

Issue the create RFC command with the execution parameters provided inline (escape quotes when providing execution parameters inline), and then submit the returned RFC ID. For example:

```
aws amscm create-rfc --change-type-id "CT_ID" --change-type-version "VERSION" --title "TITLE" --execution-parameters "{\"Description\": \"example\"}"
```
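Before submitting, you can sanity-check the escaped JSON locally. This sketch echoes the same execution-parameters string as the example above and confirms that it parses (it assumes `python3` is available on your workstation):

```shell
# The escaped string, exactly as it would be passed to --execution-parameters:
PARAMS="{\"Description\": \"example\"}"
# Pretty-print it; a parse error here means the escaping is wrong.
echo "$PARAMS" | python3 -m json.tool
```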

*TEMPLATE CREATE*:
**Note**  
This example of creating an RFC uses the Load Balancer (ELB) stack change type.

1. Find the relevant CT. The following command searches CT classification summaries for those that contain "ELB" in the **Item** name and creates output of the Category, Item, Operation, and ChangeTypeID in table form (Subcategory for both is `Advanced stack components`).

   ```
   aws amscm list-change-type-classification-summaries --query "ChangeTypeClassificationSummaries[?contains(Item,'ELB')].[Category,Item,Operation,ChangeTypeId]" --output table
   ```

   ```
   ----------------------------------------------------------------------
   |                            CtSummaries                             |
   +------------+---------------------------+--------+------------------+
   | Deployment | Load balancer (ELB) stack | Create | ct-123h45t6uz7jl |
   | Management | Load balancer (ELB) stack | Update | ct-0ltm873rsebx9 |
   +------------+---------------------------+--------+------------------+
   ```

1. Find the most current version of the CT:

   `ChangeTypeId` and `ChangeTypeVersion`: The change type ID for this walkthrough is `ct-123h45t6uz7jl` (create ELB). To find the latest version, run this command:

   ```
   aws amscm list-change-type-version-summaries --filter Attribute=ChangeTypeId,Value=ct-123h45t6uz7jl
   ```

1. Learn the options and requirements. The following command outputs the schema to a JSON file named CreateElbParams.json.

   ```
   aws amscm get-change-type-version --change-type-id "ct-123h45t6uz7jl" --query "ChangeTypeVersion.ExecutionInputSchema" --output text > CreateElbParams.json
   ```

1. Modify and save the execution parameters JSON file. This example names the file CreateElbParams.json.

   For a provisioning CT, the StackTemplateId is included in the schema and must be submitted in the execution parameters.

   For `TimeoutInMinutes`, specify how many minutes are allowed for the creation of the stack before the RFC fails. This setting doesn't delay the RFC execution, but you must allow enough time (for example, don't specify "5"). Valid values are "60" up to "360" for the CTs with long-running UserData: Create EC2 and Create ASG. For all other provisioning CTs, we recommend the maximum allowed value of "60".

   Provide the ID of the VPC where you want the stack to be created; you can get the VPC ID with the CLI command `aws amsskms list-vpc-summaries`.

   ```
   {
   "Description":      "ELB-Create-RFC", 
   "VpcId":            "VPC_ID", 
   "StackTemplateId":  "stm-sdhopv00000000000", 
   "Name":             "MyElbInstance",
   "TimeoutInMinutes": 60,
   "Parameters":   {
       "ELBSubnetIds":                     ["SUBNET_ID"],
       "ELBHealthCheckHealthyThreshold":   4,
       "ELBHealthCheckInterval":           5,
       "ELBHealthCheckTarget":             "HTTP:80/",
       "ELBHealthCheckTimeout":            60,
       "ELBHealthCheckUnhealthyThreshold": 5,
       "ELBScheme":                        false
       }
   }
   ```

1. Output the RFC JSON template to a file in your current folder named CreateElbRfc.json:

   ```
   aws amscm create-rfc --generate-cli-skeleton > CreateElbRfc.json
   ```

1. Modify and save the CreateElbRfc.json file. Because you created the execution parameters in a separate file, remove the `ExecutionParameters` line. For example, you can replace the contents with something like this:

   ```
   {
   "ChangeTypeVersion":    "2.0",
   "ChangeTypeId":         "ct-123h45t6uz7jl",
   "Title":                "Create ELB"
   }
   ```

1. Create the RFC. The following command specifies the execution parameters file and the RFC template file:

   ```
   aws amscm create-rfc --cli-input-json file://CreateElbRfc.json --execution-parameters file://CreateElbParams.json
   ```

   You receive the ID of the new RFC in the response and can use it to submit and monitor the RFC. Until you submit it, the RFC remains in the editing state and does not start.

## Tips
<a name="ex-rfc-create-tip"></a>

**Note**  
You can use the AMS API/CLI to create an RFC without creating an RFC JSON file or a CT execution parameters JSON file. To do this, you use the `create-rfc` command and add the required RFC and execution parameters to the command; this is called "Inline Create". Note that all provisioning CTs contain, within the `execution-parameters` block, a `Parameters` array with the parameters for the resource. The parameters must have quotation marks escaped with a backslash (\).  
The other documented method of creating an RFC is called "Template Create." This is where you create a JSON file for the RFC parameters and another JSON file for the execution parameters, and submit the two files with the `create-rfc` command. These files can serve as templates and be re-used for future RFCs.  
When creating RFCs with templates, you can create the JSON files directly from the command line; for example, a heredoc redirected into a file named "parameters.json" creates the execution parameters file, and the same approach works for the RFC JSON file.
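As a sketch of that approach, the following creates a parameters file from the command line with a heredoc (the file name "parameters.json" and its contents are illustrative placeholders, not a real change-type schema):

```shell
# Write the execution parameters to a file without opening an editor.
cat > parameters.json << 'EOF'
{
  "Description": "example-rfc",
  "Name": "example-stack"
}
EOF
# Confirm the file is valid JSON (assumes python3 is available).
python3 -m json.tool parameters.json > /dev/null && echo "parameters.json is valid JSON"
```

The same technique works for the RFC JSON file.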

# Clone RFCs (re-create) with the AMS console
<a name="ex-clone-rfcs"></a>

You can use the AMS console to clone an existing RFC.

To clone, or recreate, an RFC by using the AMS console, follow these steps:

1. Find the relevant RFC. From the left navigation, click **RFCs**. 

   The RFCs dashboard opens.

1. Scroll through the pages until you find the RFC you want to clone. Use the **Filter** option to narrow the list. Choose the RFC that you want to clone.

   The RFC details page opens.

1. Click **Create a Copy**.

   The **Create a request for change** page opens with all options set as in the original RFC.

1. Make the changes you want. To set additional options, change the **Basic** option to **Advanced**. After you have set all options, choose **Submit**.

   The active RFC details page opens with a new RFC ID for the cloned RFC and the cloned RFC appears in the RFC dashboard.

# Update RFCs
<a name="ex-update-rfcs"></a>

You can resubmit an RFC that was rejected, or that has not yet been submitted, by updating the RFC and then submitting (or re-submitting) it. Most RFCs are rejected because the specified `RequestedStartTime` passed before submission, or because the specified `TimeoutInMinutes` is inadequate to run the RFC. Because `TimeoutInMinutes` does not prolong a successful RFC, we recommend always setting it to at least "60", and up to "360" for an Amazon EC2 instance or an Amazon EC2 Auto Scaling group with long-running UserData. This section describes how to use the CLI version of the `UpdateRfc` command to update an RFC with new RFC parameters or new execution parameters, using either stringified JSON or an updated parameters file.

This example describes using the CLI version of the AMS UpdateRfc API (see [Update RFC](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/update-rfc.html)). While there are change types for updating some resources (DNS private and public, load balancer stacks, and stack patching configuration), there is no CT to update an RFC.

We recommend that you submit one UpdateRfc operation at a time. If you submit multiple updates, for example on a DNS stack, the updates might fail when they attempt to update the DNS at the same time.

REQUIRED DATA: `RfcId`: The RFC you're updating.

OPTIONAL DATA: `ExecutionParameters`: Unless you're updating a non-required field, like `Description`, you would submit modified execution parameters to address the issues that caused the RFC to be rejected or canceled. All submitted non-null values overwrite those values in the original RFC.

1. Find the relevant rejected or canceled RFC. You can use this command (substitute the value `Canceled` to find canceled RFCs):

   ```
   aws amscm list-rfc-summaries --filter Attribute=RfcStatusId,Value=Rejected
   ```

1. You can modify any of the following RFC parameters:

   ```
   {
       "Description": "string",
       "ExecutionParameters": "string",
       "ExpectedOutcome": "string",
       "ImplementationPlan": "string",
       "RequestedEndTime": "string",
       "RequestedStartTime": "string",
       "RfcId": "string",
       "RollbackPlan": "string",
       "Title": "string",
       "WorstCaseScenario": "string"}
   ```

   Example command updating the Description field:

   ```
   aws amscm update-rfc --description "AMSTestNoOpsActionRequired" --rfc-id "RFC_ID" --region us-east-1
   ```

   Example command updating the ExecutionParameters VpcId field:

   ```
   aws amscm update-rfc --execution-parameters "{\"VpcId\":\"VPC_ID\"}" --rfc-id "RFC_ID" --region us-east-1
   ```

   Example command updating the RFC with an execution parameters file that contains the updates; see the example execution parameters file in step 2 of [EC2 stack | Create](https://docs.aws.amazon.com/managedservices/latest/ctref/deployment-advanced-ec2-stack-create.html):

   ```
   aws amscm update-rfc --execution-parameters file://CreateEc2ParamsUpdate.json --rfc-id "RFC_ID" --region us-east-1
   ```
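   Before running the file-based form, you can build and check the update file from the command line. This is a sketch; `VPC_ID` is a placeholder to replace with a real value, and `python3` is assumed to be available:

   ```shell
   # Write the execution-parameter updates to the file used in the example above.
   printf '{\n  "VpcId": "VPC_ID"\n}\n' > CreateEc2ParamsUpdate.json
   # A parse error here means the file is not valid JSON.
   python3 -m json.tool CreateEc2ParamsUpdate.json
   ```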

1. Resubmit the RFC using `submit-rfc` and the same RFC ID that you have from when the RFC was first created:

   ```
   aws amscm submit-rfc --rfc-id RFC_ID
   ```

   If the submission succeeds, you receive no confirmation or error messages at the command line.

1. To monitor the status of the request and to view Execution Output, run the following command.

   ```
   aws amscm get-rfc --rfc-id RFC_ID
   ```

# Find RFCs
<a name="ex-rfc-find-col"></a>

## Find a request for change (RFC) with the console
<a name="ex-rfc-find-con"></a>

To find an RFC by using the AMS console, follow these steps.
**Note**  
This procedure applies only to scheduled RFCs, that is, RFCs that did not use the **ASAP** option.

1. From the left navigation, click **RFCs**.

   The RFCs dashboard opens.

1. Scroll through the list or use the **Filter** option to refine the list.

   The RFC list changes per filter criteria.

1. Choose the Subject link for the RFC you want.

   The RFC details page opens for that RFC with information including RFC ID.

1. If there are many RFCs in the dashboard, you can use the **Filter** option to search by RFC:
   + **Subject**: The subject line, or title (in the API/CLI) given to the RFC when it was created.
   + **RFC ID**: The identifier for the RFC.
   + **Activity state**: If you know the RFC state, you can choose between **AwsOperatorAssigned**, meaning an operator is currently looking at the RFC; **AwsActionPending**, meaning that an AMS operator must act before the RFC execution can proceed; or **CustomerActionPending**, meaning that you need to take some action before the RFC execution can proceed.
   + **Status**: If you know the RFC status, you can choose between:
     + **Scheduled**: RFCs that were scheduled.
     + **Canceled**: RFCs that were canceled.
     + **In progress**: RFCs in progress.
     + **Success**: RFCs that executed successfully.
     + **Rejected**: RFCs that were rejected.
     + **Editing**: RFCs that are being edited.
     + **Failure**: RFCs that failed.
     + **Pending approval**: RFCs that cannot proceed until either AMS or you approve. Typically, this indicates that you need to approve the RFC. You will have gotten a service notification of this in your Service Requests list.
   + **Change type**: Pick the **Category**, **Subcategory**, **Item**, and **Operation**, and the change type ID is retrieved for you.
   + **Requested start time** or **Requested end time**: This filter option lets you choose **Before** or **After**, and then enter a **Date** and, optionally, a **Time** (hh:mm and time zone). This filter operates successfully only on scheduled RFCs (not ASAP RFCs).
   + **Change type ID**: Use the identifier for the change type submitted with the RFC.

   The search allows you to add the filters, as shown in the following screenshot.  
![\[Search or filter options including Subject, RFC ID, Activity state, and various time-related fields.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/filterRfcAllOptions3.png)

1. Click on the Subject link for the RFC you want.

   The RFC details page opens for that RFC with information including RFC ID.

## Finding a request for change (RFC) with the CLI
<a name="ex-rfc-find-cli"></a>

You can use multiple filters to find an RFC.

To check the change type version, use this command:

```
aws amscm list-change-type-version-summaries --filter Attribute=ChangeTypeId,Value=CT_ID
```
**Note**  
You can use any `CreateRfc` parameters with any RFC whether or not they are part of the schema for the change type. For example, to get notifications when the RFC status changes, add this line, `--notification "{\"Email\": {\"EmailRecipients\" : [\"email@example.com\"]}}"` to the RFC parameters part of the request (not the execution parameters). For a list of all CreateRfc parameters, see the [AMS Change Management API Reference](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_CreateRfc.html).

If you don’t write down the RFC ID, and need to find it later, you can use the AMS change management (CM) system to search for it and narrow the results with a filter or query.

1. The CM API [ListRfcSummaries](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_ListRfcSummaries.html) operation has filters. You can [Filter](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_Filter.html) results based on an `Attribute` and `Value` combined in a logical AND operation, or based on an `Attribute`, a `Condition`, and `Values`.  
**RFC filtering**    
<a name="rfc-filtering-table"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/ex-rfc-find-col.html)

   Examples:

   To find the IDs of all the RFCs related to SQS (where SQS is contained in the Item portion of the CT), you can use this command:

   ```
   aws amscm list-rfc-summaries --query 'RfcSummaries[?contains(Item.Name,`SQS`)].[Category.Id,Subcategory.Id,Type.Id,Item.Id,RfcId]' --output table
   ```

   Which returns something like this:

   ```
   ----------------------------------------------------------------------------
   |                             ListRfcSummaries                             |
   +------------+---------------------------+-----+--------+------------------+
   | Deployment | Advanced Stack Components | SQS | Create | ct-123h45t6uz7jl |
   | Management | Monitoring & Notification | SQS | Update | ct-123h45t6uz7jl |
   +------------+---------------------------+-----+--------+------------------+
   ```

   Another filter available for `list-rfc-summaries` is `AutomationStatusId`, to look for RFCs that are automated or manual:

   ```
   aws amscm list-rfc-summaries --filter Attribute=AutomationStatusId,Value=Automated
   ```

   Another filter available for `list-rfc-summaries` is `Title` (**Subject** in the console):

   ```
    Attribute=Title,Value=RFC-TITLE
   ```

   Example of the new request structure in JSON that returns RFCs where:
   + (Title CONTAINS the phrase "Windows 2012" OR "Amazon Linux") AND
   + (RfcStatusId EQUALS "Success" OR "InProgress") AND
   + (20170101T000000Z <= RequestedStartTime <= 20170103T000000Z) AND (ActualEndTime <= 20170103T000000Z)

   ```
   {
     "Filters": [
       {
         "Attribute": "Title",
         "Values": ["Windows 2012", "Amazon Linux"],
         "Condition": "Contains"
       },
       {
         "Attribute": "RfcStatusId",
         "Values": ["Success", "InProgress"],
         "Condition": "Equals"
       },
       {
         "Attribute": "RequestedStartTime",
         "Values": ["20170101T000000Z", "20170103T000000Z"],
         "Condition": "Between"
       },
       {
         "Attribute": "ActualEndTime",
         "Values": ["20170103T000000Z"],
         "Condition": "Before"
       }
     ]
   }
   ```
**Note**  
With the more advanced `Filters` field, AMS intends to deprecate the following fields in an upcoming release:  
Value: The Value field is part of the Filters field. Use the Values field, which supports more advanced functionality.  
RequestedEndTimeRange: Use the RequestedEndTime attribute inside the Filters field, which supports more advanced functionality.  
RequestedStartTimeRange: Use the RequestedStartTime attribute inside the Filters field, which supports more advanced functionality.

   For information about using CLI queries, see [ How to Filter the Output with the --query Option](https://docs.aws.amazon.com/cli/latest/userguide/controlling-output.html#controlling-output-filter) and the query language reference, [JMESPath Specification](http://jmespath.org/specification.html).
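   To reuse a filter like this, you can keep it in a file. This sketch writes a minimal version and validates it locally; whether your `amscm` CLI accepts the file through an option such as `--cli-input-json` is an assumption to verify against the API reference:

   ```shell
   # Save a minimal Filters request structure for reuse (illustrative values).
   printf '%s\n' '{"Filters": [{"Attribute": "RfcStatusId", "Values": ["Success"], "Condition": "Equals"}]}' > rfc-filters.json
   # Pretty-print to confirm it is valid JSON (assumes python3 is available).
   python3 -m json.tool rfc-filters.json
   ```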

1. If you're using the AMS console:

   Go to the **RFCs** list page. If needed, you can filter on the RFC **Subject**, which is what you entered as the RFC `Title` when you created it.

## Tips
<a name="ex-rfc-find-tip"></a>

**Note**  
This procedure applies only to scheduled RFCs, that is, RFCs that did not use the **ASAP** option.

# Cancel RFCs
<a name="ex-cancel-rfcs"></a>

You can cancel an RFC using the Console or the AMS API/CLI.

To cancel an RFC with the console, find the RFC in your RFC list, open it, and then click **Cancel**.

Required Data:
+ `Reason`: Why you are canceling the RFC.
+ `RfcId`: The RFC you are canceling.

1. Typically you cancel an RFC right after submitting it, so the RFC ID should be handy. Otherwise, you can cancel an RFC only if it was scheduled and the specified start time has not yet passed. If you need to find the RFC ID, you can use this command (substitute the `Value` with `PendingApproval` for an RFC that is manually approved):

   ```
   aws amscm list-rfc-summaries --filter Attribute=RfcStatusId,Value=Scheduled
   ```

1. Example command to cancel an RFC:

   ```
   aws amscm cancel-rfc --reason "Bad Stack ID" --rfc-id "RFC_ID" --profile saml --region us-east-1
   ```

# Use the AMS console with RFCs
<a name="ex-rfc-gui"></a>

The AMS console provides features to help you succeed with creating and submitting RFCs.

## Use the RFC List page (Console)
<a name="ex-rfc-list-table"></a>

The AMS console **RFCs** list page provides you with the following options:
+ Advanced RFC search through a **Filter**. For information, see [Find RFCs](ex-rfc-find-col.md).
+ Finding the last time the RFC was **Modified**. This value represents the last time that the RFC status changed.
+ Viewing RFC details with the RFC **Subject**. Choosing this link opens the details page for that RFC.
+ Viewing RFC status. For information, see [Understand RFC status codes](ex-rfc-status-codes.md).

![\[RFC list page.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/guiRfcListTable.png)


## Use RFC quick create (console)
<a name="ex-rfc-create-qc"></a>

Use the RFC quick create cards or list table, or choose change types for RFCs by classification.

To learn more, see [Create an RFC](ex-rfc-create-col.md).

## Add RFC correspondence and attachments (console)
<a name="ex-rfc-correspondence"></a>

You can add correspondence to an RFC after it has been submitted and before it is approved; for example, while it's in the "PendingApproval" state. After an RFC is approved (in a state of "Scheduled" or "InProgress"), correspondence can't be added, because it could be construed as a change to the request. After an RFC is completed (in a state of "Canceled", "Rejected", "Success", or "Failure"), correspondence is once again enabled; however, correspondence is disabled after an RFC has been closed for more than 30 days.

**Note**  
Each correspondence is limited to 5,000 characters.

**Limitations for attachments:**
+ A maximum of three attachments per correspondence.
+ A maximum of fifty attachments per RFC.
+ Each attachment must be less than 5 MB in size.
+ Only text files are accepted such as plaintext (`.txt`), comma-separated values (`.csv`), JSON (`.json`), or YAML (`.yaml`). In the case of YAML format, the file must be attached using file extension `.yaml`.
**Note**  
Text files that have XML content are prohibited. If you have XML content to share with AMS, use a service request.
+ File names are limited to 255 characters, and can contain only numbers, letters, spaces, dashes (-), underscores (_), and dots (.).
+ Updating and deleting attachments on an RFC is not currently supported.

To add correspondence and attachments to an RFC, follow these steps:

1. In the AMS console, on the RFC details page for an RFC, find the **Correspondence** section at the bottom of the page.

   Before any correspondence:  
![\[Empty correspondence section.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/correspondence-rfc-detail-new.png)

   After some correspondence:  
![\[Correspondence section showing reply form and received correspondence.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/correspondence-reply-form2.png)

1. To add a new correspondence, type your message in the **Reply** text box. To attach files related to the correspondence, choose **Add Attachment**, and then choose the files you want.  
![\[Correspondence section showing comment box and attachments.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/correspondence-add-attachments.png)

1. When you're finished, choose **Submit**.

   The new correspondence, along with links to the attached files, appear in the correspondence list on the RFC details page.  
![\[List of received correspondence.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/correspondence-list2.png)

# Configure RFC email notifications (console)
<a name="ex-rfc-email-notices"></a>

The AMS console **Requests for Change** create page provides you with an option to add email addresses to receive notifications of RFC state changes:

![\[Add email addresses to receive notifications of RFC state changes.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/emailNoticeOption2.png)


Additionally, you can add email addresses for notifications to any change type, for example:

```
aws amscm create-rfc --change-type-id <Change type ID>
                    --change-type-version 1.0 --title "TITLE"
                    --notification "{\"Email\": {\"EmailRecipients\" : [\"email@example.com\"]}}"
```

Add a similar line (`--notification "{\"Email\": {\"EmailRecipients\" : [\"email@example.com\"]}}"`) to any change type inline or template request, in the RFC parameters part of the request, not the execution parameters part.
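On Linux and macOS shells, single-quoting the JSON argument avoids the backslash escaping entirely. This sketch shows the equivalent notification string and checks locally that it parses (assumes `python3` is available); quoting rules differ on Windows shells, so the escaped form is the safer default there:

```shell
# Single-quoted form of the notification JSON; no escaping needed.
NOTIFY='{"Email": {"EmailRecipients": ["email@example.com"]}}'
echo "$NOTIFY" | python3 -m json.tool > /dev/null && echo "notification JSON is valid"
```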

# Learn about common RFC parameters
<a name="rfc-common-params"></a>

The following are RFC parameters that you are required to submit, and parameters that are commonly used in RFCs:
+ Change type information: ChangeTypeId and ChangeTypeVersion. For a list of change type IDs and version numbers, see [Change Type Reference](https://docs.aws.amazon.com/managedservices/latest/ctref/index.html).

  Run `list-change-type-classification-summaries` in the CLI with the `--query` argument to narrow the results; for example, to change types that contain "Access" in the `Item` name.

  ```
  aws amscm list-change-type-classification-summaries --query "ChangeTypeClassificationSummaries[?contains(Item,'Access')].[Category,Subcategory,Item,Operation,ChangeTypeId]" --output table
  ```

  Run `get-change-type-version` and specify the change type ID. The following command gets the CT version for ct-2tylseo8rxfsc. 

  ```
  aws amscm get-change-type-version --change-type-id ct-2tylseo8rxfsc
  ```
+ Title: A name for the RFC. This becomes the **Subject** of the RFC in the AMS console RFC list, and you can search on it with the `list-rfc-summaries` command and a filter on `Title`.
+ Scheduling: If you want a scheduled RFC, you must include the `RequestedStartTime` and `RequestedEndTime` parameters, or use the **Schedule this change** console option. For an **ASAP** RFC (that runs as soon as it's approved), when using the CLI, leave `RequestedStartTime` and `RequestedEndTime` null. When using the console, accept the **ASAP** option. 

  If the `RequestedStartTime` is missed, the RFC is rejected.
+ Provisioning CTs: The execution parameters, or `Parameters` are the specific settings that are required to provision the resource. They vary widely depending on the CT.
+ Non-provisioning CTs: CTs that do not provision a resource, such as access CTs, Other | Other, or delete stack, have minimal execution parameters and no `Parameters` block.
+ Some RFCs also require that you specify a `TimeoutInMinutes`, or how many minutes are allowed for the creation of the stack before the RFC is failed. Valid values are 60 (minutes) up to 360, for long-running UserData. If the execution can't be completed before the `TimeoutInMinutes` is exceeded, the RFC fails. However, this setting doesn't delay the execution of the RFC.
+ RFCs that create instances, such as an S3 bucket or an ELB, generally provide a schema that allows you to add up to seven tags (key/value pairs). You can add more tags to your S3 bucket by submitting an RFC using the Deployment | Advanced stack components | Tag | Create change type (ct-3cx7we852p3af). EC2, EFS, RDS, and the multi-tiered (HA Two-Tiered and HA One-Tiered) schemas allow up to fifty tags. Tags are specified in the `ExecutionParameters` part of the schema. Providing tags can be of great value. For more information, see [Tagging Your Amazon EC2 Resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html). 

  When using the AMS console, you must open the **Additional configuration** area in order to add tags.<a name="using-tags-tip"></a>
**Tip**  
Many CT schemas have a `Description` and `Name` field near the top of the schema. Those fields are used to name the stack or stack component, they don't name the resource you're creating. Some schemas offer a parameter to name the resource you're creating, and some do not. For example, the CT schema for Create EC2 stack doesn't offer a parameter to name the EC2 instance. In order to do so, you must create a tag with the key "Name" and the value of what you want the name to be. If you do not create such a tag, your EC2 instance displays in the EC2 console without a name attribute. 
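As a hypothetical sketch only (the key names and nesting vary by CT schema, so confirm against the schema for your change type), a "Name" tag in the execution parameters might look like this:

```
"Tags": [
    {
        "Key": "Name",
        "Value": "my-ec2-instance"
    }
]
```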

## Use the RFC AWS Region option
<a name="ex-rfc-region"></a>

The AMS API and CLI (`amscm` and `amsskms`) endpoints are in `us-east-1`. If you federate with Security Assertion Markup Language (SAML), then scripts are provided to you at onboarding that set your AWS Region to us-east-1. If you use SAML, then you don't need to specify the `--region` option when you issue a command. If your SAML is configured to use us-east-1 but your account isn't in that AWS Region, then you must specify your account-onboarded Region when you issue other AWS commands (for example, `aws s3`).

**Note**  
Most of the command examples provided in this guide don't include the `--region` option.

# Sign up for the RFC daily email
<a name="rfc-digest"></a>

You can sign up for a daily email summarizing the RFC activity in your account over the last 24 hours using the RFC digest feature. The RFC digest feature is a streamlined process that reduces the number of email notifications you receive regarding your account's RFCs. The RFC digest might reduce the likelihood that you miss actions that are pending your response.

To turn on the RFC digest feature, contact your AMS Cloud Service Delivery Manager (CSDM). The CSDM subscribes you. You can request up to 20 email addresses (or aliases) to include on your RFC digest email list. The current email schedule is fixed at 09:00 UTC-8.

To turn off the RFC digest feature, contact your CSDM with your request.

If you don't set up RFC digest and want notifications regarding your RFCs, or if you want more detailed information on your RFCs than what the RFC digest provides, then use the Change Management System to set up CloudWatch Events notifications or email notifications for every individual RFC that you want information on. For information on setting up RFC notifications, see [RFC State Change Notifications](https://docs.aws.amazon.com/managedservices/latest/userguide/rfc-state-change-notices.html).

The topics contained in the RFC digest include the following:
+ Pending Customer Approval: Lists RFCs that are in **PendingApproval** status, awaiting your approval
+ Pending Customer Reply: Lists RFCs that are awaiting your reply on RFC correspondence
+ Pending AWS Approval or Reply: Lists RFCs that are waiting on AMS for reply or approval
+ Completed: Lists RFCs in **Success**, **Failure**, **Cancelled** and **Rejected** status

The following is an example RFC digest:

![\[Example RFC digest\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/RFCDigestExample.png)


# What are change types?
<a name="understanding-cts"></a>

Change type refers to the action that an AWS Managed Services (AMS) request for change (RFC) performs, and encompasses both the change action itself and the type of change (manual or automated). AMS has a large collection of change types that are not used by other AWS services. You use these change types when submitting a request for change (RFC) to deploy, manage, or gain access to resources.

**Topics**
+ [Automated and manual CTs](ug-automated-or-manual.md)
+ [CT approval requirements](constrained-unconstrained-ctis.md)
+ [Change type versions](ct-versions.md)
+ [Create change types](ct-creates.md)
+ [Update change types](ct-updates.md)
+ [Internal-only change types](ct-internals.md)
+ [Change type schemas](ct-schemas.md)
+ [Managing permissions for change types](ct-permissions.md)
+ [Redacting sensitive information from change types](ct-redaction.md)
+ [Finding a change type, using the query option](ug-find-ct-ex-section.md)

# Automated and manual CTs
<a name="ug-automated-or-manual"></a>

A constraint on change types is whether they are automated or manual; this is the change type `AutomationStatusId` attribute, called the **Execution mode** in the AMS console.

Automated change types have expected results and execution times and run through the AMS automated system, generally within an hour or less (this largely depends on what resources the CT is provisioning). Manual change types are uncommon, but they are treated differently because they require that an AMS operator act on the RFC before it can be run. That sometimes means communicating with the RFC submitter, so manual change types require varying lengths of time to complete.

For all scheduled RFCs, an unspecified end time is set to the specified `RequestedStartTime` plus the `ExpectedExecutionDurationInMinutes` attribute of the submitted change type. For example, if the `ExpectedExecutionDurationInMinutes` is "60" (minutes), and the specified `RequestedStartTime` is `2016-12-05T14:20:00Z` (December 5, 2016 at 2:20 PM UTC), the actual end time would be set to December 5, 2016 at 3:20 PM UTC. To find the `ExpectedExecutionDurationInMinutes` for a specific change type, run this command:

```
aws amscm --profile saml get-change-type-version --change-type-id CHANGE_TYPE_ID --query "ChangeTypeVersion.{ExpectedDuration:ExpectedExecutionDurationInMinutes}"
```
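You can compute the implied end time yourself from the start time and duration; the following is a local illustration using GNU `date` (not an AMS command), with the values from the example above:

```
# Implied end time = RequestedStartTime + ExpectedExecutionDurationInMinutes.
# Local illustration with GNU date; values taken from the example above.
REQUESTED_START_TIME="2016-12-05T14:20:00Z"
EXPECTED_DURATION_MINUTES=60
date -u -d "${REQUESTED_START_TIME} + ${EXPECTED_DURATION_MINUTES} minutes" +%Y-%m-%dT%H:%M:%SZ
# prints: 2016-12-05T15:20:00Z
```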

**Note**  
Scheduled RFCs with **Execution mode** = Manual in the console must be set to run at least 24 hours in the future.

**Note**  
When using manual CTs, AMS recommends that you use the ASAP **Scheduling** option (choose **ASAP** in the console, leave start and end time blank in the API/CLI) as these CTs require an AMS operator to examine the RFC, and possibly communicate with you before it can be approved and run. If you schedule these RFCs, be sure to allow at least 24 hours. If approval does not happen before the scheduled start time, the RFC is rejected automatically.

AMS aims to respond to a manual CT within four hours, and will correspond as soon as possible, but it could take much longer for the RFC to actually be run.

For a list of the CTs that are Manual and require AMS review, see the Change Type CSV file, available on the **Developer's Resources** page of the Console.

**YouTube Video**: [ How can I find automated change types for AMS RFCs?](https://www.youtube.com/watch?v=sOzDuCCOduI&list=PLhr1KZpdzukc_VXASRqOUSM5AJgtHat6-&index=2&t=1s)

To find the **Execution mode** for a CT in the AMS console, you must use the **Browse change types** search option. The results show the execution mode of the matching change type or change types.

To find the `AutomationStatus` for a specific change type by using the AMS CLI, run this command:

```
aws amscm --profile saml get-change-type-version --change-type-id CHANGE_TYPE_ID --query "ChangeTypeVersion.{AutomationStatus:AutomationStatus.Name}"
```

You can also look up change types in the [AMS Change Type Reference](https://docs.aws.amazon.com/managedservices/latest/ctref/index.html), which provides information about all AMS change types.

**Note**  
The AMS API/CLI are not currently part of the AWS API/CLI. To access the AMS API/CLI, you download the AMS SDK through the AMS console.

# CT approval requirements
<a name="constrained-unconstrained-ctis"></a>

AMS CTs always have two approval conditions, **AwsApprovalId** and **CustomerApprovalId**, that indicate whether the RFC requires AMS, you, or anyone to approve it before it runs.

The approval condition is somewhat related to the execution mode; for details, see [Automated and manual CTs](ug-automated-or-manual.md).

To find out the approval condition for a CT, you can look in the [AMS Change Type Reference](https://docs.aws.amazon.com/managedservices/latest/ctref/index.html), or run [GetChangeTypeVersion](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_GetChangeTypeVersion.html). Both will also give you the CT `AutomationStatusId` or **Execution mode**.

You can approve RFCs by using the AMS console or with the following command:

```
aws amscm approve-rfc --rfc-id RFC_ID
```


**CT approval condition**  

| If the CT approval condition is | It requires approval from | And | 
| --- | --- | --- | 
| `AwsApprovalId: Required` | The AMS change type system, | No action is required. This condition is typical for automated CTs. | 
| `AwsApprovalId: NotRequiredIfSubmitter` | The AMS change type system and no one else, if the submitted RFC is for the account it was submitted against, | No action is required. This condition is typical for manual CTs because they will always be reviewed by AMS operators. | 
| `CustomerApprovalId: NotRequired` | The AMS change type system, | If the RFC passes syntax and parameter checks, it is auto approved. | 
| `CustomerApprovalId: Required` | The AMS change type system and you, | A notification is sent to you, and you must explicitly approve the RFC, either by responding to the notice, or running the [ApproveRfc](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_ApproveRfc.html) operation. | 
| `CustomerApprovalId: NotRequiredIfSubmitter` | The AMS change type system and no one else, if you submitted the RFC. | If the RFC passes syntax and parameter checks, it is auto approved. | 
| Urgent Security Incident or Patch | AMS | Is auto approved and implemented. | 

# Change type versions
<a name="ct-versions"></a>

Change types are versioned and the version changes when a major update is made to the change type.

After selecting a change type using the AMS console, you have the option of opening the **Additional configuration** area and selecting a change type version. You can also specify a change type version at the API/CLI command line. You might want to do this for various reasons, including:
+ You know that the version of the **Update** change type that you want must match the version of the **Create** change type that you used to create the resource that you now want to update. For example, you might have an Elastic Load Balancer (ELB) instance that you created with ELB Create change type version 1. To update it, choose ELB Update version 1.
+ You want to use a change type version that has different options in it than the most recent change type. We don't recommend this because AMS updates change types mainly for security reasons and we recommend that you always choose the most recent version.

# Create change types
<a name="ct-creates"></a>

Create change types are matched version-to-version with the Update change types. That is, the change type version that you use to provision a resource must match the version of the Update change type that you would use later to modify that resource. For example, if you create an S3 bucket with the Create S3 Bucket change type version 2.0, and later want to submit an RFC to modify that S3 bucket, you must use the Update S3 Bucket change type version 2.0 as well, even if there is an Update S3 Bucket change type with version 3.0.

We recommend keeping a record of the change type ID and version that you use when provisioning a resource with a Create change type in case you later want to use an Update change type to modify it.
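One lightweight way to keep such a record is a small CSV file that you append to at provisioning time. The following shell sketch is illustrative only (the file name, helper function, and sample resource name are not AMS conventions; the CT ID and version are the S3 examples used in this guide):

```
# Append a provisioning record: timestamp, resource name, CT ID, CT version.
# File name, function name, and sample values are illustrative only.
RECORD_FILE="ct-provisioning-record.csv"

record_ct() {
  # usage: record_ct <resource-name> <change-type-id> <ct-version>
  echo "$(date -u +%Y-%m-%dT%H:%M:%SZ),$1,$2,$3" >> "$RECORD_FILE"
}

record_ct "my-app-bucket" "ct-1a68ck03fn98r" "2.0"
```

Later, when you need the matching Update change type, you can look up the Create CT version that was used for the resource.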

# Update change types
<a name="ct-updates"></a>

AMS provides Update change types to update resources that were created with Create change types. The Update change types must be matched version-to-version with the Create change type originally used to provision the resource.

We recommend keeping a record of the change type ID and version that you use when provisioning a resource to make it easy to update it.

**YouTube Video**: [ How do I use update CTs to change resources in an AWS Managed Services (AMS) account?](https://www.youtube.com/watch?v=dqb31yaAXhc&list=PLhr1KZpdzukc_VXASRqOUSM5AJgtHat6-&index=8&t=30s)

# Internal-only change types
<a name="ct-internals"></a>

You can see change types that are for internal use only. This is so you know what actions AMS can, or does, take. If there is an internal-only change type that you would like to have available for your use, submit a service request.

For example, there is a Management | Monitoring and notification | CloudWatch alarm suppression | Update CT that is internal-only. AMS uses it when deploying infrastructure updates (such as patching) to turn off alarm notifications that the updates might erroneously trigger. When this CT is submitted, you will notice the RFC for the CT in your RFC list. Any internal-only CT that is deployed in an RFC appears in your RFC list.

# Change type schemas
<a name="ct-schemas"></a>

All change types provide a JSON schema for your input when creating, modifying, or gaining access to resources. The schema provides the parameters, and their descriptions, that you use to create a request for change (RFC).

The successful execution of an RFC results in execution output. For provisioning RFCs, the execution output includes a "stack_id" that represents the stack in CloudFormation and can be searched in the CloudFormation console. The execution output sometimes includes the ID of the instance created, and that ID can be used to search for the instance in the corresponding AWS console. For example, the Create ELB CT execution output includes a "stack_id" that is searchable in CloudFormation, and outputs a key=ELB value=<stack-xxxx> that is searchable in the Amazon EC2 console for Elastic Load Balancing.

Let's examine a CT schema. This is the schema for CodeDeploy Application Create, a fairly small schema. Some schemas have very large `Parameter` areas.


[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/ct-schemas.html)

**Note**  
This schema allows up to seven tags; however, EC2, EFS, RDS, and the multi-tier create schemas allow up to 50 tags.

# Managing permissions for change types
<a name="ct-permissions"></a>

You can use a custom policy to restrict which change types (CTs) are available to different groups or users.

To learn more about doing this, see the AMS User Guide section [Setting Permissions](https://docs.aws.amazon.com/managedservices/latest/userguide/setting-permissions.html).

# Redacting sensitive information from change types
<a name="ct-redaction"></a>

AMS change type schemas offer a parameter attribute, `"metadata":"ams:sensitive":"true"` that is used for parameters that would contain sensitive information, such as a password. When this attribute is set, the input provided is obscured. Note that you cannot set this parameter attribute; however, if you are working with AMS to create a change type and have a parameter that you would like obscured at input, you can request this.

# Finding a change type, using the query option
<a name="ug-find-ct-ex-section"></a>

This example demonstrates how to use the AMS Console to find the appropriate change type for the RFC that you want to submit.

You can use the console or the API/CLI to find a change type (CT) ID or version. There are two methods: a search, or choosing the classification. For both, you can sort the results by choosing **Most frequently used**, **Most recently used**, or **Alphabetical**.

**YouTube Video**: [ How do I create an RFC using the AWS Managed Services CLI and where can I find the CT Schema?](https://www.youtube.com/watch?v=IluDFwnJJFU&list=PLhr1KZpdzukc_VXASRqOUSM5AJgtHat6-&index=3&t=150s) 

In the AMS console, on the **RFCs** -> **Create RFC** page:
+ With **Browse by change type** selected (the default), either:
  + Use the **Quick create** area to select from AMS's most popular CTs. Click on a label and the **Run RFC** page opens with the **Subject** option auto-filled for you. Complete the remaining options as needed and click **Run** to submit the RFC. 
  + Or, scroll down to the **All change types** area and start typing a CT name in the option box; you don't have to enter the exact or full change type name. You can also search for a CT by change type ID, classification, or execution mode (automated or manual) by entering the relevant words.

    With the default **Cards** view selected, matching CT cards appear as you type, select a card and click **Create RFC**. With the **Table** view selected, choose the relevant CT and click **Create RFC**. Both methods open the **Run RFC** page.
+ Alternatively, and to explore change type choices, click **Choose by category** at the top of the page to open a series of drop-down option boxes.
+ Choose a **Category**, a **Subcategory**, an **Item**, and an **Operation**. A panel appears at the bottom of the page.
+ When you're ready, press **Enter**, and a list of matching change types appears.
+ Choose a change type from the list. The information box for that change type appears at the bottom of the page.
+ After you have the correct change type, choose **Create RFC**.
**Note**  
The AMS CLI must be installed for these commands to work. To install the AMS API or CLI, go to the AMS console **Developer's Resources** page. For reference material on the AMS CM API or AMS SKMS API, see the AMS Information Resources section in the User Guide. You may need to add a `--profile` option for authentication; for example, `aws amsskms ams-cli-command --profile SAML`. You may also need to add the `--region` option, as all AMS commands run out of us-east-1; for example, `aws amscm ams-cli-command --region=us-east-1`.
**Note**  
The AMS API/CLI (amscm and amsskms) endpoints are in the AWS N. Virginia Region, `us-east-1`. Depending on how your authentication is set, and what AWS Region your account and resources are in, you may need to add `--region us-east-1` when issuing commands. You may also need to add `--profile saml`, if that is your authentication method.
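If you add these options frequently, you can set them once in your AWS CLI configuration instead; for example, in a named profile (the profile name `saml` matches the examples in this guide, but yours may differ, and a SAML-based setup may require additional credential settings):

```
# ~/.aws/config (profile name is illustrative)
[profile saml]
region = us-east-1
output = json
```

With this in place, `aws amscm ams-cli-command --profile saml` picks up the Region automatically.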

To search for a change type using the AMS CM API (see [ListChangeTypeClassificationSummaries](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_ListChangeTypeClassificationSummaries.html)) or CLI:

You can use a filter or query to search. The ListChangeTypeClassificationSummaries operation has [Filters](https://docs.aws.amazon.com/managedservices/latest/ApiReference-cm/API_ListChangeTypeClassificationSummaries.html#amscm-ListChangeTypeClassificationSummaries-request-Filters) options for `Category`, `Subcategory`, `Item`, and `Operation`, but the values must match the existing values exactly. For more flexible results when using the CLI, you can use the `--query` option. 


**Change type filtering with the AMS CM API/CLI**  
<a name="ct-filtering-table"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/ug-find-ct-ex-section.html)

1. Here are some examples of listing change type classifications:

   The following command lists all change type categories.

   ```
   aws amscm list-change-type-categories
   ```

   The following command lists the subcategories belonging to a specified category.

   ```
   aws amscm list-change-type-subcategories --category CATEGORY
   ```

   The following command lists the items belonging to a specified category and subcategory.

   ```
   aws amscm list-change-type-items --category CATEGORY --subcategory SUBCATEGORY
   ```

1. Here are some examples of searching for change types with CLI queries:

   The following command searches CT classification summaries for those that contain "S3" in the Item name, and outputs the category, subcategory, item, operation, and change type ID in table form. 

   ```
   aws amscm list-change-type-classification-summaries --query "ChangeTypeClassificationSummaries [?contains(Item, 'S3')].[Category,Subcategory,Item,Operation,ChangeTypeId]" --output table
   ```

   ```
   +---------------------------------------------------------------+
   |             ListChangeTypeClassificationSummaries             |
   +----------+-------------------------+--+------+----------------+
   |Deployment|Advanced Stack Components|S3|Create|ct-1a68ck03fn98r|
   +----------+-------------------------+--+------+----------------+
   ```

1. You can then use the change type ID to get the CT schema and examine the parameters. The following command outputs the schema to a JSON file named CreateS3Params.schema.json.

   ```
   aws amscm get-change-type-version --change-type-id "ct-1a68ck03fn98r" --query "ChangeTypeVersion.ExecutionInputSchema" --output text > CreateS3Params.schema.json
   ```

   For information about using CLI queries, see [How to Filter the Output with the --query Option](https://docs.aws.amazon.com/cli/latest/userguide/controlling-output.html#controlling-output-filter) and the query language reference, [JMESPath Specification](http://jmespath.org/specification.html).

1. After you have the change type ID, we recommend verifying the version for the change type to make sure it's the latest version. Use this command to find the version for a specified change type:

   ```
   aws amscm list-change-type-version-summaries --filter Attribute=ChangeTypeId,Value=CHANGE_TYPE_ID
   ```

   To find the `AutomationStatus` for a specific change type, run this command:

   ```
   aws amscm --profile saml get-change-type-version --change-type-id CHANGE_TYPE_ID --query "ChangeTypeVersion.{AutomationStatus:AutomationStatus.Name}"
   ```

   To find the `ExpectedExecutionDurationInMinutes` for a specific change type, run this command:

   ```
   aws amscm --profile saml get-change-type-version --change-type-id ct-14027q0sjyt1h --query "ChangeTypeVersion.{ExpectedDuration:ExpectedExecutionDurationInMinutes}"
   ```
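If you save table output like the example in step 2 to a file, the change type ID can be pulled out with standard shell tools. This is a local illustration using the sample row from that example output:

```
# Extract the change type ID (the 6th |-delimited field) from a saved row
# of the example table output; local illustration only.
echo '|Deployment|Advanced Stack Components|S3|Create|ct-1a68ck03fn98r|' \
  | awk -F'|' '{print $6}'
# prints: ct-1a68ck03fn98r
```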

# Troubleshooting RFC errors in AMS
<a name="rfc-troubleshoot"></a>

Many AMS provisioning RFC failures can be investigated through the CloudFormation documentation. See [ Troubleshooting AWS CloudFormation: Troubleshooting Errors](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors)

Additional troubleshooting suggestions are provided in the following sections.

## "Management" RFC errors in AMS
<a name="rfc-access-failure"></a>

AMS "Management" Category change types (CTs) allow you to request access to resources as well as manage existing resources. This section describes some common issues.

### RFC access errors
<a name="rfc-access-failure"></a>
+ Make sure the Username and FQDN you specified in the RFC are correct and exist in the domain. For help finding your FQDN, see [Finding your FQDN](https://docs.aws.amazon.com/managedservices/latest/userguide/find-FQDN.html).
+ Make sure the stack ID you specified for access is an EC2-related stack. Stacks such as ELB and Amazon Simple Storage Service (S3) are not candidates for access RFCs; instead, use your read-only access role to access those stacks' resources. For help finding a stack ID, see [Finding stack IDs](https://docs.aws.amazon.com/managedservices/latest/userguide/find-stack.html).
+ Make sure the stack ID you provided is correct and belongs to the relevant account.

For help with other access RFC failures, see [Access management](https://docs.aws.amazon.com/managedservices/latest/userguide/access-mgmt.html).

**YouTube Video**: [ How do I raise a Request for Change (RFC) properly to avoid rejections and failures?](https://www.youtube.com/watch?v=IFOn4Q-5Cas&list=PLhr1KZpdzukc_VXASRqOUSM5AJgtHat6-&index=5&t=242s)

### RFC (manual) CT scheduling errors
<a name="manual-ct-schedule-failure"></a>

Most change types are ExecutionMode=Automated, but some are ExecutionMode=Manual and that affects how you should schedule them to avoid RFC failure.

Scheduled RFCs with ExecutionMode=Manual must be set to execute at least 24 hours in the future if you are using the AMS console to create the RFC. 

AMS aims to respond to a manual CT within eight hours, and will correspond as soon as possible, but it could take much longer for the RFC to actually be executed.

### Using RFCs with manual update CTs
<a name="manual-ct-update-failure"></a>

AMS Operations rejects Management | Other | Other RFCs for updates to stacks when there is an Update change type for the type of stack that you want to update.

### RFC delete stack errors
<a name="rfc-delete-stack-fail"></a>

RFC delete stack failures: If you use the Management | Standard stacks | Stack | Delete CT, you will see the detailed events in the CloudFormation console for the stack with the AMS stack name. You can identify your stack by checking it against the name it has in the AMS console. The CloudFormation console provides more details about failure causes.

Before deleting a stack, you should consider how the stack was created. If you created the stack using an AMS CT and did not add or edit the stack resources, then you can expect to delete it without issue. However, it is a good idea to remove any manually added resources from a stack before submitting a delete stack RFC against it. For example, if you create a stack using the full stack CT (HA Two Tier), it includes a security group, SG1. If you then use AMS to create another security group, SG2, that references SG1, and then use the delete stack CT to delete the stack, SG1 will not delete because it is referenced by SG2.

**Important**  
Deleting stacks can have unwanted and unanticipated consequences. AMS prefers *not* to delete stacks or stack resources on behalf of customers for this reason. Note that AMS will only delete resources on your behalf (through a submitted Management | Other | Other | Update change type) that are not possible to delete using the appropriate, automated, change type. Additional considerations:  
If the resources are enabled for delete protection, AMS can help to unblock this if you submit a Management | Other | Other | Update change type; after the deletion protection is removed, you can use the automated CT to delete that resource.
If there are multiple resources in a stack, and you want to delete only a subset of the stack resources, use the CloudFormation Update change type (see [CloudFormation Ingest Stack: Updating](https://docs.aws.amazon.com/managedservices/latest/appguide/ex-cfn-ingest-update-col.html)). You can also submit a Management | Other | Other | Update change type and AMS engineers can help you craft the changeset, if needed.
If there are issues using the CloudFormation Update CT due to drift, AMS can help if you submit a Management | Other | Other | Update change type to resolve the drift (as far as supported by the AWS CloudFormation service) and provide a ChangeSet that you can then validate and execute using the automated CT, Management | Custom Stack | Stack From CloudFormation Template | Approve Changeset and Update.
AMS maintains these restrictions to help ensure there are no unexpected or unanticipated resource deletions.

For more information, see [ Troubleshooting AWS CloudFormation: delete stack fails](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-delete-stack-fails).

### RFC update DNS errors
<a name="rfc-update-dns-failure"></a>

Multiple simultaneous RFCs to update a DNS hosted zone can fail, sometimes without an obvious reason. Creating multiple RFCs at the same time to update DNS hosted zones (private or public) can cause some RFCs to fail because they are trying to update the same stack at the same time. AMS change management rejects or fails RFCs that are unable to update a stack because the stack is already being updated by another RFC. AMS recommends that you create one RFC at a time and wait for it to succeed before raising a new one for the same stack.

### RFC IAM entities errors
<a name="making-iam-requests"></a>

AMS provisions a number of default IAM roles and profiles into AMS accounts that are designed to meet your needs. However, you may need to request additional IAM resources occasionally.

The process for submitting RFCs requesting custom IAM resources follows the standard workflow for manual RFCs, but the approval process also includes a security review to ensure appropriate security controls are in place. Therefore, the process typically takes longer than other manual RFCs. To reduce the cycle time on these RFCs, follow these guidelines.

For information on what we mean by an IAM review and how it maps to our Technical Standards and Risk Acceptance process, see [Understand RFC security reviews](rfc-security.md).

Common IAM resources requests:
+ If you are asking for a policy pertaining to a major cloud-compatible application, such as CloudEndure, see the AMS pre-approved IAM CloudEndure policy: Unpack the [WIGs Cloud Endure Landing Zone Example](samples/wigs-ce-lz-examples.zip) file and open the `customer_cloud_endure_policy.json`
**Note**  
If you want a more permissive policy, discuss your needs with your CloudArchitect/CSDM and obtain, if needed, an AMS Security Review and Signoff before submitting an RFC implementing the policy.
+ If you want to modify a resource deployed by AMS in your account by default, we recommend that you ask for a modified copy of that resource instead of changes to the existing one.
+ If you are requesting permissions for a human user, attach the permissions to a role (instead of attaching them directly to the user), and then grant the user permission to assume that role. For details on doing this, see [Temporary AMS Advanced console access](https://docs.aws.amazon.com/managedservices/latest/userguide/access-console-temp.html).
+ If you require exceptional permissions for a temporary migration or workflow, provide an end date for those permissions in your request.
+ If you’ve already discussed the subject of your request with your security team, provide evidence of their approval to your CSDM with as much detail as possible.

If AMS rejects an IAM RFC we provide a clear reason for the rejection. For example, we might reject an IAM policy create request and explain what about the policy is inappropriate. In that case, you can make the identified changes and resubmit the request. If additional clarity on the status of a request is required, submit a service request, or contact your CSDM.

The following list describes the typical risks that AMS tries to mitigate as we review your IAM RFCs. If your IAM RFC has any of these risks, it may result in the rejection of the RFC. In cases where you require an exception, AMS asks for approvals from your security team. To seek such an exception, coordinate with your CSDM.

**Note**  
AMS may, for any reason, decline any change to IAM resources inside of an account. For concerns regarding any RFC rejection, reach out to AMS Operations via a service request, or contact your CSDM.
+ Privilege escalation, such as permissions that allow you to modify your own permissions, or to modify the permissions of other resources inside the account. Examples:
  + The use of `iam:PassRole` with another, more privileged role.
  + Permission to attach/detach IAM policies from a role or user.
  + The modification of IAM policies in the account.
  + The ability to make API calls in the context of management infrastructure.
+ Permissions to modify resources or applications that are required to provide AMS services to you. Examples:
  + Modification of AMS infrastructure like the bastions, management host, or EPS infrastructure.
  + Deletion of log management AWS Lambda functions, or log streams.
  + The deletion or modification of the default CloudTrail monitoring application. 
  + The modification of the Directory Services Active Directory (AD).
  + Disabling CloudWatch (CW) alarms.
  + The modification of the principals, policies, and namespaces deployed in the account as a part of the landing zone.
+ Deployment of infrastructure outside of best practices, such as permissions that allow the creation of infrastructure in a state that endangers your information security. Examples:
  + The creation of public, or unencrypted, S3 buckets or public sharing of EBS volumes.
  + The provisioning of public IP addresses.
  + The modification of security groups to allow broad access.
+ Overly broad permissions capable of causing application impact, such as permissions that can result in data loss, integrity loss, inappropriate configuration, or interruptions of service for your infrastructure and the applications inside the account. Examples:
  + Disabling, or redirecting, network traffic through APIs like `ModifyNetworkInterfaceAttribute` or `UpdateRouteTable`.
  + The disabling of managed infrastructure by detaching volumes from managed hosts.
+ Permissions for services not a part of the AMS service description and not supported by AMS.

  Services not listed in the AMS Service description cannot be used in AMS accounts. To request support for a feature or service, please reach out to your CSDM.
+ Permissions that do not meet your stated goal as they are either too generous, or too conservative, or are applied to the wrong resources. Examples:
  + A request for `s3:PutObject` permissions to an S3 bucket that has mandatory KMS encryption, without `KMS:Encrypt` permissions to the relevant key.
  + Permissions that pertain to resources that don’t exist in the account.
  + IAM RFCs where the description of the RFC does not seem to match the request.

## "Deployment" RFC errors
<a name="rfc-provisioning-fail"></a>

AMS "Deployment" Category change types (CTs) allow you to request various AMS-supported resources be added to your account.

Most AMS CTs that create a resource are based on CloudFormation templates. As a customer, you have read-only access to all AWS services, including CloudFormation, so you can quickly identify the CloudFormation stack that represents your stack, based on the stack description, in the CloudFormation console. The failed stack will likely be in a state of DELETE_COMPLETE. After you identify the CloudFormation stack, the events show you the specific resource that failed to create, and why.

### Using CloudFormation documentation to troubleshoot
<a name="rfc-cfn-docs"></a>

Most AMS provisioning RFCs use a CloudFormation template and that documentation can be helpful for troubleshooting. See documentation for that CloudFormation template:
+ Create application load balancer failure: [ AWS::ElasticLoadBalancingV2::LoadBalancer (Application Load Balancer)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticloadbalancingv2-loadbalancer.html)
+ Create Auto scaling group: [ AWS::AutoScaling::AutoScalingGroup (Auto Scaling Group)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html)
+ Create memcached cache: [ AWS::ElastiCache::CacheCluster (Cache Cluster)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticache-cache-cluster.html)
+ Create Redis cache: [ AWS::ElastiCache::CacheCluster (Cache Cluster)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticache-cache-cluster.html)
+ Create DNS Hosted Zone (used with Create DNS private/public): [ AWS::Route53::HostedZone (R53 Hosted Zone)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-route53-hostedzone.html)
+ Create DNS Record Set (used with Create DNS private/public): [ AWS::Route53::RecordSet (Resource Record Sets)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-route53-recordset.html)
+ Create EC2 stack: [ AWS::EC2::Instance (Elastic Compute Cloud)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html)
+ Create Elastic File System (EFS): [ AWS::EFS::FileSystem (Elastic File System)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-efs-filesystem.html)
+ Create Load balancer: [ AWS::ElasticLoadBalancing::LoadBalancer (Elastic Load Balancer)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb.html)
+ Create RDS DB: [ AWS::RDS::DBInstance (Relational Database)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-rds-database-instance.html)
+ Create Amazon S3: [ AWS::S3::Bucket (Simple Storage Service)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html)
+ Create Queue: [ AWS::SQS::Queue (Simple Queue Service)](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sqs-queues.html)

### RFC creating AMIs errors
<a name="rfc-create-ami-failure"></a>

An Amazon Machine Image (AMI) is a template that contains a software configuration (for example, an operating system, an application server, and applications). From an AMI, you launch an instance, which is a copy of the AMI running as a virtual server in the cloud. AMIs are very useful, and required to create EC2 instances or Auto Scaling groups; however, you must observe some requirements:
+ The instance you specify for `Ec2InstanceId` must be in a stopped state for the RFC to succeed. Do not use Auto Scaling group (ASG) instances for this parameter because the ASG will terminate a stopped instance.
+ To create an AMS Amazon Machine Image (AMI), you must start with an AMS instance. Before you can use the instance to create the AMI, you must prepare it by ensuring that it is stopped and disjoined from its domain. For details, see [ Create a Standard Amazon Machine Image Using Sysprep](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/Creating_EBSbacked_WinAMI.html#ami-create-standard).
+ The name you specify for the new AMI must be unique within the account or the RFC fails. How to do this is described in [AMI \| Create](https://docs.aws.amazon.com/managedservices/latest/ctref/deployment-advanced-ami-create.html), and for more details, see [AWS AMI Design](https://aws.amazon.com/answers/configuration-management/aws-ami-design/).
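A minimal sketch of the first two requirements (assuming boto3 and the `DescribeInstances` response shape; the `aws:autoscaling:groupName` tag is how EC2 marks ASG membership):

```python
# Sketch of the pre-checks above for an AMI | Create RFC: the source instance
# must be stopped, and it must not belong to an Auto Scaling group.
# The helper is pure; check_instance needs AWS credentials.

def ami_precheck(instance):
    """Return a list of problems that would make the AMI-creation RFC fail."""
    problems = []
    if instance.get("State", {}).get("Name") != "stopped":
        problems.append("instance is not stopped")
    tags = {t["Key"] for t in instance.get("Tags", [])}
    if "aws:autoscaling:groupName" in tags:
        problems.append("instance belongs to an Auto Scaling group")
    return problems

def check_instance(instance_id):
    """Fetch the instance with boto3 and run the pre-check."""
    import boto3

    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
    return ami_precheck(reservations[0]["Instances"][0])
```

An empty result from `ami_precheck` means both conditions are satisfied; AMI name uniqueness still has to be checked separately.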

**Note**  
For additional information about preparing for AMI creation, see [AMI \| Create](https://docs.aws.amazon.com/managedservices/latest/ctref/deployment-advanced-ami-create.html).

### RFCs creating EC2s or ASGs errors
<a name="rfc-create-ec2-asg-failure"></a>

For EC2 or ASG failures with timeouts, AMS recommends that you confirm whether the AMI used is customized. If it is, refer to the AMI creation steps included in this guide (see [AMI \| Create](https://docs.aws.amazon.com/managedservices/latest/ctref/deployment-advanced-ami-create.html)) to ensure that the AMI was created correctly. A common mistake when creating a custom AMI is not following the steps in the guide to rename or invoke Sysprep.

### RFCs creating RDS errors
<a name="rfc-create-rds-failure"></a>

Amazon Relational Database Service (RDS) failures can occur for many different reasons because you can use many different engines when you create the RDS instance, and each engine has its own requirements and limitations. Before attempting to create an AMS RDS stack, carefully review the RDS parameter values; see [CreateDBInstance](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBInstance.html).

To learn more about Amazon RDS in general, including size recommendations, see [Amazon Relational Database Service Documentation](https://aws.amazon.com/documentation/rds/).

### RFCs creating Amazon S3s errors
<a name="rfc-create-s3-failure"></a>

One common error when creating an S3 storage bucket is not using a unique name for the bucket. If you submit an S3 bucket Create CT with the same name as one previously submitted, it fails because an S3 bucket already exists with that BucketName. This is detailed in the CloudFormation console, where the stack event shows that the bucket name is already in use.
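Because bucket names are global across all AWS accounts, a name can be taken even if it doesn't appear in your own account. A hedged sketch of a pre-submission check (the basic naming rules are simplified here; `head_bucket` is the boto3 S3 call):

```python
# Sketch: check a bucket name before submitting an S3 Create CT.
# valid_bucket_name applies only the basic rules; bucket_name_in_use
# needs AWS credentials.

def valid_bucket_name(name):
    """Basic S3 rules: 3-63 chars, lowercase letters, digits, dots, hyphens."""
    import re
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name))

def bucket_name_in_use(name):
    """HEAD the bucket: 200 means you own it, 403 means another account does,
    404 means the name is available."""
    import boto3
    from botocore.exceptions import ClientError

    try:
        boto3.client("s3").head_bucket(Bucket=name)
        return True
    except ClientError as err:
        return err.response["ResponseMetadata"]["HTTPStatusCode"] == 403
```

If `bucket_name_in_use` returns `True`, pick a different name before submitting the CT rather than letting the stack fail and roll back.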

## RFC validation versus execution errors
<a name="rfc-valid-execute-errors"></a>

RFC failures and related messages differ in the output on the AMS console RFC details page for a selected RFC:
+ Validation failure reasons are available in the Status field only.
+ Execution failure reasons are available in both the Execution Output and Status fields.

![\[Request for change details showing rejected status due to no domain trust found.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/rfcReason.png)


## RFC error messages
<a name="rfc-error-messages"></a>

When you come across the following error for the listed change types (CTs), you can use these solutions to help you find the source of the problems and fix them.

`{"errorMessage":"An error has occurred during RFC execution. We are investigating the issue.","errorType":"InternalError"}`

If you require further assistance after referring to the following troubleshooting options, then engage AMS through RFC correspondence. For more information, see [RFC Correspondence and Attachment (Console)](https://docs.aws.amazon.com/managedservices/latest/userguide/ex-rfc-gui.html#ex-rfc-correspondence).

### Workload ingestion (WIGS) errors
<a name="rfc-valid-execute-wigs"></a>

**Note**  
Validation tools for both Windows and Linux can be downloaded and run directly on your on-premises servers, as well as EC2 instances in AWS. These can be found through the *AMS Advanced Application Developer's Guide* [Migrating workloads: Linux pre-ingestion validation](https://docs.aws.amazon.com/managedservices/latest/appguide/ex-migrate-instance-linux-validation.html) and [Migrating workloads: Windows pre-ingestion validation](https://docs.aws.amazon.com/managedservices/latest/appguide/ex-migrate-instance-win-validation.html). 
+ Make sure the EC2 instance exists in the target AMS account. For example, if you have shared your AMI from a non-AMS account to an AMS account, you must create an EC2 instance in your AMS account with the shared AMI before you can submit a Workload Ingest RFC.
+ Check to see if the security groups attached to the instance have egress traffic allowed. The SSM Agent needs to be able to connect to its public endpoint.
+ Check to see if the instance has the right permissions to connect with the SSM agent. These permissions come with the `customer-mc-ec2-instance-profile`; you can check for this in the EC2 console:  
![\[EC2 instance details showing IAM role set to customer-mc-ec2-instance-profile.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/ec2ConsoleWCircle.png)
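The last two checks above can be scripted. A sketch, assuming the `DescribeInstances` and `DescribeSecurityGroups` response shapes (the helpers are pure and take those structures as input):

```python
# Sketch of two WIGS pre-checks: the instance profile must be
# customer-mc-ec2-instance-profile, and at least one attached security
# group must allow egress so the SSM Agent can reach its endpoint.

def has_wigs_instance_profile(instance):
    """True if the instance profile ARN ends with customer-mc-ec2-instance-profile."""
    arn = instance.get("IamInstanceProfile", {}).get("Arn", "")
    return arn.endswith("/customer-mc-ec2-instance-profile")

def allows_egress(security_groups):
    """True if any attached security group has at least one egress rule."""
    return any(sg.get("IpPermissionsEgress") for sg in security_groups)
```

You would feed these helpers the structures returned by `boto3` calls such as `ec2.describe_instances` and `ec2.describe_security_groups` for the instance in question.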

### EC2 instance stack stop errors
<a name="rfc-valid-execute-ec2-stop"></a>
+ Check to see if the instance is already in a stopped or terminated state.
+ If the EC2 instance is online and you see the `InternalError` error, then submit a service request for AMS to investigate.
+ Note that you can't use the change type Management \| Advanced stack components \| EC2 instance stack \| Stop (ct-3mvvt2zkyveqj) to stop an Auto Scaling group (ASG) instance. If you need to stop an ASG instance, then submit a service request.

### EC2 instance stack create errors
<a name="rfc-valid-execute-ec2-create"></a>

The `InternalError` message is from CloudFormation: a CREATION\_FAILED status reason. You can find details on the stack failure in the CloudFormation stack events by following these steps:
+ In the AWS Management console, you can view a list of stack events while your stack is being created, updated, or deleted. From this list, find the failure event and then view the status reason for that event.

  The status reason might contain an error message from AWS CloudFormation or from a particular service that can help you understand the problem.
+ For more information about viewing stack events, see [ Viewing AWS CloudFormation Stack Data and Resources on the AWS Management Console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-view-stack-data-resources.html).

### EC2 instance volume restore errors
<a name="rfc-ec2-vol-restore-ec2-fail"></a>

AMS automatically creates an internal troubleshooting RFC when an EC2 instance volume restore fails, because EC2 instance volume restore is an important part of disaster recovery (DR).

When the internal troubleshooting RFC is created, a banner is displayed with links to the RFC. This internal troubleshooting RFC gives you more visibility into RFC failures: rather than submitting retry RFCs that lead to the same errors, or manually reaching out to AMS about the failure, you can track your changes and know that AMS is working on the failure. This also reduces the time-to-recovery (TTR) for your change, because AMS operators proactively work on the RFC failure instead of waiting for your request.

## How to get help with an RFC
<a name="rfc-escalate"></a>

You can reach out to AMS to identify the root cause of your failure. AMS operations are available 24 hours a day, 7 days a week, 365 days a year.

AMS provides several avenues for you to ask for help.
+ If you require assistance with an open RFC or an RFC which is completed but was incorrect, engage AMS through RFC bi-directional correspondence. For more information, see [RFC Correspondence and Attachment (Console)](https://docs.aws.amazon.com/managedservices/latest/userguide/ex-rfc-gui.html#ex-rfc-correspondence).
+ To report an AWS or AMS service performance issue that impacts your managed environment, use the AMS console and submit an incident report. For details, see [Reporting an Incident](https://docs.aws.amazon.com/managedservices/latest/userguide/gui-ex-report-incident.html). For general information about AMS incident management, see [Incident response](https://docs.aws.amazon.com/managedservices/latest/userguide/sec-incident-response.html).
+ For specific questions about how you or your resources or applications are working with AMS, or to escalate an incident, email one or more of the following:

  1. First, if you are unsatisfied with the service request or incident report response, email your CSDM: ams-csdm@amazon.com

  1. Next, if escalation is required, you can email the AMS Operations Manager (but your CSDM will probably do this): ams-opsmanager@amazon.com

  1. Further escalation would be to the AMS Director: ams-director@amazon.com

  1. Finally, you are always able to reach the AMS VP: ams-vp@amazon.com

# Direct Change mode in AMS
<a name="direct-change-mode-section"></a>

**Topics**
+ [Getting Started with Direct Change mode](dcm-get-started.md)
+ [Security and compliance](dcm-security-n-compliance.md)
+ [Change management in Direct Change mode](dcm-change-mgmt.md)
+ [Creating stacks using Direct Change mode](dcm-creating-stacks.md)
+ [Direct Change Mode use cases](dcm-use-cases.md)

AWS Managed Services (AMS) Direct Change mode (DCM) extends AMS Advanced change management by providing native AWS access to AMS Advanced Plus and Premium accounts to provision and update AWS resources. With DCM, you have the option to use native AWS APIs (console or CLI/SDK) or AMS Advanced change management requests for change (RFCs); in either case the resources, and changes to them, are fully supported by AMS, including monitoring, patching, backup, and incident response. Resources provisioned through DCM are registered in the AMS service knowledge management system (SKMS), joined to the AMS managed Active Directory domain (when applicable), and run AMS management agents. Use existing tooling (for example, CloudFormation, AWS SDK, and CDK) to develop and deploy AMS-managed CloudFormation stacks.

**Note**  
Direct Change mode does not remove AMS change management RFCs. You have full access to AMS RFCs with DCM.

[![AWS Videos](http://img.youtube.com/vi/Qu1aKIUPT28/0.jpg)](https://www.youtube.com/watch?v=Qu1aKIUPT28)


# Getting Started with Direct Change mode
<a name="dcm-get-started"></a>

Begin by checking prerequisites and then submitting a request for change (RFC) in your eligible AMS Advanced account.

1. Confirm that the account that you want to use with DCM meets the requirements:
   + The account is AMS Advanced Plus or Premium.
   + The account doesn't have Service Catalog enabled. We currently don't support onboarding accounts to both DCM and Service Catalog at the same time. If you are already onboarded to Service Catalog but are interested in DCM, discuss your needs with your cloud service delivery manager (CSDM). If you decide to switch from Service Catalog to DCM, offboard Service Catalog first; to do that, include the request in the RFC described in the next step. For details about Service Catalog in AMS, see [AMS and Service Catalog](https://docs.aws.amazon.com/managedservices/latest/userguide/ams-service-catalog.html).

1. Submit a request for change (RFC) using the Management \| Managed account \| Direct Change mode \| Enable change type (ct-3rd4781c2nnhp). For an example walkthrough, see [Direct Change mode \| Enable](https://docs.aws.amazon.com/managedservices/latest/ctref/management-managed-direct-change-mode-enable.html).

   After the CT is processed, the predefined IAM roles, `AWSManagedServicesCloudFormationAdminRole` and `AWSManagedServicesUpdateRole` are provisioned in the specified account.

1. Assign the appropriate role to the users that require DCM access using your internal federation process. 

**Note**  
You can specify any number of SAML identity providers, AWS services, and IAM entities (roles, users, and so on) to assume the roles. You must provide at least one of the following: `SAMLIdentityProviderARNs`, `IAMEntityARNs`, or `AWSServicePrincipals`. For more information, consult with your company's IAM department or with your AMS cloud architect (CA).

## Direct Change mode IAM roles and policies
<a name="dcm-gs-iam-roles-and-policies"></a>

When Direct Change mode is enabled in an account, these new IAM entities are deployed:

`AWSManagedServicesCloudFormationAdminRole`: This role grants access to the CloudFormation console and allows you to create and update CloudFormation stacks, view drift reports, and create and execute CloudFormation ChangeSets. Access to this role is managed through your SAML provider.

Managed policies that are deployed and attached to the role `AWSManagedServicesCloudFormationAdminRole` are:
+ AMS Advanced multi-account landing zone (MALZ) Application account
  + AWSManagedServices\_CloudFormationAdminPolicy1
  + AWSManagedServices\_CloudFormationAdminPolicy2
    + This policy represents the permissions granted to the `AWSManagedServicesCloudFormationAdminRole`. You and partners use this policy to grant access to an existing role in the account and allow that role to launch and update CloudFormation stacks in the account. This might require additional AMS service control policy (SCP) updates to allow other IAM entities to launch CloudFormation stacks.
+ AMS Advanced single-account landing zone (SALZ) account
  + AWSManagedServices\_CloudFormationAdminPolicy1
  + AWSManagedServices\_CloudFormationAdminPolicy2
  + cdk-legacy-mode-s3-access [in-line policy]
  + AWS ReadOnlyAccess policy

`AWSManagedServicesUpdateRole`: This role grants restricted access to downstream AWS service APIs. The role is deployed with managed policies that provide mutating and non-mutating API operations, but in general restricts mutating operations (such as Create/Delete/Put) against certain services such as IAM, KMS, GuardDuty, VPC, and AMS infrastructure resources and configuration. Access to this role is managed through your SAML provider.

Managed policies that are deployed and attached to the role `AWSManagedServicesUpdateRole` are:
+ AMS Advanced multi-account landing zone Application account
  + AWSManagedServicesUpdateBasePolicy 
  + AWSManagedServicesUpdateDenyPolicy 
  + AWSManagedServicesUpdateDenyProvisioningPolicy 
  + AWSManagedServicesUpdateEC2AndRDSPolicy 
  + AWSManagedServicesUpdateDenyActionsOnAMSInfraPolicy
+ AMS Advanced single-account landing zone account
  + AWSManagedServicesUpdateBasePolicy 
  + AWSManagedServicesUpdateDenyProvisioningPolicy 
  + AWSManagedServicesUpdateEC2AndRDSPolicy 
  + AWSManagedServicesUpdateDenyActionsOnAMSInfraPolicy1 
  + AWSManagedServicesUpdateDenyActionsOnAMSInfraPolicy2

In addition to these, the `AWSManagedServicesUpdateRole` role has the AWS managed policy `ViewOnlyAccess` attached to it.

# Security and compliance
<a name="dcm-security-n-compliance"></a>

Security and compliance is a shared responsibility between AMS Advanced and you, as our customer. AMS Advanced Direct Change mode does not change this shared responsibility.

## Security in Direct Change mode
<a name="dcm-security"></a>

AMS Advanced offers additional value with a prescriptive landing zone, a change management system, and access management. When using Direct Change mode, this responsibility model does not change. However, you should be aware of additional risks.

The Direct Change mode "Update" role (see [Direct Change mode IAM roles and policies](dcm-get-started.md#dcm-gs-iam-roles-and-policies)) provides elevated permissions that allow the entity with access to it to make changes to infrastructure resources of AMS-supported services within your account. With elevated permissions, varied risks exist depending on the resource, service, and actions, especially where an incorrect change is made due to oversight, mistake, or lack of adherence to your internal process and control framework.

As per AMS Technical Standards, the following risks have been identified and recommendations are made as follows. Detailed information about AMS Technical Standards is available through AWS Artifact. To access AWS Artifact, contact your CSDM for instructions or go to [Getting Started with AWS Artifact](http://aws.amazon.com/artifact/getting-started).

**AMS-STD-001: Tagging**

<a name="AMS-STD-001"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/dcm-security-n-compliance.html)

**AMS-STD-002: Identity and Access Management (IAM)**


| Standards | Does it break | Risks | Recommendations | 
| --- | --- | --- | --- | 
| 4.7 Actions that bypass the change management process (RFC) must not be permitted, such as starting or stopping an instance, creating S3 buckets or RDS instances, and so forth. Developer mode accounts and Self-Service Provisioned mode services (SSPS) are exempted as long as actions are performed within the boundaries of the assigned role. | Yes. Self-service actions, by design, allow you to perform actions that bypass the AMS RFC system. | The secure access model is a core technical facet of AMS, and an IAM user with console or programmatic access circumvents this access control. The IAM user's access is not monitored by AMS change management; access is logged in CloudTrail only. | The IAM user should be time-bounded and granted permissions based on least privilege and need-to-know. | 

**AMS-STD-003: Network Security**

<a name="AMS-STD-003"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/dcm-security-n-compliance.html)

**AMS-STD-007: Logging**

<a name="AMS-STD-007"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/dcm-security-n-compliance.html)

Work with your internal authorization and authentication team to control the permissions to the Direct Change mode roles accordingly.

## Compliance in Direct Change mode
<a name="dcm-compliance"></a>

Direct Change mode is compatible with both production and non-production workloads. It's your responsibility to ensure adherence to any compliance standards (for example, PHI, HIPAA, PCI), and to ensure that the use of Direct Change mode complies with your internal control frameworks and standards.

# Change management in Direct Change mode
<a name="dcm-change-mgmt"></a>

Change management is the process that AMS Advanced uses to implement requests for change. A request for change (RFC) is a request created by either you, or AMS Advanced through the AMS Advanced interface to make a change to your managed environment and includes an AMS Advanced change type (CT) ID for a particular operation. For more information, see [Change management](https://docs.aws.amazon.com/managedservices/latest/userguide/ex-what-is.html).

**Note**  
Direct Change mode does not remove AMS change management RFCs, you still have full access to AMS RFCs with DCM.

AMS Direct Change mode (DCM) extends AMS Advanced change management by providing native AWS access to AMS Advanced Plus and Premium accounts to provision and update AWS resources. Users who have been granted Direct Change mode permission through the IAM roles can use native AWS API access to provision and make changes to resources in their AMS Advanced accounts. These users can still use AMS Advanced change management RFCs with the same IAM roles. In both cases the resources, and changes to them, are fully supported by AMS, including monitoring, patching, backup, and incident response. Users who do not have the appropriate role in these accounts must use the AMS Advanced change management RFC process to make changes. 

## Change management use cases
<a name="dcm-cm-use-cases"></a>

For security reasons, some changes in AMS Advanced can only be done through the change management request for change (RFC) process. The `AWSManagedServicesCloudFormationAdminRole` is restricted to actions taken through CloudFormation (CFN). For more about how to create stacks through DCM, see [Creating stacks using Direct Change mode](https://docs.aws.amazon.com/managedservices/latest/userguide/dcm-creating-stacks.html). The `AWSManagedServicesUpdateRole` is restricted for the following actions.

For example walkthroughs for each change type, including the Management \$1 Managed account \$1 Direct Change mode \$1 Enable (ct-3rd4781c2nnhp) change type, see the "Additional Information" section for the relevant change type in the *AMS Advanced Change Type Reference* [Change Types by Classification](https://docs.aws.amazon.com/managedservices/latest/ctref/classifications.html) section.

<a name="AMS-STD-007"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/dcm-change-mgmt.html)

# Creating stacks using Direct Change mode
<a name="dcm-creating-stacks"></a>

There are two requirements when launching stacks in CloudFormation using the `AWSManagedServicesCloudFormationAdminRole`, in order for the stack to be managed by AMS:
+ The template must contain an `AmsStackTransform`.
+ The stack name must start with the prefix `stack-` followed by a 17-character alphanumeric string.

**Note**  
To successfully use the `AmsStackTransform`, you must acknowledge the `CAPABILITY_AUTO_EXPAND` capability in order for CloudFormation (CFN) to create or update the stack. You do this by passing `CAPABILITY_AUTO_EXPAND` as part of your create-stack request. CFN rejects the request if this capability is not acknowledged when the `AmsStackTransform` is included in the template. The CFN console ensures that you pass this capability if the transform is in your template, but this can be missed when you interact with CFN through its APIs.  
You must pass this capability whenever you use the following CFN API calls:  
[CreateChangeSet](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateChangeSet.html)
[CreateStack](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html#API_CreateStack_RequestParameters)
[UpdateStack](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_UpdateStack.html)
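A hedged boto3 sketch of passing the capability through the API; the stack name and empty template body are placeholders:

```python
# Sketch: acknowledge CAPABILITY_AUTO_EXPAND when calling CreateStack,
# as required whenever the template contains the AmsStackTransform.
# The request builder is pure; create_stack needs AWS credentials.

def create_stack_request(stack_name, template_body):
    """Build the CreateStack parameters with CAPABILITY_AUTO_EXPAND acknowledged."""
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "Capabilities": ["CAPABILITY_AUTO_EXPAND"],
    }

def create_stack(stack_name, template_body):
    """Call CloudFormation CreateStack with the acknowledged capability."""
    import boto3

    cfn = boto3.client("cloudformation")
    return cfn.create_stack(**create_stack_request(stack_name, template_body))
```

The same `Capabilities` list applies to `update_stack` and `create_change_set` calls when the transform is present.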

When you create or update a stack with DCM, the same validations and augmentations as the CFN Ingest and Stack Update CTs are performed on the stack; for more information, see [CloudFormation Ingest Guidelines, Best Practices, and Limitations](https://docs.aws.amazon.com/managedservices/latest/appguide/cfn-author-templates.html). The exception is that the AMS default security groups (SGs) are not attached to stand-alone EC2 instances or EC2 instances in Auto Scaling groups (ASGs). When you create your CloudFormation template with stand-alone EC2 instances or ASGs, you can attach the default SGs yourself. 

**Note**  
IAM roles can now be created and managed with the `AWSManagedServicesCloudFormationAdminRole`.

The AMS default SGs have ingress and egress rules that allow the instances to launch successfully and to be accessed later through SSH or RDP by AMS operations and you. If you find that the AMS default security groups are too permissive, you can create your own SGs with more restrictive rules and attach them to your instance, as long as they still allow you and AMS operations to access the instance during incidents.

The AMS default security groups are the following:
+ SentinelDefaultSecurityGroupPrivateOnly: Can be accessed in the CFN template through this SSM parameter `/ams/${VpcId}/SentinelDefaultSecurityGroupPrivateOnly`
+ SentinelDefaultSecurityGroupPrivateOnlyEgressAll: Can be accessed in the CFN template through this SSM parameter `/ams/${VpcId}/SentinelDefaultSecurityGroupPrivateOnlyEgressAll`
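Outside of a template, you can resolve the same parameters with boto3. A sketch (the VPC ID is a placeholder; fetching requires credentials in the account):

```python
# Sketch: build and resolve the SSM parameter names for the AMS default
# security groups in a given VPC. The name builder is pure; default_sg_ids
# needs AWS credentials in the AMS account.

def default_sg_parameter_names(vpc_id):
    """Return the SSM parameter names for the two AMS default security groups."""
    return [
        f"/ams/{vpc_id}/SentinelDefaultSecurityGroupPrivateOnly",
        f"/ams/{vpc_id}/SentinelDefaultSecurityGroupPrivateOnlyEgressAll",
    ]

def default_sg_ids(vpc_id):
    """Resolve the parameters to security group IDs with SSM GetParameters."""
    import boto3

    ssm = boto3.client("ssm")
    resp = ssm.get_parameters(Names=default_sg_parameter_names(vpc_id))
    return {p["Name"]: p["Value"] for p in resp["Parameters"]}
```

Within a CFN template itself, referencing the SSM parameter path directly (as shown in the list above) avoids hard-coding the group IDs.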

## AMS Transform
<a name="dcm-cs-ams-transform"></a>

 Add a `Transform` statement to your CloudFormation template. This adds a CloudFormation macro that validates and registers the stack with AMS at launch time. 

**JSON example**

```
"Transform": {
    "Name": "AmsStackTransform",
    "Parameters": {
      "StackId": {"Ref" : "AWS::StackId"}
    }
  }
```

**YAML example**

```
Transform:
  Name: AmsStackTransform
  Parameters:
    StackId: !Ref 'AWS::StackId'
```

Also add the `Transform` statement when updating the template of an existing stack.

**JSON example**

```
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description" : "Create an SNS Topic",
  "Transform": {
    "Name": "AmsStackTransform",
    "Parameters": {
      "StackId": {"Ref" : "AWS::StackId"}
    }
  },
  "Parameters": {
    "TopicName": {
      "Type": "String",
      "Default": "HelloWorldTopic"
    }
  },
  "Resources": {
    "SnsTopic": {
      "Type": "AWS::SNS::Topic",
      "Properties": {
        "TopicName": {"Ref": "TopicName"}
      }
    }
  }
}
```

**YAML example**

```
AWSTemplateFormatVersion: '2010-09-09'
Description: Create an SNS Topic
Transform:
  Name: AmsStackTransform
  Parameters:
    StackId: !Ref 'AWS::StackId'
Parameters:
  TopicName:
    Type: String
    Default: HelloWorldTopic
Resources:
  SnsTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: !Ref TopicName
```

## Stack name
<a name="dcm-cs-stack-name"></a>

The stack name must start with the prefix `stack-` followed by a 17-character alphanumeric string. This maintains compatibility with other AMS systems that operate on AMS stack IDs. 

 The following are examples of ways to generate compatible stack IDs:

Bash:

```
echo "stack-$(env LC_CTYPE=C tr -dc 'a-z0-9' < /dev/urandom | head -c 17)"
```

Python:

```
import string
import random

'stack-' + ''.join(random.choices(string.ascii_lowercase + string.digits, k=17))
```

PowerShell:

```
"stack-" + ( -join ((0x30..0x39) + ( 0x61..0x7A) | Get-Random -Count 17  | % {[char]$_}) )
```

# Direct Change Mode use cases
<a name="dcm-use-cases"></a>

The following are use cases for Direct Change mode:

**Resource provision and management through CloudFormation**
+ Integrate existing CloudFormation-based tooling and processes.

**Ongoing resource management and updates**
+ Small atomic changes with low risk.
+ Changes that would otherwise run through a manual or automated RFC.
+ Tooling that requires native AWS API access.
+ The DCM roles can be used during the migration stage; migration teams use their permissions to create or modify stacks.
+ DCM roles can be used in the CI/CD pipeline to build new AMIs, create Amazon ECS tasks, and so on.

# AMS Advanced Developer mode
<a name="developer-mode-section"></a>

**Topics**
+ [Getting started with AMS Advanced Developer mode](developer-mode-implement.md)
+ [Security and compliance in Developer mode](developer-mode-security-and-compliance.md)
+ [Change management in Developer mode](developer-mode-change-management.md)
+ [Provisioning infrastructure in AMS Developer mode](developer-mode-provisioning.md)
+ [Detective controls in AMS Developer mode](developer-mode-detective-controls.md)
+ [Logging, monitoring, and event management in AMS Developer mode](developer-mode-logging.md)
+ [Incident management in AMS Developer mode](developer-mode-incident-management.md)
+ [Patch management in AMS Developer mode](developer-mode-patch-management.md)
+ [Continuity management in AMS Developer mode](developer-mode-continuity.md)
+ [Security and access management in AMS Developer mode](developer-mode-security-and-access.md)

AWS Managed Services (AMS) Developer mode uses elevated permissions in AMS Advanced Plus and Premium accounts to provision and update AWS resources outside of the AMS Advanced change management process. AMS Advanced Developer mode does this by leveraging native AWS API calls within the AMS Advanced Virtual Private Cloud (VPC), enabling you to design and implement infrastructure and applications in your managed environment.

When using an account that has Developer mode enabled, continuity management, patch management, and change management are provided for resources provisioned through the AMS Advanced change management process or by using an AMS Amazon Machine Image (AMI). However, these AMS management features are not offered for resources provisioned through native AWS APIs. 

You are responsible for monitoring infrastructure resources that are provisioned outside of the AMS Advanced change management process. Developer mode is compatible with both production and non-production workloads. With elevated permissions, you have an increased responsibility to ensure adherence to internal controls.

**Important**  
Resources that you create using Developer mode can be managed by AMS Advanced only if they are created using AMS Advanced change management processes.

Developer mode is one of the AMS Advanced modes you can employ. For more information, see [Modes overview](ams-modes-ug.md).

# Getting started with AMS Advanced Developer mode
<a name="developer-mode-implement"></a>

Learn about the AMS Advanced account types that work with Developer mode and how to implement Developer mode successfully.

**Topics**
+ [Before you begin](developer-mode-faqs.md)
+ [Prerequisites for Developer mode](#developer-mode-implement-prerequisites)
+ [How to implement Developer mode](#developer-mode-implement-steps)
+ [Developer mode permissions](#developer-mode-role)

# Before you begin with AMS Developer mode
<a name="developer-mode-faqs"></a>

Before implementing Developer mode, there are a few things you should know.

AMS Advanced cannot manage existing stacks or resources in a DevMode account that were created outside of the AMS Advanced change management process through requests for change (RFCs). However, while the account is in DevMode, AMS Advanced continues to manage resources provisioned through the AMS Advanced change management process with RFCs.

You cannot start with a DevMode account and later convert it to an AMS Advanced-managed application account.

## Prerequisites for AMS Developer mode
<a name="developer-mode-implement-prerequisites"></a>

The following are the prerequisites for implementing Developer mode: 
+ You must be an AMS Advanced customer with at least one onboarded AMS Advanced Plus or Premium account.
+ Any account you use must be an AMS Advanced Plus or Premium account.
+ **Multi-Account Landing Zone (MALZ)**: You must use the `AWSManagedServicesDevelopmentRole` predefined AWS Identity and Access Management (IAM) role. You request this role. The next section describes how to acquire Developer mode permissions.
+ **Single-Account Landing Zone (SALZ)**: You must use the `customer_developer_role` predefined AWS Identity and Access Management (IAM) role. You request this role. The next section describes how to acquire Developer mode permissions.

## How to implement AMS Advanced Developer mode
<a name="developer-mode-implement-steps"></a>

You implement Developer mode by requesting that your eligible AMS Advanced account be provisioned with the predefined IAM role:
+ **MALZ**: `AWSManagedServicesDevelopmentRole`
+ **SALZ**: `customer_developer_role`

You then assign the role to the relevant users in your federated network.

AMS Advanced recommends that you ensure that your use of Developer mode complies with your internal control frameworks and standards, because Developer mode creates two vectors of change: AMS Advanced change management for AMS Advanced-managed resources, and customer-managed role federation for resources that you, as our customer, manage. While AMS Advanced processes remain compliant with our declarations, customer processes and control frameworks might need to be updated.

**To implement Developer mode in your AMS Advanced account**

1. Confirm the account that you want to use with Developer mode meets the requirements listed in [Prerequisites for AMS Developer mode](#developer-mode-implement-prerequisites).

1. Submit a request for change (RFC) using the change type (CT) Management \| Managed account \| Developer mode \| Enable (managed automation). For an example of how to use this CT, see [Developer Mode \| Enable (Managed Automation)](https://docs.aws.amazon.com/managedservices/latest/ctref/management-managed-developer-mode-enable-review-required.html).

   After the CT is processed, the predefined IAM role (`AWSManagedServicesDevelopmentRole` for **MALZ**, `customer_developer_role` for **SALZ**) is provisioned in the requested account.

1. Assign the appropriate role to the users that require Developer mode access using your internal federation process.

   AMS Advanced recommends that you limit access to prevent unwanted or unapproved provisioning of, or changes to, resources.
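
After the role is provisioned, your federation solution maps the relevant users to it. The sketch below, with a placeholder account ID, shows one hedged way to compose the role ARN that a federation configuration would reference; the helper function and its names are illustrative, not part of AMS.

```python
# Sketch: compose the Developer mode role ARN that a federation solution
# maps users to. The account ID below is a placeholder, not a real account.
DEV_ROLE_NAMES = {
    "MALZ": "AWSManagedServicesDevelopmentRole",
    "SALZ": "customer_developer_role",
}

def dev_mode_role_arn(account_id: str, landing_zone: str) -> str:
    """Return the IAM role ARN to federate into for Developer mode."""
    return f"arn:aws:iam::{account_id}:role/{DEV_ROLE_NAMES[landing_zone]}"

print(dev_mode_role_arn("123456789012", "MALZ"))
# arn:aws:iam::123456789012:role/AWSManagedServicesDevelopmentRole
```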

## AMS Advanced Developer mode permissions
<a name="developer-mode-role"></a>

The predefined role (`AWSManagedServicesDevelopmentRole` for **MALZ**, `customer_developer_role` for **SALZ**), grants permission to create application infrastructure resources within the AMS Advanced VPC, including IAM roles, while restricting access to *shared service* components that are operated by AMS Advanced (for example, management hosts, domain controllers, Trend Micro EPS, bastions, and unsupported AWS services). The role also restricts access to the following AWS services: Amazon GuardDuty, AWS Organizations, AWS Directory Service APIs, and AMS Advanced logs.

While the role allows you to create additional IAM roles, the same permissions boundaries included in Developer mode access are enforced on any IAM role created by the `AWSManagedServicesDevelopmentRole`.
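
As a hedged sketch of the boundary requirement, the following shows request parameters for creating an IAM role with the MALZ permissions boundary attached (boundary and role names per this guide; the account ID, role name, and trust policy are hypothetical examples). With credentials, the actual call would be `boto3.client("iam").create_role(**create_role_params)`.

```python
import json

# Account ID is a placeholder; boundary policy name is the MALZ boundary
# named in this guide (SALZ uses ams-app-infra-permissions-boundary).
BOUNDARY_ARN = (
    "arn:aws:iam::123456789012:policy/"
    "AWSManagedServicesDevelopmentRolePermissionsBoundary"
)

# Hypothetical trust policy for an application role assumed by EC2.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

create_role_params = {
    "RoleName": "app-example-role",  # hypothetical name
    "AssumeRolePolicyDocument": json.dumps(trust_policy),
    # Roles created in Developer mode must carry the permissions boundary.
    "PermissionsBoundary": BOUNDARY_ARN,
}
print(create_role_params["PermissionsBoundary"])
```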

# Security and compliance in Developer mode
<a name="developer-mode-security-and-compliance"></a>

Security and compliance is a shared responsibility between AMS Advanced and you as our customer. AMS Advanced Developer mode shifts the shared responsibility to you for resources provisioned outside of the change management process or provisioned through change management but updated with Developer mode permissions. For more information about shared responsibility, see [AWS Managed Services](https://aws.amazon.com/managed-services/).

**Cautions:**
+ DevMode allows you and your authorized team to bypass the deny-by-default principles at the core of AMS security. The advantages (self-service, less time waiting for AMS) must be weighed against the disadvantages (anyone with access can perform unexpected and destructive actions without the knowledge of your security team). The automated change types to enable Developer mode and Direct Change mode are exposed, and any authorized person in your organization can run these CTs and enable these modes.
+ You are responsible for managing which of your users can run CTs; AMS doesn't manage CT execution permissions.

**Recommendations:**
+ **Protect**
  + Prevent access to these CTs with IAM permissions; see [Restrict permissions with IAM role policy statements](https://docs.aws.amazon.com/managedservices/latest/userguide/request-iam-user.html)
  + Prevent access to these CTs by implementing a proxy, such as an ITSM system
  + Use service control policies (SCPs) to block unwanted policies and behaviors as needed; see [AMS Preventative and Detective Controls Library](https://docs.aws.amazon.com/managedservices/latest/userguide/scp-library.html)
+ **Detect**
  + Monitor your RFCs for execution of these CTs (Developer mode \| Enable, ct-1opjmhuddw194, and Direct Change mode \| Enable, ct-3rd4781c2nnhp) and respond accordingly
  + Review or audit your accounts for the presence of the associated IAM resources to identify accounts where Developer mode or Direct Change mode has been deployed
+ **Respond**
  + Remove accounts in Developer mode as needed
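
The "Detect" recommendation above can be sketched as a simple watch filter over your RFC records. The record shape here is illustrative, not the AMS CM API schema; only the two CT IDs come from this guide.

```python
# Sketch: flag executed RFCs whose change type enables Developer mode
# (ct-1opjmhuddw194) or Direct Change mode (ct-3rd4781c2nnhp).
WATCHED_CTS = {"ct-1opjmhuddw194", "ct-3rd4781c2nnhp"}

def flag_mode_enabling_rfcs(rfcs):
    """Return the RFCs that enabled Developer mode or Direct Change mode."""
    return [r for r in rfcs if r.get("changeTypeId") in WATCHED_CTS]

# Hypothetical RFC records, e.g. as exported from your RFC monitoring.
rfcs = [
    {"rfcId": "rfc-1", "changeTypeId": "ct-1opjmhuddw194"},
    {"rfcId": "rfc-2", "changeTypeId": "ct-0000000000000"},
]
print(flag_mode_enabling_rfcs(rfcs))  # only rfc-1 is flagged
```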

## Security in Developer mode
<a name="developer-mode-security"></a>

AMS Advanced offers additional value with a prescriptive landing zone, a change management system, and access management. When you use Developer mode, the security value of AMS Advanced persists because the account uses the same configuration as standard AMS Advanced accounts, which establishes the baseline AMS Advanced security-hardened network. The network is protected by the permissions boundary enforced in the role (`AWSManagedServicesDevelopmentRole` for **MALZ**, `customer_developer_role` for **SALZ**), which restricts the user from breaking down the perimeter protections established when the account is set up.

For example, users with the role can access Amazon Route 53, but the AMS Advanced internal hosted zone is restricted. The same permissions boundary is enforced on any IAM role created by the `AWSManagedServicesDevelopmentRole`, so roles it creates also cannot break down the perimeter protections established when the account is onboarded to AMS Advanced.

## Compliance in Developer mode
<a name="developer-mode-compliance"></a>

Developer mode is compatible with both production and non-production workloads. It's your responsibility to ensure adherence to any compliance standards (for example, PHI, HIPAA, PCI), and to ensure that the use of Developer mode complies with your internal control frameworks and standards.

# Change management in Developer mode
<a name="developer-mode-change-management"></a>

Change management is the process the AMS Advanced service uses to implement requests for change. A request for change (RFC) is a request created by either you or AMS Advanced through the AMS Advanced interface to make a change to your managed environment and includes a change type (CT) ID for a particular operation. For more information, see [Change management modes](using-change-management.md). 

Change management is not enforced in AMS Advanced accounts where Developer mode permissions are granted. Users who have been granted Developer mode permission with the IAM role (`AWSManagedServicesDevelopmentRole` for **MALZ**, `customer_developer_role` for **SALZ**), can use native AWS API access to provision and make changes to resources in their AMS Advanced accounts. Users who do not have the appropriate role in these accounts must use the AMS Advanced change management process to make changes. 

**Important**  
Resources that you create using Developer mode can be managed by AMS Advanced only if they are created using AMS Advanced change management processes. Requests for changes submitted to AMS Advanced for resources created outside of the AMS Advanced change management process are rejected by AMS Advanced because they must be handled by you.

## Self-service provisioning services API restrictions
<a name="developer-mode-ssps-restrictions"></a>

All AMS Advanced self-provisioned services are supported with Developer mode. Access to self-provisioned services is subject to the limitations outlined in the respective user guide section for each. If a self-provisioned service is not available with your Developer mode role, you can request an updated role through the Developer mode change type.

The following services do not provide full access to service APIs:


**Self-Provisioned Services Restricted in Developer mode**  

| Service | Notes | 
| --- | --- | 
|  Amazon API Gateway | All API Gateway API calls are allowed except `SetWebACL`. | 
|  Application Auto Scaling | Can only register or deregister scalable targets, and put or delete a scaling policy. | 
|  AWS CloudFormation | Can't access or modify CloudFormation stacks that have a name prefixed with `mc-`. | 
|  AWS CloudTrail | Can't access or modify CloudTrail resources that have a name prefixed with `ams-` and/or `mc-`. | 
|  Amazon Cognito (User Pools) | Can't associate software tokens. Can't create user pools, user import jobs, resource servers, or identity providers. | 
|  AWS Directory Service | Only the following Directory Service actions are required by `Connect` and `WorkSpaces` services. All other Directory Service actions are denied by the Developer mode permission boundary policy: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/developer-mode-change-management.html) In single-account landing zone accounts, the boundary policy explicitly denies access to the AMS Advanced managed directory used by AMS Advanced for maintaining access to dev-mode enabled accounts. | 
|  Amazon Elastic Compute Cloud | Can't access Amazon EC2 APIs that contain the string: `DhcpOptions`, `Gateway`, `Subnet`, `VPC`, and `VPN`. Can't access or modify Amazon EC2 resources that have a tag prefixed with `AMS`, `mc`, `ManagementHostASG`, and/or `sentinel`. | 
|  Amazon EC2 (Reports) | Only view access is granted (cannot modify). Note: Amazon EC2 Reports is moving. The **Reports** menu item will be removed from the Amazon EC2 console navigation menu. To view your Amazon EC2 usage reports after it has been removed, use the AWS Billing and Cost Management console. | 
|  AWS Identity and Access Management (IAM) | Can't delete existing permission boundaries, or modify IAM user password policies. Can't create or modify IAM resources unless you are using the correct IAM role (`AWSManagedServicesDevelopmentRole` for **MALZ**, `customer_developer_role` for **SALZ**). Can't modify IAM resources that are prefixed with: `ams`, `mc`, `customer_deny_policy`, and/or `sentinel`. When creating a new IAM resource (role, user, or group), the permission boundary (**MALZ**: `AWSManagedServicesDevelopmentRolePermissionsBoundary`, **SALZ**: `ams-app-infra-permissions-boundary`) must be attached. | 
|  AWS Key Management Service (AWS KMS) | Can't access or modify AMS Advanced-managed KMS keys. | 
|  AWS Lambda | Can't access or modify AWS Lambda functions that are prefixed with `AMS`. | 
|  CloudWatch Logs | Can't access CloudWatch log streams that have a name prefixed with: `mc`, `aws`, `lambda`, and/or `AMS`. | 
|  Amazon Relational Database Service (Amazon RDS) | Can't access or modify Amazon Relational Database Service (Amazon RDS) databases (DBs) that have a name prefixed with: `mc-`. | 
|  AWS Resource Groups | Can only access `Get`, `List`, and `Search` Resource Group API actions. | 
|  Amazon Route 53 | Can't access or modify Route53 AMS Advanced-maintained resources. | 
|  Amazon S3 | Can't access Amazon S3 buckets that have a name prefixed with: `ams-*`, `ams`, `ms-a`, or `mc-a`. | 
|  AWS Security Token Service | The only security token service API allowed is `DecodeAuthorizationMessage`. | 
|  Amazon SNS | Can't access SNS topics that have a name prefixed with: `AMS-`, `Energon-Topic`, or `MMS-Topic`. | 
|  AWS Systems Manager (SSM) | Can't modify SSM parameters that are prefixed with `ams`, `mc`, or `svc`. Can't use the SSM API `SendCommand` against Amazon EC2 instances that have a tag prefixed with `ams` or `mc`. | 
|  AWS Tagging | You only have access to AWS Tagging API actions that are prefixed with `Get`. | 
|  AWS Lake Formation | The following AWS Lake Formation API actions are denied: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/developer-mode-change-management.html) | 
|  Amazon Elastic Inference | You can only call the Elastic Inference API action `elastic-inference:Connect`. This permission is included in the `customer_sagemaker_admin_policy` that is attached to the `customer_sagemaker_admin_role`. This action gives you access to the Elastic Inference accelerator. | 
|  AWS Shield | No access to any of this service's APIs or its console. | 
|  Amazon Simple Workflow Service | No access to any of this service's APIs or its console. | 

# Provisioning infrastructure in AMS Developer mode
<a name="developer-mode-provisioning"></a>

Users that don't have the Developer mode IAM role (**MALZ**: `AWSManagedServicesDevelopmentRole`, **SALZ**: `customer_developer_role`) in accounts where Developer mode is enabled are required to follow the AMS Advanced change management process, which leverages AMS Advanced AMIs. Users with the correct role can use the AMS Advanced change management system and AMS Advanced AMIs, but are not required to.

**Note**  
An AWS AMI that has not been processed through AMS Advanced workload ingestion, or that was not created in an AMS Advanced account, will not include AMS Advanced-required configurations.



# Detective controls in AMS Developer mode
<a name="developer-mode-detective-controls"></a>

This section has been redacted because it contains sensitive AMS security-related information. This information is available through the AMS console **Documentation**, or through AWS Artifact. For instructions on accessing AWS Artifact, contact your CSDM or see [Getting Started with AWS Artifact](https://aws.amazon.com/artifact/getting-started).

# Logging, monitoring, and event management in AMS Developer mode
<a name="developer-mode-logging"></a>

Logging, monitoring, and event management aren't available for resources provisioned outside of the AMS Advanced change management process, or for resources provisioned through change management and then altered by an account using Developer mode permissions.

# Incident management in AMS Developer mode
<a name="developer-mode-incident-management"></a>

No change to incident response times. Incident resolution is a best effort for resources provisioned outside the change management process, or resources provisioned through change management and then altered by an account using Developer mode permissions.

**Note**  
The AMS service level agreement (SLA) does not apply to resources created or updated outside of the AMS change management system (requests for change, or RFCs), including in Developer mode. Incidents for resources created or updated in Developer mode are automatically downgraded to P3, and AMS support is best effort.

# Patch management in AMS Developer mode
<a name="developer-mode-patch-management"></a>

Patch management is not available for resources provisioned outside of the AMS Advanced change management process, or for resources provisioned through change management and then altered by an account using Developer mode permissions. Patching times:
+ For a critical security update: Within 10 business days of release by the vendor for resources provisioned through change management and then altered by an account using Developer mode permissions.
+ For an important update: Within 2 months of release by the vendor for resources provisioned through change management and then altered by an account using Developer mode permissions.

# Continuity management in AMS Developer mode
<a name="developer-mode-continuity"></a>

Continuity management is not available for resources provisioned outside of the AMS Advanced change management process, or for resources provisioned through change management and then altered by an account using Developer mode permissions.

Environment recovery initiation time can take up to 12 hours for resources provisioned outside of the AMS Advanced change management process, or for resources provisioned through change management and then altered by an account using Developer mode permissions.

# Security and access management in AMS Developer mode
<a name="developer-mode-security-and-access"></a>

Anti-malware protection is your responsibility for resources provisioned outside of the AMS Advanced change management process, or for resources provisioned through change management and then altered by an account using Developer mode permissions. Access to Amazon Elastic Compute Cloud (Amazon EC2) instances not provisioned through AMS Advanced change management might be controlled by key pairs instead of providing federated access.

# Self-Service Provisioning mode in AMS
<a name="self-service-provisioning-section"></a>

AWS Managed Services (AMS) Self-Service Provisioning (SSP) mode provides full access to native AWS service and API capabilities in AMS managed accounts. You access services through standardized, scoped-down AWS Identity and Access Management (IAM) roles. AMS provides service requests and incident management; alerting, monitoring, logging, patching, backup, and change management are your responsibility. In many cases, Self-Service Provisioning services (SSPS) are self-managed or serverless and don't require management of certain operational tasks, such as patching. You benefit from using these services within the environment boundary defined by AMS guardrails; any IAM changes (including service-linked roles, service roles, cross-account roles, or policy updates) must be approved by AMS Operations to maintain the baseline security of the platform. You can use CloudFormation templates to automate deployment of these services, but this isn't supported for all SSP services.

**Important**  
Use SSP mode in your AWS Managed Services (AMS) accounts to access and employ AWS services, with restrictions as noted.

There are some AWS services that you can use in your AMS account without AMS management. This section describes these Self-Service Provisioning mode services (SSPS), how to add them to your AMS account, and FAQs for each.

Self-service provisioning services are offered as is, and you're responsible for managing them. AMS provides no alerts, monitoring, logging, or patching for the resources associated with those services. AMS provides IAM roles that enable you to use the service in your AMS account safely. AMS SLAs do not apply. 

For resources that you provision through self-service, AMS provides incident management, detective controls and guardrails, reporting, designated resources (Cloud Service Delivery Manager and Cloud Architect), security and access, and technical support through service requests. Additionally, where applicable, you assume responsibility for continuity management, patch management, infrastructure monitoring, and change management for resources provisioned or configured outside of the AMS change management system.

# Getting started with SSP mode in AMS
<a name="ssp-mode-get-start"></a>

Self-service provisioning is one of the AMS modes for multi-account landing zone (MALZ) that you can employ. For more information, see [Modes overview](ams-modes-ug.md).

To provide self-service provisioning capabilities, AMS has created elevated IAM roles with permission boundaries to limit unintended changes from direct AWS service access. The roles don't prevent all changes and you must adhere to your internal controls and compliance policies, and validate that all AWS services being used meet the required certifications. This is the self-service provisioning mode. For details on AWS compliance requirements, see [AWS Compliance](https://aws.amazon.com/compliance/).

To add a self-service provisioning service to your multi-account landing zone Application account, use the **Management \| AWS service \| Self-provisioned service \| Add** change type (CT), either the review-required CT or the automated CT, as instructed for the service.

**Note**  
To request that AMS provide an additional self-service provisioning service, file a service request.

# Use AMS SSP to provision Amazon API Gateway in your AMS account
<a name="api-gateway"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon API Gateway capabilities directly in your AMS managed account. [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Using the AWS Management Console you can create REST and WebSocket APIs that act as a front door for applications to access data, business logic, or functionality from your back-end services, such as workloads running on Amazon Elastic Compute Cloud ([Amazon EC2](https://aws.amazon.com/ec2/)), code running on [AWS Lambda](https://aws.amazon.com/lambda/), any web application, or real-time communication applications.

API Gateway handles all the tasks involved in accepting and processing up-to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay only for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales. To learn more, see [Amazon API Gateway](https://aws.amazon.com/api-gateway/).

## FAQ: API Gateway in AMS
<a name="set-api-gateway-faqs"></a>

**Q: How do I request access to Amazon API Gateway in my AMS account?**

Request access to API Gateway by submitting an RFC with the Management \| AWS service \| Self-provisioned service \| Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM roles to your account: `customer_apigateway_author_role` and `customer_apigateway_cloudwatch_role`. After the roles are provisioned in your account, you must onboard them in your federation solution.

**Q: What are the restrictions to using Amazon API Gateway in my AMS account?**
+ API Gateway configuration is limited to resources without `AMS-` or `MC-` prefixes to prevent any modifications to AMS infrastructure.
+ `CREATE` privileges for VPCLink are disabled in order to prevent unregulated creation of Elastic Load Balancers. If VPCLinks are required, see [Application Load Balancer \| Create](https://docs.aws.amazon.com/managedservices/latest/ctref/deployment-advanced-application-load-balancer-create.html).

**Q: What are the prerequisites or dependencies to using Amazon API Gateway in my AMS account?**

It depends on the type of API Gateway deployment. API Gateway can run as a standalone service, but it might also need access to existing services (for example, a Network Load Balancer).

# Use AMS SSP to provision Alexa for Business in your AMS account
<a name="aws-alexa-bus"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Alexa for Business capabilities directly in your AMS managed account. Alexa for Business is a service that enables your organization and employees to use Alexa to get more work done. With Alexa for Business, you can use Alexa as your intelligent assistant to be more productive in meeting rooms, at your desk, and even with the Alexa devices you already use at home or on the go. IT and facilities managers can use Alexa for Business to measure and increase the utilization of the existing meeting rooms in their workplace.

To learn more, see [Alexa for Business](https://aws.amazon.com/alexaforbusiness/).

## Alexa for Business in AWS Managed Services FAQ
<a name="set-aws-alexa-bus-faqs"></a>

**Q: How do I request access to Alexa for Business in my AMS account?**

Request access by submitting an RFC with the Management \| AWS service \| Self-provisioned service \| Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_alexa_console_role`. A `customer_alexa_device_setup_user` is also created for the Device Setup Tool provided by Alexa for Business; you can then use the Device Setup Tool to set up your devices. Once provisioned in your account, you must onboard the roles in your federation solution.

The Alexa for Business gateway enables you to connect Alexa for Business to your Cisco Webex and Poly Group Series endpoints to control meetings with your voice. The gateway software runs on your on-premises hardware and securely proxies conferencing directives from Alexa for Business to your Cisco endpoint. The gateway needs two pairs of AWS credentials to communicate with Alexa for Business. AMS provides two limited-access IAM users for your Alexa for Business gateways: `customer_alexa_gateway_installer_user` for installing the gateway, and `customer_alexa_gateway_execution_user` for operating it. Request these by submitting an RFC with the Deployment \| Advanced stack components \| Identity and Access Management (IAM) \| Create entity or policy (managed automation) change type (ct-3dpd8mdd9jn1r).

**Note**  
To generate usage reports and send them to Amazon S3, specify the Amazon S3 bucket name in the self-provisioned service RFC.

**Q: What are the restrictions to using Alexa for Business in my AMS account?**

There are no restrictions. Full functionality of Alexa for Business is available with the Alexa for Business self-provisioned service role.

**Q: What are the prerequisites or dependencies to using Alexa for Business in my AMS account?**
+ If you intend to use WPA2 Enterprise Wi-Fi to set up your shared devices, then specify this network security type in the Device Setup Tool, for which an AWS Private Certificate Authority is required.
+ AMS only creates secret keys that start with the namespace "A4B"; keys are restricted to this namespace.

**Q: What Alexa for Business functionality requires separate RFCs?**

To register an Alexa Voice Service (AVS) device with Alexa for Business, you provide access to the Alexa built-in device maker. To do this, an IAM role needs to be created in the Alexa for Business console; it can be deployed using the Management \| Other \| Other change type. This allows the AVS device maker to register and manage devices with Alexa for Business on your behalf.

# Use AMS SSP to provision Amazon WorkSpaces Applications in your AMS account
<a name="amz-app-stream-2.0"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon WorkSpaces Applications (WorkSpaces Applications) capabilities directly in your AMS managed account. WorkSpaces Applications lets you move your desktop applications to AWS, without rewriting them. You can install your applications on WorkSpaces Applications, set launch configurations, and make your applications available to users. WorkSpaces Applications offers a wide selection of virtual machine options so that you can select the instance type that best matches your application requirements, and set the auto-scale parameters so that you can easily meet the needs of your end users. WorkSpaces Applications enables you to launch applications in your own network, which means your applications can interact with your existing AWS resources.

Amazon WorkSpaces Applications enables you to quickly and easily install, test, and update your applications using the image builder. Any application that runs on Microsoft Windows Server 2012 R2, Windows Server 2016, or Windows Server 2019 is supported, and you don’t need to make any modifications. When your testing is complete, you can set application launch configurations, default user settings, and publish your image for users to access.

To learn more, see [WorkSpaces Applications](https://aws.amazon.com/appstream2/).

## WorkSpaces Applications in AWS Managed Services FAQ
<a name="set-amz-app-stream-2.0-faqs"></a>

**Q: How do I request access to WorkSpaces Applications in my AMS account?**

Request access to WorkSpaces Applications by submitting an RFC with the Management \| AWS service \| Self-provisioned service \| Add (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_appstream_console_role`.

A `customer_appstream_stream_role` is also deployed to stream applications that require users to be authenticated using their Active Directory login credentials.

Once provisioned in your account, you must onboard the roles in your federation solution.

**Q: What are the restrictions to using WorkSpaces Applications in my AMS account?**
+ The following functionality must be configured by the AMS Support team and requires specific RFCs. Instructions for requesting this functionality are in the question about separate RFCs, later in this FAQ.
  + Creating and Streaming from Interface VPC Endpoints.
  + Support for Amazon S3 endpoints for home folders and application setting persistence on a private network.
  + Creating and choosing the IAM role that will be available on all fleet streaming instances.
  + Joining WorkSpaces Applications fleets and image builders Microsoft Active Directory domains.
  + Creating WorkSpaces Applications Custom Usage Reports.
  + Custom branding is currently not supported. 

**Q: What are the prerequisites or dependencies to using WorkSpaces Applications in my AMS account?**

While submitting the RFC to onboard WorkSpaces Applications, include the Amazon S3 bucket name to be used for the WorkSpaces Applications usage report. The bucket name is added to the `customer-appstream-usagereports-policy` that is created when WorkSpaces Applications is onboarded.

**Q: What WorkSpaces Applications functionality requires separate RFCs?**
+ In order to choose an interface VPC endpoint for WorkSpaces Applications, submit a Management \$1 Other \$1 Other \$1 Update change type RFC to create a VPC endpoint in your account. For steps to create custom endpoints for WorkSpaces Applications, see [ Creating and Streaming from Interface VPC Endpoints](https://docs.aws.amazon.com/appstream2/latest/developerguide/creating-streaming-from-interface-vpc-endpoints.html) in the WorkSpaces Applications user guide. 
+ Support for Amazon S3 endpoints for home folders and application settings persistence on a private network can be configured by requesting Amazon S3 VPC endpoints with a Management | Other | Other | Create change type RFC. The RFC must include the target Amazon S3 bucket hosting the home folder contents or the application settings Amazon S3 buckets, respectively. This RFC provides WorkSpaces Applications the permissions it needs to access Amazon S3 VPC endpoints. For steps to create custom endpoints for streams, see [Using Amazon S3 VPC Endpoints for Home Folders and Application Settings Persistence](https://docs.aws.amazon.com/appstream2/latest/developerguide/managing-network-vpce-iam-policy.html) in the WorkSpaces Applications user guide.
+ To create and choose an IAM role that is available on all fleet streaming instances, submit a Deployment | Advanced stack components | Identity and Access Management (IAM) | Create entity or policy (managed automation) change type (ct-3dpd8mdd9jn1r) RFC requesting the IAM role with the required policy. The IAM role name must always start with the prefix `customer_appstream`.
+ Amazon WorkSpaces Applications fleets and image builders can be joined to domains in Microsoft Active Directory by submitting a Management | Other | Other | Update change type RFC for the service account creation in Active Directory (AD). The minimum permissions required to join Microsoft Active Directory are defined in the WorkSpaces Applications documentation at [Granting Permissions to Create and Manage Active Directory Computer Objects](https://docs.aws.amazon.com/appstream2/latest/developerguide/active-directory-admin.html#active-directory-permissions).
+ To create custom WorkSpaces Applications usage reports, submit a Management | Other | Other | Create change type RFC requesting the following:
  + Creation of the "AppStreamUsageReports" CFN stack
  + Provisioning of the `customer_appstream_usagereports_role` in the account
  + Also, provide the following details:
    + The CRON expression to schedule the crawler run. By default, it is 23:00 UTC every day.
    + The Amazon S3 bucket ARN to be used for Athena query results. This bucket name must have the prefix `aws-athena-query-results`.
    + The Amazon S3 bucket ARN for WorkSpaces Applications usage report logs.

  After the role is provisioned, onboard the role into your federation solution, log in, and then access AWS Glue and Athena to generate custom reports using the usage report role. For details about using WorkSpaces Applications usage reports, see [Create Custom Reports and Analyze WorkSpaces Applications Usage Data](https://docs.aws.amazon.com/appstream2/latest/developerguide/configure-custom-reports-analyze-usage-data.html) in the WorkSpaces Applications documentation.
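The crawler schedule mentioned above (23:00 UTC every day by default) is typically supplied as an AWS six-field cron expression. A minimal sketch, with an illustrative helper name, of how you might build one before including it in the RFC details:

```python
def daily_crawler_schedule(hour_utc: int = 23, minute: int = 0) -> str:
    """Build a six-field AWS cron schedule expression for a once-a-day run.

    The default matches the AMS default crawler schedule of 23:00 UTC
    every day. Field order: minute hour day-of-month month day-of-week year.
    """
    if not (0 <= hour_utc <= 23 and 0 <= minute <= 59):
        raise ValueError("hour_utc must be 0-23 and minute must be 0-59")
    # '?' in the day-of-week field because day-of-month is already a wildcard
    return f"cron({minute} {hour_utc} * * ? *)"
```

For example, `daily_crawler_schedule()` yields `cron(0 23 * * ? *)`, the default daily run, while `daily_crawler_schedule(6, 30)` schedules a 06:30 UTC run.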

# Use AMS SSP to provision Amazon Athena in your AMS account
<a name="athena"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Athena (Athena) capabilities directly in your AMS managed account. Athena is an interactive query service that helps you to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. You point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there’s no need for complex extract-transform-load (ETL) jobs to prepare your data for analysis. This makes it straightforward for anyone with SQL skills to quickly analyze large-scale datasets. To learn more, see [Amazon Athena](https://aws.amazon.com/athena/).

## FAQ: Athena in AMS
<a name="set-athena-faqs"></a>

**Q: How do I request access to Amazon Athena in my AMS account?**

Request access to Athena by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_athena_console_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon Athena in my AMS account?**

There are no restrictions. Full functionality of Amazon Athena is available in your AMS account.

**Q: What are the prerequisites or dependencies to using Amazon Athena in my AMS account?**

Athena has a major dependency on the AWS Glue service, as it uses the data catalog/metastore created with AWS Glue. Therefore, AWS Glue permissions are included in the successful Athena RFC.

The role `customer_athena_console_role` has a prerequisite for an Amazon S3 bucket. To create a new bucket, use the automated CT `ct-1a68ck03fn98r` (Deployment | Advanced stack components | S3 storage | Create). When you use this automated CT to create an S3 bucket for Athena, the bucket name must begin with the prefix `athena-query-results-`.
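As a quick illustration of that naming rule, a bucket name can be checked against the required prefix before submitting the RFC. The helper name below is illustrative, not part of any AMS or Athena API:

```python
# AMS naming rule for Athena result buckets created with ct-1a68ck03fn98r.
ATHENA_RESULTS_PREFIX = "athena-query-results-"

def is_valid_athena_results_bucket(name: str) -> bool:
    """Return True if the bucket name begins with the required prefix."""
    return name.startswith(ATHENA_RESULTS_PREFIX)
```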

# Use AMS SSP to provision Amazon Bedrock in your AMS account
<a name="bedrock"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Bedrock capabilities directly in your AMS managed account. Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI startups and AWS available for your use through a unified API. You can choose from a wide range of foundation models to find the model that is best suited for your use case. Amazon Bedrock also offers a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top foundation models for your use cases, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources.

With Amazon Bedrock's serverless experience, you can get started quickly, privately customize foundation models with your own data, and easily and securely integrate and deploy them into your applications using AWS tools without having to manage any infrastructure. For more information, see [Amazon Bedrock](https://aws.amazon.com/bedrock/).

## FAQ: Amazon Bedrock in AMS
<a name="set-bedrock-faqs"></a>

**Q: How do I request access to Amazon Bedrock in my AMS account?**

To request access to Amazon Bedrock, submit an RFC with the Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_bedrock_console_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon Bedrock in my AMS account?**
+ Amazon Bedrock knowledge bases aren't supported by default as part of the SSPS role due to their dependency on Amazon OpenSearch Serverless, which isn't currently supported in AMS.
+ Bedrock Studio isn't supported due to its dependency on unsupported services such as Amazon DataZone.

**Q: What are the prerequisites or dependencies to using Amazon Bedrock in my AMS account?**
+ Third-party model subscriptions that require AWS Marketplace permissions must be done by the default role (`AWSManagedServicesAdminRole` on MALZ and `Customer_ReadOnly_Role` on SALZ). This is because the default role includes AWS Marketplace permissions.
+ If data encryption is used, then you must provide the AWS KMS key ARN when you request creation of the console role. Also, the Amazon S3 bucket in use must have “bedrock” in its name.

# Use AMS SSP to provision Amazon CloudSearch in your AMS account
<a name="cloud-search"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon CloudSearch capabilities directly in your AMS managed account. Amazon CloudSearch is a managed service in the AWS Cloud that makes it cost-effective to set up, manage, and scale a search solution for your website or application. Amazon CloudSearch supports 34 languages and popular search features such as highlighting, autocomplete, and geospatial search. To learn more, see [Amazon CloudSearch](https://aws.amazon.com/cloudsearch/).

**Note**  
AWS has closed new customer access to Amazon CloudSearch, effective July 25, 2024. Amazon CloudSearch existing customers can continue to use the service as normal. AWS continues to invest in security, availability, and performance improvements for Amazon CloudSearch, but we do not plan to introduce new features.  
To understand the differences between Amazon CloudSearch and Amazon OpenSearch Service, and how you can transition to OpenSearch Service, reach out to your cloud architect (CA) for guidance. For more information on transitioning to OpenSearch Service, see [Transition from Amazon CloudSearch to Amazon OpenSearch Service](https://aws.amazon.com/blogs/big-data/transition-from-amazon-cloudsearch-to-amazon-opensearch-service/).

## Amazon CloudSearch in AWS Managed Services FAQ
<a name="set-cs-faqs"></a>

**Q: How do I request access to Amazon CloudSearch in my AMS account?**

Request access to Amazon CloudSearch by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM roles to your account: `customer_csearch_admin_role` and `customer_csearch_dev_role`. After they're provisioned in your account, you must onboard the roles in your federation solution.

**Q: What are the restrictions to using Amazon CloudSearch in my AMS account?**

Full functionality of Amazon CloudSearch is available in your AMS account, and all AMS-supported database solutions are currently supported on Amazon CloudSearch. Note that, currently, DynamoDB is the only managed AWS database solution that can't be indexed.

**Q: What are the prerequisites or dependencies to using Amazon CloudSearch in my AMS account?**

Amazon CloudSearch depends on Amazon S3 working with Identity Providers to automatically analyze input data and determine the table fields. Access to Amazon S3 is not provided with this RFC, and must be requested separately in a service request.

# Use AMS SSP to provision Amazon CloudWatch Synthetics in your AMS account
<a name="cloud-synth"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon CloudWatch Synthetics capabilities directly in your AMS managed account. You can use Amazon CloudWatch Synthetics to create 'canaries' to monitor your endpoints and APIs.

Canaries are configurable scripts, written in Node.js or Python, that run on a schedule. They create Lambda functions in your account that use Node.js or Python as a framework. Canaries work over both HTTP and HTTPS protocols. Canaries check the availability and latency of your endpoints and can store load time data and UI screenshots. They monitor your REST APIs, URLs, and website content, and they can check for unauthorized changes from phishing, code injection and cross-site scripting.

Canaries follow the same routes and perform the same actions as a customer, making it possible for you to continually verify your customer experience even when you don't have any customer traffic on your applications. By using canaries, you can discover issues before your customers do. To learn more, see [ Amazon CloudWatch: Using synthetic monitoring](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html).
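Conceptually, each canary run boils down to fetching an endpoint and recording its status and latency. A minimal plain-Python sketch of that idea (the function and its injected `fetch` callable are illustrative, not the Synthetics runtime API, which supplies its own request helpers):

```python
import time
from typing import Callable, Dict

def check_endpoint(url: str, fetch: Callable[[str], int]) -> Dict[str, object]:
    """Perform one canary-style check: record HTTP status and latency.

    `fetch` is injected so the sketch stays testable; it should issue the
    request and return the HTTP status code. A real canary script would
    make the request itself on its Node.js or Python runtime schedule.
    """
    start = time.monotonic()
    status = fetch(url)
    latency_ms = (time.monotonic() - start) * 1000.0
    return {
        "url": url,
        "status": status,
        "available": 200 <= status < 400,  # treat redirects as reachable
        "latency_ms": latency_ms,
    }
```

A scheduler would invoke this repeatedly and alarm on `available` flipping to `False` or on `latency_ms` exceeding a threshold, which is essentially what a canary's CloudWatch metrics and alarms capture.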

## Amazon CloudWatch Synthetics in AWS Managed Services FAQ
<a name="set-cws-faqs"></a>

**Q: How do I request access to Amazon CloudWatch Synthetics in my AMS account?**

Request access to Amazon CloudWatch Synthetics by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM roles to your account: `customer_cw_synthetics_console_role` and `customer_cw_synthetics_canary_lambda_role`. Once provisioned in your account, you must onboard the `customer_cw_synthetics_console_role` role in your federation solution.

**Q: What are the restrictions to using Amazon CloudWatch Synthetics in my AMS account?**

Creating roles for canaries outside of the AMS-provided service role `customer_cw_synthetics_canary_lambda_role` is prohibited. Otherwise, there are no restrictions for the use of Amazon CloudWatch Synthetics in your AMS account.

**Q: What are the prerequisites or dependencies to using Amazon CloudWatch Synthetics in my AMS account?**

Canaries create and use a default Amazon CloudWatch Synthetics S3 bucket: cw-syn-results-*accountnumber*-*default-region*
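Assuming a hypothetical helper name, the default results bucket name can be derived from the account number and Region like this:

```python
def synthetics_results_bucket(account_number: str, region: str) -> str:
    """Build the default CloudWatch Synthetics results bucket name
    from the account number and default Region."""
    return f"cw-syn-results-{account_number}-{region}"
```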

# Use AMS SSP to provision Amazon Cognito user pools in your AMS account
<a name="cognito-pool"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Cognito user pools capabilities directly in your AMS managed account. Amazon Cognito user pools provide a secure user directory that scales to hundreds of millions of users. As a fully managed service, Amazon Cognito user pools can be set up without any worries about standing up server infrastructure. This service enables you to manage a pool of end users that you can use to integrate with your internal applications, and provides you an alternative to a customized database or directory of end users for web or mobile applications. At the same time, Amazon Cognito user pools provide the full set of functionalities of a directory service, such as password policies, multi-factor authentication, password recovery, and self-sign-up. It also allows the application to federate access with other popular public identity providers such as OpenID, Facebook, Amazon, or Google.

Amazon Cognito is divided into two main products: Amazon Cognito user pools and Amazon Cognito identity pools. This section focuses on Amazon Cognito user pools, which provide access to other AWS services like Amazon S3 or DynamoDB. The service allows you to use Amazon Cognito user pools, or a third-party identity provider, to provide access to AWS services. It also provides access to AWS services using anonymous guest access. Because of the powerful nature of Amazon Cognito user pools, it is managed manually, on a case-by-case basis, in order to avoid potential security breaches in the account. To learn more, see [Amazon Cognito User Pools](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html).

## Amazon Cognito user pools in AWS Managed Services FAQ
<a name="set-cognito-pool-faqs"></a>

Common questions and answers:

**Q: How do I request access to Amazon Cognito user pools in my AMS account?**

Implementation of Amazon Cognito user pools in AMS is a two-step process:

1. Submit a Management | Other | Other | Create (ct-1e1xtak34nx76) change type and request the creation of the Amazon Cognito user pools in your AMS account. Include the following information:
   + The AWS Region.
   + A name for the Amazon Cognito user pool.
   + If you want to use Amazon Simple Email Service (Amazon SES) to send messages and notifications instead of the default internal Amazon Cognito mail service, provide an email address that is already validated for Amazon SES in the account. This address is used for the "From" and "REPLY-TO" fields of the messages. Also indicate the Region where Amazon SES was activated (us-east-1, eu-west-1, or us-west-2).
   + If you want to use SMS messages for one-time passwords and verification, indicate that as well.

1. Request user access by submitting a Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM roles to your account: `customer_cognito_admin_role` and `customer_cognito_importjob_role`. After they're provisioned in your account, you must onboard the roles in your federation solution. These roles allow you to manage the Amazon Cognito user pools, manage your users and groups in the pool, create import jobs for users, modify the notification and subscription messages, associate applications to the user pool, self-manage adding federation services to the pool, and delete already created pools.

**Q: What are the restrictions to using Amazon Cognito user pools in my AMS account?**

You can't create Amazon Cognito user pools yourself. That action requires the creation of IAM roles to leverage services used by Amazon Cognito, such as Amazon SES and Amazon Simple Notification Service (Amazon SNS).

**Q: What are the prerequisites or dependencies to using Amazon Cognito user pools in my AMS account?**

If you want to use Amazon SES to send messages and notifications by email to your user pools, you must have already activated the Amazon SES service in the account and validated the email address to be used in the "FROM" and "REPLY-TO" fields of the sent emails. For more information about validating email addresses using Amazon SES, see [Verifying Email Addresses in Amazon SES](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-email-addresses.html).

# Use AMS SSP to provision Amazon Comprehend in your AMS account
<a name="comprehend"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Comprehend capabilities directly in your AMS managed account. Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text; no machine learning experience is required. Amazon Comprehend uses machine learning to help you uncover the insights and relationships in your unstructured data. The service identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is; analyzes text using tokenization and parts of speech; and automatically organizes a collection of text files by topic. You can also use AutoML capabilities in Amazon Comprehend to build a custom set of entities or text classification models that are tailored uniquely to your organization’s needs. To learn more, see [Amazon Comprehend](https://aws.amazon.com/comprehend/).

## Amazon Comprehend in AWS Managed Services FAQ
<a name="set-comprehend-faqs"></a>

**Q: How do I request access to Amazon Comprehend in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_comprehend_console_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon Comprehend in my AMS account?**

Create New IAM Role functionality through the Amazon Comprehend console is restricted. Otherwise, full functionality of Amazon Comprehend is available in your AMS account.

**Q: What are the prerequisites or dependencies to using Amazon Comprehend in my AMS account?**

Amazon S3 and AWS Key Management Service (AWS KMS) are required in order to use Amazon Comprehend, if Amazon S3 buckets are encrypted with AWS KMS keys.

# Use AMS SSP to provision Amazon Connect in your AMS account
<a name="connect"></a>

**Note**  
After careful consideration, we decided to end support for Amazon Connect Voice ID, effective May 20, 2026. Amazon Connect Voice ID will no longer accept new customers beginning May 20, 2025. As an existing customer with an account signed up for the service before May 20, 2025, you can continue to use Amazon Connect Voice ID features. After May 20, 2026, you will no longer be able to use Amazon Connect Voice ID.

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Connect capabilities directly in your AMS managed account. Amazon Connect is an omnichannel cloud contact center that helps companies provide superior customer service at a lower cost. Amazon Connect provides a seamless experience across voice and chat for customers and agents. This includes one set of tools for skills-based routing, powerful real-time and historical analytics, and easy-to-use intuitive management tools – all with pay-as-you-go pricing.

You can create one or more virtual contact center instances in either AMS multi-account landing zone or single-account landing zone accounts. You can use existing SAML 2.0 identity providers for agent access, or use Amazon Connect native support for user life cycle management.

Additionally, you can claim toll-free or direct-dial phone numbers for each Amazon Connect instance from the Amazon Connect console. You can create rich contact flows to achieve the desired customer experience and routing using an easy-to-use graphical user interface. The contact flows can leverage AWS Lambda functions to integrate with on-premises data stores and APIs. You can also enable data streaming using Kinesis Data Streams and Firehose.

The call recordings, chat transcripts, and reports, are stored in an Amazon S3 bucket encrypted using an AWS KMS key. The contact flow logs can be saved to CloudWatch log groups.

To learn more, see [Amazon Connect](https://aws.amazon.com/connect/).

## Amazon Connect in AWS Managed Services FAQ
<a name="set-connect-faqs"></a>

**Q: How do I request access to Amazon Connect in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM roles to your account: `customer_connect_console_role` and `customer_connect_user_role`. After they're provisioned in your account, you must onboard the roles in your federation solution.

**Q: What are the restrictions to using Amazon Connect in my AMS account?**

There are no restrictions. Full functionality of Amazon Connect is available in your AMS account.

**Q: What are the prerequisites or dependencies to using Amazon Connect in my AMS account?**
+ You must create an AWS KMS Key and an Amazon S3 bucket using standard AMS RFCs; the Amazon S3 bucket is required for storing call recordings and chat transcripts.
+ If you want to integrate with Active Directory (AD), an AD Connector is required for integration between AMS-hosted Amazon Connect instances and your on-premises directory services. AD Connector can be configured in your account by requesting a Management | Other | Other RFC.
+ You can enable the following optional self-provisioned services based on your contact flow requirements.
  + **AWS Lambda**: You can use Lambda functions to extend the contact flows to leverage existing on-premises data stores or APIs. You can use the Lambda self-provisioned service to create the Lambda functions.
  + **Amazon Kinesis Data Streams**: You can create data streams to enable data streaming to external applications. You can stream contact trace records or agent events.
  + **Amazon Data Firehose**: You can create Firehose streams to deliver high-volume contact trace records to external applications.
  + **Amazon Lex**: You can leverage Amazon Lex chatbots to create smart contact flows, leveraging the same technology that powers Amazon Alexa, for a rich customer experience and automation.
**Q: How do I request to add a list of countries for outbound or inbound calls?**

To add a list of countries for outbound or inbound calls, submit a service request to AMS.

# Use AMS SSP to provision Amazon Data Firehose in your AMS account
<a name="kdf"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Data Firehose capabilities directly in your AMS managed account. Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and [Splunk](https://aws.amazon.com/kinesis/data-firehose/splunk/), enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. To learn more, see [What Is Amazon Data Firehose?](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html)

## Firehose in AWS Managed Services FAQ
<a name="set-kdf-faqs"></a>

Common questions and answers:

**Q: How do I request access to Amazon Data Firehose in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_kinesis_firehose_user_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Firehose in my AMS account?**

There are no restrictions. Full functionality of Amazon Data Firehose is available in your AMS account.

**Q: What are the prerequisites or dependencies to using Firehose in my AMS account?**

New service-linked IAM roles must be requested for each delivery stream. You can also reuse a single service-linked role for all streams by updating the role policy with the required resource permissions (including S3 buckets, KMS keys, Lambda functions, and Kinesis streams).

After you have submitted the RFC to add Firehose, an AMS operations engineer reaches out to you through a service request for the ARNs of the resources that you would like to connect with Amazon Data Firehose (for example, AWS KMS keys, S3 buckets, Lambda functions, and Kinesis streams).
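Reusing one role across streams amounts to extending the role's policy with the ARNs of each stream's destination resources. A rough sketch of assembling such a policy document (the helper name and the exact action lists are illustrative, not a definitive AMS or Firehose policy):

```python
from typing import Dict, List, Sequence

def firehose_delivery_policy(bucket_arn: str,
                             extra_resource_arns: Sequence[str] = ()) -> Dict:
    """Sketch an IAM policy document for a Firehose delivery role.

    Grants access to the S3 destination bucket, plus a second statement
    for any additional resources (KMS keys, Lambda functions, Kinesis
    streams). Action lists here are illustrative examples only.
    """
    statements: List[Dict] = [{
        "Effect": "Allow",
        "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:PutObject"],
        # Bucket-level and object-level access both need the bucket ARN.
        "Resource": [bucket_arn, f"{bucket_arn}/*"],
    }]
    if extra_resource_arns:
        statements.append({
            "Effect": "Allow",
            "Action": ["kms:Decrypt", "kms:GenerateDataKey",
                       "lambda:InvokeFunction", "kinesis:GetRecords"],
            "Resource": list(extra_resource_arns),
        })
    return {"Version": "2012-10-17", "Statement": statements}
```

To add a new stream's resources later, you would append their ARNs to the second statement's `Resource` list rather than requesting a new role.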

# Use AMS SSP to provision Amazon DevOps Guru in your AMS account
<a name="devops-guru"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon DevOps Guru capabilities directly in your AMS managed account. Amazon DevOps Guru is a fully managed operations service that makes it easy for developers and operators to improve the performance and availability of their applications. DevOps Guru lets you offload the administrative tasks associated with identifying operational issues so that you can quickly implement recommendations to improve your application. DevOps Guru creates reactive insights you can use to improve your application now. It also creates proactive insights to help you avoid operational issues that might affect your application in the future. DevOps Guru applies machine learning to analyze your operational data and application metrics and events to identify behaviors that deviate from normal operating patterns. You are notified when DevOps Guru detects an operational issue or risk. For each issue, DevOps Guru presents intelligent recommendations to address current and predicted future operational issues.

To learn more, see [What is Amazon DevOps Guru](https://docs.aws.amazon.com/devops-guru/latest/userguide/welcome.html).

## Amazon DevOps Guru in AWS Managed Services FAQ
<a name="devops-guru-faqs"></a>

**Q: How do I request access to Amazon DevOps Guru in my AMS account?**

To request access, submit a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_devopsguru_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon DevOps Guru in my AMS account?**

There are no restrictions. Full functionality of Amazon DevOps Guru is available in your AMS account.

**Q: What are the prerequisites or dependencies to using Amazon DevOps Guru in my AMS account?**

There are no prerequisites. DevOps Guru leverages the following AWS services: Amazon CloudWatch Logs, RDS Insights, AWS X-Ray, AWS Lambda, and AWS CloudTrail.

# Use AMS SSP to provision Amazon DocumentDB (with MongoDB compatibility) in your AMS account
<a name="document-db"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon DocumentDB (with MongoDB compatibility) capabilities directly in your AMS managed account. Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. Amazon DocumentDB gives you the performance, scalability, and availability you need when operating mission-critical MongoDB workloads at scale. Amazon DocumentDB implements the Apache 2.0 open source MongoDB 3.6 API by emulating the responses that a MongoDB client expects from a MongoDB server, allowing you to use your existing MongoDB drivers and tools with Amazon DocumentDB. In Amazon DocumentDB, the storage and compute are decoupled, allowing each to scale independently, and you can increase the read capacity to millions of requests per second by adding up to 15 low latency read replicas, regardless of the size of your data. Amazon DocumentDB is designed for 99.99% availability and replicates six copies of your data across three AWS Availability Zones (AZs). You can use AWS Database Migration Service (DMS) for free (for six months) to migrate your on-premises or Amazon Elastic Compute Cloud (Amazon EC2) MongoDB databases to Amazon DocumentDB with virtually no downtime. To learn more, see [Amazon DocumentDB (with MongoDB compatibility)](https://aws.amazon.com/documentdb/).

## Amazon DocumentDB in AWS Managed Services FAQ
<a name="set-document-db-faqs"></a>

**Q: How do I request access to Amazon DocumentDB in my AMS account?**

Request access to Amazon DocumentDB by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_documentdb_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon DocumentDB in my AMS account?**

Amazon DocumentDB requires Amazon RDS-specific permissions. Because AMS fully manages Amazon RDS, the IAM role for Amazon DocumentDB includes some restrictions to actions on Amazon RDS. The following restrictions apply:
+ Access to the `DeleteDBInstance` and `DeleteDBCluster` APIs has been restricted. To use those deletion APIs, submit an RFC with the Management | Advanced stack components | Identity and Access Management (IAM) | Update entity or policy (managed automation) change type (ct-27tuth19k52b4).
+ You can't add or remove tags from Amazon RDS instances.
+ You can't make your Amazon DocumentDB instance public.

**Q: What are the prerequisites or dependencies to using Amazon DocumentDB in my AMS account?**

Amazon S3 and AWS KMS are required in order to use Amazon DocumentDB, if Amazon S3 buckets are encrypted with AWS KMS keys.

# Use AMS SSP to provision Amazon DynamoDB in your AMS account
<a name="dynamo-db"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon DynamoDB (DynamoDB) capabilities directly in your AMS managed account. Amazon DynamoDB is a key value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-region, multi-active database with built-in security, backup and restore, and in-memory caching for internet scale applications. To learn more, see [Amazon DynamoDB](https://aws.amazon.com/dynamodb/).

Amazon DynamoDB Accelerator (DAX) is a write-through caching service that is designed to simplify the process of adding a cache to DynamoDB tables. DAX is intended for applications that require high-performance reads.

## DynamoDB in AWS Managed Services FAQ
<a name="set-dynamo-db-faqs"></a>

**Q: How do I request access to DynamoDB and DAX in my AMS account?**

Request access to DynamoDB and DAX by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM roles and policies to your account:
+ DynamoDB role name: `customer_dynamodb_role`

  DAX service role name: `customer_dax_service_role`
+ DynamoDB policy name: `customer_dynamodb_policy`

  DAX service policy: `customer_dax_service_policy`

Once provisioned in your account, you must onboard the `customer_dynamodb_role` in your federation solution.

**Q: What are the restrictions to using DynamoDB in my AMS account?**

All DynamoDB functionality is supported, including DynamoDB Accelerator (DAX).

When creating alarms for any given table, the alarm name must be prefixed with "customer-"; for example, `customer-employee-table-high-put-latency`.

When creating an Amazon SNS topic for DynamoDB, it must be named: `dynamodb`.

To delete the Amazon SNS topic created by DynamoDB, submit a Management | Other | Other | Update change type RFC.
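The naming conventions above can be sketched as a small validation helper. This is illustrative only; the function names are not part of AMS:

```python
ALARM_PREFIX = "customer-"       # required prefix for alarm names on DynamoDB tables
REQUIRED_SNS_TOPIC = "dynamodb"  # required name for the DynamoDB SNS topic

def is_valid_alarm_name(name: str) -> bool:
    """Return True if a CloudWatch alarm name follows the AMS prefix convention."""
    return name.startswith(ALARM_PREFIX)

def is_valid_topic_name(name: str) -> bool:
    """Return True if the SNS topic name matches the required DynamoDB topic name."""
    return name == REQUIRED_SNS_TOPIC

print(is_valid_alarm_name("customer-employee-table-high-put-latency"))  # True
print(is_valid_topic_name("dynamodb"))                                  # True
```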

**Q: What are the prerequisites or dependencies to using DynamoDB in my AMS account?**

There are no prerequisites or dependencies to use DynamoDB in your AMS account.

# Use AMS SSP to provision Amazon Elastic Container Registry in your AMS account
<a name="ecr"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Elastic Container Registry (Amazon ECR) capabilities directly in your AMS managed account. Amazon Elastic Container Registry is a fully managed [Docker](https://aws.amazon.com/docker/) container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with [Amazon Elastic Container Service (Amazon ECS)](https://aws.amazon.com/ecs/), simplifying your development to production workflow. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications. Integration with AWS Identity and Access Management (IAM) provides resource-level control of each repository. With Amazon ECR, there are no upfront fees or commitments. You pay only for the amount of data you store in your repositories and data transferred to the Internet.

To learn more, see [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/).

## Amazon Elastic Container Registry in AWS Managed Services FAQ
<a name="set-ecr-faqs"></a>

**Q: How do I request access to Amazon ECR in my AMS account?**

Request access to Amazon ECR by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM roles to your account: `customer_ecr_console_role` and `customer_ecr_poweruser_instance_profile`, with associated IAM policies `customer_ecr_console_policy` and `customer_ecr_poweruser_instance_profile_policy`, respectively. Once provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon ECR in my AMS account?**

There are restrictions around AMS namespaces for the use of Amazon ECR in your AMS account. Container images may not be prefixed with "AMS-" or "Sentinel-".

**Q: What are the prerequisites or dependencies to using Amazon ECR in my AMS account?**

There are no prerequisites or dependencies to use Amazon ECR in your AMS account.

# Use AMS SSP to provision EC2 Image Builder in your AMS account
<a name="ec2-image-build"></a>

Use AMS Self-Service Provisioning (SSP) mode to access EC2 Image Builder capabilities directly in your AMS managed account. EC2 Image Builder is a fully managed AWS service that makes it easier to automate the creation, management, and deployment of customized, secure, and up-to-date "golden" server images that are pre-installed and pre-configured with software and settings to meet specific IT standards.

 You can use the AWS Management Console, AWS CLI, or APIs to create custom images in your AWS account. When you use the AWS Management Console, the Amazon EC2 Image Builder wizard guides you through steps to:
+ Provide starting artifacts
+ Add and remove software
+ Customize settings and scripts
+ Run selected tests
+ Distribute images to AWS Regions

The images you build are created in your account and can be configured for operating system patches on an ongoing basis. To learn more, see [EC2 Image Builder](https://aws.amazon.com/image-builder/).

## EC2 Image Builder in AWS Managed Services FAQ
<a name="set-ec2-image-build-faqs"></a>

Common questions and answers:

**Q: How do I request access to EC2 Image Builder in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. Through this RFC, the following IAM role is provisioned in your account: `customer_ec2_imagebuilder_role`. Once provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions for EC2 Image Builder?**

AMS does not support the use of Service Defaults for infrastructure configuration. You can create a new infrastructure configuration or use an existing one.

AMS does not currently support the creation of container recipes.

**Q: What are the prerequisites or dependencies to enable EC2 Image Builder?**
+ EC2 Image Builder service-linked role: You don't need to manually create a service-linked role. When you create your first Image Builder resource in the AWS Management Console, the AWS CLI, or the AWS API, Image Builder creates the service-linked role for you.
+ Instances used to build images and run tests using Image Builder must have access to the Systems Manager service. The SSM Agent will be installed on the source image if it is not already present, and it will be removed before the image is created.
+ AWS IAM: The IAM role that you associate with your instance profile must have permissions to run the build and test components included in your image. The following IAM role policies must be attached to the IAM role that is associated with the instance profiles: `EC2InstanceProfileForImageBuilder` and `AmazonSSMManagedInstanceCore`. The IAM role name should contain the `*imagebuilder*` keyword. 
+ If you configure logging, the instance profile specified in your infrastructure configuration must have `s3:PutObject` permissions for the target bucket (`arn:aws:s3:::{bucket-name}/*`). For example:

  ```json
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Action": [
                  "s3:PutObject"
              ],
              "Resource": "arn:aws:s3:::{bucket-name}/*"
          }
      ]
  }
  ```

+ Create an SNS topic named 'imagebuilder' to receive alerts and notifications from EC2 Image Builder.

# Use AMS SSP to provision Amazon ECS on AWS Fargate in your AMS account
<a name="amz-ecs-fargate"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon ECS on AWS Fargate capabilities directly in your AMS managed account. AWS Fargate is a technology that you can use with Amazon ECS to run containers (see [Containers on AWS](https://aws.amazon.com/what-are-containers)) without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing.

To learn more, see [Amazon ECS on AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html).

## Amazon ECS on Fargate in AWS Managed Services FAQ
<a name="set-amz-ecs-fargate-faqs"></a>

**Q: How do I request access to Amazon ECS on Fargate in my AMS account?**

Request access to Amazon ECS on Fargate by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM roles to your account: `customer_ecs_fargate_console_role` (if no existing IAM role is provided to associate the ECS policy to), `customer_ecs_fargate_events_service_role`, `customer_ecs_task_execution_service_role`, `customer_ecs_codedeploy_service_role`, and `AWSServiceRoleForApplicationAutoScaling_ECSService`. Once provisioned in your account, you must onboard the roles in your federation solution.

**Q: What are the restrictions to using Amazon ECS on Fargate in my AMS account?**
+ Amazon ECS task monitoring and logging are considered your responsibility because container-level activities occur above the hypervisor, and logging capabilities are limited by Amazon ECS on Fargate. As a user of Amazon ECS on Fargate, we recommend that you take the necessary steps to enable logging on your Amazon ECS tasks. For more information, see [Enabling the awslogs Log Driver for Your Containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html#enable_awslogs).
+ Security and malware protection at the container level are also considered to be your responsibility. Amazon ECS on Fargate doesn't include Trend Micro or preconfigured network security components.
+ This service is available for both multi-account landing zone and single-account landing zone AMS accounts.
+ Amazon ECS [Service Discovery](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html) is restricted by default in the self-provisioned role since elevated permissions are required to create Route 53 private hosted zones. To enable Service Discovery on a service, submit a Management | Other | Other | Update change type. To provide the information required to enable Service Discovery for your Amazon ECS Service, see the [Service Discovery manual](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html).
+ AMS doesn't currently manage or restrict the images used to deploy containers onto Amazon ECS on Fargate. You can deploy images from Amazon ECR, Docker Hub, or any other private image repository. Therefore, we advise that you not deploy public or unsecured images, because they may result in malicious activity on the account.
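Because task logging is your responsibility on Fargate, a common approach is the `awslogs` log driver in the task's container definition. A minimal sketch follows; the task name, image, log group, and Region are illustrative placeholders, not values from AMS:

```python
# Container definition fragment for a Fargate task that sends stdout/stderr
# to CloudWatch Logs via the awslogs log driver. All names here are examples.
container_definition = {
    "name": "app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",  # hypothetical image
    "essential": True,
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/app",      # hypothetical log group (must exist)
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "ecs",
        },
    },
}

print(container_definition["logConfiguration"]["logDriver"])  # awslogs
```

This fragment would go in the `containerDefinitions` list of a task definition registered for the cluster.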

**Q: What are the prerequisites or dependencies to using Amazon ECS on Fargate in my AMS account?**
+ The following are dependencies of Amazon ECS on Fargate; however, no additional action is required to enable these services with your self-provisioned role:
  + CloudWatch logs
  + CloudWatch events
  + CloudWatch alarms
  + CodeDeploy
  + App Mesh
  + Cloud Map
  + Route 53
+ Depending on your use case, the following are resources that Amazon ECS relies on, and may require prior to using Amazon ECS on Fargate in your account:
  + Security group to be used with the Amazon ECS service. You can use the Deployment | Advanced stack components | Security Group | Create (auto) (ct-3pc215bnwb6p7) change type, or, if your security group requires special rules, use Deployment | Advanced stack components | Security Group | Create (managed automation) (ct-1oxx2g2d7hc90). Note: The security group you select with Amazon ECS has to be created specifically for Amazon ECS, where the Amazon ECS service or cluster resides. You can learn more in the **Security Group** section at [Setting Up with Amazon ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html) and [Security in Amazon Elastic Container Service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/security.html).
  + Application load balancer (ALB), network load balancer (NLB), classic load balancer (ELB) for load balancing between tasks.
  + Target Groups for ALBs.
  + App mesh resources (for instance, Virtual Routers, Virtual Services, Virtual Nodes) to integrate with your Amazon ECS Cluster.
+ Currently, there is no way for AMS to automatically mitigate risk associated with supporting security groups' permissions when created outside of the standard AMS change types. We recommend that you request a specific security group for use with your Fargate cluster to limit the possibility of using a security group not designated for the use with Amazon ECS.

# Use AMS SSP to provision Amazon EKS on AWS Fargate in your AMS account
<a name="amz-eks"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon EKS on AWS Fargate capabilities directly in your AMS managed account. AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers (to understand containers, see [What are Containers?](https://aws.amazon.com/what-are-containers)). With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimize cluster packing.

Amazon Elastic Kubernetes Service (Amazon EKS) integrates Kubernetes with AWS Fargate by using controllers that are built by AWS using the upstream, extensible model provided by Kubernetes. These controllers run as part of the Amazon EKS-managed Kubernetes control plane and are responsible for scheduling native Kubernetes pods onto Fargate. The Fargate controllers include a new scheduler that runs alongside the default Kubernetes scheduler in addition to several mutating and validating admission controllers. When you start a pod that meets the criteria for running on Fargate, the Fargate controllers running in the cluster recognize, update, and schedule the pod onto Fargate.

To learn more, see [Amazon EKS on AWS Fargate Now Generally Available](https://aws.amazon.com/blogs/aws/amazon-eks-on-aws-fargate-now-generally-available/) and [Amazon EKS Best Practices Guide for Security](https://aws.github.io/aws-eks-best-practices/security/docs/) (includes "Recommendations" such as "Review and revoke unnecessary anonymous access" and more).

**Tip**  
AMS has a change type, Deployment | Advanced stack components | Identity and Access Management (IAM) | Create OpenID Connect provider (ct-30ecvfi3tq4k3), that you can use with Amazon EKS. For an example, see [Identity and Access Management (IAM) | Create OpenID Connect Provider](https://docs.aws.amazon.com/managedservices/latest/ctref/deployment-advanced-identity-and-access-management-iam-create-openid-connect-provider.html).

## Amazon EKS on AWS Fargate in AWS Managed Services FAQ
<a name="set-amz-eks-faqs"></a>

**Q: How do I request access to Amazon EKS on Fargate in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM roles to your account:
+ `customer_eks_fargate_console_role`.

  After it's provisioned in your account, you must onboard the role in your federation solution.
+ These service roles give Amazon EKS on Fargate permission to call other AWS services on your behalf:
  + `customer_eks_pod_execution_role`
  + `customer_eks_cluster_service_role`

**Q: What are the restrictions to using Amazon EKS on Fargate in my AMS account?**
+ Creating [managed](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) or [self-managed](https://docs.aws.amazon.com/eks/latest/userguide/worker.html) EC2 node groups is not supported in AMS. If you have a requirement for using EC2 worker nodes, reach out to your AMS Cloud Service Delivery Manager (CSDM) or Cloud Architect (CA).
+ AMS does not include Trend Micro or preconfigured network security components for container images. You are expected to manage your own image scanning services to detect malicious container images prior to deployment.
+ EKSCTL is not supported due to CloudFormation interdependencies.
+ During cluster creation, you have permissions to disable cluster control plane logging. For more information, see [Amazon EKS control plane logging](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html). We advise that you enable all important API, Authentication, and Audit logging on cluster creation.
+ During cluster creation, cluster endpoint access for Amazon EKS clusters defaults to public; for more information, see [Amazon EKS cluster endpoint access control](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html). We recommend that Amazon EKS endpoints be set to private. If endpoints are required for public access, then it's a best practice to set them to public only for specific CIDR ranges.
+ AMS doesn't have a method to force and restrict images used to deploy to containers on Amazon EKS Fargate. You can deploy images from Amazon ECR, Docker Hub, or any other private image repository. Therefore, there is a risk of deploying a public image that might perform malicious activity on the account.
+ Deploying EKS clusters through the cloud development kit (CDK) or CloudFormation Ingest isn't supported in AMS.
+ You must create the required security group using [ct-3pc215bnwb6p7 Deployment | Advanced stack components | Security group | Create](https://docs.aws.amazon.com/managedservices/latest/ctref/deployment-advanced-security-group-create.html) and reference it in the manifest file for ingress creation. This is because the role `customer-eks-alb-ingress-controller-role` isn't authorized to create security groups.

**Q: What are the prerequisites or dependencies to using Amazon EKS on Fargate in my AMS account?**

In order to use the service, the following dependencies must be configured:
+ For authenticating against the service, both KUBECTL and aws-iam-authenticator must be installed; for more information, see [Managing cluster authentication](https://docs.aws.amazon.com/eks/latest/userguide/managing-auth.html).
+ Kubernetes relies on a concept called "service accounts." To use the service accounts functionality inside a Kubernetes cluster on Amazon EKS, submit a Management | Other | Other | Update RFC with the following inputs:
  + [Required] Amazon EKS Cluster name
  + [Required] Amazon EKS Cluster namespace where service account (SA) will be deployed.
  + [Required] Amazon EKS Cluster SA name.
  + [Required] IAM Policy name and permissions/document to be associated.
  + [Required] IAM Role name being requested.
  + [Optional] OpenID Connect provider URL. For more information, see
    +  [ Enabling IAM roles for service accounts on your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html)
    +  [ Introducing fine-grained IAM roles for service accounts](https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/)
+ We recommend that Config rules be configured and monitored for
  + Public cluster endpoints
  + Disabled API logging

  It is your responsibility to monitor and remediate these Config rules.

If you want to deploy an [ALB Ingress controller](https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html), submit a Management | Other | Other | Update RFC to provision the necessary IAM role to be used with the ALB Ingress Controller pod. The following inputs are required for creating IAM resources to be associated with the ALB Ingress Controller (include these with your RFC):
+ [Required] Amazon EKS Cluster name
+ [Optional] OpenID Connect provider URL
+ [Optional] Amazon EKS Cluster namespace where the application load balancer (ALB) ingress controller service will be deployed. [default: kube-system]
+ [Optional] Amazon EKS Cluster service account (SA) name. [default: aws-load-balancer-controller]
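The inputs above (OIDC provider URL, namespace, and service account name) come together in the IAM trust policy that lets a Kubernetes service account assume a role through the cluster's OIDC provider (IAM roles for service accounts). A minimal sketch, with an illustrative account ID and provider URL:

```python
import json

def irsa_trust_policy(account_id: str, oidc_provider: str,
                      namespace: str, service_account: str) -> dict:
    """Build the trust policy for IAM roles for service accounts (IRSA):
    the role may only be assumed via the cluster's OIDC provider by the
    named Kubernetes service account."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc_provider}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {"StringEquals": {
                f"{oidc_provider}:sub":
                    f"system:serviceaccount:{namespace}:{service_account}"
            }},
        }],
    }

# Hypothetical values; substitute your cluster's real OIDC provider URL.
policy = irsa_trust_policy("111122223333",
                           "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE",
                           "kube-system", "aws-load-balancer-controller")
print(json.dumps(policy, indent=2))
```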

If you want to enable envelope secrets encryption in your cluster (which we recommend), provide the KMS key IDs you intend to use in the description field of the RFC to add the service (Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct)). To learn more about envelope encryption, see [Amazon EKS adds envelope encryption for secrets with AWS KMS](https://aws.amazon.com/about-aws/whats-new/2020/03/amazon-eks-adds-envelope-encryption-for-secrets-with-aws-kms/).

# Use AMS SSP to provision Amazon EMR in your AMS account
<a name="amz-emr"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon EMR capabilities directly in your AMS managed account. Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. With Amazon EMR you can run petabyte-scale analysis at less than half the cost of traditional on-premises solutions and over 3x faster than standard Apache Spark. For short-running jobs, you can spin up and spin down clusters and pay per second for the instances used. For long-running workloads, you can create highly available clusters that automatically scale to meet demand.

You can create one or more Amazon EMR clusters in either AMS multi-account landing zone or single-account landing zone accounts to support both transient and persistent Amazon EMR clusters. You can also enable Kerberos authentication to authenticate users from your on-premises Active Directory domain.

You can use multiple data stores with Amazon EMR clusters to support use-case-specific Hadoop tools and libraries. Amazon EMR clusters can be created using On-Demand or Spot Instances, and you can configure autoscaling to manage capacity and reduce cost.

Cluster log files can be archived to an Amazon S3 bucket for logging and debugging. You can also access the web interfaces hosted on the Amazon EMR cluster to support Hadoop administration or notebook experiences.

To learn more, see [Amazon EMR](https://aws.amazon.com/emr/).

## Amazon EMR in AWS Managed Services FAQ
<a name="set-amz-emr-faqs"></a>

**Q: How do I request access to Amazon EMR in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM roles to your account:
+ `customer_emr_cluster_instance_profile`
+ `customer_emr_cluster_autoscaling_role`
+ `customer_emr_console_role`
+ `customer_emr_cluster_service_role`

After it's provisioned in your account, you must onboard the `customer_emr_console_role` in your federation solution.

**Q: What are the restrictions to using Amazon EMR in my AMS account?**

While creating an Amazon EMR on EC2 cluster from the AWS console, we advise you to use the **Create Cluster – Advanced** option. Amazon EMR clusters must be created with the tag key **"for-use-with-amazon-emr-managed-policies"** and value **"true"**. Select the following configurations in the **Security** options:
+ Select custom roles for your cluster:
  + EMR Role: `customer_emr_cluster_service_role`
  + EC2 Instance Profile: `customer_emr_cluster_instance_profile`
  + Auto Scaling Role: `customer_emr_cluster_autoscaling_role`
+ EC2 Security groups:
  + Master : ams-emr-master-security-group
  + Core & Task : ams-emr-worker-security-group
  + Service Access : ams-emr-serviceaccess-security-group

**Q: What are the prerequisites or dependencies to using Amazon EMR in my AMS account?**

AMS creates default security groups for the Amazon EMR master, worker, and services nodes.

The launch templates and security groups to be used with Amazon EMR clusters must have the tag key **"for-use-with-amazon-emr-managed-policies"** with value **"true"**.

The default Amazon EMR cluster instance profile enables access to resources, such as Amazon S3 buckets and DynamoDB tables, whose names contain "emr". You can request additional IAM policies to use any additional resources with Amazon EMR. The following resource ARNs can be used with Amazon EMR jobs using the `customer_emr_cluster_instance_profile`:
+ `arn:aws:dynamodb:*:*:table/*emr*`
+ `arn:aws:kinesis:*:*:stream/*emr*`
+ `arn:aws:sns:*:*:*emr*`
+ `arn:aws:sqs:*:*:*emr*`
+ `arn:aws:sqs:*:*:AWS-ElasticMapReduce-*`
+ `arn:aws:sdb:*:*:domain:*emr*`
+ `arn:aws:s3:::*emr*`
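In these patterns, `*` is a wildcard, so access is granted only to resources whose names contain "emr" (or, for one SQS pattern, the `AWS-ElasticMapReduce-` prefix). A quick way to check whether a planned resource name would be covered, sketched here against a subset of the patterns:

```python
from fnmatch import fnmatchcase

# Subset of the ARN patterns granted by the default instance profile.
EMR_ARN_PATTERNS = [
    "arn:aws:dynamodb:*:*:table/*emr*",
    "arn:aws:sqs:*:*:AWS-ElasticMapReduce-*",
    "arn:aws:s3:::*emr*",
]

def is_accessible(arn: str) -> bool:
    """Return True if the ARN matches any granted wildcard pattern."""
    return any(fnmatchcase(arn, pattern) for pattern in EMR_ARN_PATTERNS)

print(is_accessible("arn:aws:s3:::customer-emr-logs"))  # True
print(is_accessible("arn:aws:s3:::customer-data"))      # False
```

If a required bucket or table name doesn't match (like `customer-data` above), request an additional IAM policy as described in the paragraph preceding the list.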

If Kerberos authentication is required for the Amazon EMR cluster:
+ Provide the realm name to be used for each kerberized Amazon EMR cluster and the on-premises Active Directory IP addresses.
+ Infrastructure requirements:

  **Multi-Account Landing Zone (MALZ)**: Submit an RFC to create a new Managed application account or a new VPC in an existing application account.

  **Single-Account Landing Zone (SALZ)**: Submit an RFC to create a new subnet in your VPC.
+ Configure the incoming trust for the cluster's realm on the on-premises Active Directory.
+ Submit an RFC to configure DNS zones for the realm in the Managed AD.
+ Realm configuration:

  **MALZ**: Submit a Management | Other | Other | Update (ct-0xdawir96cy7k) RFC to update the VPC DHCP option set to use the realm name for the domain name suffix.

  **SALZ**: Submit a Management | Other | Other | Update (ct-0xdawir96cy7k) RFC to generate a new Amazon EMR AMI that uses the specific realm for the domain name suffix.

To deploy Amazon EMR Studio, the role `customer_emr_cluster_service_role` has a prerequisite for an Amazon Simple Storage Service bucket. To create the bucket, use the automated CT `ct-1a68ck03fn98r` (Deployment | Advanced stack components | S3 storage | Create). When you use this automated CT to create an Amazon S3 bucket for Amazon EMR, the bucket name must begin with the prefix `customer-emr-*`, and you must create the bucket in the same AWS Region as the Amazon EMR cluster.

# Use AMS SSP to provision Amazon EventBridge in your AMS account
<a name="amz-eventbridge"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon EventBridge capabilities directly in your AMS managed account. Amazon EventBridge is a serverless event bus service that makes it easy to connect your applications with data from a variety of sources. EventBridge delivers a stream of real-time data from your own applications, Software-as-a-Service (SaaS) applications, and AWS services and routes that data to targets such as AWS Lambda. You can set up routing rules to determine where to send your data to build application architectures that react in real time to all of your data sources. EventBridge allows you to build event driven architectures, which are loosely coupled and distributed.

To learn more, see [Amazon EventBridge](https://aws.amazon.com/eventbridge/).
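Routing rules match incoming events against event patterns. The sketch below shows a typical pattern shape and a deliberately simplified matcher to illustrate the semantics (real EventBridge matching supports many more operators); the pattern and event are illustrative, not from AMS:

```python
# An event pattern: each field lists the values it accepts; nested dicts
# match nested event fields. (Simplified model of EventBridge matching.)
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["terminated"]},
}

def matches(pattern: dict, event: dict) -> bool:
    """Naive matcher: every pattern field must accept the event's value."""
    for key, allowed in pattern.items():
        value = event.get(key)
        if isinstance(allowed, dict):
            if not isinstance(value, dict) or not matches(allowed, value):
                return False
        elif value not in allowed:
            return False
    return True

event = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"state": "terminated", "instance-id": "i-0123456789abcdef0"},
}
print(matches(event_pattern, event))  # True
```

Fields absent from the pattern (like `instance-id` here) are ignored, which is why patterns can stay small while events carry full detail.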

## EventBridge in AWS Managed Services FAQ
<a name="set-amz-eventbridge-faqs"></a>

**Q: How do I request access to EventBridge in my AMS account?**

Request access to EventBridge by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM roles to your account: `customer_eventbridge_role` and `customer_eventbridge_scheduler_execution_role`. After they're provisioned in your account, you must onboard the roles in your federation solution.

The execution role, `customer_eventbridge_scheduler_execution_role` is an IAM role that EventBridge Scheduler assumes to interact with other AWS services on your behalf. The permission policies attached to this role grant EventBridge Scheduler access to invoke targets.

**Note**  
By default, EventBridge Scheduler uses AWS owned keys for EventBridge to encrypt the data. To use a customer managed key for EventBridge to encrypt the data, submit the RFC using the Management | AWS service | Self-provisioned service | [Add (managed automation)](https://docs.aws.amazon.com/managedservices/latest/ctref/management-aws-self-provisioned-service-add-review-required.html) change type (ct-3qe6io8t6jtny) for service provisioning.

**Q: What are the restrictions to using EventBridge in my AMS account?**

You must submit AMS RFCs to create the following resources: service roles to trigger the batch job, SQS queue, CodeBuild, CodePipeline, and SSM commands.

**Q: What are the prerequisites or dependencies to using EventBridge in my AMS account?**

You must request an EventBridge service role with an RFC using the Deployment | Advanced stack components | Identity and Access Management (IAM) | Create entity or policy (managed automation) change type (ct-3dpd8mdd9jn1r) prior to using EventBridge to trigger other AWS resources, such as AWS Batch, Lambda, Amazon SNS, Amazon SQS, or Amazon CloudWatch Logs resources. Specify the services to invoke when requesting your service role. To learn about permissions required to invoke targets, see [Using Resource-Based Policies for EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/resource-based-policies-eventbridge.html).

EventBridge is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in EventBridge. CloudTrail must be enabled and allowed to store the log files to S3 buckets. Note: All AMS accounts have CloudTrail enabled, so no action is needed.

**Q: The role `customer_eventbridge_scheduler_execution_role` has a prerequisite for an AWS Key Management Service key (optional, if used for encryption). How do I adopt AWS KMS CMKs for data encryption at rest/in transit?**

By default, EventBridge Scheduler encrypts event metadata and message data that it stores under an AWS owned key (encryption at rest). EventBridge Scheduler also encrypts data that passes between EventBridge Scheduler and other services using Transport Layer Security (TLS) (encryption in transit).

If your specific use case requires that you control and audit the encryption keys that protect your data on EventBridge Scheduler, you can use a customer managed key.

You must submit an RFC using the Management | AWS service | Self-provisioned service | Add (managed automation) change type, prior to using Amazon EventBridge, to onboard the AWS KMS permission.

# Use AMS SSP to provision Amazon Forecast in your AMS account
<a name="forecast"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Forecast (Forecast) capabilities directly in your AMS managed account. Amazon Forecast is a fully managed service that uses machine learning to deliver highly accurate forecasts.

**Note**  
AWS has closed new customer access to Amazon Forecast, effective July 29, 2024. Amazon Forecast existing customers can continue to use the service as normal. AWS continues to invest in security, availability, and performance improvements for Amazon Forecast, but AWS does not plan to introduce new features.  
If you want to use Amazon Forecast, reach out to your CSDM so that they can guide you further regarding how to [Transition your Amazon Forecast usage to Amazon SageMaker Canvas](https://aws.amazon.com/blogs/machine-learning/transition-your-amazon-forecast-usage-to-amazon-sagemaker-canvas/).

Based on the same technology used at Amazon.com, Forecast uses machine learning to combine time series data with additional variables to build forecasts. Forecast requires no machine learning experience to get started. You only need to provide historical data, plus any additional data that you believe may impact your forecasts. For example, the demand for a particular color of a shirt may change with the seasons and store location. This complex relationship is hard to determine on its own, but machine learning is ideally suited to recognize it. Once you provide your data, Forecast will automatically examine it, identify what is meaningful, and produce a forecasting model capable of making predictions that are up to 50% more accurate than looking at time series data alone.

To learn more, see [Amazon Forecast](https://aws.amazon.com/forecast/).

## Amazon Forecast in AWS Managed Services FAQ
<a name="set-forecast-faqs"></a>

**Q: How do I request access to Forecast in my AMS account?**

Request access to Amazon Forecast by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_forecast_admin_role`. Once provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Forecast in my AMS account?**

The default S3 bucket access only allows you to access buckets with the naming pattern 'customer-forecast-\*'. If you have your own naming convention for data buckets, discuss bucket naming and the related access setup with your Cloud Architect (CA). For example:
+ You can define a specific Amazon Forecast service role, with naming like 'AmazonForecast-ExecutionRole-\*', and associate the proper S3 bucket access. See the service role AmazonForecast-ExecutionRole-Admin and the IAM policy customer\_forecast\_default\_s3\_access\_policy in the IAM console.
+ You might need to grant the related S3 bucket access to your IAM federation role. See the IAM policy customer\_forecast\_default\_s3\_access\_policy in the IAM console.

**Q: What are the prerequisites or dependencies to using Forecast in my AMS account?**
+ The proper Amazon S3 bucket(s) must be created before you use Forecast. In particular, the default S3 bucket access applies only to buckets with the naming pattern 'customer-forecast-\*'.
+ If you want to use naming patterns on S3 buckets other than 'customer-forecast-\*', you must create a new service role with S3 access permissions on those buckets:

  1. Create a new service role named 'AmazonForecast-ExecutionRole-*suffix*'.

  1. Create a new IAM policy, similar to customer\_forecast\_default\_s3\_access\_policy, and associate it with the new service role and the related federation admin role (for example, 'customer\_forecast\_admin\_role').
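The default naming restriction above is a simple wildcard match, which can be sketched as follows (bucket names are hypothetical examples):

```python
import fnmatch

# Default grant: only buckets matching this pattern are accessible.
DEFAULT_PATTERN = "customer-forecast-*"

def allowed_by_default(bucket_name: str) -> bool:
    """Return True if the bucket falls under the default S3 access grant."""
    return fnmatch.fnmatch(bucket_name, DEFAULT_PATTERN)

print(allowed_by_default("customer-forecast-demand-data"))  # True
print(allowed_by_default("analytics-team-data"))  # False: needs a custom service role
```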

**Q: How can I enhance data security while using Amazon Forecast?**
+ For data encryption at rest, you can use AWS KMS to provision a customer managed CMK to protect data stored in Amazon S3:
  + Enable default encryption on the bucket with the provisioned key, and set up the bucket policy to require AWS KMS data encryption when putting data.
  + Add the Amazon Forecast service role 'AmazonForecast-ExecutionRole-\*' and the federation admin role (for example, 'customer\_forecast\_admin\_role') as AWS KMS key users.
+ For data encryption in transit, require the HTTPS protocol for object transfers in the Amazon S3 bucket policy.
+ For further restrictions on access control, enable a bucket policy that limits access to the approved Amazon Forecast service role 'AmazonForecast-ExecutionRole-\*' and admin role (for example, 'customer\_forecast\_admin\_role').
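The two bucket-policy controls described above (require HTTPS in transit, require SSE-KMS at rest) can be sketched as a single policy document; the bucket name is a hypothetical placeholder:

```python
import json

BUCKET = "customer-forecast-demand-data"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny any request not made over HTTPS (encryption in transit).
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {
            # Deny uploads that do not request SSE-KMS (encryption at rest).
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        },
    ],
}
print(json.dumps(policy, indent=2))
```

You would attach this document as the bucket policy, in addition to allowing the relevant service and admin roles as KMS key users.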

**Q: What are the best practices while using Amazon Forecast?**
+ Have a good understanding of your data classification practices, and map out the related data security needs when using S3 buckets with Amazon Forecast.
+ For Amazon S3 bucket configuration, we strongly advise that you enforce HTTPS in your S3 bucket policy.
+ Be aware that the admin role 'customer\_forecast\_admin\_role' has permissive access (Get/Delete/Put S3 objects) on Amazon S3 buckets with the naming pattern 'customer-forecast-\*'. NOTE: If you require fine-grained access control for multiple teams, follow these practices:
  + Define team-based access IAM identities (roles/users) with least-privilege access to the related Amazon S3 buckets.
  + Create team- or project-based AWS KMS CMKs, and grant proper access to the corresponding IAM identities (user access and 'AmazonForecast-ExecutionRole-*team/project*').
  + Set up S3 bucket default encryption with the created AWS KMS CMKs.
  + Enforce HTTPS for S3 API traffic in the S3 bucket policy.
  + Enforce an S3 bucket configuration that allows only the related IAM identities (user access and 'AmazonForecast-ExecutionRole-*team/project*') to access the buckets.
+ If you want to use the 'customer\_forecast\_admin\_role' for general purposes, consider the points listed previously to protect your S3 buckets.

**Q: Where is compliance information about Amazon Forecast?**

See the [AWS services Compliance Program](https://aws.amazon.com/compliance/services-in-scope/).

# Use AMS SSP to provision Amazon FSx in your AMS account
<a name="amz-fsx"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon FSx capabilities directly in your AMS managed account. Amazon FSx provides fully managed third-party file systems. Amazon FSx provides you with the native compatibility of third-party file systems with feature sets for workloads such as Windows-based storage, high-performance computing (HPC), machine learning, and electronic design automation (EDA). Amazon FSx automates the time-consuming administration tasks such as hardware provisioning, software configuration, patching, and backups. Amazon FSx integrates the file systems with cloud-native AWS services, making them even more useful for a broader set of workloads.

Amazon FSx provides you with two file systems to choose from: Amazon FSx for Windows File Server for Windows-based applications and Amazon FSx for Lustre for compute-intensive workloads. To learn more, see [Amazon FSx](https://aws.amazon.com/fsx/).

## Amazon FSx in AWS Managed Services FAQ
<a name="set-amz-fsx-faqs"></a>

**Q: How do I request access to Amazon FSx in my AMS account?**

Request access to Amazon FSx by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_fsx_admin_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon FSx in my AMS account?**

There are no restrictions. Full functionality of the service is available.

**Q: What are the prerequisites or dependencies to using Amazon FSx in my AMS account?**

There are no prerequisites. However, for advanced configurations such as Multi-AZ, you must install and manage the DFS Replication and DFS Namespaces services. For more information, see [Deploying Multi-AZ File Systems](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/multi-az-deployments.html).

**Q: How do I integrate my Amazon FSx file system with my multi-account landing zone Managed AD?**

When creating an Amazon FSx file system, you can specify your MALZ Managed AD as the 'AWS Managed Microsoft Active Directory' for Windows Authentication. For more information, see [Using Amazon FSx with AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/fsx-aws-managed-ad.html).

You must also share the Managed AD with the application account first. Do this by submitting an RFC with the Management | Directory Service | Directory | Share directory change type (ct-369odosk0pd9w).

**Q: Which users belong in the **AWS Delegated FSx Administrators** group?**

Only IT file server administrators. This group has **Full Access** privileges across all file shares.

**Q: Should I use the default file share, **share**, which is created when the FSx system is provisioned?**

No, we don't recommend using the default file share, **share**, as provisioned. It grants **Full Access** to **Everyone**, which violates the principle of least privilege. Instead, create smaller, custom file shares that match your business needs.

**Q: How can I create custom file shares for specific organizations in my business?**

See [File Shares](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/managing-file-shares.html) for instructions on creating custom file shares. Restrict access on each file share using the principle of least privilege.

# Use AMS SSP to provision Amazon FSx for OpenZFS in your AMS account
<a name="amz-fsx-open-zfs"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon FSx for OpenZFS capabilities directly in your AMS managed account. FSx for OpenZFS is a fully managed file storage service that makes it easy to move data residing in on-premises ZFS or other Linux-based file servers to AWS without changing your application code or how you manage data. It offers highly reliable, scalable, performant, and feature-rich file storage built on the open-source OpenZFS file system, providing the familiar features and capabilities of OpenZFS file systems with the agility, scalability, and simplicity of a fully managed AWS service. For developers building cloud-native applications, it offers simple, high-performance storage with rich capabilities for working with data.

FSx for OpenZFS file systems are broadly accessible from Linux, Windows, and macOS compute instances and containers using the industry-standard NFS protocol (v3, v4.0, v4.1, v4.2). Powered by AWS Graviton processors and the latest AWS disk and networking technologies (including AWS Scalable Reliable Datagram networking and the AWS Nitro system), FSx for OpenZFS delivers up to 1 million IOPS with latencies of hundreds of microseconds. With complete support for OpenZFS features like instant point-in-time snapshots and data cloning, FSx for OpenZFS makes it easy for you to replace your on-premises file servers with AWS storage that provides familiar file system capabilities and eliminates the need to perform lengthy qualifications and change or re-architect existing applications or tools. And, by combining the power of OpenZFS data management capabilities with the high performance and cost efficiency of the latest AWS technologies, FSx for OpenZFS enables you to build and run high-performance, data-intensive applications.

As a fully managed service, FSx for OpenZFS makes it easy to launch, run, and scale fully managed file systems on AWS that replace the file servers you run on premises while helping to provide better agility and lower costs. With FSx for OpenZFS, you no longer have to worry about setting up and provisioning file servers and storage volumes, replicating data, installing and patching file server software, detecting and addressing hardware failures, and manually performing backups. It also provides rich integration with other AWS services, such as AWS Identity and Access Management (IAM), AWS Key Management Service (AWS KMS), Amazon CloudWatch, and AWS CloudTrail.

To learn more, see [Amazon FSx](https://aws.amazon.com/fsx/).

## Amazon FSx for OpenZFS in AWS Managed Services FAQ
<a name="set-amz-fsx-open-zfs-faqs"></a>

**Q: How do I request access to use FSx for OpenZFS in my AMS account?**

Request access to Amazon FSx for OpenZFS by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_fsx_ontap_admin_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using FSx for OpenZFS in my AMS account?**

Replacing the security group on the Amazon FSx elastic network interfaces (ENIs) requires you to submit Management | Other | Other | Update RFCs since security groups are a critical perimeter for the AMS environment. That is the only restriction.

**Q: What are the prerequisites or dependencies to using FSx for OpenZFS in my AMS account?**

You must first have Amazon FSx provisioned in your account; see [Use AMS SSP to provision Amazon FSx in your AMS account](amz-fsx.md). There are no other prerequisites.

# Use AMS SSP to provision Amazon FSx for NetApp ONTAP in your AMS account
<a name="amz-fsx-netapp-ontap"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon FSx for NetApp ONTAP capabilities directly in your AMS managed account. Amazon FSx for NetApp ONTAP is a fully managed service that provides highly reliable, scalable, performant, and feature-rich file storage built on NetApp's popular ONTAP file system. It provides the familiar features, performance, capabilities, and APIs of NetApp file systems with the agility, scalability, and simplicity of a fully managed AWS service.

Amazon FSx for NetApp ONTAP provides feature-rich, fast, and flexible shared file storage that’s broadly accessible from Linux, Windows, and macOS compute instances running in AWS or on premises. FSx for ONTAP offers high-performance SSD storage with sub-millisecond latencies, and makes it quick and easy to manage your data by enabling you to snapshot, clone, and replicate your files with the click of a button. It also automatically tiers your data to lower-cost, elastic storage, eliminating the need to provision or manage capacity and allowing you to achieve SSD levels of performance for your workload while only paying for SSD storage for a small fraction of your data. It provides highly available and durable storage with fully managed backups and support for cross-region disaster recovery, and supports popular data security and anti-virus applications that make it even easier to protect and secure your data. For customers who use NetApp ONTAP on-premises, FSx for ONTAP is an ideal solution to migrate, back up, or burst your file-based applications from on-premises to AWS without the need to change your application code or how you manage your data.

As a fully managed service, Amazon FSx for NetApp ONTAP makes it simple to launch and scale reliable, performant, and secure shared file storage in the cloud. With Amazon FSx for NetApp ONTAP, you no longer have to worry about setting up and provisioning file servers and storage volumes, replicating data, installing and patching file server software, detecting and addressing hardware failures, managing failover and failback, and manually performing backups. It also provides rich integration with other AWS services, such as AWS Identity and Access Management, Amazon WorkSpaces, AWS Key Management Service, and AWS CloudTrail.

To learn more, see [Amazon FSx](https://aws.amazon.com/fsx/).

## Amazon FSx for NetApp ONTAP in AWS Managed Services FAQ
<a name="set-amz-fsx-netapp-ontap-faqs"></a>

**Q: How do I request access to Amazon FSx for NetApp ONTAP in my AMS account?**

Request access to Amazon FSx for NetApp ONTAP by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_fsx_ontap_admin_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon FSx for NetApp ONTAP in my AMS account?**

Replacing the security group on the Amazon FSx for NetApp ONTAP elastic network interfaces (ENIs) requires you to submit Management | Other | Other | Update RFCs since security groups are a critical perimeter for the AMS environment. That is the only restriction.

**Q: What are the prerequisites or dependencies to using Amazon FSx for NetApp ONTAP in my AMS account?**

You must first have Amazon FSx provisioned in your account; see [Use AMS SSP to provision Amazon FSx in your AMS account](amz-fsx.md). There are no other prerequisites.

# Use AMS SSP to provision Amazon Inspector Classic in your AMS account
<a name="inspector"></a>

**Note**  
End of support notice: On May 20, 2026, AWS will end support for Amazon Inspector Classic. After May 20, 2026, you will no longer be able to access the Amazon Inspector Classic console or Amazon Inspector Classic resources. Amazon Inspector Classic will no longer be available to new accounts, and accounts that have not completed an assessment in the last six months. For all other accounts, access will remain valid until May 20, 2026, after which you will no longer be able to access the Amazon Inspector Classic console or Amazon Inspector Classic resources. For more information, see [Amazon Inspector Classic end of support](https://docs.aws.amazon.com/inspector/v1/userguide/inspector-migration.html).

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Inspector Classic capabilities directly in your AMS managed account. Amazon Inspector Classic is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector Classic automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector Classic produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports, which are available via the Amazon Inspector Classic console or API. To learn more, see [Amazon Inspector Classic](https://docs.aws.amazon.com/inspector/v1/userguide/inspector_introduction.html).

## Amazon Inspector in AWS Managed Services FAQ
<a name="set-inspector-faqs"></a>

**Q: How do I request access to Amazon Inspector Classic in my AMS account?**

Request access to Amazon Inspector Classic by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the `customer_inspector_admin_role` IAM role to your account. The role includes the AWS-managed AmazonInspectorFullAccess policy. Once provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon Inspector Classic in my AMS account?**

There are no restrictions. Full functionality of Amazon Inspector Classic is available in your AMS account.

**Q: What are the prerequisites or dependencies to using Amazon Inspector Classic in my AMS account?**

There are no prerequisites or dependencies to use Amazon Inspector Classic in your AMS account.

## Use the new Amazon Inspector in AMS
<a name="inspector-v2-ams"></a>

You can now use the new Amazon Inspector in your AMS account.

For Amazon Inspector Classic, the `customer-inspector-admin-role-ssm-inspector-agent-policy` and `AmazonInspectorFullAccess` policies were required. The SSPS role `customer-inspector-admin-role` has since been updated to include an additional policy, `AmazonInspector2FullAccess`, which grants the API permissions for the new version of Amazon Inspector.

# Use AMS SSP to provision Amazon Kendra in your AMS account
<a name="kendra"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Kendra capabilities directly in your AMS managed account. Amazon Kendra is an intelligent search service that uses natural language processing and advanced machine learning algorithms to return specific answers to search questions from your data. Unlike traditional keyword-based search, Amazon Kendra uses its semantic and contextual understanding capabilities to determine if a document is relevant to a search query. Amazon Kendra returns specific answers to questions, so your experience is close to interacting with a human expert. Amazon Kendra is highly scalable, capable of meeting performance demands, is tightly integrated with other AWS services such as Amazon S3 and Amazon Lex, and offers enterprise-grade security. To learn more, see [Amazon Kendra](https://docs.aws.amazon.com/kendra/latest/dg/what-is-kendra.html).

## Amazon Kendra in AWS Managed Services FAQ
<a name="set-kendra-faqs"></a>

**Q: How do I request access to Amazon Kendra in my AMS account?**

To request access to Amazon Kendra, submit an RFC with the Management | AWS service | Self-provisioned service | Add (ct-3qe6io8t6jtny) change type. This RFC provisions the `customer_kendra_console_role` IAM role to your account. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon Kendra in my AMS account?**

There are no restrictions. Full functionality of Amazon Kendra is available in your AMS account.

**Q: What are the prerequisites or dependencies to using Amazon Kendra in my AMS account?**

There are no prerequisites or dependencies to get started with Amazon Kendra. However, depending on your specific use case, you might require access to other AWS services.

# Use AMS SSP to provision Amazon Kinesis Data Streams in your AMS account
<a name="kds"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Kinesis Data Streams (KDS) capabilities directly in your AMS managed account. Amazon Kinesis Data Streams is a highly scalable, and durable, real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events. The data collected is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more. To learn more, see [Amazon Kinesis Data Streams](https://aws.amazon.com/kinesis/data-streams/).

## Kinesis Data Streams in AWS Managed Services FAQ
<a name="set-kds-faqs"></a>

Common questions and answers:

**Q: How do I request access to Amazon Kinesis Data Streams in my AMS account?**

Request access to Amazon Kinesis Data Streams by submitting an RFC with the Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_kinesis_data_streaming_user_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon Kinesis Data Streams in my AMS account?**

There are no restrictions. Full functionality of Amazon Kinesis Data Streams is available in your AMS account.

**Q: What are the prerequisites or dependencies to using Amazon Kinesis Data Streams in my AMS account?**

There are no prerequisites or dependencies to use Amazon Kinesis Data Streams in your AMS account.

# Use AMS SSP to provision Amazon Kinesis Video Streams in your AMS account
<a name="kvs"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Kinesis Video Streams (KVS) capabilities directly in your AMS managed account. Amazon Kinesis Video Streams helps you to securely stream video from connected devices to AWS for analytics, machine learning (ML), playback, and other processing. Kinesis Video Streams automatically provisions, and elastically scales, all the infrastructure needed to ingest streaming video data from millions of devices. It also durably stores, encrypts, and indexes video data in your streams, and allows you to access your data through easy-to-use APIs. Kinesis Video Streams enables you to playback video for live and on-demand viewing, and quickly build applications that take advantage of computer vision and video analytics through integration with Amazon Rekognition Video, and libraries for ML frameworks such as Apache MxNet, TensorFlow, and OpenCV. To learn more, see [Amazon Kinesis Video Streams](https://aws.amazon.com/kinesis/video-streams/).

## Amazon Kinesis Video Streams in AWS Managed Services FAQ
<a name="set-kvs-faqs"></a>

Common questions and answers:

**Q: How do I request access to Amazon Kinesis Video Streams in my AMS account?**

Request access to Amazon Kinesis Video Streams by submitting an RFC with the Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_kinesis_video_streaming_user_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon Kinesis Video Streams in my AMS account?**

There are no restrictions. Full functionality of Amazon Kinesis Video Streams is available in your AMS account.

**Q: What are the prerequisites or dependencies to using Amazon Kinesis Video Streams in my AMS account?**

There are no prerequisites or dependencies to use Amazon Kinesis Video Streams in your AMS account.

# Use AMS SSP to provision Amazon Lex in your AMS account
<a name="amz-lex"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Lex capabilities directly in your AMS managed account. Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions. With Amazon Lex, the same deep learning technologies that power Amazon Alexa are now available to any developer, enabling you to quickly and easily build sophisticated, natural language conversational bots, or chatbots. To learn more, see [Amazon Lex](https://aws.amazon.com/lex/).

## Amazon Lex in AWS Managed Services FAQ
<a name="set-amz-lex-faqs"></a>

Common questions and answers:

**Q: How do I request access to Amazon Lex in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_lex_author_role`. Once provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon Lex in my AMS account?**

Amazon Lex integration with Lambda is limited to Lambda functions without an "AMS-" prefix, in order to prevent any modifications to AMS infrastructure.

**Q: What are the prerequisites or dependencies to using Amazon Lex in my AMS account?**

There are no prerequisites or dependencies to use Amazon Lex in your AMS account.

# Use AMS SSP to provision Amazon MQ in your AMS account
<a name="mq-comp"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon MQ capabilities directly in your AMS managed account. Amazon MQ is a managed message broker service for Apache ActiveMQ that helps you to set up and operate message brokers in the cloud. Message brokers allow different software systems, often using different programming languages and on different platforms, to communicate and exchange information. Amazon MQ reduces your operational load by managing the provisioning, setup, and maintenance of ActiveMQ, a popular open-source message broker. You connect your current applications to Amazon MQ using industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. Using standards means that, in most cases, there's no need to rewrite any messaging code when you migrate to AWS. To learn more, see [What Is Amazon MQ?](https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/welcome.html)

## Amazon MQ in AWS Managed Services FAQ
<a name="set-mq-comp-faqs"></a>

Common questions and answers:

**Q: How do I request access to Amazon MQ in my AMS account?**

Utilization of Amazon MQ in your AMS account is a two-step process:

1. Provision the Amazon MQ Broker. To do this, submit a CFN template, with the Amazon MQ Broker included, through an RFC with the Deployment | Ingestion | Stack from CloudFormation Template | Create change type (ct-36cn2avfrrj9v), or submit an RFC with the Management | Other | Other | Create (ct-1e1xtak34nx76) change type requesting that the Amazon MQ Broker be provisioned in your account.

1. Access the Amazon MQ console. After the Amazon MQ Broker is provisioned, obtain access to the Amazon MQ console by submitting an RFC with the Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_mq_console_role`.

After the role is provisioned in your account, you must onboard it in your federation solution. 
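For step 1, a minimal CloudFormation sketch of a single-instance ActiveMQ broker might look like the following. The broker name, engine version, and Secrets Manager path are hypothetical placeholders, and your actual template must fit your AMS network (subnets, security groups):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal sketch of an Amazon MQ (ActiveMQ) broker for CFN ingestion.
Resources:
  ExampleBroker:
    Type: AWS::AmazonMQ::Broker
    Properties:
      BrokerName: example-broker            # hypothetical name
      EngineType: ACTIVEMQ
      EngineVersion: "5.17.6"               # use a currently supported version
      HostInstanceType: mq.t3.micro
      DeploymentMode: SINGLE_INSTANCE
      PubliclyAccessible: false
      AutoMinorVersionUpgrade: true
      Users:
        - Username: mqadmin                 # hypothetical admin user
          Password: "{{resolve:secretsmanager:example/mq:SecretString:password}}"
```

Using a Secrets Manager dynamic reference for the broker password avoids embedding credentials in the template you submit with the RFC.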

**Q: What are the restrictions to using Amazon MQ in my AMS account?**

Full functionality of Amazon MQ is available in your AMS account; however, provisioning the Amazon MQ Broker is not available through the policy due to the elevated permissions required. See the previous question for details on how to provision an Amazon MQ Broker in your account.

**Q: What are the prerequisites or dependencies to using Amazon MQ in my AMS account?**

There are no prerequisites or dependencies to use Amazon MQ in your AMS account.

# Use AMS SSP to provision Amazon Managed Service for Apache Flink in your AMS account
<a name="kda"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Managed Service for Apache Flink capabilities directly in your AMS managed account. Managed Service for Apache Flink is the easiest way to analyze streaming data, gain actionable insights, and respond to your business and customer needs in real time. Amazon Managed Service for Apache Flink reduces the complexity of building, managing, and integrating streaming applications with other AWS services. SQL users can easily query streaming data or build entire streaming applications using templates and an interactive SQL editor. Java developers can quickly build sophisticated streaming applications using open source Java libraries and AWS integrations to transform and analyze data in real time. Amazon Managed Service for Apache Flink takes care of everything required to run your real-time applications continuously and scales automatically to match the volume and throughput of your incoming data. With Amazon Managed Service for Apache Flink, you only pay for the resources your streaming applications consume. There is no minimum fee or setup cost. To learn more, see [Amazon Managed Service for Apache Flink](https://aws.amazon.com/kinesis/data-analytics/).

## Managed Service for Apache Flink in AWS Managed Services FAQ
<a name="set-kda-faqs"></a>

Common questions and answers:

**Q: How do I request access to Amazon Managed Service for Apache Flink in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_kinesis_analytics_application_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon Managed Service for Apache Flink in my AMS account?**
+ Configurations are limited to resources without 'AMS-' or 'MC-' prefixes to prevent any modifications to AMS infrastructure.
+ Permissions to create or delete Kinesis Data Streams or Firehose delivery streams have been removed from this policy; a separate policy grants those actions.

**Q: What are the prerequisites or dependencies to using Amazon Managed Service for Apache Flink in my AMS account?**

There are a few dependencies:
+ Amazon Managed Service for Apache Flink requires that your Kinesis Data Streams or Firehose delivery streams be created before you configure an application with Managed Service for Apache Flink.
+ The resource-based policy permissions should indicate a particular input data source.
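As an illustration of the second dependency, one way to scope permissions to a particular input data source is a policy statement whose `Resource` is the stream's ARN. The Region, account ID, and stream name below are hypothetical, and the helper is a sketch, not an AMS-provided tool:

```python
import json

def kinesis_input_policy(region: str, account_id: str, stream_name: str) -> dict:
    """Build an IAM policy that limits Kinesis read access to one input stream."""
    stream_arn = f"arn:aws:kinesis:{region}:{account_id}:stream/{stream_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadInputStream",
                "Effect": "Allow",
                # Read-only actions an application needs on its input stream.
                "Action": [
                    "kinesis:DescribeStream",
                    "kinesis:GetShardIterator",
                    "kinesis:GetRecords",
                    "kinesis:ListShards",
                ],
                "Resource": stream_arn,
            }
        ],
    }

policy = kinesis_input_policy("us-east-1", "111122223333", "ExampleInputStream")
print(json.dumps(policy, indent=2))
```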

# Use AMS SSP to provision Amazon Managed Streaming for Apache Kafka in your AMS account
<a name="msk"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Managed Streaming for Apache Kafka (Amazon MSK) capabilities directly in your AMS managed account. Amazon Managed Streaming for Apache Kafka is a fully managed AWS streaming data service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data, without needing to become an expert in operating Apache Kafka clusters. Amazon MSK manages the provisioning, configuration, and maintenance of Apache Kafka clusters and Apache ZooKeeper nodes for you. Amazon MSK also shows key Apache Kafka performance metrics in the AWS Console.

Amazon MSK provides multiple levels of security for your Apache Kafka clusters, including VPC network isolation, AWS IAM for control-plane API authorization, encryption at rest, TLS encryption in transit, TLS-based certificate authentication, and SASL/SCRAM authentication secured by AWS Secrets Manager. To learn more, see [Amazon MSK](https://aws.amazon.com/msk/).

## Amazon MSK in AWS Managed Services FAQ
<a name="set-msk-faqs"></a>

Common questions and answers:

**Q: How do I request access to Amazon MSK in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM policies and role to your account:
+ `customer-msk-admin-policy.json`
+ `AmazonMSKFullAccess`
+ `customer-msk-admin-role.json`

After the role is provisioned in your account, you must onboard it in your federation solution.

**Q: What are the restrictions to using Amazon MSK?**

For Amazon MSK to deliver broker logs to the destinations that you configure, ensure that the `AmazonMSKFullAccess` policy is attached to your IAM role; this policy already provides the required full access permissions.

**Q: What are the prerequisites or dependencies to using Amazon MSK?**

Before creating your MSK cluster, you must have a VPC and subnets within that VPC. By default, AMS covers this as part of [AMS VPC creation](https://docs.aws.amazon.com/msk/latest/developerguide/msk-create-cluster.html).

To learn about the limitations of Amazon MSK, see [Amazon MSK Limits](https://docs.aws.amazon.com/msk/latest/developerguide/limits.html).

# Use AMS SSP to provision Amazon Managed Service for Prometheus in your AMS account
<a name="pro"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Managed Service for Prometheus (AMP) capabilities directly in your AMS managed account. Amazon Managed Service for Prometheus is a serverless, Prometheus-compatible monitoring service for container metrics that makes it easier to securely monitor container environments at scale. With Amazon Managed Service for Prometheus, you can use the same open-source Prometheus data model and query language that you use today to monitor the performance of your containerized workloads, and also enjoy improved scalability, availability, and security without having to manage the underlying infrastructure.

Amazon Managed Service for Prometheus automatically scales the ingestion, storage, and querying of operational metrics as workloads scale up and down. It integrates with AWS security services to enable fast and secure access to data. For more information, see [What is Amazon Managed Service for Prometheus?](https://docs.aws.amazon.com/prometheus/latest/userguide/what-is-Amazon-Managed-Service-Prometheus.html)

## Amazon Managed Service for Prometheus in AWS Managed Services FAQ
<a name="set-pro-faqs"></a>

Common questions and answers:

**Q: How do I request access to Amazon Managed Service for Prometheus in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer-prometheus-console-role`. After it's provisioned in your account, you must onboard the `customer-prometheus-console-role` role in your federation solution.

**Q: What are the restrictions to using Amazon Managed Service for Prometheus in my AMS account?**

All features are supported.

**Q: What are the prerequisites or dependencies to using Amazon Managed Service for Prometheus in my AMS account?**

There are no prerequisites or dependencies to get started with Amazon Managed Service for Prometheus. However, depending on your specific use case, you might require access to other AWS services.

# Use AMS SSP to provision Amazon Personalize in your AMS account
<a name="personalize"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Personalize capabilities directly in your AMS managed account. Amazon Personalize is a machine learning service that makes it easy for developers to create individualized recommendations for customers using their applications.

Machine learning is being increasingly used to improve customer engagement by powering personalized product and content recommendations, tailored search results, and targeted marketing promotions. However, developing the machine-learning capabilities necessary to produce these sophisticated recommendation systems has been beyond the reach of most organizations today due to the complexity. Amazon Personalize allows developers with no prior machine learning experience to easily build sophisticated personalization capabilities into their applications, using machine learning technology perfected from years of use on Amazon.com.

With Amazon Personalize, you provide an activity stream from your application – clicks, page views, signups, purchases, and so forth – as well as an inventory of the items you want to recommend, such as articles, products, videos, or music. You can also choose to provide Amazon Personalize with additional demographic information from your users such as age, or geographic location. Amazon Personalize will process and examine the data, identify what is meaningful, select the right algorithms, and train and optimize a personalization model that is customized for your data. All data analyzed by Amazon Personalize is kept private and secure, and only used for your customized recommendations. You can start serving personalized recommendations via a simple API call. You pay only for what you use, and there are no minimum fees and no upfront commitments.

To learn more, see [Amazon Personalize](https://aws.amazon.com/personalize/).

## Amazon Personalize in AWS Managed Services FAQ
<a name="personalize-faqs"></a>

**Q: How do I request access to Amazon Personalize in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type, and specify which S3 bucket contains the data that Amazon Personalize uses to generate the recommendations. This RFC provisions the following IAM roles to your account: `customer_personalize_console_role` and `customer_personalize_service_role`.
+ Once the `customer_personalize_console_role` is provisioned in your account, you must onboard the role in your federation solution. You can also attach the `customer_personalize_console_policy` to an existing role other than `Customer_ReadOnly_Role`.
+ After the `customer_personalize_service_role` is provisioned to your account, you can refer to its ARN when creating a new dataset group.

At this time, AMS Operations also deploys this service role policy in your account: `aws_code_pipeline_service_role_policy`.

**Q: What are the restrictions to using Amazon Personalize in my AMS account?**

Amazon Personalize configuration is limited to resources without 'ams-' or 'mc-' prefixes, to prevent any modifications to AMS infrastructure.

**Q: What are the prerequisites or dependencies to using Amazon Personalize in my AMS account?**
+ If the S3 bucket where data is stored is encrypted, you must provide the KMS key ID so that the role used by Amazon Personalize can be allowed to decrypt the bucket contents.

  Amazon Personalize does not support the default S3 KMS key. If KMS is required, create a custom key and add the following policy to it by opening an RFC with the KMS Key | Create (Managed automation) change type:

  ```
  {
      "Version":"2012-10-17",		 	 	 
      "Id": "key-consolepolicy-3",
      "Statement": [
          {
              "Sid": "Enable IAM User Permissions",
              "Effect": "Allow",
              "Principal": {
                  "Service": "personalize.amazonaws.com"
              },
              "Action": "kms:*",
              "Resource": "*"
          }
      ]
  }
  ```

+ An S3 bucket must be created with the following bucket policy, which allows Amazon Personalize to access the data that the bucket contains. Do this by submitting an RFC with the S3 Storage | Create Policy change type.

  ```
  {
      "Version": "2012-10-17",
      "Id": "PersonalizeS3BucketAccessPolicy",
      "Statement": [
          {
              "Sid": "PersonalizeS3BucketAccessPolicy",
              "Effect": "Allow",
              "Principal": {
                  "Service": "personalize.amazonaws.com"
              },
              "Action": [
                  "s3:GetObject",
                  "s3:ListBucket"
              ],
              "Resource": [
                  "arn:aws:s3:::bucket-name",
                  "arn:aws:s3:::bucket-name/*"
              ]
          }
      ]
  }
  ```

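Because the bucket name appears twice in the bucket policy, a small helper can render the policy for a concrete bucket. This is an illustrative sketch (the helper and the example bucket name are hypothetical), not an AMS-provided tool:

```python
import json

def personalize_bucket_policy(bucket_name: str) -> str:
    """Render the Amazon Personalize bucket access policy for a given bucket."""
    bucket_arn = f"arn:aws:s3:::{bucket_name}"
    policy = {
        "Version": "2012-10-17",
        "Id": "PersonalizeS3BucketAccessPolicy",
        "Statement": [
            {
                "Sid": "PersonalizeS3BucketAccessPolicy",
                "Effect": "Allow",
                "Principal": {"Service": "personalize.amazonaws.com"},
                # Personalize needs to list the bucket and read the objects.
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [bucket_arn, f"{bucket_arn}/*"],
            }
        ],
    }
    return json.dumps(policy, indent=4)

print(personalize_bucket_policy("example-personalize-data"))
```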

# Use AMS SSP to provision Amazon QuickSight in your AMS account
<a name="quicksight"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon QuickSight capabilities directly in your AMS managed account. QuickSight is a fast, cloud-powered business intelligence service that delivers insights to everyone in your organization. As a fully managed service, QuickSight lets you easily create and publish interactive dashboards that include machine learning (ML) insights. To learn more, see [Amazon QuickSight](https://aws.amazon.com/quicksight/).

## QuickSight in AWS Managed Services FAQ
<a name="set-quicksight-faqs"></a>

Common questions and answers:

**Q: How do I request access to QuickSight in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_quicksight_console_admin_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using QuickSight in my AMS account?**
+ AWS resource settings on QuickSight aren't accessible to you because of the IAM policy dependency. However, the AMS team enables each resource for you in response to your request to enable the service.
+ Resource access for individual users and groups is not supported in this model because this feature enables users to alter IAM permissions, which could compromise AMS infrastructure.
+ The ability to invite IAM identities from within QuickSight is not supported due to the risk involved in altering IAM objects.
+ The QuickSight service offers two editions: Enterprise and Standard. Both provide a single sign-on (SSO) option that is supported on AMS. However, the Enterprise edition has an option to integrate QuickSight with Active Directory (AD). QuickSight on AMS does not support integration with AD due to incompatibilities between the AMS account structure and the QuickSight trust requirements.

**Q: What are the prerequisites or dependencies to using QuickSight in my AMS account?**
+ When AMS receives this RFC to add QuickSight, you are sent a service request for additional information; provide the following:
  + QuickSight account name (for example, `CustomerName-quicksight`).
  + QuickSight edition (Standard or Enterprise).
  + The AWS Region in which to enable the QuickSight service (defaults to your AMS AWS Region).
  + A notification email address for the QuickSight account.
  + (Optional) The S3 bucket where data files to be analyzed are located.
  + The VPC and subnet IDs to connect to QuickSight. QuickSight supports adding a VPC connection, which enables private connectivity between QuickSight and resources inside the account.

An AMS operator performs the sign-up process on your behalf and configures two QuickSight functionalities:
+  [Auto discovery](https://docs.aws.amazon.com/quicksight/latest/user/autodiscover-aws-data-sources.html) to data sources.
+  [VPC connections](https://docs.aws.amazon.com/quicksight/latest/user/working-with-aws-vpc.html).

**Note**  
These actions must be performed by an AMS operator because elevated IAM and VPC permissions are required during the sign-up process.

# Use AMS SSP to provision Amazon Rekognition in your AMS account
<a name="rekognition"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Rekognition capabilities directly in your AMS managed account. Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise to use. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.

With Amazon Rekognition Custom Labels, you can identify objects and scenes in images that are specific to your business needs. For example, you can build a model to classify specific machine parts on your assembly line or to detect unhealthy plants. Amazon Rekognition Custom Labels takes care of the model development heavy lifting for you, so no machine learning experience is required. You simply need to supply images of objects or scenes you want to identify, and the service handles the rest.

To learn more, see [Amazon Rekognition](https://aws.amazon.com/rekognition/).

## Amazon Rekognition in AWS Managed Services FAQ
<a name="set-rekognition-faqs"></a>

Common questions and answers:

**Q: How do I request access to Amazon Rekognition in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM roles to your account: `customer_rekognition_console_role` and `customer_rekognition_service_role`. Once provisioned in your account, you must onboard the `customer_rekognition_console_role` role in your federation solution.

**Q: What are the restrictions to using Amazon Rekognition in my AMS account?**

Full functionality of Amazon Rekognition is available with the Amazon Rekognition self-provisioned service role.

**Q: What are the prerequisites or dependencies to using Amazon Rekognition in my AMS account?**

If you use Kinesis Video Streams to provide the source streaming video for an Amazon Rekognition Video stream processor, or a Kinesis data stream as a destination to write data to, provide AMS with a `kinesisStreamName` when creating the RFC.

# Use AMS SSP to provision Amazon SageMaker AI in your AMS account
<a name="sagemaker"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon SageMaker AI capabilities directly in your AMS managed account. SageMaker AI provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly. Amazon SageMaker AI is a fully-managed service that covers the entire machine learning workflow to label and prepare your data, choose an algorithm, train the model, tune and optimize it for deployment, make predictions, and take action. Your models get to production faster with much less effort and lower cost. To learn more, see [Amazon SageMaker AI](https://aws.amazon.com/sagemaker/).

## SageMaker AI in AWS Managed Services FAQ
<a name="set-sagemaker-faqs"></a>

Common questions and answers:

**Q: How do I request access to SageMaker AI in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM roles to your account: `customer_sagemaker_admin_role` and service role `AmazonSageMaker-ExecutionRole-Admin`. After SageMaker AI is provisioned in your account, you must onboard the `customer_sagemaker_admin_role` role in your federation solution. The service role cannot be accessed by you directly; the SageMaker AI service uses it while doing various actions as described here: [Passing Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-pass-role).

**Q: What are the restrictions to using SageMaker AI in my AMS account?**
+ The following use cases are not supported by the AMS Amazon SageMaker AI IAM role:
  + SageMaker AI Studio is not supported at this time.
  + SageMaker AI Ground Truth to manage private workforces is not supported since this feature requires overly permissive access to Amazon Cognito resources. If managing a private workforce is required, you can request a custom IAM role with combined SageMaker AI and Amazon Cognito permissions. Otherwise, we recommend using public workforce (backed by Amazon Mechanical Turk), or AWS Marketplace service providers, for data labeling.
+ Creating VPC endpoints to support API calls to SageMaker AI services (aws.sagemaker.*region*.notebook, com.amazonaws.*region*.sagemaker.api, and com.amazonaws.*region*.sagemaker.runtime) is not supported because permissions can't be scoped down to SageMaker AI related services only. To support this use case, submit a Management | Other | Other RFC to create the related VPC endpoints.
+ SageMaker AI endpoint auto scaling is not supported because SageMaker AI requires `DeleteAlarm` permissions on any (`"*"`) resource. To support endpoint auto scaling, submit a Management | Other | Other RFC to set up auto scaling for a SageMaker AI endpoint.

**Q: What are the prerequisites or dependencies to using SageMaker AI in my AMS account?**
+ The following use cases require special configuration prior to use:
  + If an S3 bucket will be used to store model artifacts and data, then you must request an S3 bucket named with the required keywords ("SageMaker", "Sagemaker", "sagemaker" or "aws-glue") with a Deployment | Advanced stack components | S3 storage | Create RFC.
  + If Elastic File Store (EFS) will be used, then EFS storage must be configured in the same subnet, and allowed by security groups.
  + If other resources require direct access to SageMaker AI services (notebooks, API, runtime, and so on), then configuration must be requested by:
    + Submitting an RFC to create a security group for the endpoint (Deployment | Advanced stack components | Security group | Create (auto)).
    + Submitting a Management | Other | Other | Create RFC to set up related VPC endpoints.
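The S3 naming requirement in the first prerequisite can be expressed as a small check. The helper below is hypothetical, shown only to make the keyword rule concrete:

```python
# Keywords that must appear in the bucket name for the SageMaker AI
# role's S3 permissions to apply (from the prerequisite above).
REQUIRED_KEYWORDS = ("SageMaker", "Sagemaker", "sagemaker", "aws-glue")

def bucket_name_is_accessible(bucket_name: str) -> bool:
    """True if the bucket name contains one of the required keywords."""
    return any(keyword in bucket_name for keyword in REQUIRED_KEYWORDS)

print(bucket_name_is_accessible("my-sagemaker-artifacts"))  # True
print(bucket_name_is_accessible("my-model-artifacts"))      # False
```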

**Q: What are the supported naming conventions for resources that the `customer_sagemaker_admin_role` can access directly?** (The following are for update and delete permissions; if you require additional supported naming conventions for your resources, reach out to an AMS Cloud Architect for consultation.)
+ Resource: Passing `AmazonSageMaker-ExecutionRole-*` role
  + Permissions: The SageMaker AI self-provisioned service role supports your use of the SageMaker AI service role (`AmazonSageMaker-ExecutionRole-*`) with AWS Glue, AWS RoboMaker, and AWS Step Functions.
+ Resource: Secrets on AWS Secrets Manager
  + Permissions: Describe, Create, Get, Update secrets with a `AmazonSageMaker-*` prefix.
  + Permissions: Describe, Get secrets when the `SageMaker` resource tag is set to `true`.
+ Resource: Repositories on AWS CodeCommit
  + Permissions: Create/delete repositories with an `AmazonSageMaker-*` prefix.
  + Permissions: Git pull/push on repositories with the following prefixes: `*sagemaker*`, `*SageMaker*`, and `*Sagemaker*`.
+ Resource: Amazon ECR (Amazon Elastic Container Registry) Repositories
  + Permissions: Set, delete repository policies, and upload container images, when the following resource naming convention is used: `*sagemaker*`.
+ Resource: Amazon S3 buckets
  + Permissions: Get, Put, Delete object, abort multipart upload S3 objects when resources have the following prefixes: `*SageMaker*`, `*Sagemaker*`, `*sagemaker*` and `aws-glue`.
  + Permissions: Get S3 objects when the `SageMaker` tag is set to `true`.
+ Resource: Amazon CloudWatch Log Group
  + Permissions: Create log group or stream, put log event, and list, update, create, delete log delivery with the following prefix: `/aws/sagemaker/*`.
+ Resource: Amazon CloudWatch Metric
  + Permissions: Put metric data when the following prefixes are used: `AWS/SageMaker`, `AWS/SageMaker/`, `aws/SageMaker`, `aws/SageMaker/`, `aws/sagemaker`, `aws/sagemaker/`, and `/aws/sagemaker/`.
+ Resource: Amazon CloudWatch Dashboard
  + Permissions: Create/Delete dashboards when the following prefixes are used: `customer_*`.
+ Resource: Amazon SNS (Simple Notification Service) topic
  + Permissions: Subscribe/create topic when the following prefixes are used: `*sagemaker*`, `*SageMaker*`, and `*Sagemaker*`.
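The prefix conventions above behave like glob patterns, so a proposed resource name can be checked against them before submitting a request. This sketch covers only an illustrative subset of the patterns, and the helper is hypothetical:

```python
from fnmatch import fnmatchcase

# Illustrative subset of the naming conventions listed above.
SECRET_PATTERNS = ("AmazonSageMaker-*",)
REPO_PULL_PUSH_PATTERNS = ("*sagemaker*", "*SageMaker*", "*Sagemaker*")

def matches(name: str, patterns: tuple) -> bool:
    """Case-sensitive glob match against any of the given patterns."""
    return any(fnmatchcase(name, pattern) for pattern in patterns)

print(matches("AmazonSageMaker-db-credentials", SECRET_PATTERNS))  # True
print(matches("prod-db-credentials", SECRET_PATTERNS))             # False
print(matches("team-sagemaker-models", REPO_PULL_PUSH_PATTERNS))   # True
```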

**Q: What’s the difference between `AmazonSageMakerFullAccess` and `customer_sagemaker_admin_role`?**

The `customer_sagemaker_admin_role` with the `customer_sagemaker_admin_policy` provides almost the same permissions as `AmazonSageMakerFullAccess`, except:
+ Permission to connect with AWS RoboMaker, Amazon Cognito, and AWS Glue resources.
+ SageMaker AI endpoint autoscaling. You must submit an RFC with the Management | Advanced stack components | Identity and Access Management (IAM) | Update entity or policy (managed automation) change type (ct-27tuth19k52b4) to elevate autoscaling permissions temporarily, or permanently, because autoscaling requires permissive access to the CloudWatch service.

**Q: How do I adopt AWS KMS customer managed key in data encryption at rest?**

You must ensure that the key policy has been set up properly on the customer managed keys so that related IAM users or roles can use the keys. For more information, see the [AWS KMS Key Policy document](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key-policy-default-allow-users).
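As a sketch of such a key policy statement, the following allows the SageMaker AI execution role that AMS provisions to use a customer managed key. The account ID is hypothetical, and the action list follows the common "key users" pattern from the linked documentation, not an AMS-mandated policy:

```python
import json

# Hypothetical account ID; the role name is the service role provisioned by AMS.
ACCOUNT_ID = "111122223333"
ROLE_NAME = "AmazonSageMaker-ExecutionRole-Admin"

key_policy_statement = {
    "Sid": "AllowUseOfTheKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:role/{ROLE_NAME}"},
    # Standard key-user actions for encrypt/decrypt with a CMK.
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey",
    ],
    "Resource": "*",
}
print(json.dumps(key_policy_statement, indent=4))
```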

# Use AMS SSP to provision Amazon Simple Email Service in your AMS account
<a name="amz-ses"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Simple Email Service (Amazon SES) capabilities directly in your AMS managed account. Amazon Simple Email Service is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails.

You can use the SMTP interface or one of the AWS SDKs to integrate Amazon SES directly into your existing applications. You can also integrate the email sending capabilities of Amazon SES into the software you already use, such as ticketing systems and email clients.

To learn more, see [Amazon Simple Email Service](https://aws.amazon.com/ses/).

## Amazon SES in AWS Managed Services FAQ
<a name="set-amz-ses-faqs"></a>

**Q: How do I request access to Amazon SES in my AMS account?**

Request access to Amazon SES by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_ses_admin_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the prerequisites or dependencies to using Amazon SES in my AMS account?**
+ You must configure an S3 bucket policy to allow Amazon SES to publish events to the bucket.
+ You must use the default key (provided by Amazon SES), or configure a CMK, to allow Amazon SES to encrypt emails and push events to other service resources belonging to the account, such as Amazon S3, Amazon SNS, Lambda, and Firehose.
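A minimal sketch of a bucket policy for the first prerequisite, with a hypothetical bucket name and account ID; the `AWS:SourceAccount` condition is a common pattern to restrict publishing to your own account:

```python
import json

BUCKET = "example-ses-events"  # hypothetical bucket name
ACCOUNT_ID = "111122223333"    # hypothetical account ID

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSESPuts",
            "Effect": "Allow",
            # Let the SES service principal write objects into the bucket...
            "Principal": {"Service": "ses.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            # ...but only on behalf of this account.
            "Condition": {"StringEquals": {"AWS:SourceAccount": ACCOUNT_ID}},
        }
    ],
}
print(json.dumps(bucket_policy, indent=4))
```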

**Q: What are the restrictions to using Amazon SES in my AMS account?**

You must raise RFCs to create the following resources:
+ An SMTP user, and an IAM service role with `PutEvents` permission to a Firehose stream.
+ New AWS resources, such as an S3 bucket, Firehose stream, or SNS topic, created by using AMS change types so that your Amazon SES rules and configuration sets' destinations work with those resources.
+ SMTP credentials. To request new SMTP credentials, use the Management | Other | Other | Create change type. AMS creates the credentials and adds them to Secrets Manager for you.

# Use AMS SSP to provision Amazon Simple Workflow Service in your AMS account
<a name="workflow"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Simple Workflow Service (Amazon SWF) capabilities directly in your AMS managed account. Amazon Simple Workflow Service helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud. If your application's steps take more than 500 milliseconds to complete, you need to track the state of processing, or you need to recover or retry if a task fails, Amazon SWF can help you. To learn more, see [Amazon Simple Workflow Service](https://aws.amazon.com/swf/).

## Amazon SWF in AWS Managed Services FAQ
<a name="set-workflow-faqs"></a>

Common questions and answers:

**Q: How do I request access to Amazon SWF in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_swf_role`. Once provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon SWF in my AMS account?**

The Lambda `InvokeFunction` permission has been included in this service; however, the AMS `customer_deny_policy` that is added to all AMS customer roles explicitly denies access to AMS Lambda functions and AMS-owned resources. To tag or untag resources within Amazon SWF, submit a Management | Other | Other change type.

**Q: What are the prerequisites or dependencies to using Amazon SWF in my AMS account?**

Amazon SWF depends on the AWS Lambda service; therefore, permissions to invoke Lambda are provided as part of this role, and no additional permissions are required to invoke Lambda from Amazon SWF. Otherwise, there are no prerequisites to using Amazon SWF.

# Use AMS SSP to provision Amazon Textract in your AMS account
<a name="textract"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Textract capabilities directly in your AMS managed account. Amazon Textract is a fully managed machine learning service that automatically extracts printed text, handwriting, and other data from scanned documents, going beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. To learn more, see [Amazon Textract](https://aws.amazon.com/textract/).

## Amazon Textract in AWS Managed Services FAQ
<a name="set-textract-faqs"></a>

Common questions and answers:

**Q: How do I request Amazon Textract to be set up in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM roles to your account: `customer_textract_console_role`, `customer_textract_human_review_execution_role`, and `customer_ec2_textract_instance_profile`. Once provisioned in your account, you must onboard the `customer_textract_console_role` role in your federation solution.

**Q: What are the restrictions to using Amazon Textract in my AMS account?**

There are no restrictions for the use of Amazon Textract in your AMS account.

**Q: What are the prerequisites or dependencies to using Amazon Textract in my AMS account?**

You must request the creation of an S3 bucket by submitting an RFC with the Deployment | Advanced stack components | S3 storage | Create (ct-1a68ck03fn98r) change type.

# Use AMS SSP to provision Amazon Transcribe in your AMS account
<a name="transcribe"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Amazon Transcribe capabilities directly in your AMS managed account. Amazon Transcribe is a fully managed and continuously trained automatic speech recognition service that automatically generates time-stamped text transcripts from audio files. Amazon Transcribe makes it easy for developers to add speech-to-text capabilities to their applications. Audio data is virtually impossible for computers to search and analyze. Therefore, recorded speech needs to be converted to text before it can be used in applications. Historically, customers had to work with transcription providers that required them to sign expensive contracts and were hard to integrate into their technology stacks to accomplish this task. Many of these providers use outdated technology that does not adapt well to different scenarios, like low-fidelity phone audio common in contact centers, which results in poor accuracy.

Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech into text, quickly and accurately. Amazon Transcribe can be used to transcribe customer service calls, automate closed captioning and subtitling, and generate metadata for media assets to create a fully searchable archive. You can use Amazon Transcribe Medical to add medical speech-to-text capabilities to clinical documentation applications. To learn more, see [Amazon Transcribe](https://aws.amazon.com/transcribe/).

## Amazon Transcribe in AWS Managed Services FAQ
<a name="set-transcribe-faqs"></a>

Common questions and answers:

**Q: How do I request Amazon Transcribe to be set up in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_transcribe_role`. Once provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Amazon Transcribe in my AMS account?**

You must use `customer-transcribe*` as the prefix for your buckets when working with Amazon Transcribe, unless a risk acceptance (RA) has been submitted that specifies otherwise.

You are not able to create an IAM role within Amazon Transcribe.

You cannot use a service-managed S3 bucket for output data in the default SSPS (if this is needed, reach out to your account cloud architect (CA)).

You must submit a risk acceptance if you want to use customer-managed KMS keys that do not fall under the AMS namespace.

**Q: What are the prerequisites or dependencies to using Amazon Transcribe in my AMS account?**

Amazon Transcribe must have access to S3 buckets named with the `customer-transcribe*` prefix. AWS KMS is required in order to use Amazon Transcribe if your S3 buckets are encrypted with KMS keys. If a bucket doesn't need to be encrypted, the "KMStranscribeAllow" statement can be removed.
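To illustrate the bucket-naming restriction, here is a minimal sketch (plain Python, no AWS calls; the job name, bucket, and object key are hypothetical examples) of validating parameters for a `StartTranscriptionJob` request before submitting it with an SDK such as boto3:

```python
# Sketch: assemble StartTranscriptionJob parameters, enforcing the
# 'customer-transcribe*' bucket prefix required in AMS accounts.
# The bucket, job name, and object key below are hypothetical.
REQUIRED_PREFIX = "customer-transcribe"

def build_transcription_job(job_name, bucket, key, language_code="en-US"):
    if not bucket.startswith(REQUIRED_PREFIX):
        raise ValueError(
            f"AMS requires Transcribe buckets to use the '{REQUIRED_PREFIX}*' prefix"
        )
    return {
        "TranscriptionJobName": job_name,
        "LanguageCode": language_code,
        "Media": {"MediaFileUri": f"s3://{bucket}/{key}"},
        # Service-managed output buckets are restricted in default SSPS,
        # so direct output to your own prefixed bucket.
        "OutputBucketName": bucket,
    }

params = build_transcription_job("demo-job", "customer-transcribe-audio", "calls/demo.wav")
# You would then pass these parameters to the SDK, for example:
# boto3.client("transcribe").start_transcription_job(**params)
```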

# Use AMS SSP to provision Amazon WorkSpaces in your AMS account
<a name="workspaces"></a>

Use AMS Self-Service Provisioning (SSP) mode to access WorkSpaces capabilities directly in your AMS managed account. WorkSpaces enables you to provision virtual, cloud-based Microsoft Windows or Amazon Linux desktops for your users, known as WorkSpaces. WorkSpaces eliminates the need to procure and deploy hardware or install complex software. You can quickly add or remove users as your needs change. Users access their WorkSpaces by using a client application from a supported device or, for Windows WorkSpaces, a web browser, and they log in by using their existing on-premises Active Directory (AD) credentials.

To learn more, see [Amazon WorkSpaces](https://aws.amazon.com/workspaces/).

## WorkSpaces in AWS Managed Services FAQ
<a name="set-workspaces-faqs"></a>

Common questions and answers:

**Q: How do I request access to WorkSpaces in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_workspaces_console_role`. Once provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using WorkSpaces in my AMS account?**

Full functionality of WorkSpaces is available with the Amazon WorkSpaces self-provisioned service role.

**Q: What are the prerequisites or dependencies to using WorkSpaces in my AMS account?**
+ WorkSpaces are limited by AWS Region; therefore, the AD Connector must be configured in the same AWS Region where the WorkSpaces instances are hosted.

  Customers can connect WorkSpaces to customer AD using one of the following two methods:

  1. Using AD connector to proxy authentication to on-premises Active Directory service (preferred):

     Configure Active Directory (AD) Connector in your AMS account prior to integrating your WorkSpaces instance with your on-premises directory service. The AD Connector acts as a proxy for your existing AD users (from your domain) to connect to WorkSpaces using existing on-premises AD credentials. This is preferred because WorkSpaces are directly joined to the customer's on-prem domain, which acts as both Resource and User forest, leading to more control on the customer side.

     For more information, see [ Best Practices for Deploying Amazon WorkSpaces (Scenario 1)](https://docs.aws.amazon.com/whitepapers/latest/best-practices-deploying-amazon-workspaces/scenario-1-using-ad-connector-to-proxy-authentication-to-on-premises-active-directory-service.html).

  1. Using AD Connector with AWS Microsoft AD, Shared Services VPC, and a one-way trust to on-premises:

     You can also authenticate users with your on-premises directory by first establishing a one-way outgoing trust from AMS-managed AD to your on-premises AD. WorkSpaces will join AMS-managed AD using an AD Connector. WorkSpaces access permissions will then be delegated to the WorkSpaces instances through the AMS-managed AD, without the need to establish a two-way trust with your on-premises environment. In this scenario, the User forest will be in the customer AD and the Resource forest will be in the AMS-managed AD (changes to AMS-managed AD can be requested via RFC). Note that the connectivity between WorkSpaces VPC and the MALZ Shared Services VPC running AMS-managed AD is established via Transit Gateway.

     For more information, see [ Best Practices for Deploying Amazon WorkSpaces (Scenario 6)](https://docs.aws.amazon.com/whitepapers/latest/best-practices-deploying-amazon-workspaces/scenario-6-aws-microsoft-ad-shared-services-vpc-and-a-one-way-trust-to-on-premises.html).
**Note**  
The AD Connector can be configured by submitting a Management | Other | Other | Create change type RFC with the prerequisite AD configuration details; for more information, see [Create an AD Connector](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/create_ad_connector.html). If method 2 is used to create a resource forest in AMS-managed AD, submit another Management | Other | Other | Create change type RFC in the AMS shared services account that runs the AMS-managed AD.

# Use AMS SSP to provision AMS Code services in your AMS account
<a name="code-services"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AMS Code services capabilities directly in your AMS managed account. AMS Code services is a proprietary bundling of AWS code management services as detailed next. You can choose to deploy all of the services in AMS with AMS Code services, or you can deploy them in AMS individually.

AMS Code services includes the following services:
+ AWS CodeCommit: A fully managed [source control](https://aws.amazon.com/devops/source-control) service that hosts secure Git-based repositories. It makes it so teams can collaborate on code in a secure and highly scalable ecosystem. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. You can use CodeCommit to securely store anything from source code to binaries, and it works seamlessly with your existing Git tools. To learn more, see [AWS CodeCommit](https://aws.amazon.com/codecommit/)

  To deploy this in your AMS account independently of AMS Code services, see [Use AMS SSP to provision AWS CodeCommit in your AMS account](codecommit.md).
+ AWS CodeBuild: A fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools. With CodeBuild, you are charged by the minute for the compute resources you use. To learn more, see [AWS CodeBuild](https://aws.amazon.com/codebuild/)

  To deploy this in your AMS account independently of AMS Code services, see [Use AMS SSP to provision AWS CodeBuild in your AMS account](code-build.md).
+ AWS CodeDeploy: A fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2 and your on-premises servers. AWS CodeDeploy helps you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate software deployments, eliminating the need for error-prone manual operations. The service scales to match your deployment needs. To learn more, see [AWS CodeDeploy](https://aws.amazon.com/codedeploy/)

  To deploy this in your AMS account independently of AMS Code services, see [Use AMS SSP to provision AWS CodeDeploy in your AMS account](code-deploy.md).
+ AWS CodePipeline: A fully managed [continuous delivery](https://aws.amazon.com/devops/continuous-delivery/) service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates. You can easily integrate AWS CodePipeline with third-party services such as GitHub or with your own custom plugin. With AWS CodePipeline, you only pay for what you use. There are no upfront fees or long-term commitments. To learn more, see [AWS CodePipeline](https://aws.amazon.com/codepipeline/)

  To deploy this in your AMS account independently of AMS Code services, see [Use AMS SSP to provision AWS CodePipeline in your AMS account](code-pipeline.md).

## AMS Code services in AWS Managed Services FAQ
<a name="set-code-services-faqs"></a>

**Q: How do I request access to AMS Code services in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_code_suite_console_role`. After it's provisioned in your account, you must onboard the role in your federation solution. At this time, AMS Operations also deploys the `customer_codebuild_service_role`, `customer_codedeploy_service_role`, and `aws_code_pipeline_service_role` service roles in your account for the CodeBuild, CodeDeploy, and CodePipeline services. If additional IAM permissions are required for the `customer_codebuild_service_role`, submit an AMS service request.

**Note**  
You can also add these services separately; for information, see [Use AMS SSP to provision AWS CodeBuild in your AMS account](code-build.md), [Use AMS SSP to provision AWS CodeDeploy in your AMS account](code-deploy.md), and [Use AMS SSP to provision AWS CodePipeline in your AMS account](code-pipeline.md), respectively.

**Q: What are the restrictions to using AMS Code services in my AMS account?**
+ AWS CodeCommit: The triggers feature on CodeCommit is disabled given the associated rights to create SNS topics. Directly authenticating against CodeCommit is restricted; users should authenticate with the credential helper. Some KMS commands are also restricted: kms:Encrypt, kms:Decrypt, kms:ReEncrypt, kms:GenerateDataKey, kms:GenerateDataKeyWithoutPlaintext, and kms:DescribeKey.
+ CodeBuild: For AWS CodeBuild console admin access, permissions are limited at the resource level; for example, CloudWatch actions are limited on specific resources and the `iam:PassRole` permission is controlled.
+ CodeDeploy: Currently, CodeDeploy supports deployments on Amazon EC2/on-premises only. Deployments on ECS and Lambda through CodeDeploy are not supported.
+ CodePipeline: CodePipeline features, stages, and providers are limited to the following:
  + Deploy Stage: Amazon S3 and AWS CodeDeploy
  + Source Stage: Amazon S3, AWS CodeCommit, Bitbucket, and GitHub
  + Build Stage: AWS CodeBuild and Jenkins
  + Approval Stage: Amazon SNS
  + Test Stage: AWS CodeBuild, Jenkins, BlazeMeter, Ghost Inspector UI Testing, Micro Focus StormRunner Load, Runscope API Monitoring
  + Invoke Stage: Step Functions and Lambda
**Note**  
AMS Operations deploys the `customer_code_pipeline_lambda_policy` in your account; it must be attached to the Lambda execution role for the Lambda invoke stage. Provide the name of the Lambda service/execution role that you want this policy attached to. If there is no custom Lambda service/execution role, then AMS creates a new role named `customer_code_pipeline_lambda_execution_role`, which is a copy of `customer_lambda_basic_execution_role` with `customer_code_pipeline_lambda_policy` attached.
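The stage/provider allow-list above can be checked mechanically before you submit a pipeline. The sketch below is a plain-Python illustration (no AWS calls); the pipeline structure mirrors CodePipeline's `stages`/`actions` JSON shape, and the example pipeline and provider-name spellings are assumptions for demonstration:

```python
# Sketch: flag pipeline actions whose providers fall outside the set
# AMS supports per stage category (see the list above). Provider name
# spellings are illustrative assumptions, not an authoritative mapping.
ALLOWED_PROVIDERS = {
    "Source": {"S3", "CodeCommit", "Bitbucket", "GitHub"},
    "Build": {"CodeBuild", "Jenkins"},
    "Deploy": {"S3", "CodeDeploy"},
    "Approval": {"Manual"},  # manual approvals backed by Amazon SNS
    "Test": {"CodeBuild", "Jenkins", "BlazeMeter", "GhostInspector",
             "StormRunnerLoad", "Runscope"},
    "Invoke": {"StepFunctions", "Lambda"},
}

def unsupported_actions(pipeline):
    """Return (stage name, provider) pairs not on the allow-list."""
    bad = []
    for stage in pipeline["stages"]:
        for action in stage["actions"]:
            category = action["actionTypeId"]["category"]
            provider = action["actionTypeId"]["provider"]
            if provider not in ALLOWED_PROVIDERS.get(category, set()):
                bad.append((stage["name"], provider))
    return bad

# Hypothetical pipeline: an ECS deploy action would be flagged.
example = {"stages": [
    {"name": "Source", "actions": [
        {"actionTypeId": {"category": "Source", "provider": "CodeCommit"}}]},
    {"name": "Deploy", "actions": [
        {"actionTypeId": {"category": "Deploy", "provider": "ECS"}}]},
]}
print(unsupported_actions(example))
```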

**Q: What are the prerequisites or dependencies to using AMS Code services in my AMS account?**
+ CodeCommit: If S3 buckets are encrypted with AWS KMS keys, S3 and AWS KMS are required to use AWS CodeCommit.
+ CodeBuild: If additional IAM permissions are required for the defined AWS CodeBuild service role, request them through an AMS service request.
+ CodeDeploy: None.
+ CodePipeline: The supported AWS services (AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy) must be launched prior to, or along with, the launch of CodePipeline; however, this is done by an AMS engineer.

# Use AMS SSP to provision AWS Amplify in your AMS account
<a name="amplify"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Amplify capabilities directly in your AMS managed account. AWS Amplify is a complete solution that allows frontend web and mobile developers to easily build, connect, and host fullstack applications. Amplify provides flexibility to leverage the breadth of AWS services as your use cases evolve. Amplify provides products to build fullstack iOS, Android, Flutter, Web, and React Native apps. To learn more, see [AWS Amplify](https://docs.amplify.aws/console).

## AWS Amplify in AWS Managed Services FAQ
<a name="set-amplify-faqs"></a>

Common questions and answers:

**Q: How do I request AWS Amplify to be set up in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_amplify_console_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

Additionally, you must provide a Risk Acceptance because AWS Amplify has infrastructure-mutating permissions. To do this, work with your Cloud Service Delivery Manager (CSDM).

**Q: What are the restrictions to using AWS Amplify in my AMS account?**

You must use `amplify*` as the prefix for your buckets when working with Amplify, unless a risk acceptance (RA) specifies otherwise.

**Q: What are the prerequisites or dependencies to using AWS Amplify in my AMS account?**

There are no prerequisites for the use of AWS Amplify in your AMS account.

**MALZ environments only**: The default onboarded role for Amplify is `customer_amplify_console_role`. To use a custom role, first deploy the IAM entities. Then, create an additional RFC to add your custom role to the allow list in the Service Control Policy for application accounts.

# Use AMS SSP to provision AWS AppSync
<a name="app-sync"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS AppSync capabilities directly in your AMS managed account. AWS AppSync simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources. AWS AppSync is a managed service that uses GraphQL to make it easy for applications to get exactly the data they need.

With AWS AppSync, you can build scalable applications, including those requiring real-time updates, on a range of data sources such as NoSQL data stores, relational databases, HTTP APIs, and your custom data sources with AWS Lambda. For mobile and web apps, AWS AppSync additionally provides local data access when devices go offline, and data synchronization with customizable conflict resolution, when they are back online. To learn more, see [AWS AppSync](https://aws.amazon.com/appsync/).

## AWS AppSync in AWS Managed Services FAQ
<a name="set-app-sync-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS AppSync in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM roles to your account: `customer_appsync_service_role` and `customer_appsync_author_role`. Once provisioned in your account, you must onboard the `customer_appsync_author_role` in your federation solution.

**Q: What are the restrictions to using the AWS AppSync?**
+ When creating a data source in AppSync, you must specify the previously created service role; creating a new role is not allowed and returns an access denied error.
+ AppSync roles are configured to restrict permissions to resources containing 'AMS-' or 'MC-' prefixes to prevent any modifications to AMS infrastructure.

**Q: What are the prerequisites or dependencies to using AWS AppSync?**

The service allows multiple other services to be used as data sources. The basic permissions to use them as such are included in the service role (`customer_appsync_service_role`), but you must manually select the service role when using the service.
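Because role creation from within AppSync is blocked, every data source must reference the pre-provisioned service role. Here is a minimal sketch (plain Python; the account ID, API ID, data source name, and table name are hypothetical) of the parameters you might pass to an SDK call such as boto3's `create_data_source`:

```python
# Sketch: create_data_source parameters for an AppSync DynamoDB data
# source in an AMS account. The serviceRoleArn must reference the
# pre-provisioned role; creating a new role from AppSync is denied.
ACCOUNT_ID = "111122223333"  # hypothetical account ID
SERVICE_ROLE_ARN = f"arn:aws:iam::{ACCOUNT_ID}:role/customer_appsync_service_role"

def dynamodb_data_source(api_id, name, table, region="us-east-1"):
    return {
        "apiId": api_id,
        "name": name,
        "type": "AMAZON_DYNAMODB",
        "serviceRoleArn": SERVICE_ROLE_ARN,  # must be selected manually
        "dynamodbConfig": {"tableName": table, "awsRegion": region},
    }

params = dynamodb_data_source("abc123", "Orders", "orders-table")
# boto3.client("appsync").create_data_source(**params)
```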

# Use AMS SSP to provision AWS App Mesh in your AMS account
<a name="app-mesh"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS App Mesh capabilities directly in your AMS managed account. AWS App Mesh provides application level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh standardizes how your services communicate, giving you end-to-end visibility and ensuring high-availability for your applications.

AWS App Mesh makes it easy to run services by providing consistent visibility and network traffic controls for services built across multiple types of compute infrastructure. App Mesh removes the need to update application code to change how monitoring data is collected or traffic is routed between services. App Mesh configures each service to export monitoring data and implements consistent communications control logic across your application. This makes it easy to quickly pinpoint the exact location of errors and automatically re-route network traffic when there are failures or when code changes need to be deployed. To learn more, see [AWS App Mesh](https://aws.amazon.com/app-mesh/).

## AWS App Mesh in AWS Managed Services FAQ
<a name="set-app-mesh-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS App Mesh in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_app_mesh_console_role`. After it is provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using the AWS App Mesh?**

Full functionality of AWS App Mesh is available in your AMS account.

**Q: What are the prerequisites or dependencies to using AWS App Mesh?**

There are no prerequisites or dependencies to use AWS App Mesh in your AMS account.

# Use AMS SSP to provision AWS Audit Manager in your AMS account
<a name="audit-mgr"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Audit Manager capabilities directly in your AMS managed account. Audit Manager helps you continuously audit your AWS usage to simplify how you assess risk and compliance with regulations and industry standards. Audit Manager automates evidence collection to make it easier to assess if your policies, procedures, and activities are operating effectively. When it is time for an audit, Audit Manager helps you manage stakeholder reviews of your controls and helps you build audit-ready reports with significantly less manual effort. To learn more, see [Audit Manager](https://aws.amazon.com/audit-manager/).

## AWS Audit Manager in AWS Managed Services FAQ
<a name="set-audit-mgr-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS Audit Manager in my AMS account?**

You can request access by submitting the AWS Services RFC Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny). This RFC provisions the following IAM role in your account: `customer-audit-manager-admin-Role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using AWS Audit Manager?**

There are no restrictions for the use of AWS Audit Manager in your AMS account. Full functionality for AWS Audit Manager is provided.

**Q: What are the prerequisites or dependencies to using AWS Audit Manager?**

1. You need to provide AMS with the S3 bucket where you want reports and assessments to reside.

1. If you want to have encryption with the service, you need to provide AMS with the KMS CMK ARN to use.

1. If you want to send SNS notifications to a topic, you must provide the name or ARN of the topic.

1. **(Optional)** There is an additional prerequisite if you want to enable Organizations as part of your multi-account landing zone in Audit Manager and you want a delegated administrator account: In the description field of the RFC (Management | AWS service | Compatible service | Add), mention that you want to use the delegated administrator account as part of the Audit Manager setup and provide the following details:
   + KMS CMK ARN (used to set up Audit Manager, initially)
   + Delegated administrator account ID for Audit Manager to use as part of this multi-account landing zone (can be a MALZ application account)

# Use AMS SSP to provision AWS Batch in your AMS account
<a name="batch"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Batch capabilities directly in your AMS managed account. AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (such as CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems. To learn more, see [AWS Batch](https://aws.amazon.com/batch/).

## AWS Batch in AWS Managed Services FAQ
<a name="set-batch-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS Batch in my AMS account?**

1. To request access to AWS Batch, submit the RFC Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct). This RFC provisions the following IAM roles and policies in your account:

IAM roles:
+ `customer_batch_console_role`
+ `customer_batch_ecs_instance_role`
+ `customer_batch_events_service_role`
+ `customer_batch_service_role`
+ `customer_batch_ecs_task_role`

Policies:
+ `customer_batch_console_role_policy`
+ `customer_batch_service_role_policy`
+ `customer_batch_events_service_role_policy`

2. After it's provisioned in your account, you must onboard the `customer_batch_console_role` role in your federation solution.

**Q: What are the restrictions to using AWS Batch?**

When creating the compute environment, you must tag EC2 instances as `customer_batch` or `customer-batch`. If the instances are not tagged, they are not terminated by Batch when the job completes.
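As an illustration of the tagging requirement, the sketch below (plain Python, no AWS calls; the subnet, security group, and instance profile ARN are hypothetical) shows `create_compute_environment`-style parameters carrying the instance tag that allows Batch to terminate instances when jobs complete:

```python
# Sketch: AWS Batch managed compute environment parameters with the
# 'customer-batch' EC2 instance tag required in AMS accounts.
# Subnet, security group, and role ARN values are hypothetical.
def batch_compute_environment(name, subnets, security_groups, instance_role_arn):
    return {
        "computeEnvironmentName": name,
        "type": "MANAGED",
        "computeResources": {
            "type": "EC2",
            "minvCpus": 0,
            "maxvCpus": 16,
            "instanceTypes": ["optimal"],
            "subnets": subnets,
            "securityGroupIds": security_groups,
            "instanceRole": instance_role_arn,
            # Without this tag, instances are not terminated after jobs finish.
            "tags": {"Name": "customer-batch"},
        },
    }

params = batch_compute_environment(
    "demo-ce",
    ["subnet-0hypothetical"],
    ["sg-0hypothetical"],
    "arn:aws:iam::111122223333:instance-profile/customer_batch_ecs_instance_role",
)
# boto3.client("batch").create_compute_environment(**params)
```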

**Q: What are the prerequisites or dependencies to using AWS Batch?**

There are no prerequisites or dependencies to use AWS Batch in your AMS account.

# Use AMS SSP to provision AWS Certificate Manager in your AMS account
<a name="acm"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Certificate Manager (ACM) capabilities directly in your AMS managed account. AWS Certificate Manager is a service that lets you provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the internet as well as resources on private networks. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.

With AWS Certificate Manager, you can request a certificate, deploy it on ACM-integrated AWS resources, such as Elastic Load Balancers, Amazon CloudFront distributions, and APIs on API Gateway, and let AWS Certificate Manager handle certificate renewals. It also enables you to create private certificates for your internal resources and manage the certificate lifecycle centrally. Public and private certificates provisioned through AWS Certificate Manager for use with ACM-integrated services are free. You pay only for the AWS resources you create to run your application. With [AWS Private Certificate Authority](https://aws.amazon.com/certificate-manager/private-certificate-authority/), you pay monthly for the operation of the AWS Private CA and for the private certificates you issue. To learn more, see [AWS Certificate Manager - AWS Documentation](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html).

## ACM in AWS Managed Services FAQ
<a name="set-acm-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS Certificate Manager in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_acm_create_role`. You can use this role to create and manage ACM certificates. After it's provisioned in your account, you must onboard the role in your federation solution.

ACM certificates can be created using the following change types, even if you haven't added the `customer_acm_create_role` IAM role:
+  [ACM | Create Public Certificate](https://docs.aws.amazon.com/managedservices/latest/ctref/deployment-advanced-acm-create-public-certificate.html)
+  [ACM | Create Private Certificate](https://docs.aws.amazon.com/managedservices/latest/ctref/deployment-advanced-acm-create-private-certificate.html)
+  [ACM Certificate with additional SANs | Create](https://docs.aws.amazon.com/managedservices/latest/ctref/deployment-advanced-acm-certificate-with-additional-sans-create.html)

**Q: What are the restrictions to using the AWS Certificate Manager?**

You must submit a request for change (RFC) to AMS to delete or modify existing certificates, as those actions require full admin access (use the Management | Advanced stack components | ACM | Delete certificate change type (ct-1q8q56cmwqj9m)). Note that the IAM policy can't exclude rights based on tag names (`mc*`, `ams*`, and so on). Certificates don't incur a cost, so deleting unused certificates is not time sensitive.

**Q: What are the prerequisites or dependencies to using Certificate Manager?**

An existing public DNS name, and access to create DNS CNAME records, are required, but these don't need to be hosted in the managed account.
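For example, requesting a DNS-validated certificate with the `customer_acm_create_role` could look like the following sketch (plain Python, no AWS calls; the domain names are hypothetical). ACM then returns a CNAME name/value pair that you must create in your public DNS zone, which need not be hosted in the managed account:

```python
# Sketch: RequestCertificate parameters for a DNS-validated public
# certificate. Domain names below are hypothetical examples.
def acm_request(domain, extra_sans=()):
    params = {
        "DomainName": domain,
        # DNS validation requires access to create CNAME records
        # in the zone for the requested domain.
        "ValidationMethod": "DNS",
    }
    if extra_sans:
        params["SubjectAlternativeNames"] = list(extra_sans)
    return params

params = acm_request("app.example.com", ["www.example.com"])
# cert_arn = boto3.client("acm").request_certificate(**params)["CertificateArn"]
```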

# Use AMS SSP to provision AWS Private Certificate Authority in your AMS account
<a name="acm-priv-ca"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Private Certificate Authority capabilities directly in your AMS managed account. Private certificates are used for identifying and securing communication between connected resources on private networks, such as servers, mobile, and IoT devices and applications. AWS Private CA is a managed private CA service that helps you easily and securely manage the lifecycle of your private certificates. AWS Private CA provides you a highly available private CA service without the upfront investment and ongoing maintenance costs of operating your own private CA. AWS Private CA extends ACM's certificate management capabilities to private certificates, enabling you to create and manage public and private certificates centrally. You can easily create and deploy private certificates for your AWS resources using the AWS Management Console or the ACM API. For EC2 instances, containers, IoT devices, and on-premises resources, you can easily create and track private certificates and use your own client-side automation code to deploy them. You also have the flexibility to create private certificates and manage them yourself for applications that require custom certificate lifetimes, key algorithms, or resource names. To learn more, see [AWS Private CA](https://aws.amazon.com/certificate-manager/private-certificate-authority/).

## AWS Private CA in AWS Managed Services FAQ
<a name="set-app-sync-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS Private CA in my AMS account?**

Request access by submitting the AWS Services RFC (Management | AWS service | Compatible service). This RFC provisions the following IAM role in your account: `customer_acm_pca_role`. Once provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using the AWS Private CA?**

Currently, AWS Resource Access Manager (AWS RAM) cannot be used to share your AWS Private CA cross-account.

**Q: What are the prerequisites or dependencies to using AWS Private CA?**

1. If you plan to create a CRL, you need an S3 bucket to store it in. AWS Private CA automatically deposits the CRL in the Amazon S3 bucket that you designate and updates it periodically. The S3 bucket must have the following bucket policy before you can set up a CRL. To proceed with this request, create an RFC with ct-0fpjlxa808sh2 (Management | Advanced stack components | S3 storage | Update policy) as follows:
+ Provide the S3 bucket name or ARN.
+ Copy the policy below into the RFC and replace `bucket-name` with your desired S3 bucket name.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Principal":{
            "Service":"acm-pca.amazonaws.com"
         },
         "Action":[
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:GetBucketAcl",
            "s3:GetBucketLocation"
         ],
         "Resource":[
            "arn:aws:s3:::bucket-name/*",
            "arn:aws:s3:::bucket-name"
         ]
      }
   ]
}
```

------
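Since the policy above must be customized per bucket, here is a small sketch (plain Python, standard library only; the bucket name is hypothetical) that substitutes your bucket name into the template before you attach it to the RFC:

```python
import json

# Template mirroring the CRL bucket policy above; "bucket-name" is the
# placeholder to replace with your actual S3 bucket name.
POLICY_TEMPLATE = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "acm-pca.amazonaws.com"},
        "Action": ["s3:PutObject", "s3:PutObjectAcl",
                   "s3:GetBucketAcl", "s3:GetBucketLocation"],
        "Resource": ["arn:aws:s3:::bucket-name/*", "arn:aws:s3:::bucket-name"],
    }],
}

def crl_bucket_policy(bucket):
    """Return the policy with the placeholder replaced by `bucket`."""
    text = json.dumps(POLICY_TEMPLATE)
    return json.loads(text.replace("bucket-name", bucket))

policy = crl_bucket_policy("customer-pca-crl")  # hypothetical bucket name
print(json.dumps(policy, indent=2))
```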

2. If the above S3 bucket is encrypted, then the service principal acm-pca.amazonaws.com requires permissions to decrypt. To proceed with this request, create an RFC with ct-3ovo7px2vsa6n (Management | Advanced stack components | KMS key | Update) as follows:
+ Provide the KMS Key ARN on which the policy must be updated.
+ Copy the policy below into the RFC and replace `bucket_name` with your desired S3 bucket name.

```
{
   "Sid":"Allow ACM-PCA use of the key",
   "Effect":"Allow",
   "Principal":{
      "Service":"acm-pca.amazonaws.com"
   },
   "Action":[
      "kms:GenerateDataKey",
      "kms:Decrypt"
   ],
   "Resource":"*",
   "Condition":{
      "StringLike":{
         "kms:EncryptionContext:aws:s3:arn":[
            "arn:aws:s3:::bucket_name/acm-pca-permission-test-key",
            "arn:aws:s3:::bucket_name/acm-pca-permission-test-key-private",
            "arn:aws:s3:::bucket_name/audit-report/*",
            "arn:aws:s3:::bucket_name/crl/*"
         ]
      }
   }
}
```

3. AWS Private CA CRLs don't support the S3 setting "Block public access to buckets and objects granted through new access control lists (ACLs)". You must disable this setting on the S3 account and bucket in order to allow AWS Private CA to write CRLs, as described in [How to securely create and store your CRL for ACM Private CA](https://aws.amazon.com/blogs/security/how-to-securely-create-and-store-your-crl-for-acm-private-ca/). If you would like to disable it, create a new RFC with ct-0xdawir96cy7k (Management | Other | Other | Update) and attach a risk acceptance. If you have any questions about risk acceptance, reach out to your cloud architect.

# Use AMS SSP to provision AWS CloudEndure in your AMS account
<a name="cloud-endure"></a>

**Note**  
Following the successful launch of AWS Application Migration Service, the CloudEndure Migration service is now end of life in all AWS Regions. We recommend that customers use AWS Application Migration Service for lift-and-shift migrations to AWS GovCloud (US) Regions and to commercial Regions. For information, see [What Is AWS Application Migration Service?](https://docs.aws.amazon.com/mgn/latest/ug/what-is-application-migration-service.html).
If you want to use the AWS Application Migration Service, reach out to your CA so they can guide you.

Use AMS Self-Service Provisioning (SSP) mode to access AWS CloudEndure capabilities directly in your AMS managed account. AWS CloudEndure migration simplifies, expedites, and automates large-scale migrations from physical, virtual, and cloud-based infrastructure to AWS. CloudEndure Disaster Recovery (DR) protects against downtime and data loss from any threat, including ransomware and server corruption.

## AWS CloudEndure in AWS Managed Services FAQ
<a name="cloud-endure-faqs"></a>

**Q: How do I request access to CloudEndure in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM user to your account: `customer_cloud_endure_user`. After it's provisioned in your account, the access key and secret key for the user are shared in AWS Secrets Manager.

These policies are provisioned to the account as well: `customer_cloud_endure_policy` and `customer_cloud_endure_deny_policy`.

Additionally, you must provide a Risk Acceptance as the CloudEndure DR solution for application integration has infrastructure-mutating permissions. To do this, work with your cloud service delivery manager (CSDM).

**Q: What are the restrictions to using CloudEndure in my AMS account?**

The CloudEndure replication and conversion instances can be launched only in the subnet that you indicate.

**Q: What are the prerequisites or dependencies to using CloudEndure in my AMS account?**

Share the following via RFC bidirectional correspondence:
+ VPC Subnet details for Replication and Conversion instances to be launched.
+ The KMS Key Amazon Resource Name (ARN) if the EBS volumes are encrypted.

# Use AMS SSP to provision AWS CloudHSM in your AMS account
<a name="cloud-hsm"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS CloudHSM capabilities directly in your AMS managed account. AWS CloudHSM helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) instances within the AWS cloud. AWS, and AWS Marketplace partners, offer a variety of solutions for protecting sensitive data within the AWS platform, but for some applications and data subject to contractual or regulatory mandates for managing cryptographic keys, additional protection may be necessary. AWS CloudHSM complements existing data protection solutions and allows you to protect your encryption keys within HSMs that are designed and validated to government standards for secure key management. AWS CloudHSM allows you to securely generate, store, and manage cryptographic keys used for data encryption in a way that keys are accessible only by you. To learn more, see [AWS CloudHSM](https://aws.amazon.com/cloudhsm/).

## AWS CloudHSM in AWS Managed Services FAQ
<a name="set-cloud-hsm-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS CloudHSM in my AMS account?**

Utilization of AWS CloudHSM in your AMS account is a two-step process:

1. Request an AWS CloudHSM cluster. Do this by submitting an RFC with the Management | Other | Other | Create (ct-1e1xtak34nx76) change type. Include the following details:
   + AWS Region.
   + VPC ID/ARN. Provide a VPC ID/VPC ARN that is in the same account as the RFC that you submit.
   + Specify at least two Availability Zones for the cluster.
   + Amazon EC2 instance ID that will connect to the HSM cluster.

1. Access the AWS CloudHSM console. Do this by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_cloudhsm_console_role`.

After the role is provisioned in your account, you must onboard it in your federation solution.

**Q: What are the restrictions to using AWS CloudHSM in my AMS account?**

Access to the AWS CloudHSM console doesn't provide you with the ability to create, terminate, or restore your cluster. To do those things, submit an RFC with the Management | Other | Other | Create (ct-1e1xtak34nx76) change type.

**Q: What are the prerequisites or dependencies to using AWS CloudHSM in my AMS account?**

You must allow TCP traffic on port 2225 through a client Amazon EC2 instance within a VPC, or use Direct Connect or VPN for on-premises servers that need access to the HSM cluster. AWS CloudHSM depends on Amazon EC2 for security groups and network interfaces. For log monitoring or auditing, HSM relies on CloudTrail (AWS API operations) and CloudWatch Logs for all local HSM device activity.
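As a quick sanity check before the client instance connects, you can confirm that a security group's inbound rules cover the required port. The sketch below is illustrative: the rule dictionaries mirror the shape of the EC2 `DescribeSecurityGroups` `IpPermissions` structure, and the helper name is an assumption:

```python
# Illustrative check: does a set of security-group inbound rules permit
# the TCP port 2225 traffic that the HSM cluster requires?
HSM_PORT = 2225

def allows_hsm_traffic(ip_permissions: list) -> bool:
    for rule in ip_permissions:
        # "-1" means all protocols in the EC2 rule format.
        if rule.get("IpProtocol") not in ("tcp", "-1"):
            continue
        from_port = rule.get("FromPort", 0)
        to_port = rule.get("ToPort", 65535)
        if from_port <= HSM_PORT <= to_port:
            return True
    return False

rules = [{"IpProtocol": "tcp", "FromPort": 2223, "ToPort": 2225,
          "IpRanges": [{"CidrIp": "10.0.0.0/16"}]}]
print(allows_hsm_traffic(rules))  # True
```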

**Q: Who will apply updates to the AWS CloudHSM client and related software libraries?**

You are responsible for applying the library and client updates. Monitor the [CloudHSM version history](https://docs.aws.amazon.com/cloudhsm/latest/userguide/client-history.html) page for releases, and then apply updates by following the [CloudHSM client upgrade](https://docs.aws.amazon.com/cloudhsm/latest/userguide/client-upgrade.html) instructions.

**Note**  
Software patches for the HSM appliance are always automatically applied by the AWS CloudHSM service.

# Use AMS SSP to provision AWS CodeBuild in your AMS account
<a name="code-build"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS CodeBuild capabilities directly in your AMS managed account. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools. With CodeBuild, you are charged by the minute for the compute resources you use. To learn more, see [AWS CodeBuild](https://aws.amazon.com/codebuild/).

**Note**  
To onboard CodeCommit, CodeBuild, CodeDeploy, and CodePipeline with a single RFC, submit the Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type and request the three services: CodeBuild, CodeDeploy, and CodePipeline. Then, all three roles, `customer_codebuild_service_role`, `customer_codedeploy_service_role`, and `aws_code_pipeline_service_role`, are provisioned in your account. After they're provisioned in your account, you must onboard the roles in your federation solution.

## CodeBuild in AWS Managed Services FAQ
<a name="set-code-build-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS CodeBuild in my AMS account?**

Utilization of AWS CodeBuild in your AMS account is a two-step process:

1. Provision the `CodeBuild Service Role` so that the build process can interact with Amazon S3 buckets, Amazon CloudWatch, and log groups

1. Request access to the CodeBuild console

You can request that both be set up in your AMS account by submitting an RFC with the Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using AWS CodeBuild in my AMS account?**

For AWS CodeBuild console administrator access, permissions are limited at resource level; for example, CloudWatch actions are limited on specific resources and the `iam:PassRole` permission is controlled.

**Q: What are the prerequisites or dependencies to using CodeBuild in my AMS account?**

If additional IAM permissions are required for the defined AWS CodeBuild service role, request them through an AMS service request.

# Use AMS SSP to provision AWS CodeCommit in your AMS account
<a name="codecommit"></a>

**Note**  
AWS has closed new customer access to AWS CodeCommit, effective July 25, 2024. AWS CodeCommit existing customers can continue to use the service as normal. AWS continues to invest in security, availability, and performance improvements for AWS CodeCommit, but we do not plan to introduce new features.  
To migrate AWS CodeCommit Git repositories to other Git providers, reach out to your cloud architect (CA) for guidance. For more information on migrating your Git repositories, see [How to migrate your AWS CodeCommit repository to another Git provider](https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider/).

Use AMS Self-Service Provisioning (SSP) mode to access AWS CodeCommit capabilities directly in your AMS managed account. AWS CodeCommit is a fully managed [source control](https://aws.amazon.com/devops/source-control/) service that hosts secure Git-based repositories. It helps teams to collaborate on code in a secure and highly scalable ecosystem. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. You can use CodeCommit to securely store anything from source code to binaries, and it works seamlessly with your existing Git tools. To learn more, see [AWS CodeCommit](https://aws.amazon.com/codecommit/).

**Note**  
To onboard CodeCommit, CodeBuild, CodeDeploy, and CodePipeline with a single RFC, submit the Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type and request the three services: CodeBuild, CodeDeploy, and CodePipeline. Then, all three roles, `customer_codebuild_service_role`, `customer_codedeploy_service_role`, and `aws_code_pipeline_service_role`, are provisioned in your account. After they're provisioned in your account, you must onboard the roles in your federation solution.

## CodeCommit in AWS Managed Services FAQ
<a name="set-codecommit-faqs"></a>

**Q: How do I request access to CodeCommit in my AMS account?**

AWS CodeCommit console and data access roles can be requested by submitting two AWS service RFCs: one for console access and one for data access:
+ Request access to AWS CodeCommit by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_codecommit_console_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

  Data access (such as training and entity lists) requires separate CTs for each data source, specifying the S3 data source (mandatory), output bucket (mandatory), and KMS key (optional). There are no limitations to AWS CodeCommit job creation as long as all data sources have been granted access roles. To request data access, submit an RFC with the Management | Other | Other | Create (ct-1e1xtak34nx76) change type.

**Q: What are the restrictions to using AWS CodeCommit in my AMS account?**

The triggers feature in CodeCommit is disabled because of the associated rights to create SNS topics. Directly authenticating against CodeCommit is restricted; users should authenticate with the credential helper. Some KMS commands are also restricted: `kms:Encrypt`, `kms:Decrypt`, `kms:ReEncrypt`, `kms:GenerateDataKey`, `kms:GenerateDataKeyWithoutPlaintext`, and `kms:DescribeKey`.
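For example, you can scan a proposed IAM policy document for these restricted KMS actions before requesting it. This is an illustrative sketch (the helper and the sample policy are hypothetical, and the action list is the corrected set named above):

```python
# Illustrative helper: flag IAM policy actions that AMS restricts for
# CodeCommit-related KMS usage.
RESTRICTED_KMS_ACTIONS = {
    "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt",
    "kms:GenerateDataKey", "kms:GenerateDataKeyWithoutPlaintext",
    "kms:DescribeKey",
}

def restricted_actions(policy: dict) -> set:
    """Return the restricted KMS actions that appear in any statement."""
    found = set()
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):  # "Action" may be a single string
            actions = [actions]
        found |= RESTRICTED_KMS_ACTIONS.intersection(actions)
    return found

policy = {"Statement": [{"Effect": "Allow",
                         "Action": ["kms:Decrypt", "s3:GetObject"],
                         "Resource": "*"}]}
print(restricted_actions(policy))  # {'kms:Decrypt'}
```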

**Q: What are the prerequisites or dependencies to using AWS CodeCommit in my AMS account?**

If your S3 buckets are encrypted with KMS keys, Amazon S3 and AWS KMS are required to use AWS CodeCommit.

# Use AMS SSP to provision AWS CodeDeploy in your AMS account
<a name="code-deploy"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS CodeDeploy capabilities directly in your AMS managed account. AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy helps you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate software deployments, eliminating the need for error-prone manual operations. The service scales to match your deployment needs. To learn more, see [AWS CodeDeploy](https://aws.amazon.com/codedeploy/).

**Note**  
To onboard CodeCommit, CodeBuild, CodeDeploy, and CodePipeline with a single RFC, submit the Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type and request the three services: CodeBuild, CodeDeploy, and CodePipeline. Then, all three roles, `customer_codebuild_service_role`, `customer_codedeploy_service_role`, and `aws_code_pipeline_service_role`, are provisioned in your account. After they're provisioned in your account, you must onboard the roles in your federation solution.

## CodeDeploy in AWS Managed Services FAQ
<a name="set-code-deploy-faqs"></a>

**Q: How do I request access to CodeDeploy in my AMS account?**

Request access to CodeDeploy by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM roles to your account: `customer_codedeploy_console_role` and `customer_codedeploy_service_role`. After they're provisioned in your account, you must onboard the `customer_codedeploy_console_role` role in your federation solution.

**Q: What are the restrictions to using CodeDeploy in my AMS account?**

Currently, only the Amazon EC2/On-premises compute platform is supported. Blue/green deployments are not supported.

**Q: What are the prerequisites or dependencies to using CodeDeploy in my AMS account?**

There are no prerequisites or dependencies to use CodeDeploy in your AMS account.

# Use AMS SSP to provision AWS CodePipeline in your AMS account
<a name="code-pipeline"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS CodePipeline capabilities directly in your AMS managed account. AWS CodePipeline is a fully managed [continuous delivery](https://aws.amazon.com/devops/continuous-delivery/) service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates. You can easily integrate AWS CodePipeline with third-party services such as GitHub or with your own custom plugin. With AWS CodePipeline, you only pay for what you use. There are no upfront fees or long-term commitments. To learn more, see [AWS CodePipeline](https://aws.amazon.com/codepipeline/).

**Note**  
To onboard CodeCommit, CodeBuild, CodeDeploy, and CodePipeline with a single RFC, submit the Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type and request the three services: CodeBuild, CodeDeploy, and CodePipeline. Then, all three roles, `customer_codebuild_service_role`, `customer_codedeploy_service_role`, and `aws_code_pipeline_service_role`, are provisioned in your account. After they're provisioned in your account, you must onboard the roles in your federation solution.  
CodePipeline in AMS does not support "Amazon CloudWatch Events" for Source Stage because it needs elevated permissions to create the service role and policy, which bypasses the least-privileges model and AMS change management process.

## CodePipeline in AWS Managed Services FAQ
<a name="set-code-pipeline-faqs"></a>

**Q: How do I request access to CodePipeline in my AMS account?**

Request access to CodePipeline by submitting a service request for the `customer_code_pipeline_console_role` in the relevant account. After it's provisioned in your account, you must onboard the role in your federation solution.

At this time, AMS Operations will also deploy this service role in your account: `aws_code_pipeline_service_role_policy`.

**Q: What are the restrictions to using CodePipeline in my AMS account?**

CodePipeline features, stages, and providers are limited to the following:

1. Deploy Stage: Limited to Amazon S3, and AWS CodeDeploy

1. Source Stage: Limited to Amazon S3, AWS CodeCommit, BitBucket, and GitHub

1. Build Stage: Limited to AWS CodeBuild, and Jenkins

1. Approval Stage: Limited to Amazon SNS

1. Test Stage: Limited to AWS CodeBuild, Jenkins, BlazeMeter, Ghost Inspector UI Testing, Micro Focus StormRunner Load, and Runscope API Monitoring

1. Invoke Stage: Limited to Step Functions, and Lambda
**Note**  
AMS Operations will deploy `customer_code_pipeline_lambda_policy` in your account; it must be attached to the Lambda execution role for the Lambda invoke stage. Provide the name of the Lambda service/execution role that you want this policy added to. If there is no custom Lambda service/execution role, AMS creates a new role named `customer_code_pipeline_lambda_execution_role`, which is a copy of `customer_lambda_basic_execution_role` with `customer_code_pipeline_lambda_policy` attached.
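Before requesting a pipeline, you can pre-check a planned stage/provider list against these limits. The sketch below is illustrative; the stage and provider strings are informal labels taken from the list above, not CodePipeline API values:

```python
# Illustrative pre-check of a planned pipeline against the AMS
# stage/provider limits listed above.
ALLOWED = {
    "Deploy":   {"Amazon S3", "AWS CodeDeploy"},
    "Source":   {"Amazon S3", "AWS CodeCommit", "BitBucket", "GitHub"},
    "Build":    {"AWS CodeBuild", "Jenkins"},
    "Approval": {"Amazon SNS"},
    "Test":     {"AWS CodeBuild", "Jenkins", "BlazeMeter",
                 "Ghost Inspector UI Testing", "Micro Focus StormRunner Load",
                 "Runscope API Monitoring"},
    "Invoke":   {"Step Functions", "Lambda"},
}

def unsupported_stages(pipeline: list) -> list:
    """Return (stage, provider) pairs that fall outside the AMS limits."""
    return [(stage, provider) for stage, provider in pipeline
            if provider not in ALLOWED.get(stage, set())]

pipeline = [("Source", "AWS CodeCommit"), ("Build", "AWS CodeBuild"),
            ("Deploy", "AWS CloudFormation")]
print(unsupported_stages(pipeline))  # [('Deploy', 'AWS CloudFormation')]
```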

**Q: What are the prerequisites or dependencies to using CodePipeline in my AMS account?**

The supported AWS services (AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy) must be launched prior to, or along with, the launch of CodePipeline.

# Use AMS SSP to provision AWS Compute Optimizer in your AMS account
<a name="compute-optimizer"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Compute Optimizer capabilities directly in your AMS managed account. AWS Compute Optimizer recommends optimal AWS Compute resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics. Over-provisioning compute (Amazon EC2 and ASGs) can lead to unnecessary infrastructure cost and under-provisioning compute can lead to poor application performance. Compute Optimizer helps you choose the optimal Amazon EC2 instance types, including those that are part of an Amazon EC2 Auto Scaling group, based on your utilization data. To learn more, see [AWS Compute Optimizer](https://aws.amazon.com/compute-optimizer/).

## Compute Optimizer in AWS Managed Services FAQ
<a name="set-compute-optimizer-faqs"></a>

**Q: How do I request access to Compute Optimizer in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_compute_optimizer_readonly_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Compute Optimizer in my AMS account?**

There are no restrictions. Full functionality of AWS Compute Optimizer is available in your AMS account.

**Q: What are the prerequisites or dependencies to using Compute Optimizer in my AMS account?**
+ You must submit an RFC (Management | Other | Other | Update) authorizing AMS Operations to enable the service in the account. During deployment, a service-linked role (SLR) is created to allow metrics gathering and report generation. The SLR is labeled "AWSServiceRoleForComputeOptimizer". For more information, see [Using Service-Linked Roles for AWS Compute Optimizer](https://docs.aws.amazon.com/compute-optimizer/latest/ug/using-service-linked-roles.html).
+ CloudWatch metrics must be enabled for the following metrics:
  + **CPU utilization**: The percentage of allocated Amazon EC2 compute units that are in use on the instance. This metric identifies the processing power required to run an application on a selected instance.
  + **Memory utilization**: The amount of memory that has been used in some way during the sample period. This metric identifies the memory required to run an application on a selected instance. Memory utilization is analyzed only for resources that have the unified CloudWatch agent installed on them. For more information, see Enabling Memory Utilization with the CloudWatch Agent.
  + **Network in**: The number of bytes received on all network interfaces by the instance. This metric identifies the volume of incoming network traffic to a single instance.
  + **Network out**: The number of bytes sent out on all network interfaces by the instance. This metric identifies the volume of outgoing network traffic from a single instance.
  + **Local disk input/output (I/O)**: The number of input/output operations for the local disk. This metric identifies the performance of the root volume of an instance.
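As an illustration, you can compare the metrics an instance actually reports against the inputs listed above. The metric name strings below follow common AWS/EC2 and CloudWatch agent conventions, but treat them as assumptions rather than an official list:

```python
# Illustrative coverage check for the Compute Optimizer inputs above.
# Metric names are assumptions based on AWS/EC2 and CWAgent conventions.
REQUIRED = {"CPUUtilization", "NetworkIn", "NetworkOut",
            "DiskReadOps", "DiskWriteOps"}
AGENT_ONLY = {"mem_used_percent"}  # requires the unified CloudWatch agent

def missing_metrics(reported: set) -> set:
    """Return the expected metrics the instance does not yet report."""
    return (REQUIRED | AGENT_ONLY) - reported

reported = {"CPUUtilization", "NetworkIn", "NetworkOut",
            "DiskReadOps", "DiskWriteOps"}
print(missing_metrics(reported))  # {'mem_used_percent'}
```

A non-empty result for `mem_used_percent` typically means the unified CloudWatch agent still needs to be installed.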

# Use AMS SSP to provision AWS DataSync in your AMS account
<a name="data-sync"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS DataSync capabilities directly in your AMS managed account. AWS DataSync moves large amounts of data online between on-premises storage and Amazon S3, Amazon Elastic File System (Amazon EFS), or Amazon FSx. Manual tasks related to data transfers can slow down migrations and burden IT operations. DataSync eliminates or automatically handles many of these tasks, including scripting copy jobs, scheduling and monitoring transfers, validating data, and optimizing network utilization. The DataSync software agent connects to your Network File System (NFS) and Server Message Block (SMB) storage, so you don’t have to modify your applications. DataSync can transfer hundreds of terabytes and millions of files at speeds up to 10 times faster than open-source tools, over the internet or AWS Direct Connect links. You can use DataSync to migrate active data sets or archives to AWS, transfer data to the cloud for timely analysis and processing, or replicate data to AWS for business continuity.

To learn more, see [AWS DataSync](https://aws.amazon.com/datasync/).

## DataSync in AWS Managed Services FAQ
<a name="data-sync-faqs"></a>

**Q: How do I request access to DataSync in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_datasync_console_role`.

After it's provisioned in your account, you must onboard the role in your federation solution.

The CloudWatch log group to use in order to stream task logs is "/aws/datasync".

**Q: What are the restrictions to using DataSync in my AMS account?**

There are no restrictions. Full functionality of AWS DataSync is available in your AMS account.

**Q: What are the prerequisites or dependencies to using DataSync in my AMS account?**
+ Amazon S3 ARNs (Amazon Resource Names) are required for all S3 buckets associated with DataSync tasks that will be performed using the DataSync service role `customer_datasync_service_role`.
+ VPC endpoints and security groups for DataSync agents must be requested with an RFC with the Management | Other | Other | Create (ct-1e1xtak34nx76) change type before you use the VPC endpoints.
+ AWS DataSync agents run in AMS as an appliance. The AWS DataSync agent is patched and updated by the service; for details, see [AWS DataSync FAQ](https://aws.amazon.com/datasync/faqs/).
+ To launch an AWS DataSync agent, submit an RFC with the Management | Other | Other | Create (ct-1e1xtak34nx76) change type, requesting that the agent be deployed. Provide the AWS DataSync Amazon EC2 AMI ID, instance type, subnet, and security group; and either reference an existing Amazon EC2 key pair or request the creation of a new one.
**Note**  
AMS provisions the AWS DataSync agent manually on your behalf, and doesn't require the WIGS ingestion process on the AWS DataSync Amazon EC2 AMI.
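Because a malformed bucket ARN is an easy way to stall an RFC, you can sanity-check the ARNs before listing them. The following sketch is illustrative; the regex is an approximation of the S3 bucket ARN format, not an official validator:

```python
import re

# Illustrative sanity check: the pattern approximates "arn:aws:s3:::bucket"
# with common bucket-name rules (lowercase letters, digits, dots, hyphens).
S3_BUCKET_ARN = re.compile(r"^arn:aws:s3:::[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def invalid_bucket_arns(arns: list) -> list:
    """Return the entries that don't look like S3 bucket ARNs."""
    return [arn for arn in arns if not S3_BUCKET_ARN.match(arn)]

arns = ["arn:aws:s3:::my-datasync-source", "my-datasync-dest"]
print(invalid_bucket_arns(arns))  # ['my-datasync-dest']
```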

# Use AMS SSP to provision AWS Device Farm in your AMS account
<a name="device-farm"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Device Farm capabilities directly in your AMS managed account. AWS Device Farm is an application testing service that lets you improve the quality of your web and mobile apps by testing them across an extensive range of desktop browsers and real mobile devices; without having to provision and manage any testing infrastructure. The service enables you to run your tests concurrently on multiple desktop browsers or real devices to speed up the execution of your test suite, and generates videos and logs to help you quickly identify issues with your app. 

To learn more, see [AWS Device Farm](https://aws.amazon.com/device-farm/).

## AWS Device Farm in AWS Managed Services FAQ
<a name="device-farm-faqs"></a>

**Q: How do I request access to AWS Device Farm in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_devicefarm_role`.

After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using AWS Device Farm in my AMS account?**

Full access to the AWS Device Farm service is provided with the exception of using the AMS namespace in the 'Name' tag.

**Q: What are the prerequisites or dependencies to using AWS Device Farm in my AMS account?**

None.

# Use AMS SSP to provision AWS Elastic Disaster Recovery in your AMS account
<a name="elastic-disaster-recovery"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Elastic Disaster Recovery capabilities directly in your AMS managed account. AWS Elastic Disaster Recovery minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. You can increase IT resilience when you use AWS Elastic Disaster Recovery to replicate on-premises or cloud-based applications running on supported operating systems. Use the AWS Management Console to configure replication and launch settings, monitor data replication, and launch instances for drills or recovery.

To learn more, see [AWS Elastic Disaster Recovery](https://aws.amazon.com/disaster-recovery/).

## AWS Elastic Disaster Recovery in AWS Managed Services FAQ
<a name="elastic-disaster-recovery-faqs"></a>

**Q: How do I request access to AWS Elastic Disaster Recovery in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_drs_console_role`.

After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using AWS Elastic Disaster Recovery in my AMS account?**

There are no restrictions to use AWS Elastic Disaster Recovery in your AMS account.

**Q: What are the prerequisites or dependencies to using AWS Elastic Disaster Recovery in my AMS account?**
+ After you have access to the console role, you must initialize the Elastic Disaster Recovery service to create the needed IAM roles within the account.
  + You must submit an RFC with the Management | Applications | IAM instance profile | Create (managed automation) (ct-0ixp4ch2tiu04) change type to create a clone of the `customer-mc-ec2-instance-profile` instance profile and attach the `AWSElasticDisasterRecoveryEc2InstancePolicy` policy. You must specify which machines to attach the new policy to.
  + If the instance isn't using the default instance profile, then AMS can attach `AWSElasticDisasterRecoveryEc2InstancePolicy` through automation.
+ You must use a customer-owned KMS key for cross-account recovery. The source account's KMS key must be updated following the policy to allow target account access. For more information, see [Share the EBS encryption key with the target account](https://docs.aws.amazon.com/drs/latest/userguide/multi-account.html#multi-account-ebs).
+ The KMS key policy must be updated to allow `customer_drs_console_role` to view the policy if you don't want to switch roles to view it.
+ For cross-account, cross-Region disaster recovery, AMS must set up the source and target account as Trusted Accounts and deploy the [Failback and in-AWS right-sizing roles](https://docs.aws.amazon.com/drs/latest/userguide/trusted-accounts-failback-role.html) through CloudFormation.

# Use AMS SSP to provision AWS Elemental MediaConvert in your AMS account
<a name="amz-elemental-media-convert"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Elemental MediaConvert capabilities directly in your AMS managed account. AWS Elemental MediaConvert is a file-based video transcoding service with broadcast-grade features. It enables you to create video-on-demand (VOD) content for broadcast and multiscreen delivery at scale. The service combines advanced video and audio capabilities with a simple web services interface and pay-as-you-go pricing. With AWS Elemental MediaConvert, you can focus on delivering compelling media experiences without having to worry about the complexity of building and operating your own video processing infrastructure.

To learn more, see [AWS Elemental MediaConvert](https://aws.amazon.com/mediaconvert/).

## MediaConvert in AWS Managed Services FAQ
<a name="set-amz-ecs-faqs"></a>

**Q: How do I request access to MediaConvert in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_mediaconvert_author_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

A second role, `customer_MediaConvert_Default_Role`, is also provided. MediaConvert uses this role to read from the source S3 bucket, write the output to the destination S3 bucket, and invoke the API gateway if you need digital rights management (DRM).

**Q: What are the restrictions to using MediaConvert in my AMS account?**

There are no restrictions for the use of MediaConvert in AMS.

**Q: What are the prerequisites or dependencies to using MediaConvert in my AMS account?**

There are no prerequisites or dependencies to use MediaConvert in your AMS account.

# Use AMS SSP to provision AWS Elemental MediaLive in your AMS account
<a name="elemental-media-live"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Elemental MediaLive capabilities directly in your AMS managed account. AWS Elemental MediaLive is a broadcast-grade live video processing service. It enables you to create high-quality video streams for delivery to broadcast televisions and internet-connected multiscreen devices, like connected TVs, tablets, smartphones, and set-top boxes. The service works by encoding your live video streams in real-time, taking a larger-sized live video source and compressing it into smaller versions for distribution to your viewers. With AWS Elemental MediaLive, you can easily set up streams for both live events and 24x7 channels with advanced broadcasting features, high availability, and pay-as-you-go pricing. AWS Elemental MediaLive lets you focus on creating compelling live video experiences for your viewers without the complexity of building and operating broadcast-grade video processing infrastructure.

To learn more, see [AWS Elemental MediaLive](https://aws.amazon.com/medialive/).

## MediaLive in AWS Managed Services FAQ
<a name="elemental-media-live-faqs"></a>

**Q: How do I request access to MediaLive in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_medialive_author_role`.

As part of this RFC, a second role, `customer_medialive_service_role`, is deployed into your account. You can assign this role to your MediaLive channels and inputs so that they can interact with other services, such as Amazon S3, MediaStore, and CloudWatch Logs.

After the roles are provisioned in your account, you must onboard the roles in your federation solution.

**Q: What are the restrictions to using MediaLive in my AMS account?**

There are no restrictions for the use of MediaLive in AMS.

**Q: What are the prerequisites or dependencies to using MediaLive in my AMS account?**

There are no prerequisites or dependencies to use MediaLive in your AMS account.

# Use AMS SSP to provision AWS Elemental MediaPackage in your AMS account
<a name="amz-elemental-media-package"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Elemental MediaPackage capabilities directly in your AMS managed account. AWS Elemental MediaPackage reliably prepares and protects your video for delivery over the internet. From a single video input, AWS Elemental MediaPackage creates video streams formatted to play on connected TVs, mobile phones, computers, tablets, and game consoles. It makes it easy to implement popular video features for viewers (start-over, pause, rewind, and so on.), like those commonly found on DVRs. AWS Elemental MediaPackage can also protect your content using Digital Rights Management (DRM). AWS Elemental MediaPackage scales automatically in response to load, so your viewers will always get a great experience without you having to accurately predict in advance the capacity you’ll need. 

To learn more, see [AWS Elemental MediaPackage](https://aws.amazon.com/mediapackage/).

## MediaPackage in AWS Managed Services FAQ
<a name="set-amz-elemental-media-package-faqs"></a>

**Q: How do I request access to AWS Elemental MediaPackage in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_mediapackage_author_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

A second role, `customer_mediapackage_service_role`, is also provided; you can assign it to your MediaPackage resources to interact with other services such as Amazon S3 and Secrets Manager.

**Q: What are the restrictions to using MediaPackage in my AMS account?**

There are no restrictions for the use of MediaPackage in AMS.

**Q: What are the prerequisites or dependencies to using MediaPackage in my AMS account?**

There are no prerequisites or dependencies to use MediaPackage in your AMS account.

# Use AMS SSP to provision AWS Elemental MediaStore in your AMS account
<a name="elemental-media-store"></a>

**Note**  
After careful consideration, AWS has made the decision to discontinue MediaStore, effective November 13, 2025. If you are an active customer of MediaStore, you can use MediaStore as normal until November 13, 2025, when support for the service will end. After this date, you will no longer be able to use MediaStore or any of the capabilities provided by this service.

Use AMS Self-Service Provisioning (SSP) mode to access AWS Elemental MediaStore capabilities directly in your AMS managed account. AWS Elemental MediaStore is an AWS storage service optimized for media. It gives you the performance, consistency, and low latency required to deliver live streaming video content. AWS Elemental MediaStore acts as the origin store in your video workflow. Its high performance capabilities meet the needs of the most demanding media delivery workloads, combined with long-term, cost-effective storage. To learn more, see [AWS Elemental MediaStore](https://aws.amazon.com/mediastore/).

## MediaStore in AWS Managed Services FAQ
<a name="elemental-media-store-faqs"></a>

**Q: How do I request access to MediaStore in my AMS account?**

Request access to MediaStore by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_mediastore_author_role`. As part of this RFC, a second role, `MediaStoreAccessLogs`, is deployed into your account; the MediaStore service uses it to log activity in CloudWatch, if you choose to enable that feature. After the roles are provisioned in your account, you must onboard them in your federation solution.

At this time, AMS Operations will also deploy this service role in your account: `aws_code_pipeline_service_role_policy`.

**Q: What are the restrictions to using MediaStore in my AMS account?**

There are no restrictions for the use of MediaStore in AMS.

**Q: What are the prerequisites or dependencies to using MediaStore in my AMS account?**

There are no prerequisites or dependencies to use MediaStore in your AMS account.

# Use AMS SSP to provision AWS Elemental MediaTailor in your AMS account
<a name="amz-elemental-media-tailor"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Elemental MediaTailor capabilities directly in your AMS managed account. AWS Elemental MediaTailor lets video providers insert individually targeted advertising into their video streams without sacrificing broadcast-level quality-of-service. With AWS Elemental MediaTailor, viewers of your live or on-demand video each receive a stream that combines your content with ads personalized to them. But unlike other personalized ad solutions, with AWS Elemental MediaTailor your entire stream – video and ads – is delivered with broadcast-grade video quality to improve the experience for your viewers. AWS Elemental MediaTailor delivers automated reporting based on both client and server-side ad delivery metrics, to accurately measure advertising impressions and viewer behavior. You can easily monetize unexpected high-demand viewing events with no up-front costs using AWS Elemental MediaTailor. It also improves ad delivery rates, helping you make more money from every video, and it works with a wider variety of content delivery networks, ad decision servers, and client devices.

To learn more, see [AWS Elemental MediaTailor](https://aws.amazon.com/mediatailor/).

## MediaTailor in AWS Managed Services FAQ
<a name="set-amz-elemental-media-tailor-faqs"></a>

**Q: How do I request access to MediaTailor in my AMS account?**

Request access to MediaTailor by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer-mediatailor-role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using MediaTailor in my AMS account?**

There are no restrictions for the use of MediaTailor in AMS.

**Q: What are the prerequisites or dependencies to using MediaTailor in my AMS account?**

There are no prerequisites or dependencies to use MediaTailor in your AMS account.

# Use AMS SSP to provision AWS Global Accelerator in your AMS account
<a name="global-acc"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Global Accelerator capabilities directly in your AMS managed account. Global Accelerator is a network layer service in which you create accelerators to improve availability and performance for internet applications used by a global audience. To learn more, see [Global Accelerator](https://aws.amazon.com/global-accelerator/).

## Global Accelerator in AWS Managed Services FAQ
<a name="set-global-acc-faqs"></a>

Common questions and answers:

**Q: How do I request Global Accelerator to be set up in my AMS account?**

Request access through the submission of the AWS Services RFC (Management | AWS service | Self-provisioned service). Through this RFC, the following IAM role is provisioned in your account: `customer_global_accelerator_console_role`. Once provisioned in your account, you must onboard the console role in your federation solution.

**Q: What are the restrictions to using Global Accelerator in my AMS account?**

Global Accelerator is a global service that supports endpoints in multiple AWS Regions, which are listed in the [AWS Region Table](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/).

**Q: What are the prerequisites or dependencies to using Global Accelerator in my AMS account?**

When you set up your accelerator with Global Accelerator, you associate the static IP addresses to regional endpoints in one or more AWS Regions. For standard accelerators, the endpoints are Network Load Balancers, Application Load Balancers, Amazon EC2 instances, or Elastic IP addresses. For custom routing accelerators, endpoints are virtual private cloud (VPC) subnets with one or more EC2 instances.

# Use AMS SSP to provision AWS Glue in your AMS account
<a name="glue"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Glue capabilities directly in your AMS managed account. AWS Glue is a fully managed extract, transform, and load (ETL) service that helps you to prepare and load your data for analytics. You can create and run an ETL job with a few clicks in the AWS Management Console. You point AWS Glue to your data stored on AWS, and AWS Glue discovers your data and stores the associated metadata (e.g. table definition and schema) in the AWS Glue Data Catalog. Once cataloged, your data is immediately searchable, queryable, and available for ETL actions. To learn more, see [AWS Glue](https://aws.amazon.com/glue/).

## AWS Glue in AWS Managed Services FAQ
<a name="set-glue-faqs"></a>

Common questions and answers:

**Q: How do I request AWS Glue to be set up in my AMS account?**

Request access to AWS Glue by submitting an RFC with the Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM roles to your account:
+ `customer_glue_console_role`
+ `customer_glue_service_role`

The preceding roles include the following attached policies:
+ `customer_glue_secrets_manager_policy`
+ `customer_glue_deny_policy`

After the roles are provisioned in your account, you must onboard them in your federation solution.

For access to Crawlers, Jobs, and Development endpoints (roles needed for specific use cases), submit an RFC with the Deployment | Advanced stack components | Identity and Access Management (IAM) | Create entity or policy (ct-3dpd8mdd9jn1r) change type.

**Q: What are the restrictions to using AWS Glue in my AMS account?**

There are no restrictions. Full functionality of AWS Glue is available in your AMS account. For an interactive environment where you can author and test ETL scripts, use Notebooks on AWS Glue Studio. AWS Glue Interactive Sessions and Job Notebooks are serverless features of AWS Glue that use the AWS Glue service role.

**AWS Glue prior to 2.0:** AWS Glue Notebooks are a non-managed resource that launches Amazon EC2 instances in an account. It's a best practice to launch your own Amazon EC2 instances and install the software necessary to support a notebook environment and development. For more information, see [Tutorial: Set Up a Local Apache Zeppelin Notebook to Test and Debug ETL Scripts](https://docs.aws.amazon.com/glue/latest/dg/dev-endpoint-tutorial-local-notebook.html) and [Using Development Endpoints for Developing Scripts](https://docs.aws.amazon.com/glue/latest/dg/dev-endpoint.html).

**Q: What are the prerequisites or dependencies to using AWS Glue in my AMS account?**

AWS Glue has a dependency on Amazon S3, CloudWatch, and CloudWatch Logs. Transitive dependencies vary based on the data sources and other AWS services that your AWS Glue jobs interact with (for example, Amazon Redshift, Amazon RDS, or Athena).

# Use AMS SSP to provision AWS Lake Formation in your AMS account
<a name="lake-formation"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Lake Formation capabilities directly in your AMS managed account. AWS Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores all your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better business decisions.

Creating a data lake with Lake Formation is as simple as defining data sources and what data access and security policies you want to apply. Lake Formation then helps you collect and catalog data from databases and object storage, move the data into your new Amazon S3 data lake, clean and classify your data using machine learning algorithms, and secure access to your sensitive data. Your users can access a centralized data catalog (for details, see [AWS Glue FAQ](https://aws.amazon.com/glue/faqs/#AWS_Glue_Data_Catalog/)) that describes available data sets and their appropriate usage. Your users then leverage these data sets with their choice of analytics and machine learning services, like [Amazon Redshift](https://aws.amazon.com/redshift/), [Amazon Athena](https://aws.amazon.com/athena/), and (in beta) [Amazon EMR](https://aws.amazon.com/emr/) for Apache Spark. Lake Formation builds on the capabilities available in [AWS Glue](https://aws.amazon.com/glue/).

To learn more, see [AWS Lake Formation](https://aws.amazon.com/lake-formation/).

## Lake Formation in AWS Managed Services FAQ
<a name="set-lake-formation-faqs"></a>

**Q: How do I request access to AWS Lake Formation in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer_lakeformation_data_analyst_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

Additionally, the following two roles are optional:
+ `customer_lakeformation_admin_role`
+ `customer_lakeformation_workflow_role`

For admin permissions, you can choose to onboard the role `customer_lakeformation_admin_role` as part of the same SSPS change type (ct-3qe6io8t6jtny).

If you want to create Blueprints in the AWS Lake Formation console, submit a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type and explicitly request deployment of the `customer_lakeformation_workflow_role`. In the RFC, you must provide the S3 bucket name if the bucket is a source when Blueprints are created. The S3 bucket applies if the Blueprint type is AWS CloudTrail, Classic Load Balancer logs, or Application Load Balancer logs.

**Q: What are the restrictions to using AWS Lake Formation in my AMS account?**

Full functionality of Lake Formation is available in AMS.

**Q: What are the prerequisites or dependencies to using AWS Lake Formation in my AMS account?**

Lake Formation integrates with the AWS Glue service; therefore, AWS Glue users can access only the databases and tables on which they have Lake Formation permissions. Additionally, Amazon Athena and Amazon Redshift users can query only the AWS Glue databases and tables on which they have Lake Formation permissions.

# Use AMS SSP to provision AWS Lambda in your AMS account
<a name="lambda"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Lambda capabilities directly in your AMS managed account. AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume; there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or back-end service, all with zero administration. Upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services, or call it directly from any web or mobile app. To learn more, see [AWS Lambda](https://aws.amazon.com/lambda/).

## Lambda in AWS Managed Services FAQ
<a name="set-lambda-faqs"></a>

**Q: How do I request access to AWS Lambda in my AMS account?**

Request access by submitting a Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM roles to your account: `customer_lambda_admin_role` and `customer_lambda_basic_execution_role`. After they're provisioned in your account, you must onboard the roles in your federation solution.

**Q: What are the restrictions to using AWS Lambda in my AMS account?**
+ A Lambda function is designed to be invoked by event sources. For a list of services that can be used as a Lambda event source, see [Using AWS Lambda with Other Services](https://docs.aws.amazon.com/lambda/latest/dg/lambda-services.html). Not all of these services are currently available in AMS accounts. If you require a service that isn't available, then work with your AMS CSDM to file an exception.
+ By default, AMS provides you with a basic Lambda initiation role containing the `AWSLambdaBasicExecutionRole` and `AWSXrayWriteOnlyAccess` permissions; for information, see [AWS Lambda Initiation Role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html). If you require additional permissions, such as the ability to provision Lambda functions within your AMS VPC, submit an RFC with the Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type.
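As a sketch of what a function invoked by an event source looks like, here is a minimal Python handler; the event shape, function name, and return value are illustrative assumptions, not AMS requirements:

```python
# Minimal AWS Lambda handler sketch (Python runtime).
# Each event source (S3, SNS, API Gateway, ...) delivers its own event
# shape; this example simply echoes back the top-level keys it received.
import json

def lambda_handler(event, context):
    print(f"Received event: {json.dumps(event)}")
    return {
        "statusCode": 200,
        "body": json.dumps({"received_keys": sorted(event.keys())}),
    }
```

The execution role attached to the function (for example, the basic initiation role described above) determines which AWS APIs the handler body may call.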

**Q: What are the prerequisites or dependencies to using AWS Lambda in my AMS account?**

There are no prerequisites or dependencies to get started with AWS Lambda; however, depending on your specific use case, you might require access to other AWS services to create event sources, or additional permissions for your function to perform various actions. If additional permissions are needed, submit an RFC with the Management | AWS service | Self-provisioned service | Add (managed automation) change type (ct-3qe6io8t6jtny).

**Q: What do I need to do to run a Lambda function in any of my accounts?**

To deploy a Lambda function in a core account, use the following guidelines:
+ Make sure that SSPS for AWS Lambda is onboarded.
+ There are no specific restrictions prohibiting this deployment under the AMS responsibilities, as long as your AMS resources are protected and compliant.
+ If you want AMS to create the Lambda function, then you must first use the SSPS role provided for AWS Lambda. Then, if you still want AMS assistance to deploy or support the function, contact your CA and start the out of scope (OOS) process.

# Use AMS SSP to provision AWS License Manager in your AMS account
<a name="license-manager"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS License Manager capabilities directly in your AMS managed account. AWS License Manager integrates with AWS services to simplify the management of licenses across multiple AWS accounts, IT catalogs, and on-premises, through a single AWS account. AWS License Manager lets administrators create customized licensing rules that emulate the terms of their licensing agreements, and then enforces these rules when an instance of Amazon EC2 gets launched. The rules in AWS License Manager enable you to limit a licensing breach by physically stopping the instance from launching or by notifying administrators about the infringement. To learn more, see [AWS License Manager](https://aws.amazon.com/license-manager/).

## License Manager in AWS Managed Services FAQ
<a name="set-license-manager-faqs"></a>

Common questions and answers:

**Q: How do I request AWS License Manager to be set up in my AMS account?**

Request access to AWS License Manager by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_license_manager_role`. Once the License Manager IAM role is provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using AWS License Manager in my AMS account?**

You're able to associate AWS License Manager rules to the AMIs you own (filtered under "Owned by me"). If you choose to enforce a limit association to an AMI (example: can only support 100 vCPU of this AMI) and exhaust the limit, future launches with that AMI are blocked and return an error stating "No licenses available." This is the intended behavior of this service (not allowing license exhaustion). In the event you exhaust the limit but need to launch the AMI again, you must modify the rule configured in AWS License Manager.

**Q: What are the prerequisites or dependencies to using AWS License Manager in my AMS account?**

There are no prerequisites or dependencies to use AWS License Manager in your AMS account.

# Use AMS SSP to provision AWS Migration Hub in your AMS account
<a name="migration-hub"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Migration Hub capabilities directly in your AMS managed account. AWS Migration Hub provides a single location where you can track the progress of application migrations across multiple AWS and partner solutions. Using Migration Hub allows you to choose the AWS and partner migration tools that best fit your needs, while providing visibility into the status of migrations across your application portfolio. Migration Hub also provides key metrics and progress for individual applications, regardless of which tools are being used to migrate them. This allows you to quickly get progress updates across all of your migrations, easily identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects. To learn more, see [AWS Migration Hub](https://aws.amazon.com/migration-hub/).

## Migration Hub in AWS Managed Services FAQ
<a name="set-migration-hub-faqs"></a>

Common questions and answers:

**Q: How do I request access to Migration Hub in my AMS account?**

Request access to Migration Hub by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_migrationhub_author_role`. Once provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions for Migration Hub?**

None.

**Q: What are the prerequisites to enable Migration Hub?**

There are no prerequisites to start using Migration Hub in your AMS account. However, permissions outside Migration Hub might be required during the management of the service, such as writing permissions to Amazon S3 to upload server information.

# Use AMS SSP to provision AWS Outposts in your AMS account
<a name="outposts"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Outposts capabilities directly in your AMS managed account. AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a consistent hybrid experience. AWS Outposts is good for workloads that require low latency access to on-premises systems, local data processing, or local data storage. To learn more, see [AWS Outposts](https://aws.amazon.com/outposts/).

## AWS Outposts in AWS Managed Services FAQ
<a name="set-outposts-faqs"></a>

Common questions and answers:

**Q: How do I request AWS Outposts to be set up in my AMS account?**

Request access to AWS Outposts by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_outposts_role`. Once the role is provisioned in your account, you must onboard it in your federation solution.

**Q: What are the restrictions to using AWS Outposts in my AMS account?**

There are no restrictions for the use of AWS Outposts in your AMS account.

**Q: What are the prerequisites or dependencies to using AWS Outposts in my AMS account?**

There are no prerequisites or dependencies to use AWS Outposts in your AMS account.

# Use AMS SSP to provision AWS Resilience Hub in your AMS account
<a name="res-hub"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Resilience Hub capabilities directly in your AMS managed account. AWS Resilience Hub helps you proactively prepare and protect your AWS applications from disruptions. Resilience Hub offers resiliency assessment and validation that integrate into your software development lifecycle to uncover resiliency weaknesses. Resilience Hub helps you estimate whether your applications can meet their recovery time objective (RTO) and recovery point objective (RPO) targets, and helps you resolve issues before they are released into production. After you deploy an AWS application into production, you can use Resilience Hub to continue tracking the resiliency posture of your application. If an outage occurs, Resilience Hub sends a notification to the operator to launch the associated recovery process.

## AWS Resilience Hub in AWS Managed Services FAQ
<a name="set-res-hub-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS Resilience Hub in my AMS account?**

Request access to Resilience Hub by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM roles and policies to your account:

**IAM roles**
+ `customer_resiliencehub_console_role`
+ `customer_resiliencehub_service_role`

**Policies**
+ `customer_resiliencehub_console_policy`
+ `customer_resiliencehub_service_policy`

After the role is provisioned in your account, you must onboard the role `customer_resiliencehub_console_role` in your federation solution.

**Q: What are the restrictions to using AWS Resilience Hub in my AMS account?**

There are no restrictions. Full functionality of Resilience Hub is available in your AMS account.

**Q: What are the prerequisites or dependencies to using AWS Resilience Hub in my AMS account?**

There are no prerequisites or dependencies to use Resilience Hub in your AMS account.

# Use AMS SSP to provision AWS Secrets Manager in your AMS account
<a name="secrets-manager"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Secrets Manager capabilities directly in your AMS managed account. AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to the Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. Secrets Manager offers secret rotation with built-in integration for Amazon RDS, Amazon Redshift, and Amazon DocumentDB. Also, the service is extensible to other types of secrets, including API keys and OAuth tokens. To learn more, see [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/).

**Note**  
By default, AMS operators can access secrets in AWS Secrets Manager that are encrypted using the account's default AWS KMS key (CMK). If you want your secrets to be inaccessible to AMS Operations, use a custom CMK, with an AWS Key Management Service (AWS KMS) key policy that defines permissions appropriate to the data stored in the secret.

## Secrets Manager in AWS Managed Services FAQ
<a name="set-secrets-manager-faqs"></a>

**Q: How do I request access to AWS Secrets Manager in my AMS account?**

Request access to Secrets Manager by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM roles to your account: `customer_secrets_manager_console_role` and `customer-rotate-secrets-lambda-role`. The `customer_secrets_manager_console_role` is used as an Admin role to provision and manage the secrets, and `customer-rotate-secrets-lambda-role` is used as the Lambda execution role for the Lambda functions that rotate the secrets. After it's provisioned in your account, you must onboard the `customer_secrets_manager_console_role` role in your federation solution.

**Q: What are the restrictions to using AWS Secrets Manager in my AMS account?**

Full functionality of AWS Secrets Manager is available in your AMS account, along with automatic rotation functionality of secrets. However, note that setting up your rotation using 'Create a new Lambda function to perform rotation' is not supported because it requires elevated permissions to create the CloudFormation stack (IAM Role and Lambda function creation), which bypasses the Change Management process. AMS Advanced only supports 'Use an existing Lambda function to perform rotation' where you manage your Lambda functions to rotate secrets using the AWS Lambda SSPS Admin role. AMS Advanced doesn't create or manage Lambda to rotate the secrets.

**Q: What are the prerequisites or dependencies to using AWS Secrets Manager in my AMS account?**

The following namespaces are reserved for use by AMS and are unavailable as part of direct access to AWS Secrets Manager:
+ `arn:aws:secretsmanager:*:*:secret:ams-shared/*`
+ `arn:aws:secretsmanager:*:*:secret:customer-shared/*`
+ `arn:aws:secretsmanager:*:*:secret:ams/*`
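A simple way to avoid colliding with these reserved namespaces is a prefix check on proposed secret names. The helper below is a hypothetical example, not part of any AMS tooling:

```python
# Flags secret names that fall in the AMS-reserved namespaces listed above.
RESERVED_PREFIXES = ("ams-shared/", "customer-shared/", "ams/")

def is_reserved_secret_name(name: str) -> bool:
    """Return True if the secret name falls in an AMS-reserved namespace."""
    return name.startswith(RESERVED_PREFIXES)
```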

## Sharing keys using Secrets Manager (AMS SSPS)
<a name="set-secrets-manager-sharing"></a>

Sharing secrets with AMS in plain text in an RFC, service request, or incident report results in an information disclosure incident: AMS redacts that information from the case and requests that you regenerate the keys.

Instead, you can use [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/) (Secrets Manager) under the `customer-shared` namespace.

![\[Secrets Manager workflow.\]](http://docs.aws.amazon.com/managedservices/latest/onboardingguide/images/secretsManager.png)


### Sharing Keys using Secrets Manager FAQ
<a name="set-secrets-manager-sharing-faqs"></a>

**Q: What type of secrets must be shared using Secrets Manager?**

A few examples are pre-shared keys for VPN creation and confidential keys such as authentication keys (IAM, SSH), license keys, and passwords.

**Q: How can I share the keys with AMS using Secrets Manager?**

1. Log in to the AWS Management Console using your federated access and the appropriate role:

   For SALZ, use the `Customer_ReadOnly_Role`.

   For MALZ, use the `AWSManagedServicesChangeManagementRole`.

1. Navigate to the [AWS Secrets Manager console](https://console.aws.amazon.com/secretsmanager/home) and click **Store a new secret**.

1. Select **Other type of secrets**.

1. Enter the secret value as plain text and use the default KMS encryption. Click **Next**.

1. Enter the secret name and description; the name must start with **customer-shared/**. For example, **customer-shared/mykey2022**. Click **Next**.

1. Leave automatic rotation disabled, and then click **Next**.

1. Review and click **Store** to save the secret.

1. Reply to us with the secret name through the service request, RFC, or incident report, so that we can identify and retrieve the secret.

**Q: What permissions are required for sharing the keys using Secrets Manager?**

**SALZ**: Look for the `customer_secrets_manager_shared_policy` managed IAM policy and verify that the policy document matches the following example. Confirm that the policy is attached to the `Customer_ReadOnly_Role` IAM role.

**MALZ**: Validate that the `AMSSecretsManagerSharedPolicy` policy is attached to the `AWSManagedServicesChangeManagementRole` role; it allows you the `GetSecretValue` action in the `ams-shared` namespace.

Example:

```
{
  "Action": "secretsmanager:*",
  "Resource": [
    "arn:aws:secretsmanager:*:*:secret:ams-shared/*",
    "arn:aws:secretsmanager:*:*:secret:customer-shared/*"
  ],
  "Effect": "Allow",
  "Sid": "AllowAccessToSharedNameSpaces"
}
```
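As a quick local sanity check, the wildcard `Resource` patterns in this statement can be matched against a secret ARN with Python's `fnmatch` (the account ID and secret name below are hypothetical; Secrets Manager appends a random suffix to secret ARNs):

```python
from fnmatch import fnmatch

# Resource patterns from the AllowAccessToSharedNameSpaces statement above
SHARED_PATTERNS = (
    "arn:aws:secretsmanager:*:*:secret:ams-shared/*",
    "arn:aws:secretsmanager:*:*:secret:customer-shared/*",
)

def in_shared_namespace(secret_arn: str) -> bool:
    """Return True if the secret ARN falls under one of the shared namespaces."""
    return any(fnmatch(secret_arn, pattern) for pattern in SHARED_PATTERNS)

# Hypothetical ARN for illustration
example = "arn:aws:secretsmanager:us-east-1:111122223333:secret:customer-shared/mykey2022-AbCdEf"
print(in_shared_namespace(example))  # True
```

IAM evaluates `*` with similar any-character semantics, so a secret stored under the `customer-shared/` prefix is covered by the statement regardless of Region, account, or suffix.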

**Note**  
The requisite permissions are granted when you add AWS Secrets Manager as a self-service provisioned service.

# Use AMS SSP to provision AWS Security Hub CSPM in your AMS account
<a name="sec-hub"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Security Hub CSPM capabilities directly in your AMS managed account. AWS Security Hub CSPM provides you with a comprehensive view of your security state within AWS and your compliance with security industry standards and best practices. Security Hub CSPM centralizes and prioritizes security and compliance findings from across AWS accounts, services, and supported third-party partners to help you analyze your security trends and identify the highest priority security issues. To learn more, see [AWS Security Hub CSPM](https://aws.amazon.com/security-hub/).

## Security Hub CSPM in AWS Managed Services FAQ
<a name="set-sec-hub-faqs"></a>

**Q: How do I request access to AWS Security Hub CSPM in my AMS account?**

Request access to Security Hub CSPM by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_securityhub_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using Security Hub CSPM in my AMS account?**

Archiving functionality has been noted as a potential security and operational risk and has been restricted as a part of the self-provisioned service Security role.

**Q: What are the prerequisites or dependencies to using AWS Security Hub CSPM in my AMS account?**

There are no prerequisites or dependencies to use AWS Security Hub CSPM in your AMS account.

# Use AMS SSP to provision AWS Service Catalog AppRegistry in your AMS account
<a name="service-catalog-appregistry"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AppRegistry capabilities directly in your AMS managed account. AppRegistry enables application search, reporting, and management actions from a central location. Builders seldom create applications in a single AWS account. They typically separate application resources by lifecycle phases, such as development, test, and production. AppRegistry allows you to group and view all your resource collections across the AWS accounts that you define.

With AppRegistry, you can store your AWS applications, the collection of resources that are associated with your applications, and application attribute groups. To learn more, see [What is AppRegistry](https://docs.aws.amazon.com/servicecatalog/latest/arguide/intro-app-registry.html).

## FAQ: AWS Service Catalog AppRegistry in AMS
<a name="service-catalog-appregistry-faqs"></a>

**Q: How do I request access to AWS Service Catalog AppRegistry in my AMS account?**

Request access to AppRegistry by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (managed automation) (ct-3qe6io8t6jtny) change type. This RFC provisions the following IAM role to your account: `customer-appregistry-console-role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using AWS Service Catalog AppRegistry in my AMS account?**

Full access to the AppRegistry service is provided with the exception of using the AMS namespace in the `'Name'` tag.

**Q: What are the prerequisites or dependencies to using AWS Service Catalog AppRegistry in my AMS account?**

There are no prerequisites or dependencies to use AppRegistry in your AMS account.

# Use AMS SSP to provision AWS Shield Advanced in your AMS account
<a name="aws-shield"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Shield Advanced capabilities directly in your AMS managed account. AWS Shield Advanced is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. Shield Advanced provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield - Standard and Advanced; AMS offers Shield Advanced. To learn more, see [Shield Advanced](https://aws.amazon.com/shield/).

All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring, network and transport layer DDoS attacks that target your website or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.

For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced.

In addition to the network and transport layer protections that come with AWS Shield Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall. AWS Shield Advanced also gives you 24x7 access to the AWS Shield Response Team (SRT) and protection against DDoS-related spikes in your Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 charges.

## Shield Advanced in AWS Managed Services FAQ
<a name="aws-shield-faqs"></a>

**Q: How do I request access to Shield Advanced in my AMS account?**

Request access to Shield Advanced by submitting an RFC with the Management | AWS service | Self-provisioned service | Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM roles to your account: `customer_shield_role` and `aws_drt_shield_role`. After they're provisioned in your account, you must onboard the roles in your federation solution.

After the roles are deployed into your account, you can use the `customer_shield_role` to confirm your subscription to AWS Shield Advanced in your account.

**Note**  
There is a monthly fee and a one-year commitment associated with the use of AWS Shield Advanced. Additionally, using AWS Shield Advanced in AMS authorizes AMS to escalate to the AWS Shield Response Team (SRT), who may make changes to your web application firewall (AWS WAF) rules during escalated distributed denial of service (DDoS) incidents. These changes are made in coordination with AMS.

**Q: What are the restrictions to using Shield Advanced in my AMS account?**

Although not a restriction, you should understand that using Shield Advanced deploys the `aws_drt_shield_role`, which allows AWS Shield teams (SRT) to make emergency changes to AWS WAF rules inside of AMS accounts during escalated DDoS incidents. This is recommended by AMS for the fastest remediation of DDoS attacks, and would occur after an AMS escalation to the SRT.

**Q: What are the prerequisites or dependencies to using Shield Advanced in my AMS account?**

There are no prerequisites or dependencies to use Shield Advanced in your AMS account.

# Use AMS SSP to provision AWS Snowball Edge in your AMS account
<a name="snowball"></a>

Use AMS Self-Service Provisioning (SSP) mode to access Snowball Edge capabilities directly in your AMS managed account. Snowball Edge is a petabyte-scale data transport solution that uses devices designed to be secure, to transfer large amounts of data into and out of the AWS Cloud. Snowball Edge addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. You can use Snowball Edge to migrate analytics data, genomics data, video libraries, image repositories, and backups, and to archive data as part of data center shutdowns, tape replacement, or application migration projects. Transferring data with Snowball Edge is simple, fast, and secure, and can be as little as one-fifth the cost of transferring data over high-speed internet.

With Snowball Edge, you don’t need to write any code or purchase any hardware to transfer your data. Start by using the AWS Management Console to [Create an Import Job](https://docs.aws.amazon.com/snowball/latest/ug/create-import-job.html) for Snowball, and a Snowball device will be automatically shipped to you. Once it arrives, attach the device to your local network, download and run the Snowball Client ("Client") to establish a connection, and then use the Client to select the file directories that you want to transfer to the device. The Client then encrypts and transfers the files to the device at high speed. Once the transfer is complete and the device is ready to be returned, the E Ink shipping label automatically updates and you can track the job status with Amazon Simple Notification Service (Amazon SNS), text messages, or directly in the Console. To learn more, see [AWS Snowball Edge](https://aws.amazon.com/snowball/).

## Snowball Edge in AWS Managed Services FAQ
<a name="set-snowball-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS Snowball Edge in my AMS account?**

Implementation of Snowball Edge in AMS is a two-step process:

1. Submit a Management | Other | Other | Create (ct-1e1xtak34nx76) change type and request a service role for Snowball Edge for your AMS account.

1. Request user access by submitting a Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM roles to your account: `customer_snowball_console_role`, `customer_snowball_export_role`, and `customer_snowball_import_role`. After they're provisioned in your account, you must onboard the roles in your federation solution.

**Q: What are the restrictions to using AWS Snowball Edge in my AMS account?**

Full functionality of AWS Snowball Edge is available in your AMS account.

**Q: What are the prerequisites or dependencies to using AWS Snowball Edge in my AMS account?**

You must have the service role in your account, as noted above.

# Use AMS SSP to provision AWS Step Functions in your AMS account
<a name="step"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Step Functions capabilities directly in your AMS managed account. AWS Step Functions is a web service that enables you to coordinate the components of distributed applications and microservices by using visual workflows. You build applications from individual components that each perform a discrete function, or task, allowing you to scale and change applications quickly. Step Functions provides a reliable way to coordinate components and step through the functions of your application. Step Functions offers a graphical console to visualize the components of your application as a series of steps. It automatically triggers and tracks each step, and retries when there are errors, so your application runs in order and as expected, every time. Step Functions logs the state of each step, so when things do go wrong, you can diagnose and debug problems quickly. To learn more, see [AWS Step Functions](https://aws.amazon.com/step-functions/).

## Step Functions in AWS Managed Services FAQ
<a name="set-step-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS Step Functions in my AMS account?**

Request access to AWS Step Functions by submitting an RFC with the Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_step_functions_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using AWS Step Functions in my AMS account?**

Full functionality of AWS Step Functions is available in your AMS account.

**Q: What are the prerequisites or dependencies to using AWS Step Functions in my AMS account?**

At runtime, the role used by Step Functions must have access to the services that the step function calls. For example, a step function might depend on Lambda functions; someone authoring a step function is likely to create Lambda functions at the same time and must request access to that service as well.
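To illustrate the dependency, a minimal Amazon States Language definition whose single state invokes a Lambda function looks like the following; the function name is hypothetical, and the role that runs the state machine would need `lambda:InvokeFunction` permission on that function:

```json
{
  "Comment": "Single-step state machine that invokes a Lambda function",
  "StartAt": "InvokeFunction",
  "States": {
    "InvokeFunction": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "my-example-function",
        "Payload.$": "$"
      },
      "End": true
    }
  }
}
```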

# Use AMS SSP to provision AWS Systems Manager Parameter Store in your AMS account
<a name="sys-man-param-store"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Systems Manager Parameter Store capabilities directly in your AMS managed account. AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name that you specified when you created the parameter. Highly scalable, available, and durable, Parameter Store is backed by the AWS Cloud. To learn more, see [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html).

**Note**  
If you want a dedicated secrets store with lifecycle management, use [Use AMS SSP to provision AWS Secrets Manager in your AMS account](secrets-manager.md) instead of Parameter Store. Secrets Manager helps you meet your security and compliance requirements by enabling you to rotate secrets automatically. Secrets Manager offers built-in integration for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS, that's extensible to other types of secrets by customizing Lambda functions.

## AWS Systems Manager Parameter Store in AWS Managed Services FAQ
<a name="set-sys-man-param-store-faqs"></a>

Common questions and answers:

**Q: How do I request access to Systems Manager Parameter Store in my AMS account?**

Request access to AWS Systems Manager Parameter Store by submitting an RFC with the Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_systemsmanager_parameterstore_console_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using AWS Systems Manager Parameter Store in my AMS account?**

You must use AWS managed keys; creating custom KMS keys is restricted. However, if a custom key is required, submit an RFC to create a customer managed key (CMK) using the Deployment | Advanced Stack Components | KMS Key | Create change type (ct-1d84keiri1jhg), with the `customer_systemsmanager_parameterstore_console_role` IAM role as the value for the `IAMPrincipalsRequiringDecryptPermissions` and `IAMPrincipalsRequiringEncryptPermissionsPrincipal` parameters. After the KMS key is created, you can use it to create a SecureString parameter.

**Q: What are the prerequisites or dependencies to using AWS Systems Manager Parameter Store in my AMS account?**

There are no prerequisites; however, SSM Parameter Store depends on KMS to create a SecureString parameter, so that the values stored in Parameter Store can be encrypted and decrypted.

# Use AMS SSP to provision AWS Systems Manager Automation in your AMS account
<a name="sys-man-runbook"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Systems Manager Automation capabilities directly in your AMS managed account. AWS Systems Manager Automation simplifies common maintenance and deployment tasks for Amazon Elastic Compute Cloud (Amazon EC2) instances and other AWS resources by using runbooks, actions, and service quotas. It enables you to build, run, and monitor automations at scale. A runbook is a type of Systems Manager document that defines the actions that Systems Manager performs on your managed instances; you use a runbook to perform common maintenance and deployment tasks, such as running commands or automation scripts within your managed instances. Systems Manager includes features that help you target large groups of instances by using Amazon EC2 tags, and velocity controls that help you roll out changes according to the limits you define. Runbooks are written in JavaScript Object Notation (JSON) or YAML. Using the Document Builder in the Systems Manager Automation console, however, you can create a runbook without having to author in native JSON or YAML. Alternatively, you can use Systems Manager-provided runbooks with predefined steps that suit your needs. To learn more, see [Working with runbooks](https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-documents.html) in the AWS Systems Manager documentation.

**Note**  
Although Systems Manager Automation supports 20 action types that can be used in runbooks, only a limited number of actions can be used when authoring runbooks for your AMS Advanced account. Similarly, only a limited number of Systems Manager-provided runbooks can be used, either directly or from within your own runbook. For details, see the restrictions in the following FAQ.

## AWS Systems Manager Automation in AWS Managed Services FAQ
<a name="set-sys-man-runbook-faqs"></a>

Common questions and answers:

**Q: How do I request access to Systems Manager Automation in my AMS account?**

Request access to AWS Systems Manager Automation by submitting an RFC with the Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_systemsmanager_automation_console_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the limitations to using AWS Systems Manager Automation in my AMS account?**

You must author your runbooks with a limited set of Systems Manager supported automation actions, and only to run commands and/or scripts within your managed instances. The actions that are available to you, along with any restrictions, are outlined below.


**AWS Systems Manager Automation Limitations**  

| Action | Description | Limitation | 
| --- | --- | --- | 
| aws:assertAwsResourceProperty | Assert an AWS resource state or event state | EC2 instances only | 
| aws:branch | Run conditional automation steps | No limitation | 
| aws:createTags | Create tags for AWS resources | Only for SSM Automation runbooks that you author | 
| aws:executeAutomation | Run another automation | Only automation runbooks that you author | 
| aws:executeScript | Run a script | Only scripts that don't make API calls to any service | 
| aws:pause | Pause an automation | No limitation | 
| aws:runCommand | Run a command on a managed instance | Only using the Systems Manager provided documents AWS-RunShellScript and AWS-RunPowerShellScript | 
| aws:sleep | Delay an automation | No limitation | 
| aws:waitForAwsResourceProperty | Wait on an AWS resource property | EC2 instances only | 

You can also choose to run commands or scripts directly with the Systems Manager provided runbooks AWS-RunShellScript and AWS-RunPowerShellScript, using the Run Command feature from within the Systems Manager console. You can also nest these runbooks within your own runbook to add pre- and/or post-validation, or any complex automation logic.
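As a sketch of a runbook that stays within these restrictions (the document name, step name, and parameter are illustrative), you can wrap `aws:runCommand` with the AWS-RunShellScript document:

```yaml
schemaVersion: '0.3'
description: Run a shell command on a managed instance (illustrative example)
parameters:
  InstanceId:
    type: String
    description: ID of the managed EC2 instance to target
mainSteps:
  - name: RunShellCommand
    action: aws:runCommand            # permitted, per the table above
    inputs:
      DocumentName: AWS-RunShellScript   # one of the two allowed documents
      InstanceIds:
        - '{{ InstanceId }}'
      Parameters:
        commands:
          - uptime
```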

The role adheres to the least privilege principle: it only provides the permissions required to author and run runbooks, and to retrieve their execution details, for the purpose of running commands and/or scripts within your managed instances. It doesn't provide permissions for any other capabilities that the AWS Systems Manager service provides. While the feature allows you to author automation runbooks, runbook executions cannot be targeted at AMS-owned resources.

**Q: What are the prerequisites or dependencies to using AWS Systems Manager Automation in my AMS account?**

There are no prerequisites; however, you must ensure that your internal process and/or compliance controls are adhered to when authoring runbooks. We also recommend that you thoroughly test runbooks before running them against production resources.

**Q: Can the Systems Manager policy `customer_systemsmanager_automation_policy` be attached to other IAM roles?**

No, unlike other self-provision enabled services, this policy can only be assigned to the provisioned default role `customer_systemsmanager_automation_console_role`.

 Unlike the policies of other SSPS roles, this SSM SSPS policy cannot be shared with other custom IAM roles, because this AMS service is only for running commands or automation scripts within your managed instances. If these permissions were allowed to be attached to other custom IAM roles, potentially with permissions on other services, the scope of allowed actions could extend to managed services, and potentially lower the security posture of your account.

To evaluate any requests for change (RFCs) against AMS technical standards, work with your Cloud Architect or Service Delivery Manager. For more information, see [RFC security reviews](https://docs.aws.amazon.com/managedservices/latest/ctref/rfc-security.html).

**Note**  
AWS Systems Manager allows you to use runbooks that are shared with your account. We recommend that you exercise caution and perform a due-diligence check when using shared runbooks: review the content to understand the commands or scripts they run before executing the runbooks. For details, see [Best practices for shared SSM documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-before-you-share.html).

# Use AMS SSP to provision AWS Transfer Family in your AMS account
<a name="transfer-sftp"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Transfer Family (Transfer Family) capabilities directly in your AMS managed account. AWS Transfer Family is a fully managed AWS service that enables you to transfer files over Secure File Transfer Protocol (SFTP), into and out of Amazon Simple Storage Service (Amazon S3) storage. SFTP is also known as Secure Shell (SSH) File Transfer Protocol. SFTP is used in data exchange workflows across different industries such as financial services, healthcare, advertising, and retail, among others.

With AWS SFTP, you get access to an SFTP server in AWS without the need to run any server infrastructure. You can use this service to migrate your SFTP-based workflows to AWS while maintaining your end users' clients and configurations as is. You first associate your hostname with the SFTP server endpoint, then add your users and provision them with the right level of access. After you do, your users' transfer requests are serviced directly out of your AWS SFTP server endpoint. To learn more, see [AWS Transfer for SFTP](https://aws.amazon.com/aws-transfer-family) and [Create an SFTP-enabled server](https://docs.aws.amazon.com/transfer/latest/userguide/create-server-sftp.html).

## AWS Transfer for SFTP in AWS Managed Services FAQ
<a name="set-transfer-sftp-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS Transfer for SFTP in my AMS account?**

Request access to AWS Transfer for SFTP by submitting an RFC with the Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). Through this RFC, the following IAM roles, and a policy, are provisioned in your account:
+ `customer_transfer_author_role`. This role is designed for you to manage the SFTP service through the console.
+ `customer_transfer_sftp_server_logging_role`. This role is designed to be attached to the SFTP server. It allows the SFTP server to send logs to CloudWatch.
+ `customer_transfer_sftp_user_role`. This role is designed to be attached to SFTP users. It allows the SFTP users to interact with the S3 bucket.
+ `customer_transfer_scope_down_policy`. This policy is a scope-down policy that can be applied to SFTP users to limit their access on the S3 bucket to their home folders.
+ `customer_transfer_sftp_efs_user_role`. This role is designed to be attached to SFTP users. It allows the SFTP users to interact with the EFS file system.

After they're provisioned in your account, you must onboard the roles in your federation solution.
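For reference, Transfer Family scope-down policies typically follow the pattern below, using the `${transfer:...}` policy variables to confine each user to their home folder. This is a representative sketch based on the AWS Transfer Family documentation, not necessarily the exact content of `customer_transfer_scope_down_policy`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::${transfer:HomeBucket}",
      "Condition": {
        "StringLike": {
          "s3:prefix": ["${transfer:HomeFolder}/*", "${transfer:HomeFolder}"]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}
```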

**Q: What are the restrictions to using AWS Transfer for SFTP in my AMS account?**

AWS Transfer for SFTP configuration is limited to resources without "AMS-" or "MC-" prefixes to prevent any modifications to AMS infrastructure.

**Q: What are the prerequisites or dependencies to using AWS Transfer for SFTP in my AMS account?**
+ You must have an Amazon S3 bucket with a name that contains the keyword "transfer" before creating the AWS Transfer for SFTP server and users.
+ To use a custom identity provider, you must deploy the API Gateway, Lambda function, and your user repository (AD, Secrets Manager, and so on). For more information, see [Enable password authentication for AWS Transfer for SFTP using AWS Secrets Manager](https://aws.amazon.com/blogs/storage/enable-password-authentication-for-aws-transfer-for-sftp-using-aws-secrets-manager/) and [Working with Identity Providers](https://docs.aws.amazon.com/transfer/latest/userguide/authenticating-users.html).

# Use AMS SSP to provision AWS Transit Gateway in your AMS account
<a name="transit-gateway"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Transit Gateway capabilities directly in your AMS managed account. AWS Transit Gateway is a service that enables you to connect your Amazon Virtual Private Clouds (VPCs) and your on-premises networks to a single gateway. As you grow the number of workloads running on AWS, you need to be able to scale your networks across multiple accounts and Amazon VPCs to keep up with the growth. Today, you can connect pairs of Amazon VPCs using peering. However, managing point-to-point connectivity across many Amazon VPCs, without the ability to centrally manage the connectivity policies, can be operationally costly and cumbersome. For on-premises connectivity, you need to attach your AWS VPN to each individual Amazon VPC. This solution can be time consuming to build and hard to manage when the number of VPCs grows into the hundreds. To learn more, see [AWS Transit Gateway](https://aws.amazon.com/transit-gateway/).

## AWS Transit Gateway in AWS Managed Services FAQ
<a name="set-transit-gateway-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS Transit Gateway in my AMS account?**

Request access to AWS Transit Gateway by submitting an RFC with the Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_tgw_console_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using AWS Transit Gateway in my AMS account?**

Full functionality of AWS Transit Gateway is available in your AMS single-account landing zone account, with the exception of route table modifications for Transit Gateway routing. Request route table changes by submitting a Management | Other | Other | Create change type (ct-1e1xtak34nx76).

**Note**  
This service is only supported for single-account landing zone (SALZ), not multi-account landing zone (MALZ).

**Q: What are the prerequisites or dependencies to using AWS Transit Gateway in my AMS account?**

There are no prerequisites or dependencies to use AWS Transit Gateway in your AMS account.

# Use AMS SSP to provision AWS WAF - Web Application Firewall in your AMS account
<a name="set-waf"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS WAF capabilities directly in your AMS managed account. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow, or block, to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application.

To learn more, see [AWS WAF - Web Application Firewall](https://aws.amazon.com/waf/).

AMS doesn't support monitoring (CloudWatch alarms, events, or MMS alerts) for AWS WAF. Due to the nature of AWS WAF, you must create custom rules for your applications; AMS can't quantify and create alarms for you without the context of your application.

## AWS WAF in AWS Managed Services FAQ
<a name="set-waf-faqs"></a>

Common questions and answers:

**Q: How do I request AWS WAF to be set up in my AMS account?**

Request access to AWS WAF by submitting an RFC with the Management | AWS service | Self-provisioned service | Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_waf_role`. After the AWS WAF IAM role is provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using AWS WAF?**

After permissions are provisioned, you have the full functionality of AWS WAF.

**Q: What are the prerequisites or dependencies to using AWS WAF?**

There are no prerequisites or dependencies to use AWS WAF in your AMS account.

# Use AMS SSP to provision AWS Well-Architected Tool in your AMS account
<a name="well-arch"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS Well-Architected Tool capabilities directly in your AMS managed account. The AWS Well-Architected Tool helps you review the state of your workloads and compares them to the latest AWS architectural best practices. The tool is based on the [AWS Well-Architected Framework](https://aws.amazon.com/architecture/well-architected/), developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructure. This framework provides a consistent approach for you to evaluate architectures, has been used in tens of thousands of workload reviews conducted by the AWS solutions architecture team, and provides guidance to help implement designs that scale with application needs over time. To learn more, see [AWS Well-Architected Tool](https://aws.amazon.com/well-architected-tool/).

## AWS WA Tool in AWS Managed Services FAQ
<a name="set-well-arch-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS Well-Architected Tool in my AMS account?**

Request access to AWS Well-Architected Tool by submitting an RFC with the Management \| AWS service \| Self-provisioned service \| Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM role to your account: `customer_well_architected_tool_console_admin_role`. After it's provisioned in your account, you must onboard the role in your federation solution.

**Q: What are the restrictions to using AWS Well-Architected Tool in my AMS account?**

Full functionality of the AWS Well-Architected Tool is available in your AMS account.

**Q: What are the prerequisites or dependencies to using AWS Well-Architected Tool in my AMS account?**

There are no prerequisites or dependencies to use AWS Well-Architected Tool in your AMS account.

# Use AMS SSP to provision AWS X-Ray in your AMS account
<a name="comp-xray"></a>

Use AMS Self-Service Provisioning (SSP) mode to access AWS X-Ray (X-Ray) capabilities directly in your AMS managed account. AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing, to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications, to complex microservices applications consisting of thousands of services. To learn more, see [AWS X-Ray](https://aws.amazon.com/xray/).

## X-Ray in AWS Managed Services FAQ
<a name="xray-faqs"></a>

Common questions and answers:

**Q: How do I request access to AWS X-Ray in my AMS account?**

Request access by submitting a Management \| AWS service \| Self-provisioned service \| Add (ct-1w8z66n899dct) change type. This RFC provisions the following IAM role to your account: `customer_xray_console_role`. After it's provisioned in your account, you must onboard the role in your federation solution. Additionally, you must have the `customer_xray_daemon_write_instance_profile` to push data from your Amazon EC2 instances to X-Ray. This instance profile is created when you receive the `customer_xray_console_role`.

You can submit a service request to AMS Operations to assign the `customer_xray_daemon_write_policy` to the existing instance profile, or you can use the instance profile that is created when AMS Operations enables X-Ray for you.

**Q: What are the restrictions to using AWS X-Ray in my AMS account?**

Full functionality of AWS X-Ray is available in your AMS account except for encryption with an AWS KMS key. By default, X-Ray encrypts traces and related data at rest. If you need to encrypt data at rest with a key, you can choose either the AWS managed KMS key (aws/xray) or a customer managed KMS key. To use a customer managed KMS key for X-Ray encryption, submit a Management \| Other \| Other \| Create change type (ct-1e1xtak34nx76).
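For reference, X-Ray encryption settings take the shape used by the X-Ray `PutEncryptionConfig` API: a `Type` of `NONE` (default encryption) or `KMS` with a `KeyId`. The helper below is a minimal sketch of that request body; in AMS, switching to a customer managed key still goes through the RFC noted above rather than a direct API call.

```python
from typing import Optional

# Sketch: the request body for the X-Ray PutEncryptionConfig API.
# With no KMS key, X-Ray applies its default encryption; supplying a
# key ID, alias, or ARN switches traces to KMS-based encryption.
def encryption_config(kms_key_id: Optional[str] = None) -> dict:
    if kms_key_id is None:
        return {"Type": "NONE"}  # X-Ray default encryption
    return {"Type": "KMS", "KeyId": kms_key_id}
```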

**Q: What are the prerequisites or dependencies to using AWS X-Ray in my AMS account?**

AWS X-Ray has dependencies on Amazon S3, CloudWatch, and CloudWatch Logs, which are already implemented in AMS accounts. Transitive dependencies vary based on data sources and the other AWS services that X-Ray features may interact with (for example, Amazon Redshift, Amazon RDS, Athena).

# Use AMS SSP to provision VM Import/Export in your AMS account
<a name="vm-im-ex"></a>

Use AMS Self-Service Provisioning (SSP) mode to access VM Import/Export capabilities directly in your AMS managed account. VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment. This offering allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security, configuration management, and compliance requirements by bringing those virtual machines into Amazon EC2 as ready-to-use instances. You can also export imported instances back to your on-premises virtualization infrastructure, allowing you to deploy workloads across your IT infrastructure. To learn more, see [VM Import/Export](https://aws.amazon.com/ec2/vm-import/).

## VM Import/Export in AWS Managed Services FAQ
<a name="set-vm-im-ex-faqs"></a>

Common questions and answers:

**Q: How do I request access to VM Import/Export in my AMS account?**

Request access to VM Import/Export by submitting an RFC with the Management \| AWS service \| Self-provisioned service \| Add change type (ct-1w8z66n899dct). This RFC provisions the following IAM policy to your account: `customer_vmimport_policy`. After it's provisioned in your account, you must onboard the role in your federation solution.

An additional role, the **VM Import/Export Service** role, is required for the service to perform actions in your account.

**Q: What are the restrictions to using VM Import/Export in my AMS account?**
+ Functionality to import custom machine images and data volumes is available in AMS VM Import/Export. However, Amazon S3 permissions are scoped down to buckets whose names match `customer-vmimport-*`, in order to limit access to information within the account.
+ Image and snapshot import is supported in AMS VM Import/Export. However, instance import and instance export functionality is not available due to security measures.
+ Additionally, export functionality has been disabled to mitigate the risk of exporting restricted and sensitive data.

**Q: What are the prerequisites or dependencies to using VM Import/Export in my AMS account?**
+ You must provide a supported disk image to import into the AWS environment. For information, see [VM Import/Export Requirements](https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html).
+ VM Import/Export isn't accessible through the AWS console. You must access this service through the AWS CLI, AWS Tools for PowerShell, or the AWS SDKs. Or, you can request an instance profile by submitting change type ct-117rmp64d5mvb: Deployment \| Advanced stack components \| Identity and Access Management (IAM) \| Create EC2 instance profile. This instance profile allows the tools to perform commands from an instance.
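Because the S3 permissions are scoped to the `customer-vmimport-*` bucket name pattern, a quick pre-check before calling the import APIs (for example, `aws ec2 import-image`) can catch a mis-named bucket early. A minimal sketch:

```python
import fnmatch

# Sketch: AMS scopes VM Import/Export S3 permissions to buckets whose
# names match customer-vmimport-*. Check a bucket name against that
# pattern before attempting an import.
def bucket_allowed(bucket: str) -> bool:
    return fnmatch.fnmatch(bucket, "customer-vmimport-*")

print(bucket_allowed("customer-vmimport-prod"))  # → True
print(bucket_allowed("my-images"))               # → False
```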

# Customer Managed mode
<a name="ams-modes-customer-section"></a>

AWS Managed Services (AMS) Customer Managed mode provides a governance model that is flexible and can be adapted to your requirements. Consider it a fallback option for services and applications that AMS is unable to operate for you. AMS doesn't operate infrastructure hosted in accounts created under this mode; however, you can still leverage centralized multi-account management. The following multi-account landing zone features are available in this mode:
+ Automated account deployment
+ Connectivity through Transit Gateway in the networking account
+ AMS Config Rules library
+ Storage of log copies in the logging account
+ Aggregation of customer managed Amazon GuardDuty alerts to the security account
+ Consolidated billing
+ Enablement of custom service control policies (SCPs)

For example, if you want to run workloads on Ubuntu Pro, which is not an operating system that AMS manages, you could use a Customer Managed account to host them. You can also consolidate workloads in Customer Managed accounts to take advantage of the bulk discounts on Reserved Instances and Savings Plans that are available through sharing across an AWS organization.

# Getting started with Customer Managed mode
<a name="cust-man-mode-get-start"></a>

The AMS Customer Managed mode is available through a special multi-account landing zone Application account.

For details, including how to create a Customer Managed Application account, see [Customer Managed application accounts](https://docs.aws.amazon.com/managedservices/latest/userguide/application-account-cust-man.html).

# AMS and AWS Service Catalog
<a name="ams-service-catalog-section"></a>

Service Catalog in AWS Managed Services (AMS) enables IT administrators to create, manage, and distribute catalogs of approved products to end users in their accounts, who can then access the products they need in a personalized portal of services. Administrators can control which users have access to each product to enforce compliance with organizational business policies, and can set up roles so that end users only require IAM access to Service Catalog in order to deploy approved resources. Service Catalog helps your organization benefit from increased agility and reduced costs because end users can find and launch only the products they need from a catalog that you control.

Service Catalog provides you with an alternative to the AMS request for change (RFC) process for provisioning and updating resources in your AMS managed account(s). AMS manages all of the infrastructure operations tasks needed to run AWS at scale for all infrastructure resources provisioned through Service Catalog including security, compliance, provisioning, availability, patch, monitoring, alerting, reporting, incident response, and cost optimization. Utilizing Service Catalog in your AMS managed account provides you with a mechanism to centrally manage commonly deployed IT services and helps you achieve consistent governance while enabling users to quickly deploy only the approved IT services they need into their managed environments.

# Getting started with Service Catalog
<a name="serv-cat-get-start"></a>

To get started with Service Catalog in AMS, submit a service request through the AMS console to request access to Service Catalog. Upon submission of the request, three IAM roles are deployed into your account(s), along with an AMS managed stack containing the CloudFormation macro that invokes the AMS `Transform` (described previously) so that AMS can register the products in its systems and perform operations against the infrastructure provisioned through Service Catalog. The three IAM roles are: a role for IT admins to manage products as Service Catalog admins; a role for application owners and end users to configure, launch, and manage products; and a role used as a launch constraint, which defines the permissions that Service Catalog uses while launching or updating your product.

# Service Catalog in AMS before you begin
<a name="ams-service-catalog-section-faq"></a>

**Does Service Catalog replace the existing AMS request for change (RFC) process?**  
In accounts where Service Catalog is enabled, it will act as the change management system in which you provision and update IT services in your AMS account through your predefined product catalog; AMS will provide a default portfolio/product catalog, and your IT admins can create and configure your own. Service Catalog will only acknowledge stacks provisioned through Service Catalog. Likewise, services provisioned through Service Catalog will not be modifiable through the AMS RFC process as modification outside of Service Catalog will drift the stack from the approved product configuration. 

**Can I see stacks provisioned through service catalog in the AMS Console?**  
Yes. You can view all stacks provisioned through Service Catalog in the AMS console. Stacks provisioned through Service Catalog are easily identifiable by the "SC-" prefix in the stack name. Although these stacks are viewable in the AMS console, you can't update them through the AMS RFC process. Access to the AMS change management system (RFCs) is limited to access request, patch orchestration, and backup RFCs only.
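The "SC-" naming convention makes Service Catalog stacks easy to pick out of a stack list programmatically. A minimal sketch, assuming you already have the stack names (for example, from a CloudFormation `ListStacks` call):

```python
# Sketch: stacks launched through Service Catalog carry the "SC-"
# prefix in their stack names; filter them out of a list of names.
def service_catalog_stacks(stack_names):
    return [name for name in stack_names if name.startswith("SC-")]

names = ["SC-123456789012-pp-abc123", "ams-managed-stack", "SC-web-app"]
print(service_catalog_stacks(names))  # → ['SC-123456789012-pp-abc123', 'SC-web-app']
```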

**If I provision and/or update a stack through Service Catalog will there be a corresponding RFC in the AMS Console?**  
The only RFC that will show in the AMS console is an RFC to register the stack with AMS when a stack is initially provisioned. This RFC is filed automatically by the AMS validation process that is triggered when a stack is launched through Service Catalog. All other provisioning and changes are tracked directly in Service Catalog and are viewable in the Service Catalog console. Furthermore, you can use the **Provisioned Product Plan** feature in Service Catalog to view the list of changes that will be made to the resources in advance of provisioning or updating the product.

**Do I have to do anything specific for provisioning products in my AMS managed account?**  
Yes. All Service Catalog products provisioned in AMS accounts must contain this line of JSON in the CFN template that defines that product:  

```
"Transform":{"Name":"AmsStackTransform","Parameters":{"StackId":{"Ref":"AWS::StackId"}}}
```
This snippet of CloudFormation code triggers the AMS validations required before the resource can be provisioned in your AMS managed account. It is your responsibility to include this line of code as part of the product definition. If it is not included, provisioning will fail and the following error message will be displayed: "Failed to create product. This account is managed by AMS. All products in AMS accounts must have the AMS `Transform` code in the template."
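Since provisioning fails without the Transform, it can be worth validating product templates before registering them. The following is a minimal sketch for JSON templates (a YAML template would need a YAML parser instead); the check itself is illustrative, not part of AMS tooling.

```python
import json

# The Transform entry that AMS requires in every Service Catalog
# product template.
AMS_TRANSFORM = {
    "Name": "AmsStackTransform",
    "Parameters": {"StackId": {"Ref": "AWS::StackId"}},
}

# Sketch: check that a JSON CloudFormation template body carries the
# required AMS Transform before registering it as a product.
def has_ams_transform(template_body: str) -> bool:
    transform = json.loads(template_body).get("Transform")
    # "Transform" may hold a single transform or a list of transforms.
    if isinstance(transform, list):
        return AMS_TRANSFORM in transform
    return transform == AMS_TRANSFORM
```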

**Is there any Service Catalog functionality not available and/or limited for AMS customers at launch?**  
Yes, the following SC features are not available for AMS customers at initial launch:  
+ Account Creation through Service Catalog
+ Ability to launch all AWS Services through Service Catalog into an AMS-managed account. AWS Service availability is limited to AMS supported services (managed and self-provisioned). For more information on AMS-supported services, see the AMS service description.
+ Service Catalog IT service management (ITSM) connectors don't communicate with AMS incident reports and service requests.
+ Ability to leverage Service Catalog quick starts and reference architectures without modification. Remember that Service Catalog products for AMS accounts must contain this line of JSON code:

  ```
  "Transform":{"Name":"AmsStackTransform","Parameters":{"StackId":{"Ref":"AWS::StackId"}}}
  ```

  in the CFN template. Note that this line is *not* part of a typical AWS CloudFormation template and must be explicitly added.
+ Terraform is not currently supported by AMS for provisioning Service Catalog products.
+ AWS CloudFormation StackSets are not supported in AMS.
+ You cannot create custom IAM roles.
+ Service Actions are limited to:
  + [AWS-RebootRdsInstance](https://console.aws.amazon.com/systems-manager/documents/AWS-RebootRdsInstance/description?region=us-east-1)
  + [AWS-RestartEC2Instance](https://console.aws.amazon.com/systems-manager/documents/AWS-RestartEC2Instance/description?region=us-east-1)
  + [AWS-StartEC2Instance](https://console.aws.amazon.com/systems-manager/documents/AWS-StartEC2Instance/description?region=us-east-1)
  + [AWS-StartRdsInstance](https://console.aws.amazon.com/systems-manager/documents/AWS-StartRdsInstance/description?region=us-east-1)
  + [AWS-StopEC2Instance](https://console.aws.amazon.com/systems-manager/documents/AWS-StopEC2Instance/description?region=us-east-1)
  + [AWS-StopRdsInstance](https://console.aws.amazon.com/systems-manager/documents/AWS-StopRdsInstance/description?region=us-east-1)
  + [AWS-CreateImage](https://console.aws.amazon.com/systems-manager/documents/AWS-CreateImage/description?region=us-east-1)
  + [AWS-CreateRdsSnapshot](https://console.aws.amazon.com/systems-manager/documents/AWS-CreateRdsSnapshot/description?region=us-east-1)
  + [AWS-CreateSnapshot](https://console.aws.amazon.com/systems-manager/documents/AWS-CreateSnapshot/description?region=us-east-1)
**Note**  
When creating service actions, you can configure the execution role to be the end user's permissions, the launch role, or a custom IAM role of your choosing. The selected execution role must have sufficient permissions to perform the service action, and must have a trust policy that allows it to be assumed by Service Catalog; otherwise, the service action fails at execution time. We recommend using the `AWSManagedServicesServiceCatalogLaunchRole`, which has the correct permissions and trust policy to be used for a service action.

**What will I still need to use the AMS RFC system for?**  
At general availability (GA), you still need to use RFCs to run the following actions:  
+ Configuring Patch Orchestrator
+ Configuring backup policies
+ Requesting instance access
+ Creating and assigning security groups that fall outside AMS guidelines
+ Performing workload ingest (WIGS)
+ Creating IAM roles

**Can I use the Service Catalog CLI to access Service Catalog in my AMS managed account?**  
Yes, Service Catalog APIs are available and enabled through the CLI. Actions ranging from the management of Service Catalog artifacts to the provisioning and termination of those artifacts are available. For more information, see [AWS Service Catalog Resources](https://aws.amazon.com/servicecatalog/resources/), or download the latest AWS SDK or CLI.

**Who creates, manages, and distributes customers' catalogs of approved products?**  
The customer's catalog administrator and/or IT administrator, or assigned resource, is responsible for the management of your Service Catalog catalogs and approved products.

**Can I use AMS AMIs?**  
AMS AMIs vended after March 2020 can be deployed through AWS Service Catalog.

**How do I migrate to AMS using Service Catalog?**  
To migrate your workload to AMS using Service Catalog, begin by following the [Workload Ingest](https://docs.aws.amazon.com/managedservices/latest/appguide/ams-workload-ingest.html) (WIGS) process to create an AMI in AMS. Then use the AMI produced by WIGS to create a product in Service Catalog; this is detailed in [AWS Service Catalog - Getting Started](https://docs.aws.amazon.com/servicecatalog/latest/adminguide/getstarted.html).