

# RISE with SAP on AWS Cloud
<a name="rise"></a>

RISE with SAP S/4HANA Cloud, private edition is a cloud ERP offering from SAP. Along with ERP, it includes Business Process Intelligence, Business Platform and Analytics, and Business Networks. SAP maintains responsibility for the holistic service level agreement, cloud operations, and technical support for RISE. You can choose your own cloud service provider in RISE with SAP.

SAP S/4HANA Cloud, private edition is a single-tenant setup in which each customer environment is isolated in its own AWS account and a dedicated Virtual Private Cloud (VPC).

**Important**  
SAP owns and manages the AWS account where RISE with SAP is deployed, and is responsible for the AWS services used to serve your SAP landscape on AWS.

SAP is responsible for security in the cloud in RISE with SAP. For more information, see [AWS Cloud Security – Shared Responsibility Model](https://aws.amazon.com/compliance/shared-responsibility-model/) and [SAP and Hyperscalers: Clarifying Security in the Cloud](https://spc.2bm.dk/wp-content/uploads/2021/08/SAP-and-Hyperscalers_-Clarifying-Security-in-the-Cloud.pdf). In addition to the security provided by SAP, you can also implement additional security for your SAP landscape. See the [Security](security-rise.md) section for more details.

In your AWS account managed by SAP, SAP manages the AWS services required to run your SAP landscape on AWS. You can still utilize AWS services to extend RISE with SAP in your own AWS account that is not managed by SAP. For example, you can create a data lake with Amazon AppFlow or AWS Glue. See the [Extensions](extensions-rise.md) section for more details.

**Note**  
You must create a separate AWS account or use your existing AWS account that is not managed by SAP for creating extensions with AWS services.

SAP provides support for the AWS account that it manages. You are not required to establish additional AWS Support for the AWS account managed by SAP.

This documentation is focused on RISE with SAP S/4HANA Cloud, private edition and SAP S/4HANA Cloud, private edition, tailored option. The following topics are covered in this document.

**Topics**
+ [Connectivity](connectivity-rise.md)
+ [Security](security-rise.md)
+ [Reliability](reliability-rise.md)
+ [Observability](rise-observability.md)
+ [Change Management](rise-change-management.md)
+ [Data Integration and Analytics](rise-data-integration-analytics.md)
+ [Agentic AI](rise-agenticai.md)
+ [AWS and SAP JRA](rise-jra.md)
+ [Extensions](extensions-rise.md)

# Connectivity
<a name="connectivity-rise"></a>

You must establish connectivity between the AWS Cloud, where your RISE with SAP solution runs, and your on-premises data centers. You also need connections for direct data transfer (to avoid routing data through your on-premises locations) and for communication between your SAP systems and your applications running in the AWS Cloud. The following image provides an example overview of connectivity to the RISE with SAP VPC.

![\[An example RISE with SAP VPC connection between an SAP-managed account and on-premises data centers\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-connectivity.png)


See the following topics for further details:

**Topics**
+ [Roles and responsibility for establishing connectivity](rise-responsibility.md)
+ [Connecting to RISE from on-premises networks](rise-connection-on-premises.md)
+ [Connecting to RISE from your AWS account](rise-accounts.md)
+ [Connect to nearest Direct Connect POP (including Local Zone)](rise-local-zone.md)
+ [Decision tree on connectivity to RISE](rise-decision-tree.md)
+ [Other considerations](other-considerations.md)

# Roles and responsibility for establishing connectivity
<a name="rise-responsibility"></a>

Under RISE with SAP, the SAP Enterprise Cloud Services (ECS) team manages the SAP S/4HANA Private Cloud Environment. The *Supplemental Terms and Conditions* provided by SAP include a section on Excluded Tasks. You are responsible for running such tasks. You can also use a third-party service provider to manage the excluded tasks for you. For further details, see [SAP Product Policies](https://www.sap.com/about/agreements/policies.html).

The primary task required for deploying RISE with SAP is to establish network connectivity to the RISE with SAP VPC on AWS. As per the RISE with SAP agreement, you are responsible for establishing this connection.

We recommend that you spend time understanding the available options on how to connect your on-premises network and/or existing AWS accounts to RISE with SAP VPC on AWS. Review the subsequent sections for more information.

# Connecting to RISE from on-premises networks
<a name="rise-connection-on-premises"></a>

Connectivity to RISE with SAP on AWS from on-premises is supported using AWS VPN, AWS Direct Connect, or a combination of the two.

**Topics**
+ [Connecting to RISE using AWS VPN](rise-connection-vpc.md)
+ [Connecting to RISE using AWS Direct Connect](rise-connection-direct-connect.md)
+ [Connecting to RISE using SD-WAN](rise-connection-sd-wan.md)
+ [Implementation steps for connectivity](rise-connection-implementation-steps.md)

# Connecting to RISE using AWS VPN
<a name="rise-connection-vpc"></a>

Enable access to your remote network from the RISE with SAP VPC using [AWS Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html). Traffic between the AWS Cloud and your on-premises location is encrypted with Internet Protocol security (IPsec) and transferred through a secure tunnel over the internet. This option is cost-efficient and faster to implement than AWS Direct Connect. For more information, see [Connect your VPC to remote networks using AWS Virtual Private Network](https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connections.html).

You can get a maximum bandwidth of up to 1.25 Gbps per VPN tunnel. For more information, see [Site-to-Site VPN quotas](https://docs.aws.amazon.com/vpn/latest/s2svpn/vpn-limits.html).

To scale beyond the default maximum limit of 1.25 Gbps throughput of a single VPN tunnel, see [How can I achieve ECMP routing with multiple Site-to-Site VPN tunnels that are associated with a transit gateway?](https://repost.aws/knowledge-center/transit-gateway-ecmp-multiple-tunnels) 
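To make the scaling arithmetic concrete, the following sketch estimates the aggregate bandwidth available across ECMP-balanced VPN tunnels. It is illustrative only: the 1.25 Gbps figure is the per-tunnel quota from the AWS documentation, and real throughput depends on packet size, encryption overhead, and how flows hash across tunnels.

```python
# Per-tunnel Site-to-Site VPN throughput quota (Gbps), per AWS documentation.
PER_TUNNEL_GBPS = 1.25

def aggregate_vpn_bandwidth(tunnel_count: int) -> float:
    """Upper bound on combined throughput across ECMP-balanced tunnels.

    ECMP hashes each flow onto one tunnel, so any single flow is still
    capped at the per-tunnel limit; the aggregate applies across many flows.
    """
    if tunnel_count < 1:
        raise ValueError("at least one tunnel is required")
    return tunnel_count * PER_TUNNEL_GBPS

# Four tunnels (two Site-to-Site VPN connections) give up to 5 Gbps aggregate.
print(aggregate_vpn_bandwidth(4))  # → 5.0
```

Note that because ECMP balances per flow, a single large SAP data transfer cannot exceed 1.25 Gbps regardless of tunnel count; the benefit appears only with multiple concurrent flows.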

When using this option, SAP requires the following details:
+ The BGP Autonomous System Number (ASN) of your customer gateway device
+ The public IP address of your customer gateway device

You can obtain these details from your on-premises customer gateway device.

When connecting your remote network directly to RISE using AWS Site-to-Site VPN, the cost of the VPN connection and the cost of data transfer out are included in the RISE subscription.

For more information, see [AWS Site-to-Site VPN Pricing](https://aws.amazon.com/vpn/pricing/).

Note: Because the cost associated with the lifecycle and operation of a customer gateway device (a physical device or software application on your side of the Site-to-Site VPN connection) varies, it is not taken into consideration in this document.

# Connecting to RISE using AWS Direct Connect
<a name="rise-connection-direct-connect"></a>

Use AWS Direct Connect if you require a higher throughput or more consistent network experience than an internet-based connection. AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard Ethernet fiber-optic cable. You can create different types of virtual interfaces (VIFs) to connect with various AWS services. For example, you can create a Public VIF to communicate with public services like Amazon S3 or a Private/Transit VIF for private resources such as Amazon VPC, while bypassing the internet service providers in your network path. For more information, see [AWS Direct Connect connections](https://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithConnections.html).

You can choose from a dedicated connection of 1 Gbps, 10 Gbps, 100 Gbps, or 400 Gbps, or an AWS Direct Connect Partner’s hosted connection where the Partner has an established network link with the AWS Cloud. Hosted connections are available at 50 Mbps, 100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, 10 Gbps, and 25 Gbps. You can order hosted connections from an AWS Direct Connect Delivery Partner approved to support this model. For more information, see [AWS Direct Connect Delivery Partners](https://aws.amazon.com/directconnect/partners/).

To connect, use a virtual private gateway in the AWS account managed by SAP, or a Direct Connect gateway in your AWS account associated with a virtual private gateway in the AWS account managed by SAP. For more information, see [Direct Connect gateways](https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html). A Direct Connect gateway can also connect to an AWS Transit Gateway. For more information, see [Connecting to RISE using your single AWS account](rise-connection-accounts.md).

You must acquire a *Letter of Authorization* from SAP to set up an AWS Direct Connect dedicated connection in the AWS account managed by SAP.

When connecting your remote network directly to RISE using AWS Direct Connect, the cost for data transfer out (egress) is included in the RISE subscription. Costs associated with the capacity (the maximum rate that data can be transferred through a network connection) and the port hours (the time that a port is provisioned for your use with AWS or an [AWS Direct Connect Delivery Partner](https://aws.amazon.com/directconnect/partners/)) are not included in the RISE subscription. AWS Direct Connect has no setup charges, and you can cancel at any time; however, services provided by your [AWS Direct Connect Delivery Partner](https://aws.amazon.com/directconnect/partners/) or other local service provider may have other terms and conditions that apply.

For more information, see [AWS Direct Connect Pricing](https://aws.amazon.com/directconnect/pricing/).

# Connecting to RISE using SD-WAN
<a name="rise-connection-sd-wan"></a>

**What is SD-WAN**

[Software-Defined Wide Area Networking (SD-WAN)](https://en.wikipedia.org/wiki/SD-WAN) is a networking technology that uses software to manage and route traffic across different networks, such as Multiprotocol Label Switching (MPLS), the public internet, or the AWS backbone, with a focus on improving connectivity and application performance. SD-WAN primarily operates at Layer 3 (Network Layer) of the OSI model, offering centralized control, routing, path selection, IP-based policies, and the ability to prioritize mission-critical applications such as SAP, making it well suited for cloud-based RISE with SAP environments.

Although SD-WAN primarily operates at Layer 3 using an overlay network such as broadband internet, it can use Layer 2 (Data Link) technologies such as [AWS Direct Connect](https://aws.amazon.com/directconnect/) and Layer 3 (Network) technologies such as [AWS Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) as the underlay network for transport.

In an SD-WAN architecture, an SD-WAN headend acts as a hub or centralized network component, while [SD-WAN edge devices](https://en.wikipedia.org/wiki/SD-WAN#SD-WAN_edge), deployed at branch offices, remote sites, or data centers, serve as the entry and exit points for WAN traffic.

For more detailed information, see [Reference Architectures for Implementing SD-WAN Solutions on AWS](https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/sd-wan-deployment-models-ra.pdf?did=wp_card&trk=wp_card).

 **Scenario A: SD-WAN appliances (edge and/or headend/hub) on-premises** 

[AWS Transit Gateway Connect](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-connect.html) allows you to extend your SD-WAN network to AWS using [GRE (Generic Routing Encapsulation)](https://en.wikipedia.org/wiki/Generic_routing_encapsulation) tunnels without needing additional AWS infrastructure. Through a [Transit Gateway Connect peer](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-connect.html#tgw-connect-peer), you can establish GRE tunnels between the transit gateway in your AWS account and the on-premises SD-WAN appliance, connected via an AWS Direct Connect connection as the underlying transport.

The appliance must be configured to send and receive traffic over a GRE tunnel to and from the transit gateway using the [Connect attachment](https://docs.aws.amazon.com/vpc/latest/tgw/create-tgw-connect-attachment.html). The appliance must also be configured to use [BGP (Border Gateway Protocol)](https://aws.amazon.com/what-is/border-gateway-protocol/) for dynamic route updates and health checks.

Each connection can be configured with its own route table and BGP peer, enabling you to extend your on-premises network segmentation via [Virtual routing and forwarding (VRF)](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/extend-vrfs-to-aws-by-using-aws-transit-gateway-connect.html) to AWS. The RISE with SAP VPC is attached to the AWS Transit Gateway.

This setup provides a streamlined way to connect your SD-WAN environment with RISE with SAP on AWS using AWS Direct Connect, maintaining network separation while simplifying the overall architecture.

In this scenario, the [overlay network](https://en.wikipedia.org/wiki/Overlay_network) is SD-WAN (with GRE tunnels) with the headend/hub or edge devices deployed on-premises, and the underlay transport is AWS Direct Connect.

 **Pattern A-1: SD-WAN devices integration with AWS Transit Gateway and AWS Direct Connect with your AWS landing zone** 

![\[SD-WAN devices integration with Transit Gateway and Direct Connect with your landing zone\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-pattern-a-1-sd-wan-tgw-dx-lz.png)


The preceding diagram illustrates a pattern of how you can extend and segment your SD-WAN traffic to AWS without adding extra infrastructure. You can create Transit Gateway connect attachments using an AWS Direct Connect connection as underlying transport in your AWS account.

Outbound from RISE with SAP VPC:

1. Traffic initiated from the RISE VPC to the corporate data center is routed to the Transit Gateway.

1. The Transit Gateway connect attachment uses the Direct Connect connection as the underlay transport and connects the Transit Gateway to the corporate data center SD-WAN device with GRE tunneling and BGP.

Inbound to RISE with SAP VPC:

1. Traffic from the corporate data center SD-WAN device to the RISE VPC is forwarded to the Transit Gateway via the GRE tunnel of the Transit Gateway attachment over the Direct Connect link.

1. Transit Gateway forwards the traffic to the destination RISE with SAP VPC.

 **Pattern A-2: SD-WAN devices integration with AWS Transit Gateway and AWS Direct Connect with no AWS landing zone** 

![\[SD-WAN devices integration with Transit Gateway and Direct Connect with no landing zone\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-pattern-a-2-sd-wan-tgw-dx-no-lz.png)


The preceding diagram illustrates a pattern of how you can extend and segment your SD-WAN traffic to AWS without adding extra infrastructure. In RISE with SAP, you can request SAP to create Transit Gateway connect attachments using a Direct Connect connection as underlying transport. Customers can leverage SAP-managed [Direct Connect gateway (DXGW)](https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-intro.html) if required.

Outbound from RISE with SAP VPC:

1. Traffic initiated from RISE VPC to the corporate data center is routed to the Transit Gateway.

1. The Transit Gateway connect attachment uses the Direct Connect connection as transport and connects the Transit Gateway to the corporate data center SD-WAN device using GRE tunneling and BGP.

Inbound to RISE with SAP VPC:

1. Traffic from the corporate data center SD-WAN device to the RISE VPC is forwarded to the Transit Gateway via the GRE tunnel of the Transit Gateway attachment over the Direct Connect link.

1. Transit Gateway forwards the traffic to the destination RISE with SAP VPC.

**Scenario B: SD-WAN appliances (edge and/or headend/hub devices) in AWS**

In this scenario, the virtual appliances of the SD-WAN network are deployed in a VPC within AWS. Then, you use a VPC attachment as underlying transport for the Transit Gateway connect attachment between the SD-WAN virtual appliances and the Transit Gateway in your AWS account(s). Similar to Scenario A, Transit Gateway connect attachments support GRE for higher bandwidth performance compared to a VPN connection. It supports BGP for dynamic routing and removes the need to configure static routes. In addition, its integration with [Transit Gateway Network Manager](https://docs.aws.amazon.com/vpc/latest/tgwnm/what-is-network-manager.html) provides advanced visibility through global network topology, attachment level performance metrics, and telemetry data.

Between on-premises and AWS, the [overlay network](https://en.wikipedia.org/wiki/Overlay_network) is SD-WAN with GRE or IPsec tunnels, with the headend/hub deployed within AWS; the underlay transport can be the internet, MPLS, or AWS Direct Connect. The following are the architecture patterns under this scenario:

Note: The network patterns covered in the following sections apply only with your existing or a new landing zone setup on AWS. For SD-WAN appliance deployment and connectivity directly with the AWS account managed by SAP, refer to Pattern A-2.

 **Pattern B-1: SD-WAN appliances in AWS integrated with AWS Transit Gateway Connect with your AWS landing zone** 

![\[SD-WAN appliances integrated with Transit Gateway and Direct Connect with your landing zone\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-pattern-b-1-sd-wan-aws-tgw-dx-lz.png)


The preceding diagram illustrates a pattern of integrating your SD-WAN network with Transit Gateway using [connect attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-connect.html) and placing (third-party) virtual appliances of the SD-WAN network in an Appliance VPC within AWS. It’s common to have SD-WAN edge appliances deployed at branch locations and the on-premises data center to create a full mesh topology.

Outbound from RISE with SAP:

1. Traffic initiated from the RISE VPC to the corporate data center is routed to the Transit Gateway.

1. The Transit Gateway connect attachment uses the VPC attachment as transport and connects Transit Gateway to the third-party appliance in the Appliance VPC using GRE tunneling and BGP.

1. The third-party virtual appliance encapsulates the traffic, which uses the SD-WAN overlay – on top of the Direct Connect link – to reach the corporate data center.

Inbound to RISE with SAP:

1. Traffic from branches outside AWS to the RISE VPC reaches the internet gateway of the appliance VPC via the SD-WAN overlay over the internet. Similarly, traffic from the corporate data center to the RISE VPC reaches the virtual private gateway of the Appliance VPC via the SD-WAN overlay over the Direct Connect link.

1. The third-party virtual appliance in the appliance VPC forwards the traffic to the Transit Gateway via the connect attachment.

1. Transit Gateway forwards the traffic to the destination RISE VPC.

 **Pattern B-2: SD-WAN appliances in AWS integrated with AWS Site-to-Site VPN** 

![\[SD-WAN appliances integrated with Site-to-Site VPN\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-pattern-b-2-sd-wan-s2svpn.png)


The preceding diagram illustrates a pattern of integrating your SD-WAN network with Transit Gateway using an AWS Site-to-Site VPN connection and placing (third-party) virtual appliances of the SD-WAN network in an Appliance VPC within AWS. You can use this option when your third-party virtual appliance does not support GRE. It’s common to have SD-WAN edge appliances deployed at branch locations and the on-premises data center to create a full mesh topology.

Outbound from RISE with SAP:

1. Traffic initiated from the RISE VPC to the corporate data center is routed to the Transit Gateway Elastic Network Interface (TGW ENI).

1. The traffic is routed between the Transit Gateway and the third-party virtual appliance using the Site-to-Site VPN connection.

1. The third-party virtual appliance encapsulates the traffic, which uses the SD-WAN overlay – on top of the Direct Connect link – to reach the corporate data center.

Inbound to RISE with SAP:

1. Traffic from branches outside AWS to the RISE VPC reaches the internet gateway of the appliance VPC via the SD-WAN overlay over the internet. Similarly, traffic from the corporate data center to the RISE VPC reaches the virtual private gateway of the appliance VPC via the SD-WAN overlay over the AWS Direct Connect link.

1. The third-party virtual appliance in the appliance VPC forwards the traffic to the Transit Gateway via Site-to-Site VPN connection.

1. Transit Gateway forwards the traffic to TGW ENI of the destination RISE VPC.

# Implementation steps for connectivity
<a name="rise-connection-implementation-steps"></a>

This section provides a deeper dive into the implementation steps for connectivity between RISE with SAP and your on-premises environments (without using any customer-managed AWS account). Two options are covered: first, a highly resilient deployment for critical workloads; second, a cost-effective alternative for non-critical workloads.

For each option, we clarify the details SAP needs and the steps you take in your on-premises environment.

## Option 1: Resilient Deployment for Critical Workloads
<a name="option-1-resilient-deployment-for-critical-workloads"></a>

![\[Resilient Deployment for Critical Workloads\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-option-1-resilience-connectivity.png)


 [AWS Direct Connect (DX)](https://aws.amazon.com/directconnect/?nc=sn&loc=0) comes in two connection types, namely [Dedicated](https://docs.aws.amazon.com/directconnect/latest/UserGuide/dedicated_connection.html) and [Hosted](https://docs.aws.amazon.com/directconnect/latest/UserGuide/hosted_connection.html). A Dedicated DX is a physical Ethernet connection associated with a single customer, between the customer’s private network and AWS. Hosted DX is a physical Ethernet connection that an [AWS Direct Connect Partner](https://aws.amazon.com/directconnect/partners/) provisions on behalf of a customer. Learn about [AWS Direct Connect](https://aws.amazon.com/directconnect/) to familiarize yourself with the service.

To set up a resilient Direct Connect solution for your RISE with SAP deployment, follow these implementation steps:

 **Prerequisites** 

Before configuring the Direct Connect connection, ensure your on-premises network is ready. This includes:
+ Reviewing the AWS documentation on [BGP with AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/routing-and-bgp.html) for detailed guidance on router configuration.
+ Configuring Border Gateway Protocol (BGP) on your routers with MD5 authentication. BGP is a requirement for using Direct Connect.
+ Verifying that your network can support multiple BGP connections for redundancy.

 **Initiate the Setup Process** 

Start by contacting your SAP ECS (Enterprise Cloud Services) representative and request the "AWS Connectivity Questionnaire" for RISE with SAP on AWS Direct Connect setup. This questionnaire will help gather the necessary information to provision the Direct Connect connection.

We advise you to set up redundant connections for high availability by completing the questionnaire for each Direct Connect connection you plan to establish. Review the [Direct Connect Resiliency Recommendations](https://aws.amazon.com/directconnect/resiliency-recommendation/) to understand best practices.

 **Complete the SAP Questionnaire** 

When filling out the AWS Connectivity Questionnaire, specify that you want to set up a resilient AWS Direct Connect configuration.

In the questionnaire, provide the following details about your Direct Connect connection:
+ Whether it’s a new or dedicated Direct Connect connection
+ The Direct Connect provider or partner you’ll be using
+ The specific Direct Connect region/location
+ The minimum number of Direct Connect links required
+ The subnet CIDR blocks for the primary and secondary Direct Connect links (in /30 CIDR format)
+ The VLAN ID
+ The Autonomous System Number (ASN) of your on-premises router
+ The IP address ranges of your on-premises network (to allow for proper firewall configuration)
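The /30 requirement exists because each Direct Connect link is a point-to-point BGP peering that needs exactly two usable addresses, one per side. The following sketch (illustrative only; the subnet value is hypothetical) shows how to validate a requested /30 and derive the two peer addresses with Python's standard `ipaddress` module:

```python
import ipaddress

def bgp_peer_addresses(cidr: str) -> tuple[str, str]:
    """Validate a /30 point-to-point subnet and return its two usable hosts.

    One address goes to your on-premises router, the other to the AWS side
    of the BGP peering.
    """
    net = ipaddress.ip_network(cidr, strict=True)
    if net.prefixlen != 30:
        raise ValueError(f"{cidr} is not a /30 subnet")
    your_side, aws_side = list(net.hosts())
    return str(your_side), str(aws_side)

# Hypothetical point-to-point subnet for the primary link.
print(bgp_peer_addresses("169.254.10.0/30"))  # → ('169.254.10.1', '169.254.10.2')
```

Running the same check on the secondary link's subnet before submitting the questionnaire helps catch misaligned or wrongly sized CIDR blocks early.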

Additionally, include information about your on-premises router, such as the make, model, and interface details.

Submit the completed questionnaire to your SAP ECS representative. SAP will then use this information to provision the necessary Direct Connect resources in your RISE with SAP environment on AWS.

 **SAP’s Responsibilities** 

After you submit the completed questionnaire, SAP will handle the following tasks (the list below is illustrative only for this context):
+ Create a virtual interface (depending on your DX type: hosted or dedicated)
+ Create the Direct Connect Gateway
+ If you need SAP to provision a Transit Gateway in the RISE VPC:
  + Set up the Transit Gateway (including the ASN you provided)
  + Create the Transit Gateway attachment for your VPC
  + Update the route tables to allow the Transit Gateway to communicate with the RISE with SAP network VPC
  + Associate the Transit Gateway with the Direct Connect Gateway, including the CIDR of the RISE with SAP network that will be advertised to your network

 **Complete the Setup Process** 

Once you receive the necessary information from SAP, such as the VLAN ID, BGP peer IPs, and optional BGP authentication key, configure your on-premises routers accordingly. This includes setting up the VLAN interface and BGP for the Direct Connect connection. Consult the AWS documentation on [router configuration for Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/router-configuration.html) for detailed instructions.

Configure for active/active topology: Implement routing policies to balance traffic across the redundant Direct Connect connections, leveraging BGP communities or more-specific subnet advertisements to influence path selection from AWS to your on-premises network.
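The reason more-specific advertisements steer return traffic is longest-prefix matching: a router always prefers the most specific route that covers a destination. The following sketch (illustrative only; prefixes and next-hop names are hypothetical) models a hypothetical on-premises /24 split into two /25s, one advertised over each Direct Connect link, with the /24 advertised on both as a fallback:

```python
import ipaddress

def best_route(destination: str, routes: dict[str, str]) -> str:
    """Return the next hop for the longest prefix matching the destination."""
    ip = ipaddress.ip_address(destination)
    matches = []
    for cidr, next_hop in routes.items():
        net = ipaddress.ip_network(cidr)
        if ip in net:
            matches.append((net.prefixlen, next_hop))
    # Longest-prefix match: highest prefix length wins.
    return max(matches)[1]

routes = {
    "10.0.0.0/24": "dx-primary",     # fallback advertised on both links
    "10.0.0.0/25": "dx-primary",     # more-specific: first half via link 1
    "10.0.0.128/25": "dx-secondary", # more-specific: second half via link 2
}
print(best_route("10.0.0.200", routes))  # → dx-secondary
```

If one link fails and its /25 is withdrawn, traffic for that half automatically follows the /24 fallback over the surviving link.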

 **Establish and Test the Connections** 

Coordinate with SAP to enable the BGP sessions for both Direct Connect connections. Verify the BGP paths and test failover scenarios by simulating the failure of one connection to ensure traffic properly fails over to the other.

Confirm end-to-end connectivity with SAP for both paths. You can also leverage the [AWS Direct Connect Resiliency Toolkit](https://docs.aws.amazon.com/directconnect/latest/UserGuide/resiliency_toolkit.html) to [perform scheduled failover tests](https://docs.aws.amazon.com/directconnect/latest/UserGuide/resiliency_failover.html) and validate the resiliency of your connections.

 **Maintain the Connections** 

Regularly review and update the Direct Connect configurations as needed. Coordinate any changes with SAP. Monitor the performance and availability of both connections, and refer to the AWS documentation on [Monitoring Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/monitoring-overview.html) for best practices.

By following these steps, you can establish a resilient AWS Direct Connect solution to securely connect your on-premises infrastructure with the RISE with SAP environment on AWS, ensuring high availability and reliable network performance.

## Option 2: Cost Effective Alternative for Non-Critical Workloads
<a name="option-2-cost-effective-alternative-for-non-critical-workloads"></a>

![\[Cost Effective Alternative for Non-Critical Workloads\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-option-2-cost-effective-connectivity.png)


Some AWS customers prefer the benefits of one or more AWS Direct Connect connections as their primary connectivity to AWS, coupled with a lower-cost backup solution. Additionally, they may want an agile and adaptable connection that can be quickly established or decommissioned between network locations globally. To achieve these objectives, they can implement AWS Direct Connect connections with an AWS Site-to-Site VPN backup.

The Site-to-Site VPN connection consists of three key components:

1. Virtual Private Gateway (VGW) - The router on the AWS side

1. Customer Gateway (CGW) - The router on the customer side

1. The S2S VPN connection that binds the VGW and CGW together over two secure IPSec tunnels in an active/passive configuration
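The active/passive behavior of the two IPsec tunnels can be sketched as a simple failover selection: traffic uses the active tunnel while it is up, and shifts to the second tunnel when the first goes down. This is an illustrative model only (tunnel names and states are hypothetical), not an AWS API:

```python
def select_tunnel(tunnels: list[dict]) -> str:
    """Pick the first tunnel whose state is UP (active/passive failover)."""
    for tunnel in tunnels:
        if tunnel["status"] == "UP":
            return tunnel["name"]
    raise RuntimeError("both tunnels are down; VPN connectivity is lost")

tunnels = [
    {"name": "tunnel-1", "status": "DOWN"},  # active tunnel has failed
    {"name": "tunnel-2", "status": "UP"},    # passive tunnel takes over
]
print(select_tunnel(tunnels))  # → tunnel-2
```

In practice the failover is driven by IPsec dead peer detection and BGP session state on your customer gateway device, not by application code.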

For in-depth documentation on establishing the AWS Site-to-Site VPN connection, refer to [Getting started with AWS Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/SetUpVPNConnections.html) in the AWS documentation.

 **Prerequisites** 

This approach builds on the steps outlined in Option 1 for setting up a resilient AWS Direct Connect solution. After completing those Direct Connect implementation steps, you can add a Site-to-Site VPN connection as a failover option.

While your Direct Connect connections are being provisioned, you can begin preparing your on-premises infrastructure for the VPN setup:
+ Review the AWS documentation on Site-to-Site VPN to understand the requirements and best practices.
+ Ensure your firewalls allow the necessary traffic for the VPN tunnels.
+ Confirm you have two customer gateway devices or a single device capable of managing multiple VPN tunnels.

The addition of a Site-to-Site VPN connection provides a faster and more agile backup to your primary Direct Connect links. The process is similar to setting up Direct Connect, but with a few key differences.

 **Initiate the Setup Process** 

Start by contacting your SAP ECS representative again and request the "AWS Connectivity Questionnaire" for adding an AWS Site-to-Site VPN connection to your RISE on AWS setup. Inform SAP of your intent to implement the VPN as a failover to your Direct Connect links.

 **Complete the SAP Questionnaire** 

When filling out the AWS Connectivity Questionnaire this time, specify that you want to set up an AWS Site-to-Site VPN in addition to the Direct Connect connections.

In the AWS Connectivity Questionnaire, you’ll need to provide the following information about the VPN connection in addition to the details filled out for the DX:
+ Customer VPN Gateway details such as the make and model of your customer gateway device(s)
+ Customer VPN Gateway Internet facing public IP Address
+ Type of Routing (static / dynamic)
+ BGP ASN for dynamic routing (customer gateway ASN for BGP; only 16-bit ASNs are supported)
+ ASN for the AWS side of the BGP session (16- or 32-bit ASN)
+ Customer Side BGP Peer IP-address (if different from VPN peer IP provided)
+ Second Public IP Address (OPTIONAL: only if active-active mode is used)
+ Customer On-Premises Network IP ranges
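The ASN constraints above can be checked before submitting the questionnaire. The following sketch is illustrative only: it encodes the rule that the customer gateway ASN must fit in 16 bits while the AWS-side ASN may be 16- or 32-bit.

```python
def validate_asns(customer_asn: int, aws_asn: int) -> None:
    """Raise ValueError if either ASN violates the questionnaire constraints."""
    if not 0 < customer_asn < 2**16:
        raise ValueError(f"customer gateway ASN {customer_asn} is not a 16-bit ASN")
    if not 0 < aws_asn < 2**32:
        raise ValueError(f"AWS-side ASN {aws_asn} is not a valid 16- or 32-bit ASN")

validate_asns(65010, 64512)  # typical private 16-bit ASNs: passes silently
try:
    validate_asns(4200000000, 64512)  # 32-bit customer gateway ASN: rejected
except ValueError as e:
    print(e)
```

Private 16-bit ASNs fall in the range 64512 to 65534, which is a common choice for the customer gateway side when you do not own a public ASN.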

Submit the completed questionnaire to SAP. They will then create the VPN connection and provide you with the configuration details.

 **SAP’s Responsibilities** 

After you submit the completed questionnaire, SAP handles the following tasks (the list below is illustrative for this context):
+ Create the customer gateway (with the information you provided, such as BGP ASN, IP address, and optional private certificate)
+ Create the AWS Site-to-Site VPN and attach it to the RISE with SAP Transit Gateway and your customer gateway
+ Provide the VPN configuration file for you to set up on your on-premises router
+ If you need SAP to provision the Transit Gateway in the RISE VPC, add the necessary route to the Transit Gateway route table and update the security groups

Using the information received from SAP, configure the VPN tunnels on your on-premises router. Implement routing policies to prefer the Direct Connect connection over the VPN as the primary path.

Refer to the AWS documentation on [router configuration for Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/router-configuration.html) for guidance on the necessary settings.

 **Test and Verify Connections** 

Coordinate with SAP to enable the VPN connection and verify end-to-end connectivity. Test failover scenarios by simulating a Direct Connect failure and ensure traffic properly fails over to the VPN.

Confirm with SAP that the failover is working as expected for both the Direct Connect and VPN paths.

 **Maintain the Connections** 

Regularly review and update the configurations for both the Direct Connect and VPN connections. Coordinate any changes with SAP.

Monitor the performance and availability of both connections, and refer to the AWS documentation on [monitoring Direct Connect and VPN for best practices](https://docs.aws.amazon.com/directconnect/latest/UserGuide/monitoring-overview.html).

By implementing this Direct Connect with Site-to-Site VPN failover solution, you can achieve a highly resilient connectivity setup for your RISE with SAP deployment on AWS, ensuring seamless failover and reliable network performance.

# Connecting to RISE from your AWS account
<a name="rise-accounts"></a>

You can connect to RISE from your AWS account in the following ways.

**Topics**
+ [Amazon VPC peering](rise-connection-peering.md)
+ [AWS Transit Gateway](rise-connection-transit.md)
+ [AWS Direct Connect gateway](rise-connection-direct-connect-gateway.md)
+ [AWS Cloud WAN](rise-connection-cloud-wan.md)
+ [Connecting to RISE using your single AWS account](rise-connection-accounts.md)
+ [Connecting to RISE using a shared AWS Landing Zone](rise-landing-zone.md)

# Amazon VPC peering
<a name="rise-connection-peering"></a>

VPC peering enables a network connection between two AWS VPCs using private IPv4 and IPv6 addresses. Instances in either VPC can communicate with each other as if they were within the same network. For more information, see [What is VPC peering?](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) 

Before setting up a VPC peering connection, you need to create a request for SAP’s approval. For a successful VPC peering, the IPv4 Classless Inter-Domain Routing (CIDR) blocks of the two VPCs must not overlap. Check with SAP for the CIDR ranges that can be used in the RISE with SAP VPC.
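The overlap check is mechanical: Python's standard `ipaddress` module can verify your candidate VPC CIDR against the range SAP communicates to you. The CIDR values below are placeholders, not actual RISE ranges:

```python
import ipaddress

def cidrs_overlap(cidr_a: str, cidr_b: str) -> bool:
    """True if two IPv4 CIDR blocks share any addresses."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Placeholder ranges: your VPC CIDR vs. the RISE with SAP VPC CIDR.
print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))  # False: peering can proceed
print(cidrs_overlap("10.0.0.0/8", "10.1.0.0/16"))   # True: re-plan addressing
```

Running this against every existing VPC CIDR in your landscape before submitting the peering request avoids a rejected request later.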

VPC peering is a one-to-one connection between VPCs, and it is not transitive. Traffic cannot transit from one VPC to another via an intermediary VPC. You must set up multiple peering connections to establish direct communication between the RISE with SAP VPC and multiple VPCs.

VPC peering works across AWS Regions. All inter-Region traffic is encrypted, with no single point of failure or bandwidth bottleneck. Traffic stays on the AWS global network and never traverses the public internet, reducing the threat of common exploits and DDoS attacks.

![\[VPC peering connections between multiple accounts in multiple Regions\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-peering.jpg)


Data transfer over VPC peering within an Availability Zone is free. Across Availability Zones, it is charged per GB in each direction ("data in" and "data out"); across Regions, it is charged per GB for "data out" only. For more information, see [Amazon EC2 pricing](https://aws.amazon.com/ec2/pricing/on-demand/). In your AWS account, use the Availability Zone ID of the AWS account managed by SAP to avoid cross-Availability Zone data transfer charges. You can request the Availability Zone ID from SAP. For more information, see [Availability Zone IDs for your AWS resources](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html).


|  | 
| --- |
|   **Pricing example - VPC peering across Availability Zones**  ![\[VPC peering across Availability Zones\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-peering-pricing.png) 100GB of data sent from the AWS account – managed by SAP via VPC peering toward the AWS account – managed by Customer across AZs: 100GB × \$0.01 per-GB = \$1 (out - billed to AWS account – managed by SAP) and 100GB × \$0.01 per-GB = \$1 (in - billed to AWS account – managed by Customer). As the cost for data transfer is included in the RISE subscription, the AWS account – managed by Customer will only incur the cost for traffic in, i.e. \$0.01 per-GB.  *[note: the cost example also applies when the sender is the AWS account – managed by Customer and the receiver is the AWS account – managed by SAP]*   | 
|   **Pricing example - VPC peering across Regions**   *[note: costs between AWS Regions vary. For more information see: [Amazon EC2 pricing Data Transfer](https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer)]*  ![\[VPC peering across Regions\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-peering-across-regions-pricing.png) 1). 100GB of data sent from the AWS account – managed by SAP via VPC peering toward the AWS account – managed by Customer across Regions: 100GB × (\$0.01–\$0.138 per-GB) = \$1–\$13.8 (out - billed to AWS account – managed by SAP). As the cost for data transfer is included in the RISE subscription, the AWS account – managed by Customer will not incur cost for this example. 2). 100GB of data sent from the AWS account – managed by Customer via VPC peering toward the AWS account – managed by SAP across Regions: 100GB × (\$0.01–\$0.138 per-GB) = \$1–\$13.8 (out - billed to AWS account – managed by Customer). As the cost for data transfer is calculated for "data out", the AWS account – managed by Customer will incur the cost for this example.  | 
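The arithmetic in the pricing examples above reduces to a flat per-GB rate on each side of the peering. A minimal sketch, with rates hard-coded from the examples (actual rates vary; see the pricing page):

```python
def peering_side_cost(gb: float, rate_per_gb: float) -> float:
    """Data transfer cost in USD for one direction over a VPC peering."""
    return round(gb * rate_per_gb, 2)

# Cross-AZ example: 100 GB at $0.01 per GB, billed on each side.
out_cost = peering_side_cost(100, 0.01)  # billed to the sender (SAP account)
in_cost = peering_side_cost(100, 0.01)   # billed to the receiver (your account)
print(out_cost, in_cost)  # 1.0 1.0 -- only the 'in' side reaches your bill
```

For the cross-Region case, substitute the Region pair's "data out" rate (the examples use the \$0.01–\$0.138 per-GB range) and drop the "in" side, which is free.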

# AWS Transit Gateway
<a name="rise-connection-transit"></a>

 AWS Transit Gateway is a network transit hub that interconnects Amazon VPCs. It acts as a cloud router, resolving complex peering setups by serving as the central communication hub. You need to establish this connection with the AWS account managed by SAP only once.

 **Transit Gateway in your own AWS account** 

To establish a connection with the AWS account managed by SAP, create an AWS Transit Gateway in your AWS account and share it via AWS Resource Access Manager (RAM). SAP then creates an attachment to enable traffic flow through an entry in the route table. As the AWS Transit Gateway resides in your AWS account, you retain control over traffic routing. For more information, see [Transit gateway peering attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-peering.html).

![\[Connections between multiple accounts in multiple Regions using Transit Gateway\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-transit-1.png)


 **Transit Gateway in AWS account managed by SAP** 

If you already have a Transit Gateway in another AWS Region and cannot create another AWS account with a Transit Gateway in the Region that hosts the RISE with SAP account, SAP can provide a Transit Gateway in the RISE with SAP account, managed by SAP. You can enable communication between your Transit Gateway and the SAP-managed Transit Gateway through Transit Gateway peering. You cannot connect VPC attachments of VPCs outside of the RISE environment to the SAP-managed Transit Gateway.

For peering attachments, each Transit Gateway owner is billed hourly for the peering attachment with the other Transit Gateway. The hourly cost for the peering attachment of the Transit Gateway in the SAP account – managed by SAP (for the purpose of inter-Region Transit Gateway peering) is therefore part of the RISE subscription. However, the hourly cost for the peering attachment of the Transit Gateway in the Customer account – managed by Customer is billed to the Customer. For more information, see [Transit Gateway pricing](https://aws.amazon.com/transit-gateway/pricing/).


|  | 
| --- |
|   **Pricing example - Transit Gateway across VPCs in different Regions**   *[note: costs between AWS Regions vary. For more information see: [Amazon EC2 pricing Data Transfer](https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer)]*  ![\[Transit Gateway across VPCs in different Regions\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-transit-different-regions-pricing.png) 1). 100GB of data sent from a VPC in Region X in the AWS account – managed by SAP via the Transit Gateway that resides in the AWS account – managed by SAP, towards a peered Transit Gateway in a different Region Y that resides in the AWS account – managed by Customer, ending at a VPC in the AWS account – managed by Customer: 100GB × \$0.02 per-GB = \$2 (Transit Gateway data processing) + 100GB × (\$0.01–\$0.138 per-GB) = \$1–\$13.8 (Region out) = \$3–\$15.8 (Total - billed to AWS account – managed by SAP). Data processing is charged to the VPC owner who sends the traffic to the Transit Gateway. As the sending VPC resides in the AWS account – managed by SAP and the cost for data transfer is included in the RISE subscription, the AWS account – managed by Customer will not incur data transfer cost for this example. As data processing charges do not apply for data sent from a peering attachment to a Transit Gateway and inbound inter-Region data transfer is free, no further data transfer charges apply to the AWS account – managed by Customer. The AWS account – managed by Customer will only be billed for the price per Transit Gateway peering attachment per hour. Data out of an AZ always goes via the Transit Gateway endpoint in that AZ to reach the other VPC, so there are no cross-AZ data transfer costs. 2). 100GB of data sent from a VPC in Region Y in the AWS account – managed by Customer via the Transit Gateway that resides in the AWS account – managed by Customer, towards a peered Transit Gateway in a different Region X that resides in the AWS account – managed by SAP, ending at a VPC in the AWS account – managed by SAP: 100GB × \$0.02 per-GB = \$2 (Transit Gateway data processing) + 100GB × (\$0.01–\$0.138 per-GB) = \$1–\$13.8 (Region out) = \$3–\$15.8 (Total - billed to AWS account – managed by Customer). Data processing is charged to the VPC owner who sends the traffic to the Transit Gateway. As the sending VPC resides in the AWS account – managed by Customer, all data transfer costs for this example are billed to the AWS account – managed by Customer. In addition, the AWS account – managed by Customer will be billed for the price per Transit Gateway peering attachment per hour.  | 
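The sender-side total in the examples above is Transit Gateway data processing plus inter-Region "data out". A sketch using the examples' rates (actual rates vary by Region pair):

```python
def tgw_cross_region_sender_cost(gb: float,
                                 processing_rate: float = 0.02,
                                 region_out_rate: float = 0.01) -> float:
    """Sender-side USD cost: TGW data processing + inter-Region data out.
    Peering-attachment hours are billed separately to each TGW owner."""
    return round(gb * processing_rate + gb * region_out_rate, 2)

# 100 GB at $0.02/GB processing, low end ($0.01/GB) of the Region-out range:
print(tgw_cross_region_sender_cost(100))                         # 3.0
# High end ($0.138/GB) of the Region-out range:
print(tgw_cross_region_sender_cost(100, region_out_rate=0.138))  # 15.8
```

Whether this total lands on your bill or inside the RISE subscription depends only on which account owns the sending VPC, as the examples note.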

# AWS Direct Connect gateway
<a name="rise-connection-direct-connect-gateway"></a>

 [AWS Direct Connect gateway](https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways.html) is a global service that enables you to establish private connectivity between your on-premises networks and multiple Amazon VPCs across different AWS Regions. This centralized connection hub allows you to consolidate your network architecture, reduce complexity, and maintain secure, high-bandwidth connections while avoiding the public internet for your mission-critical workloads.

 **AWS Direct Connect gateway in your own AWS account** 

To establish a connection with the AWS account managed by SAP, create an AWS Direct Connect gateway that routes traffic from the private virtual interface (VIF) to the VPC's virtual private gateway. As the AWS Direct Connect gateway resides in your AWS account, you retain control over traffic routing.

![\[Direct Connect gateway in your own account\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-direct-connect-gateway.png)


When you require connectivity from multiple on-premises sites and/or use multiple AWS Regions for RISE with SAP (for example, for long-distance DR), you can simplify the connectivity using a Direct Connect gateway.

![\[Direct Connect gateway in your own account with Multi Region\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-direct-connect-gateway-multi-regions.png)


 **AWS Direct Connect gateway in AWS account managed by SAP** 

If you do not have a requirement to own and manage an AWS account, you can request that SAP provide the AWS Direct Connect gateway as part of the AWS account that is managed by SAP.

![\[Direct Connect gateway in your own account with Multi Region\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-direct-connect-gateway-sap-provided.png)


There are no additional charges for the AWS Direct Connect gateway itself. You can find out more in the [AWS Direct Connect FAQs](https://aws.amazon.com/directconnect/faqs/#Direct_Connect_Gateway).

# AWS Cloud WAN
<a name="rise-connection-cloud-wan"></a>

 [AWS Cloud WAN](https://aws.amazon.com/cloud-wan/) is a managed wide-area networking (WAN) service designed to simplify the process of building, managing, and monitoring unified global networks that connect cloud and on-premises resources. It enables organizations to centrally connect data centers, branch offices, remote sites, and Amazon Virtual Private Clouds (VPCs) across the AWS global backbone, using a centralized dashboard and policy-driven automation. For more information, see [AWS Cloud WAN documentation](https://docs.aws.amazon.com/network-manager/latest/cloudwan/what-is-cloudwan.html).

 **Connecting to RISE from on-premises using AWS Cloud WAN in your AWS account** 

To establish a connection with the RISE environment (AWS account managed by SAP), create and share AWS Cloud WAN via AWS Resource Access Manager (RAM) in your AWS account. Afterwards, SAP accepts the shared Cloud WAN and creates a VPC attachment to enable traffic flow through an entry in the route table. As AWS Cloud WAN resides in your AWS account, you retain control over traffic routing.

Here is a high-level step-by-step guide to create a Cloud WAN global network:

1. In AWS Network Manager, create a global network and associated core network.

1. Create a Core Network Policy (CNP) that defines segments, Autonomous System Number (ASN) range, AWS Regions and tags to be used to attach to segments.

1. Apply the network policy.

1. Share the core network using the resource access manager with SAP ECS that manages RISE with SAP Account.

1. Create and tag attachments.

1. Update routes in your attached VPCs to include the core network.
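The Core Network Policy in step 2 is a JSON document. The fragment below is a rough, hypothetical sketch of its shape, built as a Python dict: the segment name, ASN range, and Regions are placeholders, and the field names should be verified against the AWS Cloud WAN policy reference before use:

```python
import json

# Hypothetical minimal core network policy: one segment, two edge locations.
policy = {
    "version": "2021.12",
    "core-network-configuration": {
        "asn-ranges": ["64512-64555"],        # placeholder private ASN range
        "edge-locations": [
            {"location": "us-east-1"},        # placeholder Regions
            {"location": "eu-central-1"},
        ],
    },
    "segments": [
        {"name": "sap", "require-attachment-acceptance": True}
    ],
    "attachment-policies": [
        {   # map attachments tagged segment=sap to the 'sap' segment (step 5)
            "rule-number": 100,
            "conditions": [
                {"type": "tag-value", "key": "segment",
                 "operator": "equals", "value": "sap"}
            ],
            "action": {"association-method": "constant", "segment": "sap"},
        }
    ],
}

# The policy is applied as a JSON string (step 3).
policy_json = json.dumps(policy, indent=2)
print(policy_json.splitlines()[0])
```

Tag-based attachment policies like this are what make step 5 ("create and tag attachments") sufficient to place new VPCs into the right segment without further policy edits.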

You can find more details in the following documentation:
+  [Quick start: Create an AWS Cloud WAN global network and core network](https://docs.aws.amazon.com/network-manager/latest/cloudwan/cloudwan-getting-started.html) 
+  [Configure the core network settings in an AWS Cloud WAN policy version](https://docs.aws.amazon.com/network-manager/latest/cloudwan/cloudwan-core-network-config.html) 
+  [Building a Scalable and Secure Multi VPC AWS Network Infrastructure – Cloud WAN](https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/aws-cloud-wan.html) 

![\[Cloud WAN\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-cloudwan-01.jpg)


1.  **Attaching AWS Site-to-Site VPN (S2S VPN) to AWS Cloud WAN** – Create a Site-to-Site VPN connection with Target Gateway Type set to Not Associated. You can create an AWS S2S VPN attachment for AWS Cloud WAN under Site-to-Site VPN connections from the Amazon VPC console. Once the AWS S2S VPN is created, you can [attach it to AWS Cloud WAN core network](https://docs.aws.amazon.com/network-manager/latest/cloudwan/cloudwan-vpn-attachment-add.html). For more information, see [How Site-to-Site VPN connection can be created for AWS Cloud WAN](https://docs.aws.amazon.com/vpn/latest/s2svpn/create-cwan-vpn-attachment.html).

1.  **Attaching AWS Direct Connect gateway with AWS Cloud WAN** – Create a Direct Connect gateway with a transit virtual interface and attach the Cloud WAN core network to the Direct Connect gateway that exists in your AWS account. For more information, see [AWS Cloud WAN attachment to a Direct Connect gateway](https://aws.amazon.com/blogs/networking-and-content-delivery/simplify-global-hybrid-connectivity-with-aws-cloud-wan-and-aws-direct-connect-integration/). For detailed steps to create the transit virtual interface for the Direct Connect gateway, see [Create a transit virtual interface to the AWS Direct Connect gateway](https://docs.aws.amazon.com/directconnect/latest/UserGuide/create-transit-vif-dx.html).

You can estimate the costs of deploying AWS Cloud WAN from the [pricing documentation](https://aws.amazon.com/cloud-wan/pricing/). Below are pricing examples for you to consider.

 **Scenario A. AWS Cloud WAN connecting two VPCs in same Region** 

![\[Cloud WAN connecting two VPCs in same Region\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-cloudwan-02.jpg)



|  | 
| --- |
|   **Pricing example – AWS Cloud WAN connecting two VPCs in the same Region**   *[note: costs between AWS Regions vary. For more information see: [Amazon EC2 pricing Data Transfer](https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer)]*  100GB of data sent from a VPC in Region X in the AWS account – managed by SAP via the Cloud WAN that resides in the AWS account – managed by Customer, ending at a VPC managed by the Customer: 100GB × \$0.02 per-GB = \$2 (Cloud WAN data processing - billed to AWS account – managed by SAP). Apart from data processing, there is a VPC attachment cost to the AWS account – managed by SAP. [Cloud WAN pricing](https://aws.amazon.com/cloud-wan/pricing/) varies depending on the Region where the SAP VPC is attached to Cloud WAN. For example, if the SAP VPC is in Region US East (N. Virginia), you pay \$0.065 per hour for VPC attachments in that Region: \$0.065 × 730 hours = \$47.45 (monthly fixed cost billed to AWS account – managed by SAP). Hence the total cost = \$49.45. Data processing and VPC attachment costs are charged to the VPC owner who sends the traffic to AWS Cloud WAN. As the sending VPC resides in the AWS account – managed by SAP and the cost for data transfer is included in the RISE subscription, the AWS account – managed by Customer will not incur data transfer and attachment cost for this example. The AWS account – managed by Customer will only be billed for the Cloud WAN price per VPC attachment per hour for its own attachment. Data out of an AZ always goes via the Cloud WAN endpoint in that AZ to reach the other VPC, so there are no cross-AZ data transfer costs.  | 

 **Scenario B. AWS Cloud WAN connecting two VPCs in different Regions** 

![\[Cloud WAN connecting two VPCs in different Regions\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-cloudwan-03.jpg)



|  | 
| --- |
|   **Pricing example – AWS Cloud WAN connecting two VPCs in different Regions**   *[note: costs between AWS Regions vary. For more information see: [Amazon EC2 pricing Data Transfer](https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer)]*  100GB of data sent from a VPC in Region Y in the AWS account – managed by Customer via AWS Cloud WAN to the AWS account – managed by SAP in a different Region X: 100GB × \$0.02 per-GB = \$2 (Cloud WAN data processing) + 100GB × (\$0.01–\$0.138 per-GB) = \$1–\$13.8 (Region out) = \$3–\$15.8 (Total - billed to AWS account – managed by Customer). Data processing is charged to the VPC owner who sends the traffic to Cloud WAN. As the sending VPC resides in the AWS account – managed by Customer, all data transfer costs for this example are billed to the AWS account – managed by Customer. In addition, the AWS account – managed by Customer will be billed for the price per VPC attachment per hour in Region Y. VPC attachment charges in Region X are charged to the AWS account – managed by SAP and are included in the RISE subscription.  | 
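The two scenarios share the same structure: data processing, an optional inter-Region "out" component, and a fixed attachment-hour charge. A sketch using the examples' rates (rates vary by Region; see the Cloud WAN pricing page):

```python
def cloud_wan_monthly_cost(gb: float,
                           processing_rate: float = 0.02,
                           region_out_rate: float = 0.0,
                           attachment_rate: float = 0.065,
                           hours: int = 730) -> float:
    """Monthly USD cost for one sending VPC attachment on Cloud WAN:
    data processing (+ inter-Region out if applicable) + attachment-hours."""
    return round(gb * (processing_rate + region_out_rate)
                 + hours * attachment_rate, 2)

# Scenario A (same Region): 100 GB processed plus a month of attachment-hours.
print(cloud_wan_monthly_cost(100))  # 49.45 -- $2 processing + $47.45 attachment
```

For Scenario B, set `region_out_rate` to the applicable Region pair's rate; which account each component is billed to follows the VPC-owner rules stated in the examples.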

# Connecting to RISE using your single AWS account
<a name="rise-connection-accounts"></a>

You can establish connectivity between on-premises and RISE with SAP VPC using your AWS account. This method provides you with more control but also requires managing AWS services in your AWS account. You can use any one of the following options.
+  AWS Transit Gateway – Share the AWS Transit Gateway resource in your AWS account with the AWS account managed by SAP.
+  AWS VPN with AWS Transit Gateway – Create an IPsec VPN connection between your remote network and transit gateway over the internet. For more information, see [How AWS Site-to-Site VPN works](https://docs.aws.amazon.com/vpn/latest/s2svpn/how_it_works.html) and [Transit gateway VPN attachments](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-vpn-attachments.html).
+ Direct Connect gateway – Create a Direct Connect gateway with a transit virtual interface. For more information, see [Transit gateway attachments to a Direct Connect gateway](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-dcg-attachments.html).

  To strengthen the security, see [How do I establish an AWS VPN over an AWS Direct Connect connection?](https://repost.aws/knowledge-center/create-vpn-direct-connect) 

The following image shows this option within the same AWS Regions.

![\[Example connections in a single Region\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-own.jpg)


The following image shows this option across different AWS Regions.

![\[Example connections across Regions\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-own-regions.jpg)


When you choose AWS Site-to-Site VPN and/or AWS Direct Connect to establish connectivity between on-premises and RISE with SAP VPC using a Transit Gateway in the AWS account - managed by the Customer, either in the same AWS Region or a different AWS Region than the RISE with SAP VPC, the following applies.

 **Hourly cost:** 

As the AWS Site-to-Site VPN resides in the AWS account – managed by Customer and is attached to the Transit Gateway that also resides in the AWS account – managed by Customer, the cost for the VPN connection and the cost for the Transit Gateway attachment are billed to the AWS account – managed by Customer.

As the Direct Connect connection and the Direct Connect gateway reside in the AWS account – managed by Customer and are attached to the Transit Gateway that resides in the AWS account – managed by Customer, the cost for the AWS Direct Connect port hours and the cost for the Transit Gateway attachment are billed to the AWS account – managed by Customer.

For peering attachments, each Transit Gateway owner is billed hourly for the peering attachment with the other Transit Gateway.

 **Data processing charges:** 

Data processing charges apply for each gigabyte sent from a VPC, Direct Connect or VPN to/via the Transit Gateway.

Depending on the source and destination, the data processing charges vary and are either billed to the AWS account – managed by Customer or already included in the RISE subscription (see the pricing examples below).

For more information see:
+  [AWS Site-to-Site VPN Pricing](https://aws.amazon.com/vpn/pricing/) 
+  [AWS Direct Connect Pricing](https://aws.amazon.com/directconnect/pricing/) 
+  [Transit Gateway pricing](https://aws.amazon.com/transit-gateway/pricing/) 


|  | 
| --- |
|   **Pricing example – Transit Gateway in VPCs in the same Region via VPN or Direct Connect**   *[note: costs between AWS Regions vary. For more information see: [Amazon EC2 pricing Data Transfer](https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer)]*  ![\[Transit Gateway in VPCs in the same region via VPN or Direct Connect\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-transit-same-regions-via-vpndxc-pricing.png) 1). 200GB of data sent from a VPC in the AWS account – managed by SAP via the Transit Gateway that resides in the AWS account – managed by Customer via a VPN or Direct Connect in the AWS account – managed by SAP towards on-premises: 200GB × \$0.02 per-GB = \$4 (Transit Gateway data processing) + 100GB × \$0.09 per-GB = \$9 (VPN data transfer out; the first 100 GB are free, then \$0.09 per-GB) = \$13 (Total data transfer out billed to AWS account – managed by SAP), or 200GB × \$0.02 per-GB = \$4 (Transit Gateway data processing) + 200GB × (\$0.02–\$0.19 per-GB) = \$4–\$38 (Direct Connect data transfer out) = \$8–\$42 (Total data transfer out billed to AWS account – managed by SAP). Data processing is charged to the VPC owner who sends the traffic to the Transit Gateway. As the sending VPC resides in the AWS account – managed by SAP and the cost for data transfer is included in the RISE subscription, the AWS account – managed by Customer will not incur data transfer cost in this example. 2). 200GB of data sent from on-premises via a VPN or Direct Connect in the AWS account – managed by Customer via the Transit Gateway that resides in the AWS account – managed by Customer towards a VPC in the AWS account – managed by SAP: 200GB × \$0.00 per-GB = \$0 (VPN data transfer in) + 200GB × \$0.02 per-GB = \$4 (Transit Gateway data processing) = \$4 (Total data transfer in billed to AWS account – managed by Customer), or 200GB × \$0.00 per-GB = \$0 (Direct Connect data transfer in) + 200GB × \$0.02 per-GB = \$4 (Transit Gateway data processing) = \$4 (Total data transfer in billed to AWS account – managed by Customer). Data transfer into AWS is free, and this also applies to VPN and Direct Connect; therefore the only charge is the Transit Gateway data processing. As the Transit Gateway resides in the AWS account – managed by Customer, this cost is billed to the AWS account – managed by Customer.  | 
|   **Pricing example – Transit Gateway in VPCs in different Regions via VPN or Direct Connect**   *[note: costs between AWS Regions vary. For more information see: [Amazon EC2 pricing Data Transfer](https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer)]*  ![\[Transit Gateway in VPCs in the different regions via VPN or Direct Connect\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-transit-different-regions-via-vpndxc-pricing.png) 1). 200GB of data sent from a VPC in the AWS account – managed by SAP via the Transit Gateway that resides in the AWS account – managed by SAP, peered with a Transit Gateway in a different Region in the AWS account – managed by Customer, via a VPN or Direct Connect in the AWS account – managed by Customer towards on-premises: 200GB × \$0.02 per-GB = \$4 (Transit Gateway data processing) + 200GB × (\$0.01–\$0.138 per-GB) = \$2–\$27.6 (Region out) + 100GB × \$0.09 per-GB = \$9 (VPN data transfer out; the first 100 GB are free, then \$0.09 per-GB) = \$15–\$40.6 (Total data transfer out billed to AWS account – managed by SAP), or 200GB × \$0.02 per-GB = \$4 (Transit Gateway data processing) + 200GB × (\$0.01–\$0.138 per-GB) = \$2–\$27.6 (Region out) + 200GB × (\$0.02–\$0.19 per-GB) = \$4–\$38 (Direct Connect data transfer out) = \$10–\$69.6 (Total data transfer out billed to AWS account – managed by SAP). Data processing is charged to the VPC owner who sends the traffic to the Transit Gateway. As the sending VPC resides in the AWS account – managed by SAP and the cost for data transfer is included in the RISE subscription, the AWS account – managed by Customer will not incur data transfer cost in this example. 2). 200GB of data sent from on-premises via a VPN or Direct Connect in the AWS account – managed by Customer via the Transit Gateway that resides in the AWS account – managed by Customer, via a peered Transit Gateway in a different Region in the AWS account – managed by SAP, towards a VPC in the AWS account – managed by SAP: 200GB × \$0.02 per-GB = \$4 (Transit Gateway data processing) + 200GB × \$0.00 per-GB = \$0 (VPN data transfer in) + 200GB × (\$0.01–\$0.138 per-GB) = \$2–\$27.6 (Region out) = \$6–\$31.6 (Total data transfer in billed to AWS account – managed by Customer), or 200GB × \$0.02 per-GB = \$4 (Transit Gateway data processing) + 200GB × \$0.00 per-GB = \$0 (Direct Connect data transfer in) + 200GB × (\$0.01–\$0.138 per-GB) = \$2–\$27.6 (Region out) = \$6–\$31.6 (Total data transfer in billed to AWS account – managed by Customer). Data transfer into AWS is free, and this also applies to VPN and Direct Connect; therefore the charges are the Transit Gateway data processing and the inter-Region data transfer. As the Transit Gateway resides in the AWS account – managed by Customer, the cost for data transfer is billed to the AWS account – managed by Customer.  | 
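Two details in the examples above are worth encoding: VPN data transfer out has a free first 100 GB per month, and data transfer into AWS is free, leaving only Transit Gateway processing on the inbound path. A sketch with the examples' rates:

```python
def vpn_data_out_cost(gb: float, rate: float = 0.09,
                      free_gb: float = 100.0) -> float:
    """VPN data transfer out: the first free_gb GB per month are free,
    then rate per GB applies."""
    return round(max(0.0, gb - free_gb) * rate, 2)

def inbound_via_tgw_cost(gb: float, tgw_rate: float = 0.02) -> float:
    """Inbound path (example 2): transfer into AWS is free, so only the
    Transit Gateway data-processing charge applies."""
    return round(gb * tgw_rate, 2)

print(vpn_data_out_cost(200))     # 9.0 -- matches the $9 VPN-out term above
print(inbound_via_tgw_cost(200))  # 4.0 -- matches the $4 total in example 2
```

The free-tier kink in the VPN rate is why the outbound VPN and Direct Connect variants diverge only above 100 GB per month.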

# Connecting to RISE using a shared AWS Landing Zone
<a name="rise-landing-zone"></a>

Modern SAP landscapes have several connectivity requirements. Services are accessed across on-premises and AWS Cloud as well as across a variety of SaaS solutions and other cloud service providers.

Creating an [AWS Landing Zone](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-aws-environment/understanding-landing-zones.html) provides a secure, scalable, and well-architected foundation for RISE with SAP connectivity. It offers the following benefits:
+ Streamlined SAP network integration with standardized architecture
+ Enhanced business continuity through redundant connectivity options
+ Strengthened security posture with layered network controls
+ Centralized management of network resources and policies
+ Ability to reuse [AWS Direct Connect](https://aws.amazon.com/directconnect/) connections across broader AWS solutions
+ Optimized network performance with reduced latency
+ Enhanced governance through AWS native services

A Landing Zone is designed to help organizations achieve their cloud initiatives by automating the setup of an AWS environment that follows the [AWS Well-Architected](https://aws.amazon.com/architecture/well-architected/) Framework. It provides scalability to cater to all scenarios, from the simplest connectivity, where only RISE with SAP connectivity to on-premises environments is required, to complex requirements with connectivity to multiple SaaS solutions, multiple CSPs, and on-premises connectivity.

The key components and benefits of a Landing Zone include:
+  **Multi-account structure** – it sets up an organized hierarchy using [AWS Organizations](https://aws.amazon.com/organizations/) with separate accounts for production, development, and shared services, ensuring clear separation of concerns and improved security boundaries.
+  **Network Architecture** - it establishes a centralized [AWS Transit Gateway](https://aws.amazon.com/transit-gateway/) as the network hub with standardized VPC configurations which connects the RISE with SAP account with other AWS accounts. It also supports integration with AWS Direct Connect and [AWS Site-to-Site VPN](https://aws.amazon.com/vpn/site-to-site-vpn/) to connect your on-premises with RISE with SAP account while maintaining network segmentation and security controls.
+  **Security Framework** - it implements comprehensive AWS security services integration with centralized logging and monitoring, including network firewall implementation and identity and access management controls.
+  **Automation and Management** - it uses Infrastructure as Code deployment through [AWS Control Tower](https://aws.amazon.com/controltower/) or [AWS CDK](https://aws.amazon.com/cdk/) and [Landing Zone Accelerator (LZA)](https://aws.amazon.com/solutions/implementations/landing-zone-accelerator-on-aws/) for automated account provisioning, standardized configurations, and consistent policy enforcement across the environment.
+  **Logging and Monitoring** - it configures AWS services including [AWS Config](https://aws.amazon.com/config/), [AWS CloudTrail](https://aws.amazon.com/cloudtrail/), [Amazon GuardDuty](https://aws.amazon.com/guardduty/) for centralized logging, monitoring, and auditing of resource changes and security events.
+  **Security Controls** - it implements AWS security best practices through Config Rules, CloudTrail trails, and Security Hub standards while enabling network firewall capabilities.
+  **Customization Options** - it allows for customization based on specific organizational requirements, including integration with existing infrastructure and addition of AWS services through the Landing Zone Accelerator configuration.

We recommend using an AWS Landing Zone for RISE with SAP connectivity.

 **Choosing Your Implementation Approach** 

 AWS offers two solutions for implementing a Landing Zone for RISE with SAP connectivity, each designed to meet different organizational needs.

 [AWS Control Tower](https://aws.amazon.com/controltower/) provides a streamlined solution through its console-based interface, enabling quick deployment with standardized controls. This approach suits organizations seeking rapid implementation with built-in governance and compliance controls, particularly those starting their cloud journey or requiring straightforward SAP connectivity.

 [Landing Zone Accelerator (LZA)](https://aws.amazon.com/solutions/implementations/landing-zone-accelerator-on-aws/) extends AWS Control Tower’s capabilities through Infrastructure as Code, offering extensive customization and automation. This solution serves enterprises with complex SAP networking requirements, multiple regions, or significant scaling plans. Organizations with established DevOps practices will benefit from LZA’s configuration-driven approach.

Both solutions deliver secure, scalable foundations for RISE with SAP connectivity. Choose Control Tower for rapid deployment and visual management, or LZA for enhanced customization and automation capabilities.

![\[Connecting to RISE with a shared landing zone\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-rise-landing-zone.png)


 **Building an AWS Landing Zone** 

You can implement AWS Landing Zones using AWS Control Tower and the Landing Zone Accelerator, which provides an automated process for building a secure, scalable, multi-account environment, including management and governance services.

For detailed implementation steps for LZA, AWS provides the [Guidance for Building an Enterprise-Ready Network Foundation for RISE with SAP on AWS](https://aws.amazon.com/solutions/guidance/building-an-enterprise-ready-network-foundation-for-rise-with-sap-on-aws/). It includes validated architecture patterns, security configurations, and operational procedures specifically designed for RISE with SAP deployments. In a simple scenario, a Landing Zone contains a minimal footprint focused on network connectivity, typically centred around AWS Transit Gateway. For more information, see [AWS Landing zone](https://docs.aws.amazon.com/prescriptive-guidance/latest/strategy-migration/aws-landing-zone.html).

The following is a general overview of the process:

1.  **Define requirements** – understand your organization’s security, compliance, and operational requirements. This helps determine the appropriate guardrails, controls, and services to include in the Landing Zone. Review the AWS Connectivity Questionnaire provided by the SAP Enterprise Cloud Services (ECS) team.

1.  **Design architecture** – plan the overall architecture, including the number of accounts (management, shared services, workload accounts), network design (VPCs, subnets, routing), shared services (logging, monitoring, identity management), and security controls (IAM, service control policies, guardrails). For LZA implementations, include planning for [configuration file structure](https://docs.aws.amazon.com/solutions/latest/landing-zone-accelerator-on-aws/using-configuration-files.html) and customization needs.

1.  **Set up AWS Control Tower** – Control Tower helps in setting up and governing a multi-account AWS environment based on best practices. It allows you to create and provision new AWS accounts and deploy baseline security configurations across those accounts. For LZA implementations, this serves as the foundation for additional customization.

1.  **Deploy Landing Zone Accelerator (Optional)** - If implementing LZA, deploy the installer stack using either AWS CDK or [AWS CloudFormation](https://aws.amazon.com/cloudformation/). Implement standardized configuration files for networking, security, and RISE with SAP connectivity requirements.

1.  **Configure AWS Organizations** - Organizations enables you to centrally manage and govern your AWS accounts. Configure Organizations in Control Tower by creating the necessary organizational units (OUs) and service control policies (SCPs). For LZA implementations, ensure OUs align with [configuration file structure](https://docs.aws.amazon.com/solutions/latest/landing-zone-accelerator-on-aws/using-configuration-files.html).

1.  **Deploy Core and Shared Services Accounts** - create and configure the core accounts, such as the management account, shared services accounts (for logging, security tooling), and any other required shared accounts. Deploy shared services, such as CloudTrail, Config, and [AWS Security Hub](https://aws.amazon.com/security-hub/) in the shared services account.

1.  **Deploy Network Architecture** - set up the network architecture, including VPCs, subnets, route tables, and Transit Gateway for hub-spoke model. For LZA implementations, configure Direct Connect and/or Site-to-Site VPN through [network configuration files](https://docs.aws.amazon.com/solutions/latest/landing-zone-accelerator-on-aws/using-configuration-files.html). Include [AWS Network Firewall](https://aws.amazon.com/network-firewall/) setup if required.

1.  **Configure IAM** - establish IAM roles, policies, and groups for controlling access and permissions across the Landing Zone accounts.

1.  **Implement Security Controls** - deploy security services and guardrails, such as Security Hub, [AWS Network Firewall](https://aws.amazon.com/network-firewall/), [Amazon GuardDuty](https://aws.amazon.com/guardduty/), and [AWS Config](https://aws.amazon.com/config/) Rules.

1.  **Configure Observability and Monitoring** - set up centralized logging and monitoring solutions, such as [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/), [AWS CloudTrail](https://aws.amazon.com/cloudtrail/), and AWS Config.

1.  **Share Transit Gateway Details with SAP** - share your Transit Gateway details with SAP using the AWS Connectivity Questionnaire. Accept incoming Transit Gateway association requests and configure routing between the RISE with SAP VPC and your landing zone. Test connectivity and failover scenarios.

1.  **Deploy Workload Accounts** - deploy workload accounts with your Landing Zone. Create separate AWS accounts for different workload types, such as development, test, and production environments; generative AI workloads using Amazon Bedrock; or data analytics workloads using Amazon SageMaker.

1.  **Implement Operational Procedures** - establish monitoring, alerting, and backup procedures. Document operational procedures and implement change management processes. Given the complexity of multi-account environments and the need to maintain consistent security and operational standards across the organization, set up automated testing and validation.

1.  **Automate and Maintain** - use CloudFormation templates or AWS CDK to automate deployment and maintenance. For LZA implementations, maintain configuration files and keep the LZA version up to date with the latest releases. Establish processes for ongoing maintenance, updates, and regular checks to ensure compliance with security and compliance standards.

1.  **Manage Costs** - monitor network transfer costs, optimize connectivity paths, and implement cost allocation tags. Regularly review resource utilization and configure budgets and alerts.
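
When planning the network design in these steps, the VPC CIDR ranges for your landing zone accounts and the RISE with SAP VPC (whose CIDR is assigned during SAP onboarding) must not overlap. The following sketch uses Python's standard `ipaddress` module to validate a planned allocation; all CIDR values are hypothetical examples, not values from this guide:

```python
import ipaddress
from itertools import combinations

def find_cidr_overlaps(cidrs):
    """Return pairs of named CIDR blocks that overlap."""
    networks = {name: ipaddress.ip_network(cidr) for name, cidr in cidrs.items()}
    return [
        (a, b)
        for (a, net_a), (b, net_b) in combinations(networks.items(), 2)
        if net_a.overlaps(net_b)
    ]

# Hypothetical allocation: the RISE VPC CIDR comes from SAP; the rest are yours.
planned = {
    "rise-sap-vpc": "10.10.0.0/16",
    "shared-services-vpc": "10.20.0.0/16",
    "workload-dev-vpc": "10.30.0.0/16",
    "workload-prod-vpc": "10.20.128.0/17",  # overlaps shared-services-vpc
}

for a, b in find_cidr_overlaps(planned):
    print(f"Overlap detected: {a} and {b}")
```

Running such a check before submitting the AWS Connectivity Questionnaire helps avoid re-addressing work after the RISE with SAP VPC is provisioned.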

Best Practices:
+ Start implementation at least 6-8 weeks before planned go-live
+ Implement redundant connectivity options for high availability
+ Use Landing Zone Accelerator for standardized deployment
+ Follow [AWS Well-Architected framework](https://aws.amazon.com/architecture/well-architected/) guidelines
+ Regularly review and update security controls
+ Maintain documentation and operational procedures
+ Use [configuration files](https://docs.aws.amazon.com/solutions/latest/landing-zone-accelerator-on-aws/using-configuration-files.html) to automate most of this setup in LZA implementations

Costs associated with a customer-managed AWS Landing Zone vary depending on the AWS services that are used. Each of the AWS services described in this section has its own pricing model. For more information, see the dedicated pricing pages of the listed AWS services. Use the [AWS Pricing Calculator](https://calculator.aws/#/) to configure a cost estimate that fits your business needs.

Regularly review and update the landing zone configuration to ensure it continues to meet evolving business needs and security requirements.

# Connect to nearest Direct Connect POP (including Local Zone)
<a name="rise-local-zone"></a>

 An AWS Direct Connect point of presence (POP) is a physical cross-connect location that allows you to establish a network connection from your own premises to an AWS Region or AWS Local Zone. You can use the nearest Direct Connect POP (for example, in an AWS Local Zone) to benefit from lower setup and running costs, with the same or lower network latency to your RISE with SAP VPC that runs in the parent AWS Region. For more information, see [AWS Direct Connect Traffic Flow with AWS Local Zone](https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/direct-connect-traffic-flow-local-zone-ra.pdf).

Here is an example scenario: you are based in the Philippines, and you would like to deploy RISE with SAP in the AWS Singapore Region. You can use the Direct Connect POP in Manila to set up Direct Connect from your on-premises data centre or offices. This strategy provides lower network latency compared to connecting directly to the AWS Region in Singapore.

The following diagram displays RISE connectivity through nearest AWS Direct Connect POP.

![\[Connect to RISE through the nearest Direct Connect POP (including Local Zone)\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-rise-direct-connect.png)


The following are some considerations when using AWS Direct Connect POP:
+ Use separate VPCs for Region-based (RISE with SAP VPC) and Local Zone-based non-SAP workloads
+ Use Direct Connect Gateway in AWS Direct Connect POP and Private VIF connectivity
+ Use Direct Connect Gateway in AWS Direct Connect POP and Transit VIF connectivity for Region VPCs (RISE with SAP VPC). This is because Direct Connect Gateway does not exist in the AWS Direct Connect POP, and AWS Transit Gateway exists only in AWS Regions.

If resilience is critical, set up a secondary Direct Connect connection to the AWS Region running the RISE with SAP VPC, or use AWS Site-to-Site VPN to that Region as a failover connectivity option. These services operate within the parent AWS Region, ensuring uninterrupted connectivity in the event of disruptions or failures.

![\[Example connections across Regions\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-rise-direct-connect-2.png)


The cost of data transferred between a Local Zone and an Availability Zone within the same AWS Region ("in" to and "out" from Amazon EC2 in the Local Zone) varies. For more information, see [EC2 - On-Demand Pricing - Data Transfer within the same AWS Region](https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer_within_the_same_AWS_Region).

# Decision tree on connectivity to RISE
<a name="rise-decision-tree"></a>

You must establish required connectivity to proceed with RISE with SAP on AWS. The following are a few connectivity patterns described in the preceding sections:
+ direct to RISE VPC, supported with Site-to-Site VPN
+ direct to RISE VPC, supported with Direct Connect
+ connectivity through your AWS account via VPC Peering
+ connectivity through Transit Gateway, supporting multi-account deployments
+ connectivity through SAP-managed Transit Gateway supporting multi-account deployments

You must also consider whether you want to connect:
+ directly to an AWS Region where the RISE with SAP VPC is going to be deployed
+ or through AWS Local Zone (nearest AWS Direct Connect POP) to benefit from lower setup and running costs, with the same or lower network latency to connect to your RISE with SAP VPC

The decision tree displayed in the following diagram helps you decide which connectivity is suitable based on your requirements, such as future plan of additional AWS or RISE accounts, dedicated line (security, performance), and bandwidth needs.

![\[Example connections across Regions\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-decision-tree.png)


Note:

1. ECMP requires AWS Transit Gateway for Site-to-Site VPN.

1. Direct Connect Gateway is recommended to connect to multiple AWS Regions. This simplifies the connectivity setup and avoids Transit Gateway peering between AWS Regions.
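
The decision logic can be sketched roughly as follows. This is an illustrative simplification, not an exact reproduction of the diagram; the input flags and the mapping to connectivity patterns are assumptions for illustration:

```python
def suggest_rise_connectivity(multi_account_planned: bool,
                              dedicated_line_required: bool,
                              high_bandwidth: bool) -> str:
    """Rough sketch of the connectivity decision; thresholds are illustrative."""
    if multi_account_planned:
        # Transit Gateway scales to additional AWS or RISE accounts.
        base = "Transit Gateway"
    else:
        base = "direct to RISE VPC"
    if dedicated_line_required or high_bandwidth:
        # A dedicated line (security, performance) or high bandwidth
        # favours Direct Connect over an internet-based VPN tunnel.
        link = "AWS Direct Connect"
    else:
        link = "AWS Site-to-Site VPN"
    return f"{base} via {link}"

print(suggest_rise_connectivity(True, True, True))
# Transit Gateway via AWS Direct Connect
```

Treat the output as a starting point for discussion with your network team and SAP ECS, not as a definitive recommendation.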

# Other considerations
<a name="other-considerations"></a>

This section provides information about other considerations when connecting to RISE.

**Topics**
+ [SAP BTP with RISE on AWS](rise-btp.md)
+ [Connecting to SaaS from RISE](rise-saas.md)
+ [Connectivity patterns for multi-cloud](rise-multi-cloud.md)
+ [Implement chargeback for connectivity to RISE](rise-chargeback.md)
+ [Connectivity to Overlay IP in RISE on AWS](rise-oip.md)
+ [Integrating DNS to RISE and Route 53](rise-dns.md)

# SAP BTP with RISE on AWS
<a name="rise-btp"></a>

You can use SAP Business Technology Platform (BTP) services on AWS to extend the functionality of RISE with SAP. SAP recommends SAP Cloud Connector to connect the RISE with SAP VPC with SAP BTP over the internet. When both RISE with SAP and SAP BTP run on AWS (in the same or different AWS Regions), the network traffic is encrypted and contained within the AWS global network, without traversing the public internet (see the following diagram). This provides better security and performance for any integration use cases between RISE with SAP and SAP BTP. For more information, see [Amazon VPC FAQs - Does traffic go over the internet when two instances communicate using public IP addresses or when instances communicate with a public AWS service endpoint?](https://aws.amazon.com/vpc/faqs/).

![\[Example connections across Regions\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-btp-internet.png)


As displayed in the preceding diagram, you can configure Transit Gateway to handle both RISE and BTP network traffic. For more information, see [How to route internet traffic from on-premises via Amazon VPC?](https://guide.aws.dev/articles/ARUIFmbCauTQeyJogByCa5xg/how-to-route-internet-traffic-from-on-premise-via-aws-vpc) 

SAP also offers the SAP Private Link Service for SAP BTP on AWS. SAP Private Link connects SAP BTP on AWS over a secure connection without using public IP addresses in your AWS account.

![\[Connecting multiple accounts using PrivateLink\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-btp-services.jpg)


You can connect to an AWS endpoint service from an SAP BTP application running on Cloud Foundry. By establishing this connection, you can directly connect to AWS services, or for example, to an S/4HANA system. For a complete list of supported AWS services, see [Consume Amazon Web Services in SAP BTP](https://help.sap.com/docs/private-link/private-link1/consume-amazon-web-services-in-sap-btp-beta).

You can establish secure and private communication between SAP BTP and AWS services with [SAP Private Link Service](https://help.sap.com/docs/private-link/private-link1/what-is-sap-private-link-service). By using private IP address ranges (RFC 1918), you reduce the attack surface of the application. The connection does not require an internet gateway. If you do not require this extra layer of security, you can still connect via the public APIs of SAP BTP without SAP Private Link, and benefit from the AWS global network. For more information, see [Amazon VPC FAQs](https://aws.amazon.com/vpc/faqs/).

SAP Private Link for AWS currently supports connections initiated from SAP BTP Cloud Foundry to AWS.

For AWS services across AWS Regions, you can create a VPC in the same AWS Region as your SAP BTP Cloud Foundry Runtime, and connect these VPCs via VPC peering or AWS Transit Gateway. For a list of supported Regions, see [Regions and API Endpoints Available for the Cloud Foundry Environment](https://help.sap.com/docs/btp/sap-business-technology-platform/regions-and-api-endpoints-available-for-cloud-foundry-environment).

![\[Connecting multiple accounts in multiple Regions using PrivateLink\]](http://docs.aws.amazon.com/sap/latest/general/images/connectivity-btp-regions.jpg)


SAP Private Link Service is a paid service offered by SAP on SAP BTP. For more information, see [SAP Discovery Center – Services – SAP Private Link Service](https://discovery-center.cloud.sap/serviceCatalog/private-link-service).

Costs associated with AWS services in the customer-managed AWS account that facilitate cross-Region connectivity, such as Network Load Balancer or AWS Transit Gateway, vary. For more information on pricing, see the dedicated pricing pages of the listed AWS services.

# Connecting to SaaS from RISE
<a name="rise-saas"></a>

When modernizing the SAP landscape, you may subscribe to several SAP cloud solutions or SaaS offerings from independent software vendors to complement the RISE with SAP solution.

When the cloud solutions are running on AWS (in the same or different AWS Regions), the connectivity from RISE with SAP is kept within the AWS global network without requiring internet connectivity. Connectivity is routed through the Squid proxy server provided within the RISE with SAP VPC. For more information, see [Amazon VPC FAQs - Does traffic go over the internet when two instances communicate using public IP addresses or when instances communicate with a public AWS service endpoint?](https://aws.amazon.com/vpc/faqs/).

![\[Connecting to cloud solutions or SaaS from RISE\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-saas1.png)


If your cloud solution runs in another data centre or with another cloud service provider, you need internet connectivity.

![\[Connecting to cloud solutions or SaaS from RISE\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-saas2.png)


SaaS cloud solutions typically do not offer connectivity via VPN, Direct Connect, or other means of private connectivity. You can implement a centralized internet egress architecture to manage this connectivity. For more information, see [Centralized egress to internet](https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/centralized-egress-to-internet.html).

# Connectivity patterns for multi-cloud
<a name="rise-multi-cloud"></a>

In a complex connectivity scenario, you may need to integrate RISE with SAP setup with on-premises, AWS-hosted systems, and a variety of SaaS solutions and other cloud service providers.

Managing connectivity directly from the AWS environment decouples dependencies with on-premises networking infrastructure, improving availability and resiliency of the overall landscape.

You can use public or private connectivity to connect multi-cloud with RISE.

![\[Connectivity patterns for multi-cloud to RISE\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-multi1.png)


 **Public connectivity** 

Connectivity is routed over the public internet. This pattern is typically used for connectivity from RISE with SAP to SaaS solutions that run across multiple clouds. When building connectivity routed over the public internet, consider the following:
+ ensure that all communication is encrypted
+ protect endpoints by using AWS services, such as Elastic Load Balancing and AWS Shield
+ monitor endpoints using Amazon CloudWatch
+ ensure that traffic between two public IP addresses hosted on AWS is routed over the AWS network

 **Private connectivity** 

There are three options to establish private connectivity between different cloud service providers:
+ Site-to-site VPN encrypted tunnel routed over public internet
+ private interconnect using AWS Direct Connect in a managed infrastructure (use Azure ExpressRoute for Azure and Google Dedicated Interconnect for Google Cloud Platform)
+ private interconnect using an AWS Direct Connect in a facility with a multi-cloud connectivity provider

The following diagram describes the factors to choose a multi-cloud connectivity method.

![\[Connectivity patterns for multi-cloud to RISE\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-multi2.png)


For more information, see [Designing private network connectivity between AWS and Microsoft Azure](https://aws.amazon.com/blogs/modernizing-with-aws/designing-private-network-connectivity-aws-azure/).

# Implement chargeback for connectivity to RISE
<a name="rise-chargeback"></a>

If you are a company with subsidiaries, you may have different RISE contracts, leading to deployments in separate AWS accounts that still require interlinked network connectivity. In this case, deploy a Transit Gateway connection in a Landing Zone (multi-account) setup. It can scale your RISE deployment and integrate with multiple RISE with SAP VPCs.

Transit Gateway Flow Logs enable effective cost management. They can be integrated with the Cost and Usage Report (CUR) so that data processing costs can be attributed as chargeback to the business units. For more information, see [Logging network traffic using Transit Gateway Flow Logs](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-flow-logs.html).

![\[How to implement chargeback capability for connectivity to RISE\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-chargeback.png)


The preceding diagram displays how Transit Gateway can be used to connect multiple RISE with SAP VPCs and provide chargeback capability through the Flow Logs.

For more information, see the following blogs:
+  [Using AWS Transit Gateway Flow Logs to chargeback data processing costs in a multi-account environment](https://aws.amazon.com/blogs/networking-and-content-delivery/using-aws-transit-gateway-flow-logs-to-chargeback-data-processing-costs-in-a-multi-account-environment/) 
+  [How-to chargeback shared services: An AWS Transit Gateway example](https://aws.amazon.com/blogs/aws-cloud-financial-management/gs-chargeback-shared-services-an-aws-transit-gateway-example/) 

Use the following steps to enable this setup:

1. Enable Transit Gateway Flow Logs. For more information, see [Create a flow log that publishes to Amazon S3](https://docs.aws.amazon.com/vpc/latest/tgw/flow-logs-s3.html#flow-logs-s3-create-flow-log).

1. Set up Cost and Usage Reports and set up Athena to query the reports. For more information, see [Creating Cost and Usage Reports](https://docs.aws.amazon.com/cur/latest/userguide/cur-create.html) and [Querying Cost and Usage Reports using Amazon Athena](https://docs.aws.amazon.com/cur/latest/userguide/cur-query-athena.html).

1. Obtain the Transit Gateway data processing charge per account.

   1. Decide on a cost allocation strategy – distribute costs evenly across all accounts or proportionally by usage.

   1. Calculate the total network traffic and percentage allocation per account using [AWS Transit Gateway](https://catalog.workshops.aws/cur-query-library/en-US/queries/networking-and-content-delivery#aws-transit-gateway) query.

   1. Estimate the cost per account using Amazon CloudWatch, which collects NetworkIn (upload) and NetworkOut (download) metrics.

      1. (NetworkIn + NetworkOut) per usage account / total data processed in the network account = % of usage

      1. % of usage × total cost = chargeback cost per usage account
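
The proportional allocation in the final two sub-steps amounts to simple arithmetic. A minimal sketch, with made-up traffic figures and a hypothetical total Transit Gateway cost for illustration:

```python
def chargeback_per_account(traffic_per_account: dict, total_tgw_cost: float) -> dict:
    """Distribute the Transit Gateway data processing cost proportionally
    to each usage account's share of total traffic (NetworkIn + NetworkOut)."""
    total_traffic = sum(traffic_per_account.values())
    return {
        account: round(total_tgw_cost * traffic / total_traffic, 2)
        for account, traffic in traffic_per_account.items()
    }

# Hypothetical monthly traffic (GB processed) per usage account.
usage = {"prod": 600, "dev": 300, "analytics": 100}
print(chargeback_per_account(usage, total_tgw_cost=200.0))
# {'prod': 120.0, 'dev': 60.0, 'analytics': 20.0}
```

In practice, the per-account traffic figures would come from the Transit Gateway Flow Logs query against CUR data described above, and the total cost from your AWS bill.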

# Connectivity to Overlay IP in RISE on AWS
<a name="rise-oip"></a>

An Overlay IP is a private IP address assigned to an EC2 instance that is outside the VPC’s CIDR block. It’s used for [high availability and failover scenarios in SAP deployments on AWS](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-oip-sap-on-aws-high-availability-setup.html), allowing traffic to be directed to the active instance even if it is in a different Availability Zone. This IP address is routable and managed through routing tables, enabling seamless failover without changing the application’s configuration.

The Overlay IP is important in the RISE construct for the following scenarios:
+ SAP GUI connectivity to the SAP Message Server, which is part of the ASCS instance
+ Application Server connectivity to the SAP Enqueue Server, which is part of the ERS instance
+ Client connectivity to the HANA database when it runs XS and XS Advanced applications

The Overlay IP is moved by the HA cluster software from the primary node to the secondary node (or vice versa) when there is an availability issue with the primary node or primary Availability Zone. All client connectivity must be rerouted when this event occurs so users can continue their business activities.

There are two ways to connect to these Overlay IP addresses: through a [Network Load Balancer (NLB)](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-oip-overlay-ip-routing-with-network-load-balancer.html) or through [AWS Transit Gateway (TGW)](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-oip-overlay-ip-routing-using-aws-transit-gateway.html). For more details, see the [SAP on AWS High Availability with Overlay IP Address Routing guide](https://docs.aws.amazon.com/sap/latest/sap-hana/sap-ha-overlay-ip.html).

 **NLB Configuration** 

The RISE with SAP high availability deployment strategy spans two Availability Zones and involves several key networking components. In this configuration, SAP implements NLBs specifically for two critical Overlay IPs: one for the database and another for ASCS. To manage DNS resolution, SAP includes CNAMEs within its RISE-managed DNS system, which correspond to the NLB addresses (ending in .amazonaws.com).

When connecting to RISE with SAP VPC through VPC Peering, you can only access the system using Network Load Balancer (NLB) addresses. Direct access through Overlay IP addresses is not available.

 **Transit Gateway Configuration** 

When you are utilizing TGW, SAP’s default setup propagates routes only for the VPC CIDR range that is actively in use. As a result, you must manually configure static routes for the CIDR range used by the Overlay IPs (which is outside of the VPC CIDR range). This additional configuration is crucial because it enables direct access to the Overlay IPs through the TGW. Without this static route configuration, traffic is forced to take a less efficient path through the Network Load Balancer rather than going directly via the TGW.
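
The defining property of an Overlay IP — and the reason the static route is needed — is that it falls outside the VPC CIDR, so the TGW never learns it through route propagation. A quick sketch with Python's standard `ipaddress` module; the CIDR and the Overlay IP addresses are hypothetical examples, not values SAP assigns:

```python
import ipaddress

def routes_needing_static_config(vpc_cidr: str, overlay_ips: list) -> list:
    """Return the Overlay IPs outside the VPC CIDR; these are not propagated
    by the Transit Gateway and need static routes toward the RISE VPC."""
    vpc = ipaddress.ip_network(vpc_cidr)
    return [ip for ip in overlay_ips if ipaddress.ip_address(ip) not in vpc]

# Hypothetical RISE VPC CIDR and Overlay IPs for the database and ASCS.
print(routes_needing_static_config(
    "10.10.0.0/16",
    ["192.168.10.1", "192.168.10.2", "10.10.1.5"],
))
# ['192.168.10.1', '192.168.10.2']
```

Any address the function returns must be covered by a static route in your TGW route table pointing to the RISE with SAP VPC attachment.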

This routing configuration is a critical detail that customers should keep in mind during their SAP deployment, as it can significantly impact the efficiency of their network traffic flow from end-users and other external systems outside of RISE with SAP.

![\[Overlay IP in RISE\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-oip.png)


# Integrating DNS to RISE and Route 53
<a name="rise-dns"></a>

This section offers guidance on Domain Name System (DNS) integration options for RISE with SAP deployments on AWS, focusing on enterprise scenarios where you want to enable name resolution between RISE with SAP workloads and your existing workloads across AWS and external environments.

A bi-directional DNS integration is essential for connecting RISE with SAP systems to various AWS cloud and on-premises resources and enterprise infrastructure. In manufacturing environments, a common use case involves connecting SAP applications to shop floor equipment. For example, SAP might need to communicate with printers located on the production floor to generate labels, work orders, or shipping documents. This requires the ability to resolve internal hostnames like “printer-line1.factory.company.local” within the RISE with SAP environment.

Conversely, external systems and applications usually require a DNS lookup to access resources in the RISE with SAP environment, particularly when calling OData API endpoints to execute business transactions such as generating a work order. Integration scenarios between RISE with SAP systems and existing enterprise systems typically require internal network connectivity due to compliance and security requirements. This is particularly true for RISE with SAP deployments, which is why the following sections focus on DNS resolution within private networks.


 **Architectural options** 

When integrating RISE with SAP with your existing DNS setup, you have two primary architectural options: Conditional DNS Forwarding and DNS Zone Transfer. You must also consider DNS Zone Delegation. These options and considerations apply to AWS-only deployments and hybrid scenarios where AWS connects with external environments (for example, on-premises or another cloud provider).

The selection of a DNS integration architecture depends on your service reliability needs, existing DNS infrastructure capabilities, and acceptable operational complexity level, with managed services generally demanding less maintenance and expertise than self-operated DNS infrastructure.

For DNS integration with RISE with SAP, we recommend implementing conditional DNS forwarding with [Amazon Route 53](https://aws.amazon.com/route53/) Resolver endpoints. Route 53 provides a highly available, scalable DNS service that minimizes operational overhead. This approach eliminates the need to set up and operate your own DNS servers, further reducing operational complexity. Furthermore, Route 53 offers straightforward integration with your existing environments and monitoring capabilities through Amazon CloudWatch. However, if you have specific requirements or technical limitations, you can refer to alternative approaches detailed in subsequent sections.

The recommended DNS segregation pattern is to implement dedicated subdomains for each environment (e.g., aws.corp.com, dc.corp.com, and sap.corp.com), keeping DNS resolution local to each environment with conditional cross-environment forwarding. This approach optimizes performance by keeping local DNS requests within their respective environments, reducing latency, and improving system resilience while simplifying DNS management. It’s particularly effective in reducing the impact of network link failures between environments.
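As an illustration of this pattern, the sketch below (plain Python, not an AWS API) shows how conditional forwarding resolves a query against the most specific matching subdomain. The domain names and resolver IP addresses are hypothetical examples.

```python
# Illustrative sketch: how conditional DNS forwarding picks a target resolver
# by matching the most specific (longest) domain suffix. All names and IPs
# below are made-up examples, not values from a real RISE environment.

FORWARDING_RULES = {
    "sap.corp.com": "10.1.0.2",    # RISE DNS servers (via outbound endpoint)
    "dc.corp.com": "192.168.0.2",  # on-premises DNS servers
    "aws.corp.com": "10.0.0.2",    # Route 53 Resolver (local VPC resolution)
}

def pick_resolver(query_name: str, default: str = "10.0.0.2") -> str:
    """Return the resolver for the longest matching domain suffix."""
    labels = query_name.lower().rstrip(".").split(".")
    # Try progressively shorter suffixes: erp.sap.corp.com -> sap.corp.com -> ...
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in FORWARDING_RULES:
            return FORWARDING_RULES[suffix]
    return default  # fall through to the default resolver

print(pick_resolver("erp.sap.corp.com"))   # -> 10.1.0.2 (forwarded to RISE DNS)
print(pick_resolver("app1.dc.corp.com"))   # -> 192.168.0.2 (forwarded on premises)
```

Because each environment only forwards queries it is not authoritative for, local lookups never leave their environment, which is what keeps latency low and limits the blast radius of a failed network link.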

 **Common Infrastructure Requirements** 

Before implementing a DNS integration approach, ensure the following prerequisites are in place (see also subsequent diagrams):

1. Network Connectivity: AWS Transit Gateway (or Cloud WAN or VPC Peering) connecting external environments through AWS Direct Connect or AWS Site-to-Site VPN, your AWS environment, and the RISE with SAP VPC.

1. Domain Delegation: During RISE with SAP setup, SAP requires delegation of a sub-domain (sap.<customer>.<domain>) to RISE DNS servers in the RISE with SAP VPC. This enables end users and applications to access RISE with SAP systems through your organization’s domain.

 **Conditional DNS Forwarding (recommended approach)** 

Conditional DNS Forwarding allows for selectively forwarding queries for specific domain names to another DNS server for resolution (e.g. Amazon Route 53 forwards DNS queries for sap.corp.com to RISE DNS servers). We recommend implementing conditional DNS forwarding, unless technical constraints prevent this approach. The primary advantage of this approach is that customers can leverage Route 53 instead of setting up and operating their own DNS infrastructure on AWS. This results in a simplified integration path while benefiting from Route 53's highly available and reliable global infrastructure.

The reference architecture below outlines the components needed for this approach:

![\[DNS Forwarding in RISE\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-dns-forwarding.png)


1. Network Connectivity: refer to Common Infrastructure Requirements

1. Domain Delegation: refer to Common Infrastructure Requirements

1. Create Route 53 resolver endpoints (Inbound and Outbound) in your central Networking VPC to handle DNS queries between your AWS accounts and RISE with SAP account. Please follow [the best practices for operating Resolver endpoints](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/best-practices-resolver.html). We recommend deploying multiple endpoints across all availability zones and monitoring their utilization in CloudWatch to allow for proactive scaling. Provide SAP with details of your on-premises DNS server and the IP addresses of your Route 53 Resolver endpoints (needed for forwarding and firewall configuration).

1. Configure the Route 53 Resolver rules in your workload VPCs to forward DNS queries as follows:

   1. SAP-bound DNS queries: Forward to Outbound endpoint to resolve queries through SAP DNS servers

   1. Corporate data center-bound DNS queries: Forward to Outbound endpoint to resolve queries through on-premises DNS servers

1. Configure your on-premises DNS server to forward DNS queries as follows:

   1. SAP-bound queries: Forward to the SAP DNS server (alternatively, zone transfer of sap.<customer>.<domain> from SAP DNS server)

   1.  AWS-bound queries: Forward to the Inbound endpoint

1. SAP DNS servers are configured as follows:

   1. Corporate data center-bound DNS queries: Forward to on-premises DNS server

   1.  AWS-bound DNS queries: Forward to the Inbound endpoint

Ensure your Workload VPCs have all relevant resolver rules associated with them for DNS forwarding through your central Networking VPC. We recommend using Route 53 Profiles to manage these configurations, as they enable consistent DNS settings across multiple VPCs and AWS accounts. This approach simplifies DNS management by allowing you to define and apply standardized DNS configurations throughout your AWS infrastructure.
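As a sketch of the forwarding rule configuration in step 4, the following shows what the parameters of such a rule could look like. The domain name, target IPs, and endpoint ID are placeholders, and the commented-out boto3 call illustrates where they would be used.

```python
# Sketch of the parameters for a Route 53 Resolver forwarding rule that sends
# sap.corp.com queries to the RISE DNS servers. The domain, IPs, endpoint ID,
# and names are hypothetical placeholders.

forward_rule = {
    "CreatorRequestId": "rise-sap-fwd-001",
    "Name": "forward-sap-domain-to-rise",
    "RuleType": "FORWARD",
    "DomainName": "sap.corp.com",  # delegated RISE sub-domain
    "TargetIps": [                 # RISE DNS server IPs (provided by SAP)
        {"Ip": "10.1.0.2", "Port": 53},
        {"Ip": "10.1.1.2", "Port": 53},
    ],
    "ResolverEndpointId": "rslvr-out-examplea1b2c3d4",  # outbound endpoint
}

# The actual call would go through boto3, for example:
# import boto3
# boto3.client("route53resolver").create_resolver_rule(**forward_rule)
```

After creating the rule, it still has to be associated with each Workload VPC (or distributed through a Route 53 Profile) before queries start being forwarded.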

Please note that for DNS resolution in hybrid environments, DNS delegation can be an alternative approach to conditional forwarding. While conditional forwarding is generally recommended for RISE with SAP environments, DNS delegation might be beneficial in specific scenarios, particularly in environments with many distributed DNS resolvers without a centralized upstream resolver. However, for scenarios involving SAP DNS servers, additional technical considerations apply as outlined in the DNS Zone Delegation section.

 **DNS Zone Transfer** 

With zone transfers, the DNS database of an authoritative DNS server is replicated across a set of secondary DNS servers. You can implement zone transfers directly between your on-premises DNS servers and the SAP DNS servers in your RISE environment. However, if you want to extend zone transfers to include your AWS DNS namespace (e.g., aws.<customer>.<domain>) for communication between on-premises and your Workload VPCs, you’ll need to operate your own DNS servers (such as BIND) in your AWS environment. This is because Route 53 doesn’t support zone transfers. Keep in mind that this approach increases operational complexity compared to using Route 53 with DNS forwarding.

Please consult your SAP Cloud Architect or your AWS Account Team for details on this approach. For best practices regarding running your own BIND DNS server, please refer to [this link](https://kb.isc.org/docs/bind-best-practices-authoritative).
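To illustrate the mechanism underlying zone transfers, the minimal sketch below shows how a secondary server decides whether to pull a new copy of a zone: it compares the primary's SOA serial number with its own. The serial values are made up for the example.

```python
# Illustrative sketch of the decision behind zone transfers: a secondary DNS
# server requests a new transfer (AXFR/IXFR) only when the primary's SOA
# serial is higher than the serial of its local copy.

def transfer_needed(primary_serial: int, secondary_serial: int) -> bool:
    """A higher serial on the primary signals that the zone changed."""
    return primary_serial > secondary_serial

# The sap.<customer>.<domain> zone was updated on the SAP DNS server
# (serial bumped), so the secondary pulls a fresh copy:
assert transfer_needed(primary_serial=2024061502, secondary_serial=2024061501)

# No change since the last transfer -> no transfer is triggered:
assert not transfer_needed(primary_serial=2024061501, secondary_serial=2024061501)
```

This polling is driven by the refresh/retry timers in the SOA record, which is why zone data on secondaries can lag briefly behind the authoritative server.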

The following diagram shows a reference architecture for integrating the RISE environment with your existing DNS landscape (on-premises/AWS) through zone transfers.

![\[DNS Zone Transfer in RISE\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-dns-zonetransfer.png)


1. Network Connectivity: refer to Common Infrastructure Requirements

1. Domain Delegation: refer to Common Infrastructure Requirements

1. Set up a central DNS server in your Networking VPC (e.g. BIND on EC2) or decentralized DNS servers in each Workload VPC by [modifying VPC DHCP options sets accordingly](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_DHCP_Options.html). Please provide SAP with the details of your on-premises DNS server and the AWS-hosted DNS servers (needed for zone transfer and firewall configuration).

1. Configure your AWS-hosted DNS server as follows:

   1. SAP-bound queries: Zone transfer of sap.<customer>.<domain> from SAP DNS server

   1. Data center-bound queries: Zone transfer of dc.<customer>.<domain> from on-premises DNS server

1. Configure the on-premises DNS server as follows:

   1. SAP-bound DNS queries: Zone transfer of sap.<customer>.<domain> from SAP DNS server

   1.  AWS-bound DNS queries: Zone transfer of aws.<customer>.<domain> from AWS-hosted DNS server

1. SAP DNS servers are configured as follows:

   1. Customer data center-bound DNS queries: Zone transfer of dc.<customer>.<domain> from on-premises DNS server

   1.  AWS-bound DNS queries: Zone transfer of aws.<customer>.<domain> from AWS-hosted DNS server

 **DNS Zone Delegation** 

For customers operating many DNS resolvers distributed across multiple environments without a centralized DNS resolver service, configuring and maintaining DNS forwarding rules or zone transfers can become operationally challenging. DNS zone delegation allows you to define authority for specific subdomains at a single point in the DNS hierarchy, simplifying DNS management across your infrastructure.

Using Amazon Route 53 Resolver endpoints with DNS delegation enables you to build and maintain a unified private DNS namespace spanning on-premises and AWS environments.

However, zone delegation with SAP DNS servers in RISE environments comes with specific technical considerations. Without a centralized upstream resolver, zone delegation to SAP DNS servers increases concurrent query load due to reduced cache efficiency. Additionally, all DNS resolvers require direct network paths to SAP DNS servers, potentially requiring additional connectivity configurations. Please consult with SAP ECS before implementing this approach.

There are 2 main scenarios:

 **Scenario 1. Parent domain in Route 53 on AWS** 

For customers who run the majority of their workloads in the cloud and operate their private DNS root zone on AWS with Route 53, you can delegate subdomains to external DNS servers. This includes delegating to both SAP DNS servers (e.g., sap.corp.com) and on-premises DNS servers (e.g., dc.corp.com).

![\[DNS Zone Delegation with parent domain in route 53\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-dns-zonedelegation01.png)


1. Network Connectivity: refer to Common Infrastructure Requirements

1. Domain Delegation: refer to Common Infrastructure Requirements

1. Set up Route 53 Resolver endpoints (Inbound and Outbound) in your central Networking VPC

1. Configure the IPs of your on-premises and SAP DNS servers as NS records in the Private Hosted Zone (PHZ) of your parent domain (e.g. corp.com) and associate the PHZ with your Workload VPCs ([Route 53 Profiles](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/profiles.html) can help with the management of PHZ associations and resolver rules). If your DNS servers are part of the same domain (e.g. ns.dc.corp.com), you also need to configure [glue records](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-name-servers-glue-records.html) in the parent domain. Create Route 53 Resolver delegation rules for the relevant subdomains (dc.corp.com) and associate them with your Workload VPCs (see diagram above).

1. Configure conditional DNS forwarding at your on-premises resolvers to allow for resolution of the parent domain and SAP domain (SAP will need to do the same on their side)
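The delegation set up in steps 4 and 5 can be sketched as the following records in the corp.com Private Hosted Zone. The name server names and IP addresses are hypothetical, and the glue (A) records are only required when the name servers are part of the delegated parent domain.

```python
# Sketch of the delegation records in the corp.com Private Hosted Zone for
# Scenario 1. All name server names and IPs are hypothetical examples.

phz_records = [
    # Delegate sap.corp.com to the SAP DNS servers in the RISE VPC
    {"Name": "sap.corp.com.", "Type": "NS", "Values": ["ns1.sap.corp.com."]},
    {"Name": "ns1.sap.corp.com.", "Type": "A", "Values": ["10.1.0.2"]},   # glue
    # Delegate dc.corp.com to the on-premises DNS servers
    {"Name": "dc.corp.com.", "Type": "NS", "Values": ["ns1.dc.corp.com."]},
    {"Name": "ns1.dc.corp.com.", "Type": "A", "Values": ["192.168.0.2"]},  # glue
]

# Both sub-domains are now answered by servers outside the parent zone:
delegated = {r["Name"] for r in phz_records if r["Type"] == "NS"}
print(sorted(delegated))
```

The glue records resolve the chicken-and-egg problem: without them, a resolver asked for ns1.sap.corp.com would have to query sap.corp.com, the very zone it is trying to find the name servers for.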

 **Scenario 2. Parent domain on-premises** 

For customers who are at the beginning of their cloud journey and still maintain their root zone on-premises, DNS delegation provides an efficient way to integrate both SAP and AWS environments while maintaining DNS control on-premises.

![\[DNS Zone Delegation with parent domain in on-premises\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-dns-zonedelegation02.png)


1. Network Connectivity: refer to Common Infrastructure Requirements

1. Domain Delegation: refer to Common Infrastructure Requirements

1. Set up Route 53 Resolver endpoints (inbound and outbound) in your central Networking VPC

1. Configure a PHZ for aws.corp.com and associate it with your central Networking and Workload VPCs. Configure conditional DNS forwarding rules to allow your VPC to resolve queries for workloads on-premises and your RISE with SAP systems (SAP will need to do the same on their side).

1. Update the corp.com zone with delegation (NS) records for sap.corp.com and aws.corp.com (for example ns1.corp.com) in your domain’s authoritative nameserver on-premises.

Configure IPs of your AWS Route 53 Resolver inbound endpoint and SAP DNS servers as target records in your ns1.corp.com zone file. If your DNS servers are part of the same domain, you also need to configure glue records in the parent domain.

Please consult the Route 53 documentation for more details on the zone delegation feature. The following blog post provides you with a more in-depth step-by-step guide on how to make use of Route 53 delegation feature for private DNS: [Streamline hybrid DNS management using Amazon Route 53 Resolver endpoints delegation](https://aws.amazon.com/blogs/networking-and-content-delivery/streamline-hybrid-dns-management-using-amazon-route-53-resolver-endpoints-delegation/).

For more information on the above described integration approaches, please reach out to your SAP Cloud Architect or your AWS Account Team.

# Security
<a name="security-rise"></a>

SAP manages the security of the AWS account managed by SAP. You can implement additional security mechanisms in your own AWS account.

**Topics**
+ [SSO – SAP Cloud Identity Services and AWS IAM Identity Center](sso-iam.md)
+ [SSO – SAP Cloud Identity Services and Microsoft Entra](sso-entra.md)
+ [SSO – SAPGUI Front-End](sso-sapgui.md)
+ [Advanced security using AWS Services](rise-security-aws-services.md)
+ [Integrating SAP Data Custodian KMS with AWS KMS](aws-kms.md)
+ [How AWS Nitro helps secure RISE with SAP?](aws-nitro.md)
+ [Amazon WorkSpaces as remote access solution](rise-workspaces.md)

# SSO – SAP Cloud Identity Services and AWS IAM Identity Center
<a name="sso-iam"></a>

One of the security best practices for RISE with SAP is to centralize the user access control through the integration with a corporate Identity Provider (IdP). This makes it easier for you to provision, de-provision and manage your user access across the company including RISE with SAP, AWS services, and others.

 AWS IAM Identity Center is one of the IdPs that you can integrate with RISE to support Single Sign-On (SSO). IAM Identity Center provides a centralized access point for users to manage AWS accounts and applications consistently within AWS Organizations (for example, in a multi-account setup).

If you already have an existing identity source such as Okta, Ping, Microsoft Windows Active Directory, Microsoft Entra (previously known as Azure Active Directory), or others, you can integrate the identity source with IAM Identity Center through the Security Assertion Markup Language (SAML) and System for Cross-Domain Identity Management (SCIM) protocols.

For more information, you can refer to the following references:
+  [What is IAM Identity Center?](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) 
+ Integration of IAM Identity Center with other identity source, see [Getting started tutorials](https://docs.aws.amazon.com/singlesignon/latest/userguide/tutorials.html).
+  [SAP Cloud Identity Services - Identity Authentication](https://help.sap.com/docs/identity-authentication).

The following image shows the integration between Identity Authentication from SAP BTP and AWS IAM Identity Center in the context of RISE with SAP.

![\[SAP Cloud Identity Services with IAM Identity Center\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-security-iam.png)


 **Authentication flow** 

1. User accesses SAP Fiori via an Internet browser.

1. SAP Fiori redirects a SAML request back to the internet browser.

1. The internet browser relays the SAML request to SAP Cloud Identity Services.

1. SAP Cloud Identity Services delegates the authentication request to IAM Identity Center.

1. If IAM Identity Center is integrated with an existing identity source such as Okta, Ping, or Entra, the IdP authenticates the user.

1. The user is authenticated by the IdP, and a SAML response containing the user identity information is provided to the internet browser.

1. User can access RISE with SAP systems.
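Step 2 of the flow above can be sketched in Python: with the SAML HTTP-Redirect binding, the AuthnRequest is compressed with raw DEFLATE, base64-encoded, and URL-encoded into the redirect URL. The request XML, entity URLs, and RelayState below are simplified placeholders.

```python
import base64
import zlib
from urllib.parse import urlencode

# Illustrative sketch of the SAML HTTP-Redirect binding used in the SSO flow:
# the service provider packs the AuthnRequest into the redirect URL as
# raw DEFLATE -> base64 -> URL-encoding. All URLs/IDs are placeholders.

authn_request = (
    '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    'ID="_example1234" Version="2.0" '
    'AssertionConsumerServiceURL="https://fiori.sap.corp.com/saml2/acs">'
    '</samlp:AuthnRequest>'
)

# Raw DEFLATE (strip the 2-byte zlib header and 4-byte checksum)
deflated = zlib.compress(authn_request.encode())[2:-4]
saml_request = base64.b64encode(deflated).decode()

redirect_url = "https://idp.example.com/sso?" + urlencode(
    {"SAMLRequest": saml_request, "RelayState": "/home"}
)

# Round-trip check: the IdP inflates the parameter back to the original XML
restored = zlib.decompress(base64.b64decode(saml_request), wbits=-15).decode()
assert restored == authn_request
```

In the real flow the IdP validates the request, authenticates the user, and returns a signed SAML response to the assertion consumer service URL, which is what finally grants access to the RISE with SAP system.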

For more information on how to do this, you can refer to [AWS IAM Identity Center (successor to AWS SSO) Integration Guide for SAP Cloud Platform Cloud Foundry](https://static.global.sso.amazonaws.com/app-c1553f5036ecbcd6/instructions/index.htm).

# SSO – SAP Cloud Identity Services and Microsoft Entra
<a name="sso-entra"></a>

Microsoft Entra (previously Azure AD) or other IdPs can be integrated directly with SAP Cloud Identity Services. This supports direct authentication with Single Sign-On (SSO) when you do not need AWS IAM Identity Center (i.e. no requirement to run a multi-account strategy that utilizes AWS Organizations).

![\[SAP Cloud Identity Services with Microsoft Entra\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-security-entra.png)


 **Authentication flow** 

1. User accesses SAP Fiori via an Internet browser.

1. SAP Fiori redirects a SAML request back to the internet browser.

1. The internet browser relays the SAML request to SAP Cloud Identity Services.

1. SAP Cloud Identity Services delegates the authentication request to the IdP.

1. The user is authenticated by the IdP, and a SAML response containing the user identity information is provided to the internet browser.

1. The user can access SAP S/4HANA in the RISE with SAP VPC.

For more information on how to do this, you can refer to [Enable SSO between Azure AD and SAP Cloud Platform using Identity Authentication Service](https://developers.sap.com/mission.cp-azure-ias-single-signon.html).

# SSO – SAPGUI Front-End
<a name="sso-sapgui"></a>

SAPGUI is a graphical user interface client in the SAP ERP three-tier architecture of database, application servers, and clients. It requires installation on a local desktop running Windows, macOS, or Linux.

To achieve Single Sign-On (SSO) for SAPGUI in RISE with SAP, you must use either the Kerberos or the X.509 method. AWS does not recommend Kerberos, because it requires users to always be connected to the corporate network and authenticated against a Microsoft Active Directory, which reduces their mobility. For this reason, X.509 is recommended.

SAPGUI Single Sign-On with X.509 can be achieved with [SAP Secure Login Service on BTP](https://help.sap.com/docs/SAP%20SECURE%20LOGIN%20SERVICE?version=Cloud); the image below describes how the integration works.

![\[SSO for SAPGUI Front-End\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-security-sso-sapgui.png)


 **Authentication flow** 

1. User accesses SAPGUI on their desktop.

1. SAP S/4HANA redirects the authentication request to SAP Secure Login Service.

1. SAP Secure Login Service delegates the authentication to SAP Cloud Identity Services.

1. When SAP Cloud Identity Services is integrated with an IdP (i.e. Azure AD, Okta, Ping, etc.), the IdP authenticates the user.

1. The user is authenticated by the IdP, and an X.509 certificate is provided by SAP Secure Login Service to SAPGUI.

1. The user can access SAP S/4HANA in the RISE with SAP VPC.

For more information on how to do this, you can refer to [Securing SAP GUI with SAP Secure Login Service](https://community.sap.com/t5/technology-blogs-by-sap/explore-securing-sap-gui-with-sap-secure-login-service/ba-p/13579130).

# Advanced security using AWS Services
<a name="rise-security-aws-services"></a>

 AWS offers a comprehensive suite of security services that can act as a multi-layered security envelope around RISE with SAP deployments on AWS. These services act as an additional security barrier, intercepting and mitigating potential threats before they can reach the RISE account, providing robust protection and assisting with compliance with industry-standard security best practices.

**Topics**
+ [AWS Network Firewall](networkfirewall.md)
+ [Amazon Macie](macie.md)
+ [Amazon GuardDuty](guardduty.md)
+ [Security Hub, Detective, Audit Manager and EventBridge](securityhub.md)
+ [Using All AWS Security Services](allawssecurity.md)

# AWS Network Firewall
<a name="networkfirewall"></a>

 AWS Network Firewall is a managed firewall service that provides essential network protection for Amazon Virtual Private Cloud (VPC) environments. AWS Network Firewall acts as a first line of defence, filtering and inspecting all network traffic to and from RISE resources, effectively creating a protective perimeter around a RISE environment.

Key features of AWS Network Firewall include:
+ Stateful Firewall Capabilities. AWS Network Firewall offers advanced stateful firewall features to monitor and control network traffic. It can inspect the complete context of a network connection, including source, destination, ports, and protocols, to detect and block malicious or unauthorized traffic.
+ Threat Signature Matching. AWS Network Firewall comes pre-loaded with a comprehensive set of threat detection rules and signatures, continuously updated by AWS, to identify and mitigate known threats, malware, and other malicious activity targeting RISE deployments.
+ Custom Rule Definition. In addition to the pre-defined threat signatures, customers can create and deploy custom firewall rules to address specific security requirements or policies unique to connections hitting SAP systems in the RISE environment.
+ Centralized Policy Management. AWS Network Firewall allows you to define and manage firewall policies centrally, which can then be easily deployed across multiple VPCs including non-SAP VPCs and VPCs associated with the SAP-managed RISE VPC, ensuring consistent security enforcement.
+ Scalability and High Availability. As a fully managed service, AWS Network Firewall automatically scales to handle changes in network traffic volume and patterns, ensuring RISE environment remains protected without the need for complex infrastructure management.

In the context of RISE with SAP, AWS Network Firewall can be leveraged for the following:
+ Centralized Firewall Management. AWS Network Firewall provides a centralized, managed firewall service to control and monitor network traffic travelling to and from the SAP-managed RISE VPC.
+ Stateful Packet Inspection. AWS Network Firewall performs stateful packet inspection, allowing it to detect and mitigate advanced threats by analysing the context of network connections to/from SAP systems within the RISE VPC.
+ Regulatory Compliance. AWS Network Firewall helps organizations meet compliance requirements by enforcing security policies and providing logging/auditing capabilities for the RISE with SAP landscape.
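As an example of the custom rule definition capability, the sketch below assembles a stateful rule group using Suricata-compatible rules. The CIDR ranges, SAP instance ports, and rule group name are assumptions for illustration, not values from an actual RISE environment.

```python
# Sketch of a custom stateful rule group for AWS Network Firewall using
# Suricata-compatible rule syntax. CIDRs, ports, and names are hypothetical.

suricata_rules = "\n".join([
    # Allow SAP GUI / RFC traffic (instance 00: dispatcher 3200, gateway 3300)
    # only from the corporate network to the RISE VPC CIDR
    'pass tcp 192.168.0.0/16 any -> 10.1.0.0/16 [3200,3300] '
    '(msg:"Allow corporate SAP traffic"; sid:1000001; rev:1;)',
    # Drop everything else headed for the SAP ports
    'drop tcp any any -> 10.1.0.0/16 [3200,3300] '
    '(msg:"Block non-corporate SAP access"; sid:1000002; rev:1;)',
])

rule_group = {
    "RuleGroupName": "rise-sap-custom-rules",
    "Type": "STATEFUL",
    "Capacity": 100,
    "RuleGroup": {"RulesSource": {"RulesString": suricata_rules}},
}

# The actual call would go through boto3, for example:
# import boto3
# boto3.client("network-firewall").create_rule_group(**rule_group)
```

Because the rules are stateful, the firewall evaluates them against the full connection context, so return traffic for an allowed session does not need its own `pass` rule.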

Below is an example architecture of AWS Network Firewall inspecting network traffic before it reaches RISE with SAP.

![\[Network Firewall inspecting network traffic before it reaches RISE with SAP\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-security-network-firewall.png)


In the diagram above:

1. A malicious actor exploits network misconfiguration to get access to SAP system on RISE.

1. Traffic is first routed through AWS Transit Gateway.

1. Packet inspection by AWS Network Firewall catches abnormal connection attempts.

It is worth noting that AWS Network Firewall can also be used by customers who want to consume SAP BTP services hosted on AWS, connecting first to an AWS Transit Gateway with AWS Direct Connect so that their end-to-end traffic stays on the AWS backbone.

For instructions to configure AWS Network Firewall, see [Getting started with AWS Network Firewall](https://docs.aws.amazon.com/network-firewall/latest/developerguide/getting-started.html).

# Amazon Macie
<a name="macie"></a>

Amazon Macie is a data security service that helps customers discover, classify, and protect sensitive data stored in Amazon S3 buckets by continuously monitoring and alerting on potential data risks and unauthorized access attempts.

In the context of RISE with SAP, Amazon Macie can protect Amazon S3 buckets in a customer-managed AWS account fed by a RISE with SAP environment, for instance:
+ As a RISE customer, you can copy backups from the SAP-managed AWS account to a customer-managed environment and S3 bucket.
+ SAP data can be extracted from a RISE environment (see [Architecture Options for extracting SAP Data with AWS Services](https://aws.amazon.com/blogs/awsforsap/architecture-options-for-extracting-sap-data-with-aws-services/)) to a customer-managed S3 bucket, to enable advanced analytics, machine learning, and business intelligence using other AWS services like Amazon Athena, AWS Glue, and Amazon SageMaker.
+ Certain industries and regulations, such as GDPR, HIPAA, or PCI-DSS, may require long-term storage and preservation of sensitive data. Exporting this data to a customer-managed S3 bucket can help meet these compliance requirements, as S3 provides robust security and durability features.
+ Customers can also consume security event logs out of their RISE environment and ingest them into their own S3 buckets or SIEM systems.
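As a sketch of how such a bucket could be put under Macie's watch, the following shows example parameters for a one-time classification job. The account ID and bucket name are placeholders, and the commented-out boto3 call shows where they would be used.

```python
# Sketch of the parameters for an Amazon Macie classification job scanning a
# customer-managed bucket that receives SAP extracts. The account ID and
# bucket name are hypothetical placeholders.

classification_job = {
    "jobType": "ONE_TIME",            # or "SCHEDULED" for recurring scans
    "name": "scan-sap-extracts",
    "s3JobDefinition": {
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["sap-datalake-extracts"]}
        ]
    },
}

# The actual call would go through boto3, for example:
# import boto3
# boto3.client("macie2").create_classification_job(**classification_job)
```

Findings from the job (for example, detected personal or financial data) surface in the Macie console and can be forwarded to Security Hub or EventBridge for follow-up.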

Below is an example architecture of Amazon Macie continuously scanning an S3 bucket with SAP data extracted from RISE.

![\[Amazon Macie continuously scanning an S3 bucket with SAP data extracted from RISE\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-security-macie.png)


In the diagram above:

1. Data is written to S3 bucket for data lake/compliance reporting purposes.

1. Amazon Macie continuously analyzes the bucket to detect Personally Identifiable Information (PII).

For instructions to configure Amazon Macie, see [What is Amazon Macie?](https://docs.aws.amazon.com/macie/latest/user/what-is-macie.html).

# Amazon GuardDuty
<a name="guardduty"></a>

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behaviour within an AWS environment. It combines machine learning, anomaly detection, and integrated threat intelligence to identify potential threats and protect AWS accounts linked to RISE with SAP environments, workloads, and data.

Amazon GuardDuty monitors the following:
+  AWS CloudTrail Logs: Amazon GuardDuty monitors API activity across AWS account to detect suspicious API calls, unauthorized deployments, and unauthorized access attempts to resources. Amazon GuardDuty identifies attempts to access AWS services from unauthorized IP addresses or regions. Amazon GuardDuty detects unusual behaviour in Identity and Access Management (IAM) users, roles, and policies, such as privilege escalation.
+ VPC Flow Logs. Amazon GuardDuty analyses network traffic within a Virtual Private Cloud (VPC) to detect unexpected traffic patterns, data exfiltration attempts, or unauthorized access, alongside identifying communications between AWS resources and known malicious IP addresses or domains. In the context of RISE with SAP on AWS, the inspection takes place on a VPC fronting the RISE SAP-managed account.
+ DNS Logs. Amazon GuardDuty monitors DNS queries made by an AWS resource to detect attempts to connect to malicious domains or unusual DNS request patterns. Amazon GuardDuty also detects the use of Domain Generation Algorithms (DGA) for generating domain names associated with Command and Control servers.

In the context of RISE with SAP, Amazon GuardDuty can be leveraged for the following:
+ Intrusion Detection: GuardDuty enables early detection of intrusion attempts into a RISE environment fronted by a customer-managed AWS account by identifying malicious activities such as unauthorized API calls, network reconnaissance, and access attempts from known malicious IP addresses.
+ Compliance Validation: For organizations with stringent compliance requirements, GuardDuty helps ensure adherence by continuously monitoring for policy violations and unauthorized access attempts, providing detailed logs and reports for audit purposes. This can be achieved when the SAP RISE environment is accessed from a customer-managed AWS account. See [Compliance Validation](https://docs.aws.amazon.com/guardduty/latest/ug/compliance-validation.html) for more details.
+ Automated Incident Response. GuardDuty can be integrated with AWS Lambda and AWS Security Hub to automate incident response workflows. Upon detecting a threat, these services can trigger automated remediation actions, such as isolating compromised resources or notifying security teams.
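As a sketch of the automated incident response pattern, the following EventBridge event pattern matches high-severity GuardDuty findings so they can trigger a remediation target such as an AWS Lambda function. The severity threshold and rule name are assumptions for illustration.

```python
import json

# Sketch of an EventBridge event pattern that matches high-severity GuardDuty
# findings. GuardDuty severity ranges up to 8.9; treating >= 7 as "high"
# is an example threshold, not a prescribed value.

event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 7]}]},
}

print(json.dumps(event_pattern, indent=2))

# The pattern would be attached to a rule via boto3, for example:
# import boto3
# boto3.client("events").put_rule(
#     Name="guardduty-high-severity",
#     EventPattern=json.dumps(event_pattern),
# )
```

A Lambda function or SNS topic added as the rule target then performs the remediation, such as isolating a compromised resource or paging the security team.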

Below is an example architecture of GuardDuty monitoring CloudTrail trails of a RISE with SAP deployment on AWS.

![\[GuardDuty monitoring CloudTrail trails of a RISE with SAP deployment\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-security-guardduty.png)


In the diagram above:

1. Data is written to S3 bucket for data lake/compliance reporting purposes.

1. A malicious actor changes IAM rules and IAM permissions on S3 bucket to obtain access.

1. IAM changes are intercepted by AWS CloudTrail.

1. GuardDuty detects suspicious activity and alerts administrators.

Below is an example architecture of GuardDuty monitoring DNS logs of a RISE with SAP deployment on AWS.

![\[GuardDuty monitoring DNS logs of a RISE with SAP deployment\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-security-guardduty-dnslogs.png)


In the diagram above:

1. A malicious actor introduces rogue DNS redirecting users to makeshift SAP systems.

1. The rogue DNS entries are detected by GuardDuty and reported to administrators.

Below is an example architecture of GuardDuty monitoring VPC Flow Logs of the RISE with SAP VPC.

![\[GuardDuty monitoring VPC Flow Logs of RISE with SAP VPC\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-security-guardduty-vpcflowlogs.png)


In the diagram above:

1. A malicious actor attempts to access SAP systems from a customer-managed VPC peered with the RISE VPC, or scans ports.

1. The connection attempt from the malicious actor's IP is logged in VPC Flow Logs.

1. The suspicious connection attempt is detected by Amazon GuardDuty and reported to administrators.

For instructions to configure Amazon GuardDuty, see [Getting Started](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_settingup.html).

# Security Hub, Detective, Audit Manager and EventBridge
<a name="securityhub"></a>

Building on the implementation of GuardDuty and Amazon Macie, AWS Security Hub acts as a central hub, consolidating and prioritizing security findings from AWS security services. AWS Security Hub provides a unified view of the security posture across services surrounding a RISE with SAP deployment, allowing you to quickly identify and address any security issues.

To further strengthen investigation and incident response capabilities, Amazon Detective analyses security incidents by gathering and processing relevant log data from AWS resources. This service helps quickly identify the root cause of issues, enabling you to take appropriate actions to mitigate the impact.

Maintaining compliance is also a critical aspect of securing a RISE with SAP environment. AWS Audit Manager automates the assessment of AWS resources against industry standards and regulations, helping demonstrate compliance and reduce the risk of non-compliance.

Finally, Amazon EventBridge enables real-time response to security events by triggering custom automated workflows and remediation actions. This service allows you to quickly and efficiently address security incidents, minimizing the potential impact on your RISE with SAP deployment.

Below is an example architecture of AWS Security Hub, Amazon Detective, AWS Audit Manager, and Amazon EventBridge paired with RISE with SAP.

![\[Security Hub\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-security-securityhub.png)


# Using All AWS Security Services
<a name="allawssecurity"></a>

Combining all the services described above allows for an architecture that monitors multiple areas of a RISE on AWS deployment: network traffic, DNS logs, CloudTrail API activity, and sensitive information in extracted SAP data. Amazon GuardDuty and AWS Security Hub are fed from multiple services and use machine learning to detect malicious activities and anomalies. Findings are passed to Amazon Detective for deeper root cause analysis or sent to Amazon EventBridge for custom reporting and alerting.

Below is an example architecture of GuardDuty, AWS Network Firewall, Amazon Macie, AWS Security Hub, and Amazon Detective combined to improve the security posture of a RISE with SAP on AWS deployment.

![\[GuardDuty\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-security-allawssecurity.png)


# Integrating SAP Data Custodian KMS with AWS KMS
<a name="aws-kms"></a>

SAP Data Custodian Key Management Service enables customer-managed encryption keys for data stored in SAP services. Please note that SAP Data Custodian Key Management Service is not the same as AWS Key Management Service (KMS).

Using AWS KMS as the keystore in [HYOK (Hold Your Own Key) scenario](https://help.sap.com/docs/sap-data-custodian/key-management-service/amazon-web-services-hyok?locale=en-US), SAP Data Custodian Key Management Service provides a consistent and centralized approach to key management especially if AWS KMS is already employed for other AWS workloads, enabling seamless integration, streamlined key lifecycle management, and enhanced security through AWS robust encryption and access control mechanisms.

This integration allows customers to manage and control the encryption keys used to protect their sensitive data, ensuring greater security and compliance. SAP Data Custodian Key Management Service can be interfaced with AWS KMS in the HYOK (Hold Your Own Key) scenario with the following supported keys:


| Area |  AWS KMS (HYOK Scenario) | Supported Key Types and Key Sizes | 
| --- | --- | --- | 
|  Key Management  |  Key is created and stored in AWS KMS keystore  |  AES (256), RSA (3072, 4096)  | 

Below is the SAP KMS integration with AWS KMS in the HYOK scenario.

![\[The SAP KMS integration with AWS KMS - HYOK\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-security-hyok.png)


In the diagram above:
+ Key is created in the AWS KMS keystore
+ Key is stored in AWS KMS and retrieved by SAP KMS when required
+ SAP KMS encrypts SAP data at the application level
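The key types in the table above can be created in AWS KMS with the `CreateKey` API. The sketch below is illustrative only and is not an official SAP integration procedure; the mapping of friendly names to KMS key specs and the key description are assumptions. The client would typically be created with `boto3.client("kms")`.

```python
# Key types supported in the SAP Data Custodian HYOK scenario (per the
# table above), mapped to AWS KMS key specs.
SUPPORTED_KEY_SPECS = {
    "AES-256": "SYMMETRIC_DEFAULT",   # 256-bit symmetric key
    "RSA-3072": "RSA_3072",
    "RSA-4096": "RSA_4096",
}

def create_hyok_key(kms_client, key_type,
                    description="SAP Data Custodian HYOK key (illustrative)"):
    """Create a customer managed KMS key of a supported type."""
    if key_type not in SUPPORTED_KEY_SPECS:
        raise ValueError(f"unsupported key type: {key_type}")
    return kms_client.create_key(
        Description=description,
        KeySpec=SUPPORTED_KEY_SPECS[key_type],
        KeyUsage="ENCRYPT_DECRYPT",
    )
```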

# How the AWS Nitro System helps secure RISE with SAP
<a name="aws-nitro"></a>

The AWS Nitro System is the underlying technology used for [Amazon Elastic Compute Cloud](https://aws.amazon.com/ec2/) (Amazon EC2) instances in RISE with SAP. It offers a unique set of capabilities that support the most sensitive workloads in a multi-tenanted, hyper-scale cloud environment.

A traditional virtualization architecture consists of a "hypervisor" or "Virtual Machine Monitor (VMM)" and what is commonly known as ['Dom0'](https://docs.aws.amazon.com/whitepapers/latest/security-design-of-aws-nitro-system/traditional-virtualization-primer.html#:~:text=Xen%20Project%20calls%20the%20system%E2%80%99s%20dom0) in the Xen project or the ["parent partition"](https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/hyper-v-architecture) in Hyper-V. More details on traditional virtualization architecture are available [here](https://docs.aws.amazon.com/whitepapers/latest/security-design-of-aws-nitro-system/traditional-virtualization-primer.html).

In the Nitro System virtualization architecture, the management or control domain components (with privileged access to the hardware and device drivers) are fragmented into independent, purpose-built service processor units (System on Chip, or SoC) known as Nitro Cards. While the "hypervisor" layer remains, its design has been minimized to include only those services and features that are strictly necessary for its task. Additionally, a "Nitro Security Chip" has been introduced to enhance security without adding performance overhead.

Below is the Nitro System high-level architecture.

![\[Nitro High Level Architecture\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-nitro-1.png)


The resulting Nitro System has been divided into the following components:

 **Nitro Cards** 

Nitro Controller - This is the sole outward-facing management interface between the physical server and the control planes for EC2, Amazon EBS, and Amazon VPC. It is implemented as passive API endpoints, where each action is logged and every attempt to call an API is cryptographically authenticated and authorized using a fine-grained access control model. The Nitro Controller also provides the hardware root of trust for the overall system and is responsible for managing all other components of the server system, including the firmware loaded on the system. Firmware for the system is stored on an encrypted SSD that is attached directly to the Nitro Controller. The encryption key for the SSD is designed to be protected by the combination of a Trusted Platform Module (TPM) and the secure boot features of the SoC. The remaining Nitro Cards are purpose-built for specific functions:

Networking - The newer generation of Nitro Cards for VPC transparently encrypt all VPC traffic to other EC2 instances running on hosts that are also equipped with encryption-compatible Nitro Cards. They use Authenticated Encryption with Associated Data (AEAD) algorithms with 256-bit encryption. In RISE with SAP, different families of compute instances are selected depending on the customer's requirements. While AWS provides secure and private connectivity between EC2 instances of all types, in-transit traffic encryption is available only between later-generation instances. Check whether your RISE with SAP instances support this feature [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/data-protection.html#encryption-transit).

EBS (SSD) storage - The Nitro Card for EBS provides encryption of remote EBS volumes without any practical impact on their performance.

Local instance storage (ephemeral) – Similar to the Nitro Card for EBS, the Nitro Card for instance storage provides encryption for local instance storage. Not all EC2 instances have local instance storage; availability depends on the instance types chosen for your RISE with SAP workloads. Details can be found [here](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-instance-type-specifications.html).

The encryption keys used for VPC, EBS, and instance storage are only ever present in plaintext within the protected memory of a Nitro Card.

 **Nitro Security Chip** 

While the Nitro Controller and other Nitro Cards operate as one domain, the system main board on which SAP workloads run makes up the second domain. The Nitro Controller and its secure boot process provide the hardware root of trust between the Nitro System components; the Nitro Security Chip extends that trust and control to the system main board. It is the link between the two domains, making the system main board a subordinate component of the system and extending the Nitro Controller's chain of trust to cover it. To maintain the root of trust, all write access to non-volatile storage is blocked in hardware.

The following diagram shows the Nitro System blocking write access to non-volatile storage.

![\[Nitro blocked write access to non-volatile storage\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-nitro-2.png)


 **Nitro Hypervisor** 

Unlike traditional hypervisors, the Nitro Hypervisor is not a general-purpose system; it has no shell and no interactive access mode. Key exclusions that enhance its security posture include a networking stack, general-purpose file system implementations, peripheral driver support, an SSH server, and a shell. The primary functions of the Nitro Hypervisor are restricted to:

1. Receive virtual machine management commands (start, stop and so on) sent from Nitro Controller

1. Partition memory and CPU resources by utilizing hardware virtualization features of the server processor

1. Assign SR-IOV virtual functions provided by Nitro hardware interfaces (NVMe block storage for EBS and instance storage, Elastic Network Adapter [ENA] for network, and so on) through PCIe to the appropriate VM

This simplicity of the Nitro Hypervisor is a significant security benefit compared to conventional hypervisors.

 **Key Benefits of AWS Nitro System** 
+ Nitro chips offload virtualization tasks from the main CPUs, reducing the attack surface and improving overall system security.
+  AWS personnel do not have access to your content on AWS Nitro System EC2 instances. There are no technical means or APIs available to AWS personnel to access your content on an AWS Nitro System EC2 instance or on an encrypted EBS volume attached to an AWS Nitro System EC2 instance. Access to AWS Nitro System EC2 instance APIs (which enable AWS personnel to operate the system without access to your content) is always logged, and requires authentication and authorization. Please find more information [here](https://aws.amazon.com/service-terms/).
+ Tenancy protection and prevention of side channel attacks - The Nitro Hypervisor is directed by the Nitro Controller to allocate the full complement of physical cores and memory for the instance. These hardware resources are "pinned" to that particular instance. The CPU cores are not used to run other customer workloads, nor are any instance memory pages shared in any fashion across instances. No sharing of CPU cores means that instances never share CPU core-specific resources, including Level 1 or Level 2 caches thereby providing strong mitigation against side channel attacks. Please find more information [here](https://docs.aws.amazon.com/whitepapers/latest/security-design-of-aws-nitro-system/the-ec2-approach-to-preventing-side-channels.html).
+ The Nitro architecture allows for secure boot and runtime integrity verification, ensuring the AWS infrastructure is running in a trusted and verified state.
+ Both the Nitro Card firmware and the hypervisor are designed to be live-updatable (zero downtime for customer instances). This eliminates the need for carefully balanced tradeoffs around updates yielding improved security posture. Please find more information [here](https://d1.awsstatic.com/events/Summits/awsreinforce2023/DAP401_Security-design-of-the-AWS-Nitro-System.pdf).
+ Data encryption for both data at rest and in transit using hardware offload engines with secure key storage integrated in the SoC.

# Amazon WorkSpaces as a remote access solution
<a name="rise-workspaces"></a>

Using [Amazon WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html) provides a secure, scalable, and managed virtual desktop environment for accessing SAP systems. This virtual desktop can be used as a centrally managed hosting platform for SAP end user software such as SAPGUI and be connected to your SAP S/4HANA environment in RISE with SAP.

 [Amazon WorkSpaces Personal](https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html#personal-pools) offers persistent virtual desktops, tailored for users who need a highly-personalized desktop provisioned for their exclusive use, similar to a physical desktop computer assigned to an individual.

 [Amazon WorkSpaces Pool](https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html#personal-pools) offers non-persistent virtual desktops, tailored for users who need access to highly-curated desktop environments hosted on ephemeral infrastructure.

The following image shows the use of Amazon WorkSpaces as remote access solution for RISE with SAP.

![\[Amazon WorkSpaces as remote access solution for RISE with SAP\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-workspaces.png)


 **Traffic flow** 

1. The user initiates a connection to the Amazon WorkSpaces URL via a web browser or the [WorkSpaces Client](https://clients.amazonworkspaces.com/).

1. The user is authenticated through the authentication gateway within the AWS managed VPC. When an end user logs in, the authentication gateway verifies the user against the directory service and, once the user is authenticated, establishes a secure session for the user to access their virtual desktop. This session management keeps the user's WorkSpace accessible during their active session and helps maintain session integrity and security. This part of the architecture uses Secure Sockets Layer (SSL) over TCP on port 443.

1. The connection is routed through another VPC attachment to reach the domain controller in a separate Amazon VPC. The domain controller manages permissions and access control policies for users, ensuring that users have the appropriate access to resources based on their roles and group memberships. This is typically done through directory integration, such as AWS Managed Microsoft AD or an on-premises Active Directory connected via AWS Directory Service.

1. Transit Gateway manages the routing between VPCs and Direct Connect or VPN. AWS Direct Connect or VPN provides a secure connection from AWS to the SAP RISE environment.

1. A secure session is established between the user’s device and the SAP managed RISE VPC.

1. The streaming service gateway within the AWS managed VPC begins to stream the virtual desktop environment to the user's device. This streaming is secured and managed within the AWS infrastructure; the streaming gateway securely transmits the desktop stream over the internet to the user's device. The user's device can now access SAP applications such as SAP S/4HANA hosted in the RISE environment through SAP end-user software such as SAPGUI.

1. Amazon WorkSpaces allows you to access the following two types of WorkSpaces, depending on your organization's and users' needs:

    **WorkSpaces Pool**: In a pooled configuration, WorkSpaces are dynamically assigned to users from a shared pool. When a user logs in, they may not always connect to the same machine, and changes such as installed applications or user configurations are generally not persistent between sessions.

    **WorkSpaces Personal**: In this configuration, each user is assigned their own dedicated virtual desktop, where they can install applications, save files, and have their settings and data persist between sessions.

 **Set up Amazon WorkSpaces for SAP RISE Access** 

1. To set up Amazon WorkSpaces to connect to SAP RISE, follow [Get started with WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/getting-started.html).

1. For more information about integrating Amazon WorkSpaces with SAP Single-sign-on, see [How to integrate Amazon WorkSpaces with SAP Single Sign-On](https://aws.amazon.com/blogs/awsforsap/how-to-integrate-amazon-workspaces-with-sap-single-sign-on/) 

1.  [Install SAPGUI on your WorkSpaces from SAP Software download](https://help.sap.com/doc/2e5792a2569b403da415080f35f8bbf6/770.00/en-US/sap_frontend_inst_guide.pdf) 

1.  [Connect to SAP system via the SAPGUI client](https://help.sap.com/doc/saphelp_em92/9.2/en-US/4e/1260dd1e3d2287e10000000a15822b/content.htm) in WorkSpaces using your SAP System details

 **Amazon WorkSpaces Operational Best Practices** 

1. Monitoring: Use [Amazon CloudWatch to monitor the performance and health of your WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/cloudwatch-metrics.html).

1. Backup and Recovery: Ensure that critical data on your WorkSpaces is backed up and that you have a [recovery plan in place](https://docs.aws.amazon.com/workspaces/latest/adminguide/restore-workspace.html).

1. Updates and Maintenance: Regularly update the software and systems on your WorkSpaces to ensure security and compliance. [By default, Windows WorkSpaces will automatically update weekly](https://docs.aws.amazon.com/workspaces/latest/adminguide/workspace-maintenance.html).

1. Optimizing Performance

   Scaling and Performance Tuning: You can switch a WorkSpace between compute types such as Standard, Power, and Performance, depending on user needs.

1. Cost Management

   WorkSpaces Bundles: Consider purchasing virtual desktop bundles inclusive of your end-user software needs. Generally, for simple SAPGUI access, a "Value" bundle will save on costs. See the [Amazon WorkSpaces pricing page](https://aws.amazon.com/workspaces-family/workspaces/pricing/) for further details.

   Monitoring Usage: Use AWS Cost Explorer and budgets to monitor and manage costs effectively.

   For non-persistent, secure desktop access consider WorkSpaces Pools as a highly cost-effective option.

By following these steps, you can set up Amazon WorkSpaces as an effective remote access solution for RISE with SAP systems, ensuring secure, scalable, and efficient operations.
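The monitoring recommendation above can be sketched with the `AWS/WorkSpaces` CloudWatch namespace. The directory ID in the usage note is a placeholder, and the choice of the `Unhealthy` metric and five-minute period are illustrative; the client would typically come from `boto3.client("cloudwatch")`.

```python
from datetime import datetime, timedelta, timezone

def unhealthy_workspaces(cloudwatch, directory_id, hours=1):
    """Return the maximum number of unhealthy WorkSpaces in a directory
    over the last `hours` hours, from the AWS/WorkSpaces namespace."""
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/WorkSpaces",
        MetricName="Unhealthy",
        Dimensions=[{"Name": "DirectoryId", "Value": directory_id}],
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
        Period=300,          # 5-minute datapoints
        Statistics=["Maximum"],
    )
    points = resp.get("Datapoints", [])
    return max((p["Maximum"] for p in points), default=0)
```

For example, `unhealthy_workspaces(client, "d-1234567890")` (with a hypothetical directory ID) returns 0 when all WorkSpaces in the directory have been healthy over the past hour.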

 **WorkSpaces Benefits to RISE** 

Using Amazon WorkSpaces as a remote access solution in a RISE with SAP deployment offers several benefits, particularly around security, access control, and operational efficiency. Here are the key benefits of this approach:

1.  **Enhanced Security and Controlled Access** 

   Isolated Environment: WorkSpaces provide an isolated environment where access to SAP systems in a RISE deployment can be tightly controlled. This helps prevent unauthorized direct access to critical systems.

   No Direct Internet Exposure: By using WorkSpaces as a remote access solution, you can restrict internet access to the SAP environment. External users or administrators must first connect to a secure WorkSpaces, limiting exposure to SAP systems.

   Secure Protocols (PCoIP/WSP): WorkSpaces use secure streaming protocols such as PCoIP or the WorkSpaces Streaming Protocol (WSP), ensuring that data is encrypted during transmission.

   Reduced Attack Surface: By utilizing WorkSpaces as the only point of access to SAP systems, you can reduce the attack surface by isolating SAP environments from direct access over the internet or corporate networks.

   VPC Integration: WorkSpaces can be deployed in private subnets within an Amazon Virtual Private Cloud (VPC), ensuring secure and direct connectivity to the RISE with SAP infrastructure.

    AWS Direct Connect or VPN: You can use AWS Direct Connect or VPN connections to provide a secure network path between the WorkSpaces and SAP environments, further enhancing security.

1.  **Centralized Management** 

   Unified Access Point: Amazon WorkSpaces serve as a single point of access to manage and operate the RISE with SAP environments, simplifying monitoring and control.

   Audit and Logging: AWS services such as AWS CloudTrail and Amazon CloudWatch can log user actions and monitor activities on the WorkSpaces. This helps with security audits and tracking access to SAP systems.

   Integration with AWS IAM: Role-based access control (RBAC) through AWS Identity and Access Management (IAM) ensures fine-grained access to WorkSpaces and SAP resources. This minimizes the risk of unauthorized access and supports compliance requirements.

1.  **Improved Operational Efficiency:** 

   On-Demand Scalability: WorkSpaces can be provisioned quickly and scaled on-demand, making it easy to provide access to administrators or developers needing to access the SAP environment without lengthy setup processes.

   Minimal Maintenance: Amazon WorkSpaces are fully managed, which reduces the overhead of maintaining physical servers or traditional remote desktop infrastructure. Updates and patches are handled by AWS, freeing up time for more critical operations.

   Cost Efficiency: WorkSpaces can be configured to charge only when in use (hourly pricing), making it a cost-effective solution for temporary or infrequent access, especially when not in continuous operation.

   Remote Access: With WorkSpaces, administrators and users can access the SAP environment securely from any location with an internet connection. This is particularly useful for distributed teams or remote workers supporting SAP environments.

   Resilience and Availability: WorkSpaces can be integrated with AWS backup solutions and spread across multiple AWS Availability Zones (AZs), ensuring redundancy and high availability.

   Quick Recovery: In case of failure or disaster in the SAP environment, WorkSpaces provide a quick and scalable way to reconnect to alternative environments or backup systems.
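As a sketch of the IAM-based role-based access control described above, the following policy grants an operator read and reboot access to WorkSpaces while withholding create and delete permissions. The action list is a minimal illustration, not a recommended baseline; scope the `Resource` element to your own ARNs.

```python
import json

# Illustrative least-privilege policy for a WorkSpaces operator role.
# Action names are from the Amazon WorkSpaces IAM action list.
OPERATOR_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "workspaces:DescribeWorkspaces",
                "workspaces:DescribeWorkspaceDirectories",
                "workspaces:RebootWorkspaces",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(OPERATOR_POLICY, indent=2))
```

The printed JSON document can be attached to an IAM role with `iam:CreatePolicy` or pasted into the IAM console policy editor.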

# Reliability
<a name="reliability-rise"></a>

Reliability is one of the six pillars of SAP Lens - AWS Well-Architected Framework. For more information, see [Reliability](https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/reliability.html).

The AWS Cloud offers reliability with multiple Availability Zones within an AWS Region, enabling your SAP applications on AWS to be more resilient. Each Region is isolated from other Regions, providing the greatest possible fault tolerance and stability. Within each AWS Region, there are a minimum of three isolated, physically separate Availability Zones. For more information, see [Regions and Availability Zones](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/).

![\[Diagram that shows the fault tolerance of Regions and Availability Zones\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-aws-global-infra.png)


Availability Zones enable you to operate production applications and databases that are more highly available than would be possible from a single data center. Distributing your applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures.

Each Availability Zone can comprise multiple data centers; at full scale, it can contain hundreds of thousands of servers. Availability Zones are fully isolated partitions of the AWS Global Infrastructure. An Availability Zone is physically separated from any other zone and has its own separate power and networking resources. Availability Zones are several kilometers apart, although all are within 100 km (60 miles) of each other. This distance provides isolation from the most common disasters that could affect data centers, such as floods, fires, severe storms, and earthquakes.

All Availability Zones within a Region are interconnected with high-bandwidth and low-latency networking, over fully redundant and dedicated metro fiber. This ensures high-throughput, low-latency networking between Availability Zones. The network performance is sufficient to accomplish synchronous replication.

![\[Network design diagram for Availability Zones\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-aws-network-design.png)


Availability Zones enable you to run your applications in a highly available manner, with synchronous data replication and automated failover between Availability Zones. RISE with SAP can offer such highly available designs for your workload in every AWS Region.

 **Resiliency and Cost Considerations** 

SAP has options available for RISE to meet different resiliency requirements. The following key requirements are adjustable for RISE via option packages available from SAP.
+ Service Level Agreement (SLA) – describes the targeted availability of the solution.
+ Recovery Time Objective (RTO) – describes the targeted duration within which a recovery from a disaster event should be completed.
+ Recovery Point Objective (RPO) – describes the targeted level of data loss that may occur during recovery from a disaster event.

For specific definitions, clauses, impacts, and penalties in the event of a breach, refer to the definitions provided by SAP as part of the RISE agreement.

An outage or loss of data can cause loss of productivity, loss of income, and damage to your organization's reputation. Weighing the trade-off between cost and resiliency can help you assess the risk to your organization.

 **Resiliency and Performance Considerations** 

When you opt for the short distance disaster recovery option in RISE, the SAP application servers and database servers are installed across multiple Availability Zones. This architecture supports a highly available design for your SAP workload.

Using application servers in multiple Availability Zones in an active-active configuration increases resiliency, but it also introduces higher latency between the application servers and the database server across Availability Zones. [SAP Note 3496343](https://me.sap.com/notes/3496343) (Network Latency on AWS) addresses in detail the increased latency due to the distance between application servers and database servers in a multiple Availability Zone deployment. This is discussed in detail in the subsequent sections. The relevant latency targets are:
+ Network latency between the SAP application server and database server should be less than 0.7 milliseconds, as per [SAP Note 1100926](https://me.sap.com/notes/1100926)
+ Network latency for HANA system replication with synchronous data replication (required to achieve zero data loss) should be [less than 1 millisecond](https://help.sap.com/docs/SAP_HANA_PLATFORM/4e9b18c116aa42fc84c7dbfd02111aba/781c30f901cd49e5be8e711384349379.html)

You can use the [AWS Network Manager – Infrastructure Performance tool](https://docs.aws.amazon.com/network-manager/latest/infrastructure-performance/what-is-nmip.html) to automatically measure Inter-AZ, Intra-AZ and Inter-Region network latency. Alternatively, you can use SAP’s [NIPING](https://me.sap.com/notes/1100926) tool as per [SAP Note 2986631](https://me.sap.com/notes/2986631).
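As a rough illustration of the kind of measurement these tools perform, the sketch below times TCP round trips to a host that echoes one byte back. It assumes an echo responder is listening on the target port, and it is not a substitute for NIPING or AWS Network Manager measurements.

```python
import socket
import statistics
import time

def tcp_round_trip_ms(host, port, samples=20):
    """Return the median round-trip time (milliseconds) to host:port,
    measured by sending one byte and waiting for it to be echoed back."""
    rtts = []
    with socket.create_connection((host, port), timeout=5) as sock:
        # Disable Nagle's algorithm so each byte is sent immediately.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(samples):
            start = time.perf_counter()
            sock.sendall(b"x")
            sock.recv(1)
            rtts.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(rtts)
```

A median well under 0.7 ms between the application and database hosts would be consistent with the SAP Note 1100926 target above; repeated samples and a median are used because single probes are noisy.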

Distributing SAP application servers and database servers across multiple Availability Zones (AZs) significantly enhances system reliability and availability, outweighing the impact of increased network latency.

Cross-Availability Zone traffic may increase the time required to perform certain transactions or batch jobs that make frequent calls to the database. If the impact is high, we recommend keeping this traffic within the same Availability Zone using [SAP Logon Groups](https://help.sap.com/docs/SUPPORT_CONTENT/nwtech/3362694203.html?locale=en-US), [RFC Server Groups](https://help.sap.com/docs/SUPPORT_CONTENT/basis/3354611643.html?locale=en-US), and [Batch Server Groups](https://help.sap.com/docs/SUPPORT_CONTENT/si/3362959530.html?locale=en-US). This ensures that the impacted transactions or batch jobs only use application servers in the same Availability Zone as the database servers.

To automate and optimize the running of such performance-critical batch jobs and transactions on application servers located in the same Availability Zone as the database server, AWS provides [example ABAP code](https://github.com/aws-samples/aws-sap-multiaz) that customers can test and implement in their SAP systems.

You may implement further optimization through [C-State parameters](https://www.intel.com/content/www/us/en/content-details/814415/power-management-dynamic-receive-side-to-increase-sleep-state-residency-solution-brief.html) by referring to [AWS re:Post article Inter-AZ Latency for SAP](https://repost.aws/articles/AR1oVZmFbRSoKqeq1IFJORiA) to lower the network latency.

When it is not feasible to run application servers in active-active mode across multiple Availability Zones, you can run in active-passive mode by utilizing [ABAPSetServerInactive (SAP Note 3075829)](https://me.sap.com/notes/3075829/E).

In rare cases where you observe performance impacts due to latency within one Availability Zone, you can use [Cluster Placement Groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-strategies.html#placement-groups-cluster) to achieve the lowest possible latency. For more information, see the [Placement Strategies Guide from AWS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-strategies.html).

In summary, these are the architecture patterns in a multiple Availability Zone deployment:


| App Servers in AZ1 | App Servers in AZ2 | Failover Mechanism from AZ1 to AZ2 | 
| --- | --- | --- | 
|  Active  |  Active  |  Automated script (for example, Pacemaker)  | 
|  Active  |  Active  |  Manual adjustment of Logon Groups, RFC and Batch Server Groups  | 
|  Active  |  Active  |  Automatic script to adjust Logon Groups, RFC and Batch Server Groups  | 
|  Active  |  Passive  |  Manual activation of the passive application servers  | 
|  Active  |  Passive  |  Automatic script to activate the passive application servers  | 

To achieve high reliability of SAP workloads, we recommend the following tasks:

1. Discuss the availability SLA requirement for the RISE deployment with SAP. This drives which components (that is, database and application servers) are deployed across multiple Availability Zones to maximize the reliability and availability of RISE.

1. If you have business scenarios involving batch jobs or transactions that make frequent calls to the database servers and may be adversely impacted by inter-AZ network latency, consider using SAP's workload distribution mechanisms (SAP Logon Groups, RFC Server Groups, and Batch Server Groups) to ensure these jobs and transactions run on application servers located in the same Availability Zone as the database server.

1. You may implement further optimization of network latency by referring to AWS re:Post article Inter-AZ Latency for SAP.

1. When active-active mode is not feasible, you can run in active–passive mode of application servers utilizing ABAPSetServerInactive (SAP Note 3075829).

1. You can consider placing other workloads that are outside of RISE within the same Availability Zone to achieve better network latency and lower data transfer costs.

 **Disaster recovery options** 

You can implement a disaster recovery solution by replicating data into a second AWS Region. This protects your SAP workloads in the rare event of a local or regional failure.

RISE with SAP S/4HANA Cloud, private edition offers the following two options.
+  **Short distance disaster recovery** or Metro disaster recovery – RISE with SAP uses multiple Availability Zones in an AWS Region. Because every AWS Region has three or more Availability Zones, short distance disaster recovery is available in every AWS Region.
+  **Long distance disaster recovery** or Regional disaster recovery – RISE with SAP uses a secondary AWS Region as standby for failover systems. Owing to the physical distance between two AWS Regions, data is replicated asynchronously between two AWS Regions.

For more details, see SAP documentation [SAP Service Description: Disaster Recovery and Customer Invoked Failover](https://assets.cdn.sap.com/agreements/product-policy/hec/service-description/sap-service-description-disaster-recovery-and-customer-invoked-failover-english-v7-2022.pdf).

# Observability
<a name="rise-observability"></a>

Observability is essential for SAP customers to understand their SAP landscape and the internal state of their systems by analyzing external outputs such as logs, metrics, and traces. Unlike on-premises or native AWS deployments, customers running RISE with SAP do not have the ability to directly access, manage, or monitor the underlying infrastructure and dependent resources. Nevertheless, they still need to ensure their systems are operating as expected and that any issues are proactively identified and resolved within the SAP application stack.

**Topics**
+ [Shared Responsibility](rise-observability-shared-responsibility.md)
+ [Observability Options](rise-observability-options.md)

# Shared Responsibility
<a name="rise-observability-shared-responsibility"></a>

SAP bundles cloud infrastructure, S/4HANA software, tools, and services into a single subscription in the RISE with SAP commercial model. Although it is a comprehensive managed service, observability remains a critical concern that customers still want to have control of, and prefer to understand the internal state of their systems. Not all observability features are included in the construct by default. Customers should be aware of optional and excluded tasks based on the latest [RISE Roles and Responsibilities](https://assets.cdn.sap.com/agreements/product-policy/hec/roles-responsibilities/rise-with-sap-s4hana-cloud-private-edition-and-sap-erp-tailored-option-roles-and-responsibilities-english-v3-2025.pdf). SAP manages the infrastructure, operating system, database, and application layer. However, this creates a potential visibility gap for customers that they didn’t have while running SAP on-premises or natively on cloud. Without appropriate observability tools, organizations struggle to understand performance issues, identify bottlenecks, and ensure optimal business operations. This lack of visibility becomes especially problematic when issues span both SAP and other enterprise systems.

One such example, data volume management, requires active customer oversight. As data volumes grow, performance can degrade and costs can increase. Customers need tools to monitor data growth, usage patterns, and archiving needs to maintain system health and control expenses. Understanding data consumption patterns is critical, as they directly impact operational costs. System availability and performance monitoring across the entire landscape is equally essential. While SAP monitors the core systems, customers need visibility into end-to-end performance, including response times, system availability, and resource utilization. However, customers are responsible for monitoring all custom applications and external interfaces.

# Observability Options
<a name="rise-observability-options"></a>

Observability in RISE with SAP requires a strategic approach that considers native tools from AWS and SAP as well as third-party solutions. This guide highlights three observability options that customers can choose from based on their specific requirements and each solution's limitations.

**Topics**
+ [Native AWS](rise-observability-options-nativeaws.md)
+ [SAP Cloud ALM](rise-observability-sap-cloud-alm.md)
+ [Partner Solutions](rise-observability-partner-solutions.md)

# Native AWS
<a name="rise-observability-options-nativeaws"></a>

 **SAP Monitoring using Amazon CloudWatch** 

Amazon CloudWatch is a service that monitors applications, responds to performance changes, optimizes resource use, and provides insights into operational health. Amazon CloudWatch for SAP is a native AWS monitoring solution that provides comprehensive observability for SAP workloads running on AWS. The solution enables organizations to monitor, analyze, and optimize their SAP landscape using AWS's built-in monitoring capabilities, offering seamless integration with AWS services and automated insights for SAP systems.

To provide reliable, end-to-end observability of SAP landscape on AWS, it is recommended to implement a layered approach that spans application metrics, user experience, operations tooling, and automation. When building observability for SAP on AWS, the aim should be to proactively detect issues across the entire SAP stack from application servers and databases to networks and user interfaces, while also measuring real user experience in applications such as SAP Fiori. The goal is to shorten the time required to detect, diagnose, and remediate problems, automate routine monitoring tasks to minimize manual effort, and ensure that all activities are carried out with strong security, cost efficiency, and operational discipline.

Because you cannot access CloudWatch in the RISE with SAP account directly, you can use the solution described in the next section to export the metrics into your own AWS account, where you can access them through your own CloudWatch service.

 **Monitoring SAP ABAP-based systems on AWS** 

To establish lightweight and scalable monitoring for SAP ABAP-based systems with RISE on AWS, you can adopt a serverless model where AWS Lambda (with SAP Java Connector) configured in your own AWS account extracts workload and monitoring data from SAP transactions such as ST03, STAD, and /SDF/SMON, and publishes them as custom metrics in Amazon CloudWatch. A CloudWatch rule schedules the data collection, credentials are managed securely in AWS Secrets Manager, and the Lambda function runs in a customer-managed VPC with connectivity to the SAP Managed VPC. The Lambda function connects to the SAP systems running in the SAP Managed VPC via RFC. You can then build dashboards and alarms in CloudWatch to visualize system performance, proactively detect anomalies, and alert on thresholds, all with minimal operational overhead and low cost. This approach eliminates the need for additional infrastructure or agents, scales across multiple SAP systems, and provides a secure, cost-effective baseline for observability.

![\[RISE observability Native Option\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-observability-nativeaws.png)


High-Level Implementation Steps:

1. Create a dedicated SAP RFC user with required authorizations for monitoring.

1. Establish network connectivity between your AWS account and RISE AWS account.

1. Deploy a Lambda function in your own AWS account using the SAP Java Connector (JCo) as a layer, via the AWS Serverless Application Repository or CloudFormation template.

1. Configure the Lambda to run inside a VPC/subnet with RFC access to your SAP system.

1. Store SAP credentials securely in AWS Secrets Manager.

1. Set a CloudWatch rule to schedule metric collection at appropriate intervals.

1. Build CloudWatch dashboards and alarms using the custom metrics to visualize system health and trigger alerts.

You can follow [SAP monitoring: A serverless approach using Amazon CloudWatch](https://aws.amazon.com/blogs/awsforsap/sap-monitoring-a-serverless-approach-using-amazon-cloudwatch/) for detailed steps and implementation guidance.
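The publishing step in this flow can be expressed as a short sketch. Note that the blog's actual implementation is a Java Lambda using SAP JCo; the Python below is only illustrative, and the namespace, metric names, and the placeholder workload snapshot are assumptions, not part of the referenced solution:

```python
# Sketch only: maps an SAP workload snapshot (already pulled over RFC,
# e.g. from ST03 or /SDF/SMON) to CloudWatch custom metrics.
# Namespace, metric names, and dimensions below are illustrative.

def build_metric_data(sid, snapshot):
    """Build CloudWatch PutMetricData entries from an SAP workload snapshot."""
    dimensions = [{"Name": "SID", "Value": sid}]
    return [
        {"MetricName": "DialogResponseTimeMs", "Dimensions": dimensions,
         "Value": snapshot["dialog_response_ms"], "Unit": "Milliseconds"},
        {"MetricName": "ActiveUsers", "Dimensions": dimensions,
         "Value": snapshot["active_users"], "Unit": "Count"},
    ]

def lambda_handler(event, context):
    # In the real solution, SAP credentials come from AWS Secrets Manager
    # and the snapshot is read over RFC from the SAP Managed VPC.
    snapshot = {"dialog_response_ms": 420.0, "active_users": 37}  # placeholder
    metric_data = build_metric_data("PRD", snapshot)
    # import boto3
    # boto3.client("cloudwatch").put_metric_data(
    #     Namespace="SAP/Monitoring", MetricData=metric_data)
    return {"published": len(metric_data)}
```

Keeping the metric-mapping logic in a small pure function like `build_metric_data` makes the collection schedule, credentials, and transport (RFC) swappable without touching the CloudWatch payload shape.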

By implementing this approach, you gain scalable, secure, and cost-effective monitoring for your SAP ABAP systems, enabling proactive issue detection and performance visibility. This foundation allows you to expand observability over time, incorporate additional metrics, and integrate monitoring seamlessly into your operational workflows via native AWS services.

 **Leveraging Amazon QuickSight Visualization for SAP Monitoring** 

Building on the “Monitoring SAP ABAP-based Systems on AWS” approach, you can gain deeper, business-level visibility into your RISE with SAP environment by integrating Amazon CloudWatch Logs with Amazon QuickSight using Amazon Athena. This lets you take raw operational log data, store and query it efficiently, and build interactive dashboards and reports that non-technical stakeholders can use, offering you a unified picture of system health, user behavior, and security from a single pane.

![\[RISE observability with Quick Sight\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-observability-quicksight.png)


To implement this integration, you first set up the Athena CloudWatch Logs connector by deploying a Lambda function that enables Athena to query your CloudWatch Logs. Next, you define Athena views that structure and extract the relevant log fields, such as timestamps, error codes, or custom SAP log entries, to make them ready for analysis. With the views in place, you connect Amazon QuickSight to Athena by granting the necessary IAM permissions and configuring S3 access, then import or directly query the log data. Finally, you build interactive dashboards and visualizations in QuickSight to monitor trends, error rates, and operational KPIs, and optionally enable Amazon Q in QuickSight so your business users can ask natural language questions against the SAP log data without writing SQL.

Once you have set up SAP metrics from the RISE environment into Amazon CloudWatch in your own AWS account, you can follow [Integrate Amazon CloudWatch Logs with Amazon QuickSight using Amazon Athena](https://aws.amazon.com/blogs/business-intelligence/integrate-amazon-cloudwatch-logs-with-amazon-quicksight-using-amazon-athena/) for detailed steps and implementation guidance.
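To make the "define views, then query from QuickSight" step concrete, the sketch below assembles the Athena `StartQueryExecution` parameters for a log view. The database name, view name, and S3 output location are hypothetical placeholders for your own environment:

```python
# Sketch only: queries a hypothetical Athena view built over CloudWatch Logs
# (exposed via the Athena CloudWatch Logs connector). Database, view, and
# S3 output location are placeholders.

def build_query_request(database, view, output_s3):
    """Assemble Athena StartQueryExecution parameters for a log view."""
    query = (
        f'SELECT "timestamp", message FROM "{view}" '
        'ORDER BY "timestamp" DESC LIMIT 100'
    )
    return {
        "QueryString": query,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

request = build_query_request(
    "sap_logs_db", "sap_metrics_view", "s3://my-athena-results/sap/")
# import boto3
# boto3.client("athena").start_query_execution(**request)
```

QuickSight can then point at the same Athena database and view through its Athena data source, so dashboards and ad-hoc SQL share one definition of the log fields.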

 **Monitoring and optimizing SAP Fiori user experience on AWS** 

You can monitor and improve the user experience of your SAP Fiori applications by leveraging [Amazon CloudWatch Real User Monitoring (RUM)](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-RUM.html). This enables you to capture how actual users interact with the SAP Fiori launchpad and apps in real-time, measuring performance, error rates, and user drop-offs. By understanding user experience metrics, you can proactively optimize your front-end performance and ensure a smooth, responsive SAP Fiori environment.

![\[RISE observability for SAP Fiori\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-observability-fiori.png)


High-Level Implementation Steps:

1. Create a CloudWatch RUM app monitor in the AWS console.

1. Deploy the generated JavaScript snippet as a Fiori plugin in the launchpad with appropriate catalogs and role assignments.

1. Configure RUM to capture key metrics: page load times, Core Web Vitals (LCP, FID, CLS), and browser errors.

1. Optionally configure sampling to balance data volume and cost.

1. Create dashboards and alarms in CloudWatch to monitor performance trends and user-impacting issues.

1. Add manual route-change events where necessary to properly capture single-page application navigation.

You can follow [Monitor and Optimize SAP Fiori User Experience on AWS](https://aws.amazon.com/blogs/awsforsap/monitor-and-optimize-sap-fiori-user-experience-on-aws/) for detailed steps and implementation guidance.
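The monitor-creation and sampling steps above can be sketched as a request builder for the CloudWatch RUM `CreateAppMonitor` API. The monitor name, domain, sample rate, and telemetry list are illustrative assumptions; check the API reference for the full set of options:

```python
# Sketch only: builds a CloudWatch RUM CreateAppMonitor request for a
# Fiori launchpad domain. Name, domain, and sample rate are placeholders.

def build_app_monitor_request(name, domain, sample_rate=0.5):
    """Assemble a CreateAppMonitor request covering the key RUM settings."""
    return {
        "Name": name,
        "Domain": domain,
        "AppMonitorConfiguration": {
            # Sampling balances data volume against cost (step 4 above).
            "SessionSampleRate": sample_rate,
            # Capture page performance, browser errors, and HTTP calls.
            "Telemetries": ["performance", "errors", "http"],
            "AllowCookies": True,
        },
    }

request = build_app_monitor_request("fiori-prd", "fiori.example.com")
# import boto3
# boto3.client("rum").create_app_monitor(**request)
```

The returned monitor configuration is what generates the JavaScript snippet you then deploy as the Fiori launchpad plugin described in step 2.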

By implementing CloudWatch RUM for SAP Fiori, you gain deep insight into end-user experience, allowing your team to proactively identify and resolve front-end performance bottlenecks. This approach ensures higher user satisfaction, continuous improvement of SAP Fiori apps, and actionable data for IT and business teams.

 **Enhance SAP Monitoring using AIOps with CloudWatch & Application Signals MCP Servers** 

You can supercharge your RISE with SAP observability by using the AWS MCP Servers together with Amazon Q CLI to enable intelligent, context-aware troubleshooting. These tools let you correlate metrics, traces, logs, and service health automatically, define service-level objectives (SLOs), and interact with your observability data using natural-language prompts, helping you find root causes faster, diagnose performance problems more intuitively, and generally improve how quickly you remediate issues in your SAP landscape. Additionally, you can monitor critical network components, such as Direct Connect links and VPCs in a RISE with SAP environment deployed via AWS Landing Zone, ensuring connectivity is available, performance is optimal, and any failures are detected and mitigated promptly.

High-Level Implementation Steps:

1. Ingest full observability data (metrics, logs, traces) from your RISE with SAP systems into Amazon CloudWatch and enable Application Signals.

1. Define Service Level Objectives (SLOs) that align with SAP performance goals (e.g., dialog response time, transaction throughput, Fiori UI latency).

1. Deploy and configure the CloudWatch MCP Server and Application Signals MCP Server in your environment.

1. Set up IAM roles and permissions with least-privilege access so MCP Servers can securely interact with CloudWatch and Application Signals data.

1. Install the Amazon Q Developer CLI, configure it to use the MCP Servers, and map it to your AWS profile and region.

1. Validate that MCP Servers are loaded correctly and responding to Q CLI.

1. Start using natural-language queries in Q CLI to troubleshoot issues, detect latency spikes, validate SLO compliance, and accelerate root-cause analysis across your SAP stack.

Once operational, you can use Q CLI to ask natural-language questions like “Which backend operations are failing most often in my S/4HANA system?”, “Is there any breach of our SLOs for SAP services over the past 24 hours?”, or “Are there any signs of threats in my SAP system over the last 7 days based on my CloudTrail logs?”, letting the tools do much of the correlation and log/pattern detection for you.

You can follow [Streamline SAP Operation with CloudWatch MCP server and Amazon Q CLI](https://aws.amazon.com/blogs/awsforsap/streamline-sap-operation-with-cloudwatch-mcp-server-and-amazon-q-cli-part-3/) for detailed steps and implementation guidance.

By adopting CloudWatch and Application Signals MCP Servers with Q CLI, you make SAP monitoring not just reactive but more predictive and conversational. You dramatically reduce mean time to resolution because, instead of manually crawling logs and dashboards, you can ask focused questions and get insights tied to your SAP environment. In environments with many components (app servers, database, network, UI), the MCP servers help you correlate failures across layers (for example, a slow database, an overloaded app server, or network latency) more quickly. This approach also gives you enforceable performance targets (through SLOs), better visibility into service health, and more robust incident remediation workflows, all helping you operate RISE with SAP on AWS with higher efficiency and reliability.

 **Conclusion** 

By combining Amazon CloudWatch, CloudWatch RUM, Application Signals, MCP Servers, Amazon Q CLI, Athena, and QuickSight, you can create a fully integrated, end-to-end observability strategy for your RISE with SAP environment on AWS. This approach enables you to monitor backend systems, SAP Fiori user experience, and service-level objectives, while correlating metrics, logs, and traces across your entire SAP stack.

MCP Servers and Amazon Q CLI provide powerful capabilities to interact with observability data using natural-language queries, automate routine operational tasks, generate health reports, and accelerate root-cause analysis, reducing manual effort and improving operational efficiency. At the same time, the solution is fully customizable, giving you the opportunity to design dashboards, alerts, data collection, and workflows to meet your specific business requirements and compliance needs. Overall, this strategy improves system reliability, enhances user satisfaction, and empowers both technical teams and business stakeholders to proactively optimize and maintain SAP workloads on AWS in a secure, cost-effective, and resilient manner.

# SAP Cloud ALM
<a name="rise-observability-sap-cloud-alm"></a>

 [SAP Cloud Application Lifecycle Management (ALM)](https://support.sap.com/en/alm/sap-cloud-alm.html) serves as the primary tool for observability in cloud and hybrid landscapes. It provides a cloud-native approach to monitoring SAP solutions with a focus on standardization rather than extensive customization. Cloud ALM is provided to customers with active cloud services and can be used for both cloud and on-premises SAP solutions, making it suitable for hybrid environments.

![\[RISE observability with SAP Cloud ALM\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-observability-sap-cloud-alm.png)


 **Health Monitoring in SAP Cloud ALM** 

At the heart of Cloud ALM’s monitoring capabilities is the [Health Monitoring application](https://support.sap.com/en/alm/sap-cloud-alm/operations/expert-portal/health-monitoring/health-monitoring-setup-configuration/sap-cloud-alm.html), which systematically collects metrics to calculate the overall health of managed components. The solution presents a comprehensive dashboard displaying the current status of all connected services and systems, tracking critical KPIs including system availability, response times, memory and CPU utilization, database performance, disk space usage, job processing status, queue backlogs, user sessions, and security events. This multifaceted monitoring approach enables organizations to maintain visibility across their SAP landscape, with features spanning system availability tracking, performance monitoring, security surveillance, certificate expiration alerts, threshold-based notifications, and historical data retention for trend analysis. For further details on SAP Cloud ALM Health Monitoring, refer to [SAP Help documentation](https://help.sap.com/docs/cloud-alm/applicationhelp/health-monitoring?locale=en-US).

 **User Experience Monitoring in SAP Cloud ALM** 

Cloud ALM enhances its monitoring capabilities through User Experience Monitoring, which employs two complementary approaches. Real User Monitoring captures actual user interactions with SAP applications, providing authentic insights into performance metrics such as page load times, response times, and error rates. Complementing this, Synthetic User Monitoring simulates user interactions at regular intervals through predefined scripts, measuring performance even when no actual users are active. This dual approach ensures continuous visibility into application performance from both real-world and controlled testing perspectives. For further details on SAP Cloud ALM User Experience Monitoring, refer to [SAP Help documentation](https://help.sap.com/docs/cloud-alm/applicationhelp/real-user-monitoring?locale=en-US).

 **Operations Automation and View Dashboard** 

SAP Cloud ALM offers Operations Automation capabilities for orchestrating and automating standard operations and problem resolution procedures. The Operations View dashboard provides a comprehensive view of system health, calculating a System Health score based on key performance indicators such as Connectivity, Exceptions, Background Processing, and Performance.

 **Cost of Using SAP Cloud ALM** 

SAP Cloud ALM is included in cloud subscriptions with SAP Enterprise Support. According to [SAP’s fair use policy](https://help.sap.com/docs/CloudALM/08879d094f3b4de3ac67832f4a56a6de/fair-use), the default resources provided are generally sufficient for standard use cases. Organizations can monitor their usage metrics, including memory consumption and outbound API usage, in the Tenant Information app within SAP Cloud ALM. To reduce memory usage without purchasing extensions, organizations can adjust housekeeping settings in SAP Cloud ALM for operations apps. For extended use scenarios or organizations requiring additional resources, SAP offers SAP Cloud ALM, Tenant Extension. For further details, refer to [SAP Help documentation](https://help.sap.com/docs/cloud-alm/setup-administration/getting-additional-tenants?locale=en-US).

 **Conclusion** 

For SAP Cloud ERP environments, Cloud ALM represents a valuable starting point for monitoring that comes included with the subscription. As environments grow in complexity and business criticality increases, organizations should continuously assess whether the standardized monitoring approach of Cloud ALM sufficiently addresses their evolving needs, or whether a specialized partner monitoring solution would provide greater business value through enhanced observability and improved operational efficiency.

# Partner Solutions
<a name="rise-observability-partner-solutions"></a>

While customers can build SAP observability solutions using AWS services, or use SAP Cloud ALM, there are several compelling reasons to choose partner solutions. Partner observability solutions offer pre-built integrations and thus faster implementation. While Cloud ALM provides out-of-the-box observability with a focus on standardization, partner offerings often provide extensive customization and specialized expertise without the need for dedicated engineering teams. Partner solutions deliver a complete package with built-in best practices, professional support, and advanced capabilities like AI/ML analytics, often at a lower total cost of ownership. This allows organizations to focus on their core business rather than building and maintaining observability infrastructure.

The following list of partner solutions is not exhaustive. We recommend checking the latest AWS Marketplace listings for SAP observability solutions or [contacting us](https://pages.awscloud.com/contact-the-sap-on-aws-team.html) for more information.

**Topics**
+ [New Relic Monitoring for SAP](rise-observability-newrelic.md)
+ [SoftwareOne: PowerConnect for SAP Solutions](rise-observability-powerconnect.md)
+ [PowerConnect for SAP on Dynatrace](rise-observability-dynatrace.md)
+ [Splunk Service Intelligence for SAP Solutions](rise-observability-splunk.md)

# New Relic Monitoring for SAP
<a name="rise-observability-newrelic"></a>

New Relic Monitoring for SAP is a comprehensive observability solution that provides a holistic, end-to-end view connecting SAP performance to business outcomes and non-SAP systems. The solution enables organizations to monitor their entire enterprise stack through a single pane of glass, offering unified visibility across SAP landscapes with AI-driven insights and powerful visualizations.

Key Benefits:
+ Over 175 monitoring points, 35 dashboards, and 17 alert policies out-of-the-box
+ Certified for SAP Cloud ERP Private with agentless architecture
+ Minimal performance impact through agentless architecture, plus non-SAP monitoring with a unified "single pane of glass" view
+ End-to-end distributed traces for full transaction flow monitoring and business process step monitoring with key process performance indicators

Architecture

The solution utilizes a truly agentless architecture through a native, SAP-certified ABAP Add-on installed on a single, central monitoring system. This centralized connector pulls data from other SAP systems, eliminating the need for agents on each production system. The solution provides comprehensive monitoring across six key areas:

1. System Health: Monitors overall system health, central/enqueue server status, ABAP message server, and network connectivity

1. Resource Utilization: Tracks user activity, memory utilization, CPU usage, and system efficiency metrics

1. Database: Provides detailed insights for both HANA and Non-HANA databases

1. Performance: Measures Dialog Response Time, RFC Response Time, and Background Jobs

1. Security: Monitors critical security components, certificates, and compliance

1. BTP Monitoring: Integrates with SAP CloudALM OpenTelemetry APIs for comprehensive BTP environment monitoring

![\[RISE observability with New Relic\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-observability-newrelic.png)


The New Relic Monitoring for SAP Solutions [product documentation](https://docs.newrelic.com/docs/data-apis/custom-data/sap-integration/) provides technical details along with installation and configuration steps. You can procure your [New Relic solution from AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-yg3ykwh5tmolg), or get a quick overview through the [data sheet](https://newrelic.com/sites/default/files/2025-08/new-relic-sap-data-sheet-2025-aug.pdf).

Disclaimer: New Relic and the New Relic logo are trademarks of New Relic, Inc. All other trademarks are the property of their respective owners.

# SoftwareOne: PowerConnect for SAP Solutions
<a name="rise-observability-powerconnect"></a>

PowerConnect is an SAP-certified advanced observability and security monitoring solution that streams real-time telemetry, performance, business, and security data from SAP systems into leading observability platforms such as Splunk and Dynatrace. It enables organizations to extend their existing monitoring investments into the SAP landscape, providing deep visibility into application performance, user activity, security events, and system health without disrupting core business operations.

Key Capabilities:
+ Out-of-the-box connectors for SAP NetWeaver, S/4HANA, ECC, BW, and more.
+ Pre-built dashboards and analytics for rapid time-to-value.
+ Configurable data capture for performance metrics, change events, and business transactions.
+ Low-overhead data collection that does not impact SAP system performance.

Architecture

PowerConnect ensures full compatibility and compliance with SAP standards. The solution can be deployed and configured in under 45 minutes per SAP system, enabling rapid time-to-value. Out of the box, PowerConnect can capture over 360 key SAP metrics across performance, security, and business process domains, and delivers over 1600 pre-defined use cases ready to consume in your chosen monitoring or observability platform, reducing implementation effort and accelerating insights.

![\[RISE observability with Power Connect\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-observability-powerconnect.png)


SoftwareOne PowerConnect for SAP Solutions' [product documentation](https://help.powerconnect.io/powerconnectdocumentation/powerconnect-documentation-landing-page) provides comprehensive technical details along with installation and configuration steps, and the solution is available through [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-bdpl5zjkasukg).

Disclaimer: SoftwareOne and PowerConnect are trademarks of SoftwareOne AG. All other trademarks, names, and logos are the property of their respective owners.

# PowerConnect for SAP on Dynatrace
<a name="rise-observability-dynatrace"></a>

PowerConnect for SAP on Dynatrace is a comprehensive observability solution that combines SoftwareOne’s deep SAP expertise with Dynatrace’s AI-powered platform to deliver unified visibility across SAP landscapes. The solution enables organizations to monitor complex SAP environments spanning traditional on-premises infrastructure, SAP Cloud ERP, SAP Business Technology Platform (BTP), and various cloud solutions through a single pane of glass.

Key Benefits
+ Comprehensive visibility across diverse SAP platforms including SAP S/4HANA, SAP BTP, and other SAP offerings
+ Real-time monitoring and insights for business continuity
+ Comprehensive security audit and application log analysis
+ AI-powered contextual intelligence for transaction tracing
+ Over 200 pre-built dashboards for common SAP observability use cases
+ Single pane of glass visibility for entire SAP landscape

Architecture

The solution provides a unified observability framework that seamlessly integrates with various SAP deployment scenarios. At its core, the solution utilizes PowerConnect agents (ABAP and Java) for direct integration with SAP Cloud ERP private environments, while for SaaS and public cloud solutions, it deploys a dedicated AWS virtual machine running the PowerConnect Cloud component. This VM acts as an active remote monitoring agent, establishing connections to SAP APIs and forwarding telemetry data to the Dynatrace tenant. All observability signals, regardless of their source - whether from SAP Cloud ERP, BTP, or other SAP cloud solutions - are consolidated within the Dynatrace Grail data lakehouse. This unified architecture enables comprehensive monitoring and analytics across the entire SAP landscape through a single pane of glass, allowing organizations to maintain complete visibility of their SAP ecosystem while leveraging Dynatrace’s AI-powered analytics capabilities.

![\[RISE observability with Dynatrace\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-observability-dynatrace.png)


The PowerConnect for SAP on Dynatrace [product documentation](https://www.dynatrace.com/hub/detail/powerconnect-for-sap-on-dynatrace-1/) provides comprehensive technical details along with installation and configuration steps. You can procure your [Dynatrace tenant from AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-si2angoettdnc?sr=0-1&ref_=beagle&applicationId=AWSMPContessa), and obtain a PowerConnect license from SoftwareOne via [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-bdpl5zjkasukg).

Disclaimer: Dynatrace, Grail, and the Dynatrace logo are trademarks of the Dynatrace, Inc. group of companies. All other trademarks are the property of their respective owners.

# Splunk Service Intelligence for SAP Solutions
<a name="rise-observability-splunk"></a>

Splunk Service Intelligence for SAP Solutions is a comprehensive out-of-the-box solution that provides proactive, end-to-end monitoring of your SAP environment. It gives you the ability to monitor the infrastructure elements that run SAP and the application components that connect to it. Use this app with the monitoring capabilities in Splunk IT Service Intelligence (ITSI) to quickly and proactively detect problems in your SAP environment, reduce issues, and avoid costly outages.

Key Benefits
+ Out-of-the-box monitoring capability for SAP landscapes and real-time insights into SAP’s health, performance, and security status
+ Proactive management of unplanned downtime
+ Over 2000 SAP-specific use cases and hundreds of pre-delivered dashboards
+ Instant visibility into transaction logs, security use cases, system performance, and user experience
+ Advanced big data analysis and visualization capabilities

Architecture

Service Intelligence for SAP Solutions can be deployed in under an hour with the help of a simple ticket logged to SAP ECS. At its core, the solution utilizes SoftwareOne’s PowerConnect agents (ABAP and Java) for direct integration with SAP Cloud ERP private environments, while for SaaS and public cloud solutions, it deploys a dedicated AWS virtual machine running the PowerConnect Cloud component. Using PowerConnect to access SAP information in real time and leveraging Splunk machine learning, artificial intelligence, advanced big data, and visualization capabilities unlocks unprecedented insight, understanding, and preventive actions.

![\[RISE observability with Splunk\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-observability-splunk.png)


The Splunk Service Intelligence for SAP Solutions [product documentation](https://help.splunk.com/en/splunk-it-service-intelligence/extend-itsi-and-ite-work/service-intelligence-for-sap-solutions/2.4/overview/overview-of-splunk-service-intelligence-for-sap-solutions) provides comprehensive technical details along with installation and configuration steps.

Disclaimer: Splunk, ITSI, and the Splunk logo are trademarks of Splunk Inc., a Cisco Systems, Inc. company. All other trademarks are the property of their respective owners.

# Change Management
<a name="rise-change-management"></a>

In RISE with SAP, SAP Enterprise Cloud Services (ECS) manages technical transports, while customers are responsible for application-related transports through the SAP Transport Management System (TMS). Refer to [RISE with SAP S/4HANA Roles and Responsibilities](https://assets.cdn.sap.com/agreements/product-policy/hec/roles-responsibilities/rise-with-sap-s4hana-cloud-private-edition-and-sap-erp-tailored-option-roles-and-responsibilities-english-v3-2025.pdf) for more detail.

While customers have flexibility in performing transports, it’s recommended to coordinate larger changes that go beyond RISE with SAP ECS to ensure proper operational support and monitoring of potential impacts. For example, coordinate when you deploy AWS solutions that integrate with RISE with SAP on AWS, such as [Data Lake on AWS](https://aws.amazon.com/big-data/datalakes-and-analytics/datalakes/), [AWS Internet of Things (IoT)](https://aws.amazon.com/iot/), and other innovations that leverage AWS services.

**Topics**
+ [Change Management for RISE with SAP](rise-change-management-for-rise.md)
+ [Change Management for AWS Services](rise-change-management-for-aws.md)
+ [Change Management with Partner Solutions](rise-change-management-partner.md)

# Change Management for RISE with SAP
<a name="rise-change-management-for-rise"></a>

 [SAP Cloud ALM](https://support.sap.com/en/alm/sap-cloud-alm.html) provides capability to manage change and orchestrate deployments across the landscape. For RISE with SAP, Cloud ALM integrates with [Change and Transport System (CTS)](https://support.sap.com/en/tools/software-logistics-tools/change-and-transport-system.html) to orchestrate the deployment of transport requests.

For SAP BTP, Cloud ALM integrates with [SAP Cloud Transport Management Service (cTMS)](https://www.sap.com/sea/products/technology-platform/cloud-transport-management.html) and allows you to transport multiple content types from your development or testing subaccount to the production subaccount (a list of supported content types is available [here](https://help.sap.com/docs/cloud-transport-management/sap-cloud-transport-management/supported-content-types)).

For customers using SAP Solution Manager, [Change Request Management (ChaRM)](https://support.sap.com/en/alm/solution-manager/training-services/alm-consulting-services/change-management.html?anchorId=section) is an integrated functionality that provides comprehensive change management.

SAP provides a [DevOps reference framework](https://architecture.learning.sap.com/docs/ref-arch/1c5706feb5) to automate large parts of your deployment pipeline, allowing you to quickly set up CI/CD pipelines as part of SAP Build.

# Change Management for AWS Services
<a name="rise-change-management-for-aws"></a>

You manage the change management of the AWS services that are connected to RISE with SAP, and AWS provides services to automate pipeline provisioning and control. [AWS for DevOps](https://aws.amazon.com/devops/) provides a comprehensive set of flexible services designed to help companies build and deliver products more rapidly and reliably using AWS and DevOps practices.

These services simplify infrastructure provisioning, application code deployment, software release process automation, and performance monitoring. AWS offers fully managed services that require no setup, are ready to use with an AWS account, and can scale from a single instance to thousands. The platform supports automation of manual tasks, secure access control through IAM, and integrates with a large ecosystem of partners.

[AWS CodePipeline](https://aws.amazon.com/codepipeline/), [AWS CodeBuild](https://aws.amazon.com/codebuild/), and [AWS CodeDeploy](https://aws.amazon.com/codedeploy/) together form an effective CI/CD automation suite. They support synchronized deployments across development (dev), pre-production (pre-prd), and production (prd) landscapes by enabling automated build, test, and deployment workflows tailored for multi-environment scenarios.

**How the Services Work Together**
+ CodePipeline orchestrates the workflow by connecting stages for source, build, test, and deploy actions across environments.
+ CodeBuild handles compiling, packaging, and testing code for each environment (dev, pre-prd, prd), offering isolation for dependencies and configuration.
+ CodeDeploy manages the deployment process to targets such as EC2, ECS, Lambda, and supports advanced strategies like blue/green and canary deployments for safe releases to production.

**Multi-Environment Design**
+ Separate pipelines or stages can be configured for dev, pre-prd, and prd. Typically:
  + A new commit triggers a pipeline that builds in dev, runs automatic tests, and deploys to the dev landscape.
  + Upon successful tests, a manual or automated approval can promote the artifact to pre-prd for further integration or user acceptance testing.
  + After all checks in pre-prd, another approval or trigger deploys the artifact to prd, leveraging deployment strategies to minimize risk.
+ Best practice is to isolate environments using separate AWS accounts or permission boundaries to enhance security and traceability.
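The staged promotion described above can be expressed as a pipeline definition. The following is a minimal sketch of a dev → approval → prd pipeline as the structure boto3's `codepipeline.create_pipeline` accepts; all names, the repository, the connection ARN, and the artifact bucket are illustrative placeholders, not values from this guide.

```python
# Sketch of a multi-stage CodePipeline definition (dev -> manual approval -> prd).
# All names, ARNs, repository ids, and the artifact bucket are placeholders;
# pass the returned dict to boto3's codepipeline.create_pipeline to provision it.
def build_pipeline_definition(role_arn: str, artifact_bucket: str) -> dict:
    return {
        "name": "sap-extension-pipeline",
        "roleArn": role_arn,
        "artifactStore": {"type": "S3", "location": artifact_bucket},
        "stages": [
            {"name": "Source", "actions": [{
                "name": "FetchSource",
                "actionTypeId": {"category": "Source", "owner": "AWS",
                                 "provider": "CodeStarSourceConnection", "version": "1"},
                "outputArtifacts": [{"name": "SourceOutput"}],
                "configuration": {"ConnectionArn": "arn:aws:codestar-connections:placeholder",
                                  "FullRepositoryId": "org/repo", "BranchName": "main"}}]},
            {"name": "BuildAndDeployDev", "actions": [{
                "name": "Build",
                "actionTypeId": {"category": "Build", "owner": "AWS",
                                 "provider": "CodeBuild", "version": "1"},
                "inputArtifacts": [{"name": "SourceOutput"}],
                "outputArtifacts": [{"name": "BuildOutput"}],
                "configuration": {"ProjectName": "dev-build"}}]},
            # Manual approval gate before the production landscape
            {"name": "ApproveForPrd", "actions": [{
                "name": "Approval",
                "actionTypeId": {"category": "Approval", "owner": "AWS",
                                 "provider": "Manual", "version": "1"}}]},
            {"name": "DeployPrd", "actions": [{
                "name": "Deploy",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "CodeDeploy", "version": "1"},
                "inputArtifacts": [{"name": "BuildOutput"}],
                "configuration": {"ApplicationName": "sap-ext-app",
                                  "DeploymentGroupName": "prd"}}]},
        ],
    }
```

A pre-prd stage would follow the same pattern between the dev and approval stages; in a cross-account setup each deploy action would additionally carry a `roleArn` for the target account.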

**Key Considerations for dev, pre-prd, and prd CI/CD**
+ Use infrastructure-as-code (CloudFormation/Terraform) to ensure repeatable, auditable landscape setup.
+ Automate unit, integration, and end-to-end tests at every stage.
+ Apply environment-specific variables and configuration with modular pipeline stages.
+ Implement approval gates for high-stake environments, especially for production releases.
+ Enable monitoring (CloudWatch/X-Ray) and restrict direct environment access, particularly for the production landscape.

Each environment benefits from isolated configuration, targeted testing, and deployment strategies that ensure defects are detected early and mitigated before reaching production.

This modular and environment-aware CI/CD setup automates releases, enables fast iteration in dev, thorough scrutiny in pre-prd, and secure, reliable deployments in prd, supporting the full development lifecycle while protecting production stability.

![\[RISE Change Management\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-change-management.png)


# Change Management with Partner Solutions
<a name="rise-change-management-partner"></a>

When your requirements go beyond standard SAP and AWS change management tools, the following are a few selected partner solutions for testing and change management.

1. Tricentis - [Tricentis Continuous Testing Platform](https://aws.amazon.com/marketplace/pp/prodview-ebb4w7ntxyuq4) is an AI-driven, fully automated, and codeless software testing solution deployed on AWS. It accelerates software delivery with faster release cycles, reduces costs through automation and provides risk coverage for enterprise applications. The platform consists of three main components: Tosca, which offers codeless test automation powered by Vision AI for end-to-end testing across various environments; qTest, which provides scalable agile test management for automated and exploratory testing; and Neoload, which simplifies performance testing for continuous performance, reliability, and scalability from development to production.

1. Basis Technologies - [ActiveControl](https://aws.amazon.com/marketplace/pp/prodview-dbi5yapzlyrce?sr=0-1&ref_=beagle&applicationId=AWSMPContessa) is an enterprise-grade change management automation platform specifically designed for SAP ECC, SAP S/4HANA, and SAP BTP while protecting against change failure. The solution enforces consistent governance and quality checks while enabling parallel development, automated testing, and synchronized deployments across different SAP environments, significantly reducing the risk of production issues and accelerating the delivery of business-critical changes.

These are just a few selected solutions that support SAP and AWS change management scenarios. You can find many other partner solutions in [AWS Marketplace](https://aws.amazon.com/marketplace/search/results?searchTerms=SAP) to meet your needs.

# Data Integration and Analytics
<a name="rise-data-integration-analytics"></a>

This section provides information about data integration and analytics in relation to RISE with SAP.

**Topics**
+ [Data integration](rise-data-integration.md)
+ [Data analytics](rise-data-analytics.md)

# Data integration
<a name="rise-data-integration"></a>

RISE with SAP Extensibility for Data Integration with AWS is a technical framework that enables data flow between SAP systems, AWS services, and third-party solutions. This integration architecture provides standardized APIs, connectors, and protocols to establish secure communication channels, addressing the critical need for seamless enterprise data integration in modern cloud environments.

The RISE with SAP Extensibility for Data Integration outlines two primary data handling and integration mechanisms.

**Topics**
+ [Data Replication](rise-data-replication.md)
+ [Replicating data using AWS Services](rise-data-replication-awsmanaged.md)
+ [Replicating data using SAP services](rise-data-replication-sap.md)
+ [Replicating data using Partner Solutions](rise-data-replication-partner.md)
+ [Data Federation using AWS Services](rise-data-federation.md)

# Data Replication
<a name="rise-data-replication"></a>

Data replication from SAP is a crucial step in making the data usable for reporting, analysis, and integration with other systems. The following reference architecture shows how this can be done on AWS.

![\[Overall Data replication\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-data-replication.png)


# Replicating data using AWS Services
<a name="rise-data-replication-awsmanaged"></a>

![\[Data replication using Managed Services\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-data-replication-aws-services.png)


**AWS Glue**

 [AWS Glue](https://aws.amazon.com/glue/) is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. With AWS Glue, you can discover and connect to SAP using OData and manage your data in a centralized data catalog. You can visually create, run, and monitor extract, transform, and load (ETL) pipelines to load SAP data into your data lakes and data warehouses.

The [Connecting to SAP OData using Glue](https://docs.aws.amazon.com/glue/latest/dg/connecting-to-data-sap-odata.html) user guide offers comprehensive instructions for setting up Glue ETL jobs, configuring SAP OData connections, and reading data from SAP, including handling incremental transfers.
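Incremental transfers over OData typically filter on a change-timestamp watermark carried forward between runs. The following is a minimal sketch of building such a query URL; the host, service path, entity set, and the `LastChangeDateTime` field are illustrative assumptions (real delta handling may instead use ODP delta tokens managed by the connector).

```python
from urllib.parse import urlencode

# Hedged sketch: constructing an SAP OData query URL for incremental extraction.
# Host, service, entity, and the LastChangeDateTime field are hypothetical; a
# managed connector (such as the Glue SAP OData connection) handles this for you.
def build_incremental_odata_url(host: str, service: str, entity: str,
                                watermark_iso: str, page_size: int = 1000) -> str:
    params = {
        "$format": "json",
        # Only rows changed after the watermark from the previous run
        "$filter": f"LastChangeDateTime gt datetime'{watermark_iso}'",
        "$top": str(page_size),          # page size controls files per extraction run
        "$orderby": "LastChangeDateTime asc",
    }
    return f"https://{host}/sap/opu/odata/sap/{service}/{entity}?{urlencode(params)}"
```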

 [AWS Glue Zero-ETL](https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html) is a set of fully managed integrations by AWS that minimizes the need to build ETL data pipelines for common ingestion and replication use cases. It makes data available in Amazon SageMaker Lakehouse and Amazon Redshift from multiple operational, transactional, and application sources. Leveraging the SAP OData Connectors, you can create full data replication jobs from SAP, with fully managed replication (Inserts, updates and deletions) as well as schema evolution.

AWS Glue and Glue Zero-ETL serve distinct roles in data integration, with each offering unique advantages for different use cases. AWS Glue excels at complex ETL operations, data discovery, preparation, and extraction, particularly for specialized scenarios like SAP ODP-based replication, while AWS Glue Zero-ETL is designed as a more streamlined, no-code solution for fully managed data replication scenarios.

 AWS Glue requires more hands-on management, including code deployment and maintenance, but offers greater flexibility and control over data transformation processes. AWS Glue performance is enhanced by its serverless, scale-out Apache Spark environment, which allows you to allocate Data Processing Units (DPUs) for scalable compute. This allows parallel processing and event-driven execution.

# Replicating data using SAP services
<a name="rise-data-replication-sap"></a>

![\[Data replication using SAP Services\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-data-replication-sap-services.png)


 **SAP BDC / Datasphere** 

[SAP Datasphere](https://www.sap.com/products/data-cloud/datasphere.html) offers various connection types, such as SAP ABAP connections, SAP ECC connections, and SAP S/4HANA Cloud connections, supporting RFC and ODP protocols. Refer to the [SAP BDC / Datasphere documentation](https://help.sap.com/docs/SAP_DATASPHERE/be5967d099974c69b77f4549425ca4c0/eb85e157ab654152bd68a8714036e463.html) to choose the most appropriate connectivity to replicate SAP data. Using the [premium outbound integration for Amazon Simple Storage Service (Amazon S3)](https://help.sap.com/docs/SAP_DATASPHERE/be5967d099974c69b77f4549425ca4c0/a7b660a0a4ef4a4fbee57b44f5b2147d.html), configure an SAP Datasphere replication flow to ingest data into Amazon S3.

 **SAP Data Services** 

[SAP Data Services](https://www.sap.com/products/technology-platform/data-services.html) offers various connections to replicate data from SAP ECC systems. Refer to the [SAP Data Services documentation](https://help.sap.com/docs/SAP_DATA_SERVICES) to choose the most appropriate connectivity. SAP Data Services offers an [Amazon Redshift datastore](https://help.sap.com/docs/SAP_DATA_SERVICES/af6d8e979d0f40c49175007e486257f0/731d7026ae3b4fef9ebadfbe23ffff12.html) and an [Amazon S3 datastore](https://help.sap.com/docs/SAP_DATA_SERVICES/af6d8e979d0f40c49175007e486257f0/e1ed075446344b5ca098e2382cfca78d.html) to ingest data to AWS. It also offers [Amazon S3 file location protocol](https://help.sap.com/docs/SAP_DATA_SERVICES/af6d8e979d0f40c49175007e486257f0/a611106693ea422eb0b04705298516b7.html) options such as encryption type, compression type, batch size, number of threads, and Amazon S3 storage class.

# Replicating data using Partner Solutions
<a name="rise-data-replication-partner"></a>

AWS Partner Solutions offer ready-to-deploy solutions with enhanced features, such as pre-built connectors, specialized data pipelines, and advanced optimization techniques that reduce complexity and improve the speed of deployment.

To find and deploy a solution that fits your specific needs, you can explore the [AWS Partner Solutions Finder](https://partners.amazonaws.com/search/partners) or browse through the [AWS Marketplace](https://aws.amazon.com/marketplace), where you can search for and quickly deploy partner solutions tailored to your unique SAP use case.

 **Further Resources** 

The [Guidance for SAP Data Integration and Management on AWS](https://aws.amazon.com/solutions/guidance/sap-data-integration-and-management-on-aws/) provides the essential data foundation to build data and analytics solutions. It shows how to integrate data from SAP ERP source systems and AWS in real-time or batch mode, with change data capture, using AWS services, SAP products, and AWS Partner Solutions. It includes an overview reference architecture showing how to ingest SAP systems to AWS in addition to five detailed architectural patterns that complement SAP-supported mechanisms (such as OData, ODP, SLT, and BTP) using AWS services that are highlighted above, SAP products, and AWS Partner Solutions.

# Data Federation using AWS Services
<a name="rise-data-federation"></a>

Data federation is a data management strategy that enables real-time analytics and a single source of truth, without data duplication or expensive pipelines.

When there is a business requirement for consolidated data across transactional, analytics, and machine learning workloads, it is often preferable to access the data at the source rather than replicate it, avoiding latency, inconsistency, and extra storage cost.

In the context of SAP and AWS services, it allows organizations to access, combine, and analyze data from both SAP systems and AWS cloud services seamlessly.

![\[Data Federation\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-data-federation.png)


 **Amazon Athena** 

[Amazon Athena](https://aws.amazon.com/athena/) is a serverless, scalable, and flexible interactive query service from AWS that allows you to analyze data directly in Amazon S3. Data stored in Amazon S3 from multiple sources can be transformed into tables and views using Amazon Athena and queried to derive meaningful information in a structured way.

Data in Athena can be accessed from SAP Datasphere through [data federation](https://discovery-center.cloud.sap/missiondetail/3401/3441/) from SAP Datasphere connections. Users can also access SAP Datasphere tables and views from Athena by [querying SAP HANA](https://aws.amazon.com/blogs/big-data/query-sap-hana-using-athena-federated-query-and-join-with-data-in-your-amazon-s3-data-lake/) using an [Athena Federated Query](https://docs.aws.amazon.com/athena/latest/ug/connect-to-a-data-source.html).

Data can also be federated to the SAP HANA Cloud by configuring Athena as a remote source using the [Smart Data Access – Athena adapter](https://community.sap.com/t5/technology-blogs-by-sap/federating-queries-in-hana-cloud-from-amazon-athena-using-athena-api/ba-p/13476091). The [Athena Federated Query connection](https://aws.amazon.com/blogs/big-data/query-sap-hana-using-athena-federated-query-and-join-with-data-in-your-amazon-s3-data-lake/) can also be used to read data from a stand-alone SAP HANA Cloud environment.
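A federated query in Athena reads the remote source in place and joins it with data-lake tables in one SQL statement. The sketch below builds such a join and shows the asynchronous `start_query_execution` call; the `saphana` catalog name, schema and table names, and the output location are illustrative assumptions, not names from this guide.

```python
# Hedged sketch of an Athena federated query joining an S3 data-lake table
# with a table exposed through a SAP HANA federated-query connector.
# Catalog ("saphana"), schema, and table names are placeholders.
def build_federated_join_sql(limit: int = 100) -> str:
    return (
        "SELECT s3.sales_order, s3.net_value, hana.customer_name\n"
        "FROM datalake.sales_orders AS s3\n"
        "JOIN saphana.reporting.customers AS hana\n"
        "  ON s3.customer_id = hana.customer_id\n"
        f"LIMIT {limit}"
    )

def run_query(athena_client, sql: str, output_s3: str) -> str:
    # athena_client = boto3.client("athena"); Athena runs the query
    # asynchronously and returns an execution id to poll for results.
    resp = athena_client.start_query_execution(
        QueryString=sql,
        ResultConfiguration={"OutputLocation": output_s3},
    )
    return resp["QueryExecutionId"]
```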

 **Amazon Redshift** 

[Amazon Redshift](https://aws.amazon.com/redshift/) is a fully managed, petabyte-scale data warehouse service from AWS. Customers use it to build data warehouses and data models for analytics and reporting.

 [Data federation](https://discovery-center.cloud.sap/missiondetail/3406/3446/) from Amazon Redshift into SAP Datasphere is possible with SAP HANA Smart Data Integration (SDI) or the SAP Data Provisioning Agent. Amazon Redshift data can also be federated through the Athena Federated Query data source connector.

 **Further resources** 

The [Guidance for Data Federation](https://aws.amazon.com/solutions/guidance/data-federation-between-sap-and-aws/) between SAP and AWS outlines the process of federating data between SAP and AWS cloud analytics services, enabling you to establish a data mesh architecture. By federating data between SAP and AWS, you can easily transform and visualize your data in a scalable, secure, and cost-effective way, helping you inform your decision-making.

# Data analytics
<a name="rise-data-analytics"></a>

SAP customers need business insights in real-time to react to business changes and leverage untapped business opportunities. This needs to be realized with modern, cloud-native solutions to shift from overnight data processing to real-time analytics. Leveraging AWS and SAP solutions, customers can leverage purpose-built analytics services to gain competitive advantage in their respective industries.

Modern data architectures, such as [Data Lakes, Data Warehouses](https://aws.amazon.com/compare/the-difference-between-a-data-warehouse-data-lake-and-data-mart/) and [Lakehouse](https://aws.amazon.com/sagemaker/lakehouse/), provide a combination of patterns and services that enable organizations to handle large volumes of structured and unstructured data for analysis and reporting, while also providing a solid foundation for Artificial Intelligence (AI) and Machine Learning (ML) applications, including Generative AI. These architectures provide building blocks that can be implemented independently or complementing each other, based on requirements and preferences.

**Topics**
+ [Data Lake Architecture](rise-data-lake-architecture.md)
+ [Data Warehouse Architecture](rise-data-warehouse-architecture.md)

# Data Lake Architecture
<a name="rise-data-lake-architecture"></a>

The [Data lake](https://aws.amazon.com/what-is/data-lake/) architecture provides building blocks that demonstrate how to combine and consolidate SAP and non-SAP data from disparate sources using analytics and machine learning services on AWS.

A data lake enables customers to handle structured and unstructured data. It is designed on a “schema-on-read” approach, meaning data can be stored in raw form and schema or structure is applied only upon consumption (for example, to create a financial report). The structure, including data types and lengths, is defined when the data is read from the source. As a result, storage and compute are decoupled, leveraging low-cost storage that can scale to petabyte sizes at a fraction of the cost of traditional databases.

A data lake enables organizations to perform various analytical tasks, like creating interactive dashboards, generating visual insights, processing large-scale data, conducting real-time analysis, and implementing machine learning algorithms across diverse data sources.

![\[Data Lake Architecture\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-data-lake-architecture.png)


The Data Lake reference architecture provides three distinct layers to transform raw data into valuable insights:

 **Raw Layer** 

The raw layer is the initial layer in a data lake, built on [Amazon S3](https://aws.amazon.com/s3/), where data arrives in its original format directly from source systems without any transformation. The data in this layer is used to determine changes and data to consolidate in the next layer since it will contain multiple versions of the same data (changes, full loads, etc).

Data extracted from SAP (via [SAP ODP OData](https://help.sap.com/docs/SAP_NETWEAVER_750/825e9222e7ad4fe1988c6cc600bda779/c1c48cd6d78d4afe8ceb6a1ddc481db1.html) or other mechanisms) needs to be prepared for further processing. The extracted data will be packaged in several files (defined by the package or page size in the extraction tool) hence multiple files for a given extraction run can be generated.

 **Enriched Layer** 

The Enriched Layer is built on [Amazon S3](https://aws.amazon.com/s3/). It contains a true representation of the data in the source SAP system, along with logical deletions, stored in [Amazon S3 Tables](https://aws.amazon.com/s3/features/tables/) with built-in [Apache Iceberg format](https://aws.amazon.com/what-is/apache-iceberg/). The Iceberg table format allows the creation of [Glue or Athena tables](https://docs.aws.amazon.com/athena/latest/ug/understanding-tables-databases-and-the-data-catalog.html) within the [Glue Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/catalog-and-crawler.html), supporting database-type operations such as insert, update, and delete, with Iceberg handling the underlying file operations (deletion of records, and so on). Iceberg tables also support [Time Travel](https://docs.aws.amazon.com/athena/latest/ug/querying-iceberg-time-travel-and-version-travel-queries.html), which enables querying data as of a specific point in time.

Data from the Raw Layer is inserted or updated in the Enriched Layer in the right order based on the table key and persisted in its original format (no transformation or changes). Each record needs to be enriched with certain attributes, such as time of extraction and record number; this can be achieved with [AWS Glue jobs](https://docs.aws.amazon.com/glue/latest/dg/author-glue-job.html).
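The ordered upsert logic can be illustrated in plain Python. This is a minimal sketch under assumed field names (`key`, `change_mode`, `extracted_at`, `record_no` are hypothetical); in practice a Glue job applies the same logic at scale with Spark and an Iceberg `MERGE INTO`.

```python
# Minimal illustration of the raw-to-enriched merge: change records are applied
# per key in extraction order; inserts/updates (I/U) upsert the latest image,
# while deletes (D) are kept as logical flags, matching the Enriched Layer design.
# All field names are hypothetical placeholders.
def merge_raw_into_enriched(enriched: dict, raw_records: list) -> dict:
    ordered = sorted(raw_records, key=lambda r: (r["extracted_at"], r["record_no"]))
    for rec in ordered:
        key = rec["key"]
        if rec["change_mode"] == "D":
            row = dict(enriched.get(key, rec))
            row["is_deleted"] = True      # logical deletion only
            enriched[key] = row
        else:                             # "I" or "U": upsert latest image
            enriched[key] = {**rec, "is_deleted": False}
    return enriched
```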

 **Curated Layer** 

The Curated Layer is the layer where data is stored for consumption. Records deleted on the source are deleted physically. Any calculations (averages, time between dates, and so on) or data manipulations (format changes, lookups from other tables) can be stored in this layer, ready to be consumed. Data is updated in this layer using AWS Glue jobs. Amazon Athena views are created on top of these tables for downstream consumption through Amazon QuickSight or similar tools.

The [Data Lakes with SAP and Non-SAP Data on AWS Solution Guidance](https://aws.amazon.com/solutions/guidance/data-lakes-with-sap-and-non-sap-data-on-aws/) provides a detailed architecture, steps to implement and accelerators to fast track the implementation of a Data Lake for SAP and non-SAP data. You can refer to the different available options to extract data from SAP to the Data Lake in the prior Data Integration section.

# Data Warehouse Architecture
<a name="rise-data-warehouse-architecture"></a>

A [Data Warehouse](https://aws.amazon.com/what-is/data-warehouse/) is a centralized repository based on “schema-on-write” approach that aggregates structured, historical data from multiple sources (both SAP and non-SAP) to enable advanced analytics, reporting, and business intelligence (BI). It enables organizations to analyze vast amounts of integrated data for informed decision-making, using optimized architectures for complex queries rather than transactional processing.

Business analysts, data engineers, data scientists, and decision-makers utilize business intelligence (BI) tools, SQL clients, and other analytics applications to access the data warehouse. The architecture comprises three tiers: a front-end client for presenting results, an analytics engine for data access and analysis, and a database server for data loading and storage.

Data is stored in tables and columns within databases, organized by schemas. Data warehouses consolidate data from multiple sources, enabling historical data analysis and ensuring data quality, consistency, and accuracy. Separating analytics processing from transactional databases enhances the performance of both systems, supporting reports, dashboards, and analytics tools by efficiently storing data to minimize I/O and deliver rapid query results to numerous concurrent users.

![\[Data Warehouse Architecture\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-data-warehouse-architecture.png)


**Key Characteristics**
+ Integrated: Consolidates data from disparate sources (e.g., CRM, ERP) into a unified schema, resolving inconsistencies in formats or naming conventions.
+ Time-variant: Tracks historical data, allowing trend analysis over months or years.
+ Subject-oriented: Organized around business domains like sales or inventory, rather than operational processes.
+ Non-volatile: Data remains static once stored; updates occur via scheduled Extract, Transform, Load (ETL) processes rather than real-time changes.
+ Price-optimized: SAP and non-SAP data is stored in a cost-optimized architecture.

**Architecture Components**
+ ETL Tools: Automate data extraction from sources, transformation (cleaning and standardizing), and loading into the warehouse.
+ Storage Layer:
  + Relational databases for structured data
  + OLAP (Online Analytical Processing) cubes for multidimensional analysis
+ Metadata: Describes data origins, transformations, and relationships.
+ Access Tools: SQL clients, BI platforms, and machine learning interfaces.

![\[Data Warehouse Layers\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-data-warehouse-layers.png)


Data warehouses utilize a layered architecture to organize data at different levels of granularity, which helps ensure consistency and flexibility. The most common data warehouse architecture layers are the source, staging, warehouse, and consumption layers. SAP systems also employ a layer-based architecture for data warehouses. In the context of building an SAP cloud data warehouse on AWS, the architecture involves several key layers and components for data acquisition, storage, transformation, and consumption.

 **Corporate Memory** 

Amazon S3 Intelligent-Tiering is a storage class that automatically optimizes storage costs by moving data between access tiers based on changing access patterns. This ensures that frequently accessed data is readily available, while less frequently accessed or "colder" data is stored at a lower cost tier. For more details, you can refer to [Amazon S3 Storage Classes](https://aws.amazon.com/s3/storage-classes/#topic-0).
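Beyond the automatic frequent/infrequent tiers, Intelligent-Tiering can opt objects into archive tiers via a bucket configuration. The following is a sketch of such a configuration in the shape boto3's `s3.put_bucket_intelligent_tiering_configuration` expects; the configuration id and prefix are placeholders for this corporate-memory scenario.

```python
# Hedged sketch: S3 Intelligent-Tiering configuration that additionally moves
# colder corporate-memory objects into the archive tiers. The Id and Prefix
# values are placeholders; apply with
# s3.put_bucket_intelligent_tiering_configuration(Bucket=..., Id=...,
#     IntelligentTieringConfiguration=cfg).
def build_intelligent_tiering_config(prefix: str) -> dict:
    return {
        "Id": "corporate-memory",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Tierings": [
            # Objects not accessed for 90 days move to Archive Access
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            # Objects not accessed for 180 days move to Deep Archive Access
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    }
```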

 **Operational Data Storage Layer** 

Amazon Redshift is utilized for operational data storage, propagation, and data mart functionalities. Scripts are provided to create schemas and deploy Data Definition Language (DDL) with the necessary structures to load SAP source data. These DDLs can be customized to include SAP-specific fields.

 **Data Propagation Layer** 

Delta data loaded into S3 via Glue job is used to generate Slowly Changing Dimension Type 2 (SCD2) tables, which maintain a complete history of changes.
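The SCD2 mechanics can be sketched in plain Python: each delta closes the key's current version and opens a new one with an open-ended validity. Column names here are hypothetical; in the guidance this logic runs in Glue/Redshift rather than in application code.

```python
from datetime import date

HIGH_DATE = date(9999, 12, 31)  # conventional "open" end date for the current version

# Illustrative SCD2 (Slowly Changing Dimension Type 2) builder: for each delta,
# close the key's current version (valid_to = change date) and append a new
# version valid from the change date onward. Column names are placeholders.
def apply_scd2(history: list, deltas: list) -> list:
    for delta in sorted(deltas, key=lambda d: d["changed_on"]):
        for row in history:
            if row["key"] == delta["key"] and row["valid_to"] == HIGH_DATE:
                row["valid_to"] = delta["changed_on"]   # close current version
        history.append({
            "key": delta["key"],
            "attrs": delta["attrs"],
            "valid_from": delta["changed_on"],
            "valid_to": HIGH_DATE,                      # new current version
        })
    return history
```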

 **Data Mart Layer** 

Architected data mart models are created using Materialized Views in Redshift. Transactional data is enriched with master data (attributes and text), building data models that are ready for data consumption.

The [Building SAP Data Warehouse on AWS Solution Guidance](https://aws.amazon.com/solutions/guidance/building-a-sap-cloud-data-warehouse-on-aws/) provides a detailed architecture, steps to implement and accelerators to fast track the implementation of a Data Warehouse for SAP.

# Agentic AI
<a name="rise-agenticai"></a>

**What is agentic AI**

Agentic AI refers to an autonomous AI system that can independently reason, plan, and execute complex, multi-step tasks to achieve a predetermined goal with minimal human supervision. Unlike generative AI, which primarily focuses on creating content based on human prompts, agentic AI is proactive and focused on taking action. It operates by continually perceiving its environment, reasoning through options, acting on its decisions, and learning from the outcomes in an iterative loop.

 **Types of agentic AI systems** 

Agentic AI can be deployed in different configurations, from a single-purpose agent to large-scale multi-agent systems.
+ Single-agent: A single AI agent works alone to complete a defined, focused task.
+ Multi-agent: Multiple AI agents with specialized skills collaborate and coordinate to tackle complex workflows. This can be structured in a vertical hierarchy, with a lead agent overseeing others, or a horizontal, decentralized structure where all agents operate as equals.

 **Evolution into agentic AI** 

**Stage 1: Generative AI assistants (more human oversight)** At the initial stage, AI systems primarily function as generative AI assistants, like early versions of chatbots or writing aids, with high human involvement. They are reactive and prompt-based, with a “human in the loop”.

**Stage 2: Generative AI agents** This stage enhances the basic AI assistant with greater context awareness and tool-use capabilities, creating early generative AI agents that can perform multi-step tasks. They are governed by guardrails and still reliant on prompts.

**Stage 3: Agentic AI systems** Agentic AI systems represent a major shift toward greater autonomy, integrating more complex reasoning, planning, and memory. They offer proactive execution instead of waiting on prompts, continuous learning, and a “human on the loop” model where the human role changes from direct involvement to strategic oversight.

**Stage 4: Autonomous AI agents** The final stage involves the deployment of highly autonomous, multi-agent systems that operate with minimal human intervention. Specialized agents collaborate to tackle complex end-to-end workflows, and the human focus shifts from oversight to governance.

 **Implementing agentic AI with Amazon Bedrock** 

 [Amazon Bedrock](https://aws.amazon.com/bedrock) provides a comprehensive and flexible toolset for building and deploying agents, supporting both fully managed and do-it-yourself (DIY) approaches. This is achieved by combining the fully managed and configuration-based [Amazon Bedrock Agent](https://aws.amazon.com/bedrock/agents/) with the highly customizable and composable services of [Amazon Bedrock AgentCore](https://aws.amazon.com/bedrock/agentcore/).

**Topics**
+ [Amazon Bedrock Agent](rise-agenticai-bedrock-agent.md)
+ [Amazon Bedrock AgentCore](rise-agenticai-bedrock-agentcore.md)
+ [Strands Agent](rise-agenticai-strands-agent.md)
+ [Agentic AI to manage ERP Exceptions](rise-agenticai-erpexceptions.md)

# Amazon Bedrock Agent
<a name="rise-agenticai-bedrock-agent"></a>

Amazon Bedrock Agent acts as the intelligent orchestrator that uses the reason-and-act (ReAct) pattern to fulfill complex user requests. It uses the reasoning of foundation models (FMs), APIs, and data to break down user requests, gather relevant information, and efficiently complete tasks, freeing teams to focus on high-value work. Refer to [how Amazon Bedrock Agents work](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-how.html) for implementation details.
+  **User request**: The process begins with a natural language request from a user, such as "Generate a sales report and share it with the finance team".
+  **Reasoning and planning**: The Bedrock Agent’s orchestration prompt and the underlying FM interpret the request and break it down into logical, multi-step actions.
+  **Tool execution**: The agent executes the plan by invoking "tools"—action groups that are defined with API schemas. These tools can call backend services within the SAP system via the Generative AI Hub. For example, the agent might:
  +  **Call an API** to fetch sales data from SAP
  +  **Access a knowledge base** in Bedrock via a Retrieval Augmented Generation (RAG) tool to pull relevant business documents.
  +  **Leverage code interpreter or browser** in AgentCore for data analysis or to interact with a web-based SAP User Interface.
  +  **Utilize memory** to maintain context across multiple user interactions. This is essential for multi-step processes like filling out a complex purchase order over several turns of conversation.

Bedrock Agents fully support multi-agent collaboration, allowing you to build and deploy systems of specialized AI agents that work together to accomplish complex, multi-step workflows. Instead of a single agent attempting to handle every part of a difficult task, a team of agents can be orchestrated to contribute their specific expertise, improving efficiency, accuracy, and overall performance. The core of [multi-agent collaboration in Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-multi-agent-collaboration.html) is a hierarchical model consisting of a supervisor agent and one or more collaborator agents.
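Invoking a Bedrock Agent from code follows this request/response shape. The sketch below uses boto3's `bedrock-agent-runtime` client; the agent id, alias id, session id, and prompt are placeholders, and reusing the same `sessionId` across turns is what lets the agent's memory maintain context.

```python
# Hedged sketch of invoking an Amazon Bedrock Agent with the
# bedrock-agent-runtime client. Agent/alias/session ids and the prompt are
# placeholders; the response streams completion chunks as bytes.
def build_invoke_params(agent_id: str, alias_id: str,
                        session_id: str, prompt: str) -> dict:
    return {
        "agentId": agent_id,
        "agentAliasId": alias_id,
        "sessionId": session_id,   # reuse across turns so agent memory applies
        "inputText": prompt,
    }

def invoke(client, params: dict) -> str:
    # client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_agent(**params)
    chunks = []
    for event in response["completion"]:   # event stream of completion chunks
        if "chunk" in event:
            chunks.append(event["chunk"]["bytes"].decode("utf-8"))
    return "".join(chunks)
```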

# Amazon Bedrock AgentCore
<a name="rise-agenticai-bedrock-agentcore"></a>

Bedrock AgentCore is a suite of services that enables developers to build, deploy, and operate highly capable AI agents securely and at enterprise scale. It is designed to take on the "undifferentiated heavy lifting" of developing agentic AI, allowing enterprises to move beyond proofs-of-concept and accelerate production deployment. Bedrock AgentCore provides a modular toolkit of services that can be used together or independently to create sophisticated AI agents.
+  **Runtime**: A secure, serverless environment for deploying and scaling dynamic AI agents, supporting long-running and asynchronous tasks with complete session isolation.
+  **Gateway**: A service that converts existing APIs and AWS Lambda functions into agent-compatible tools with minimal code. It supports tool discovery and secure communication using protocols like Model Context Protocol (MCP).
+  **Memory**: Manages both short-term conversational context and long-term memory for agents, enabling more personalized and context-aware interactions without developers managing the underlying infrastructure.
+  **Built-in Tools:** Enhances agent capabilities with a Code Interpreter for secure code execution and a Browser Tool for interacting with web applications.
+  **Identity**: Provides a secure and scalable identity and access management service specifically for AI agents, integrating with existing identity providers to manage agent permissions.
+  **Observability**: Offers tools to trace, debug, and monitor agent performance in production, with comprehensive dashboards powered by Amazon CloudWatch and support for OpenTelemetry.
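A minimal sketch of the Runtime component above, assuming the `bedrock-agentcore` Python SDK; the handler logic is a placeholder, not real agent code:

```python
def handler(payload: dict) -> dict:
    """Framework-agnostic request handler; replace with real agent logic."""
    prompt = payload.get("prompt", "")
    return {"result": f"Processed: {prompt}"}

def serve():
    # Requires the bedrock-agentcore package; runs inside AgentCore Runtime.
    from bedrock_agentcore.runtime import BedrockAgentCoreApp
    app = BedrockAgentCoreApp()
    app.entrypoint(handler)  # register handler as the invocation entrypoint
    app.run()                # serve invocations with session isolation
```

Keeping the handler free of framework imports makes the same agent logic portable across frameworks, which is the flexibility AgentCore is designed around.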

Bedrock AgentCore is explicitly designed to be model-agnostic, giving developers the flexibility to work with any foundation models (FMs) they choose, both inside and outside of the Amazon Bedrock ecosystem. These are some of the FMs hosted within Bedrock; for the full list, refer to [this documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html):
+  **Anthropic**: The Claude family of models, including the latest Claude models.
+  **Meta**: The Llama family of models.
+  **Mistral AI**: A range of Mistral models.
+  **Amazon**: Amazon’s own models, including the Titan and Nova families.
+  **OpenAI**: Selected open-weight models from OpenAI.
+  **Other providers**: AI21 Labs, Cohere, DeepSeek, Stability AI, and others.

# Strands Agent
<a name="rise-agenticai-strands-agent"></a>

 [Strands Agent](https://strandsagents.com/latest/) is an open-source SDK created by AWS for building AI agents that use large language models (LLMs) to reason and act. The [Strands Agents SDK](https://github.com/strands-agents/sdk-python) simplifies the process of creating AI agents by focusing on three core components:
+  **A language model**: Strands supports a wide range of LLMs from providers like Anthropic, OpenAI, and Meta, giving developers flexibility.
+  **A system prompt**: This defines the agent’s role and overall behavior.
+  **A set of tools**: These are the specific functions and capabilities the agent can invoke to perform tasks.
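The three components above can be sketched as follows, assuming the `strands-agents` Python package is installed; the stock-lookup tool is a stub, not a real SAP API:

```python
def sap_stock_level(material: str) -> int:
    """Stubbed tool: return available stock for a material number."""
    stock = {"MAT-100": 25, "MAT-200": 0}  # placeholder data
    return stock.get(material, 0)

def build_inventory_agent():
    # Requires the strands-agents package and model credentials.
    from strands import Agent, tool
    return Agent(
        system_prompt="You are an SAP inventory assistant.",  # the agent's role
        tools=[tool(sap_stock_level)],                        # invocable tools
    )

# agent = build_inventory_agent()
# agent("How many units of MAT-100 are in stock?")
```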

Benefits of the Strands SDK:
+ Strands SDK enables fast, secure development of advanced AI agents on SAP Generative AI Hub.
+ Developers can build complex automations quickly, saving time and resources.
+ Strands SDK supports multiple AI models and future technology shifts.
+ Enterprise-grade security and robust monitoring ensure safe, reliable use.

![\[Strands Agent with Generative AI Hub and Amazon Bedrock\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-agenticai-strandsagent.png)


The architecture above describes the integration option between Strands Agents and SAP Generative AI Hub to access Amazon Bedrock FMs, together with the Bedrock Agent SDK, which allows integration with [Model Context Protocol (MCP)](https://www.anthropic.com/news/model-context-protocol) servers to access available APIs and automate workflows.

![\[Agent-to-Agent\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-agenticai-a2a.png)


In SAP, the most effective pattern is to have a Strands-built agent act as an external tool that an SAP Joule agent can call. This allows specialized, custom logic to be developed in Strands and then orchestrated by SAP Joule within the business context of SAP applications. The architecture above describes how the [Agent-to-Agent](https://github.com/a2aproject/A2A) protocol works.

# Agentic AI to manage ERP Exceptions
<a name="rise-agenticai-erpexceptions"></a>

 **What is an ERP Exception** An Enterprise Resource Planning (ERP) exception is a notification generated by an ERP system when a real-world situation or process deviates from a planned norm, policy or rule. These exceptions act as alerts to indicate issues such as stock shortages, missed deadlines, or data discrepancies that require human intervention to resolve and prevent disruptions to business operations.

 **Why Agentic AI to manage ERP exception** Agentic AI goes beyond simply flagging an issue; it can autonomously reason, take action to resolve the issue, and learn from the experience. This moves ERP exception handling from a reactive to a proactive and preventative process.

 **How agentic AI improves ERP exception handling** 

Applying agentic AI to ERP exception handling helps with:

1. Proactive problem-solving

1. Faster and more autonomous resolution: Agentic AI can resolve many exceptions without human intervention by learning from historical resolutions

1. Continuous learning and improvement

1. Intelligent routing and escalation

1. Enhanced compliance and auditability, since every action taken by an agentic AI agent can be audited and guarded by a guardian agent

1. Freeing up human resources

 **Top use cases for ERP exceptions management with Agentic AI** 

 **Use Case 1: Three-way Invoice Matching** In this process, we match the Purchase Order (PO) against the Goods Receipt and the Invoice. Exception cases of unmatched invoices are sent to the AI agent, which performs the same research the user would have done and finds the correct PO number, saving the exceptions user the time of doing the research. The exceptions user reviews the agent’s findings and approves, and the agent then processes the transactions, saving the exceptions user the time of processing them.
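As a simplified illustration of the matching logic such an agent automates; the document structure here is invented for the example, not an SAP API:

```python
def three_way_match(po: dict, goods_receipt: dict, invoice: dict, tolerance: float = 0.01) -> list:
    """Check that PO, goods receipt, and invoice agree on quantity and amount.

    Inputs are plain dicts with 'qty' and 'amount' keys (illustrative shape).
    Returns a list of exception descriptions; an empty list means a match.
    """
    exceptions = []
    if goods_receipt["qty"] != po["qty"]:
        exceptions.append("quantity mismatch between PO and goods receipt")
    if invoice["qty"] != goods_receipt["qty"]:
        exceptions.append("quantity mismatch between goods receipt and invoice")
    if abs(invoice["amount"] - po["amount"]) > tolerance * po["amount"]:
        exceptions.append("invoice amount outside tolerance of PO amount")
    return exceptions
```

In the agentic flow, only invoices for which this list is non-empty are handed to the agent for research and a recommended resolution.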

 **Use Case 2: Customer Payment Matching** In this process, we match the invoice against the customer payment in the bank statement. Exception cases (unmatched customer payments) are sent to the AI agent, which performs the same research the user would have done: it finds the invoice, matches it to the customer payment from the bank statement, and presents the recommended solution to the user, saving the user the time of doing the research. Once the exceptions user accepts the recommendation, the agent processes the transactions, saving the user the time of processing them.

 **Use Case 3: Sales Order Entry** In this process, a certain sales order line item has no available stock to fulfill. The agentic AI agent retrieves information from the ecommerce site, emails the customer with a replacement SKU, and escalates to the credit and supply chain teams. After completing the research, the agent recommends a solution for each exception. If the user accepts the recommendation, the agent performs the transactions in SAP and/or other systems to replace the item.

 **Use Case 4: PO Confirmation** The agentic AI agent can parse each PO to extract key terms, such as limits of liability, and compare them with the central contract, automating the PO confirmation process. Upon confirmation, the agent can enter the PO as an order into the ERP system.

 **Use Case 5: Cash Forecasting** The ERP system contains most or all information required for creating a cash forecast. The ERP has bank account balances, unpaid vendor invoices, unpaid customer invoices, and other critical inputs to a cash-forecasting process. Other systems may also contain additional information for input into the cash forecast. A forecast is generated from bank/investment account balances, vendor invoices (liability) and customer invoices (asset). The Agentic AI Agent collects the necessary data points from the ERP and other systems and calculates a per-day cash forecast based on standard operating procedure.
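The per-day calculation described above can be sketched as follows; the invoice structure is illustrative, not an ERP API:

```python
from datetime import date, timedelta

def daily_cash_forecast(opening_balance: float, vendor_invoices: list,
                        customer_invoices: list, days: int, start: date) -> dict:
    """Project end-of-day cash from open AP (outflows) and AR (inflows).

    Invoices are (due_date, amount) tuples with positive amounts.
    Returns {date: projected end-of-day balance}.
    """
    balance = opening_balance
    forecast = {}
    for offset in range(days):
        day = start + timedelta(days=offset)
        inflow = sum(a for d, a in customer_invoices if d == day)   # AR due today
        outflow = sum(a for d, a in vendor_invoices if d == day)    # AP due today
        balance += inflow - outflow
        forecast[day] = balance
    return forecast
```

A real agent would gather the opening balance and open items from the ERP and other systems before running a calculation like this per standard operating procedure.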

 **Use Case 6: Financial Period End Close** In this process, the AI agent can perform several, most, or even all of the steps for financial period-end closing, with or without a human in the loop. The agent can reconcile bank statements and accounts receivable and payable, consolidate ledgers, and account for depreciation, unearned revenue, prepaid expenses, and intercompany reconciliations. It can handle exceptions by communicating with various stakeholders in the organization.

# AWS and SAP JRA
<a name="rise-jra"></a>

 AWS and SAP Joint Reference Architecture (JRA) is a framework designed to guide customers on how to effectively integrate and utilize both AWS and SAP services to achieve specific business outcomes. It provides architectural guidance and best practices for common scenarios, helping customers optimize their SAP solutions on AWS and leverage the strengths of both platforms.

The AWS and SAP JRA was developed to address common questions from joint customers and partners on how to use SAP and/or AWS services for different business solution scenarios. As we dive deeper into each of the use cases, we will see that the services complement each other, thus working together to solve the respective customer’s business challenges holistically. When you apply AWS and SAP JRA to RISE with SAP on AWS, you will be able to unlock possibilities to get more value out of your investment.

**Topics**
+ [Data to Value](rise-jra-datatovalue.md)
+ [Artificial Intelligence](rise-jra-ai.md)
+ [Integration](rise-jra-integration.md)
+ [Custom Application](rise-jra-customapps.md)
+ [Operational Reliability](rise-jra-operational-reliability.md)
+ [Internet of Things](rise-jra-iot.md)

# Data to Value
<a name="rise-jra-datatovalue"></a>

Enterprises need data-driven intelligence that delivers measurable business outcomes. Running SAP on AWS provides a scalable, secure, and flexible foundation to transform raw data into actionable value. The SAP and AWS Joint Reference Architecture (JRA) provides a framework for connecting data sources, harmonizing SAP and non-SAP data, and enabling AI and analytics-driven innovation through [SAP Business Data Cloud (SAP BDC)](https://www.sap.com/products/data-cloud.html) and [Amazon SageMaker](https://aws.amazon.com/sagemaker/).

This guide outlines two key joint reference architectures that exemplify how organizations can leverage SAP and AWS services to maximize the value of their enterprise data through AI powered insights, while maintaining flexibility, scalability, and cost efficiency.

**Topics**
+ [Integrating data in SAP BDC with AWS data sources](rise-jra-datatovalue-bdc-aws.md)
+ [AI Innovation with FedML-AWS and SageMaker](rise-jra-datatovalue-fedml-aws.md)

# Integrating data in SAP BDC with AWS data sources
<a name="rise-jra-datatovalue-bdc-aws"></a>

Non-SAP data from AWS data sources can be harmonized with SAP data via SAP Datasphere data fabric architecture with SAP BDC. The integration architecture supports multiple AWS services, each with specific modes of integration based on live data or replication:

![\[SAP BDC with Managed Services\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-datatovalue-01.png)


 **A. Integration with Amazon Athena** 

Mode of Integration: Federating data live into SAP Datasphere

Amazon Athena is an interactive query service that helps you query and analyze data in Amazon S3. Non-SAP data from Athena can be federated live into remote tables in SAP Datasphere and augmented with SAP data for real-time analytics in [SAP Analytics Cloud](https://www.sap.com/products/data-cloud/cloud-analytics.html).

Here are the steps to integrate Athena with SAP Datasphere:

1. Prepare source with non-SAP and third party data

1. Configure Athena

1. Configure necessary IAM user and authorizations

1. Setup SAP Datasphere Connection to Athena

1. Build models in SAP Datasphere

This enables live data federation without replicating data, which reduces cost while providing fast insights and enterprise-grade security. For detailed step-by-step instructions, visit [Federating Queries from SAP Datasphere to Amazon S3 via Amazon Athena](https://github.com/SAP-samples/sap-bdc-explore-hyperscaler-data/blob/main/AWS/athena-integration.md).
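The IAM step in the list above typically involves a policy along these lines. This is a minimal, unverified sketch; the account ID, Region, workgroup, and bucket name are placeholders, and the linked guide is the authoritative reference:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "athena:StartQueryExecution",
        "athena:GetQueryExecution",
        "athena:GetQueryResults"
      ],
      "Resource": "arn:aws:athena:us-east-1:111122223333:workgroup/primary"
    },
    {
      "Effect": "Allow",
      "Action": ["glue:GetDatabase", "glue:GetTable", "glue:GetPartitions"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket",
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ]
    }
  ]
}
```

The `s3:PutObject` permission covers the Athena query-results location; scope every resource ARN down to your actual workgroup and buckets.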

 **B. Integration with Amazon Redshift** 

Mode of Integration: Federating data live into SAP Datasphere

Amazon Redshift is a fully managed, petabyte-scale data warehouse service optimized for analytical workloads. Through SAP Datasphere data federation architecture, Redshift data can be augmented with SAP data to build unified data models and analytics in SAP Analytics Cloud. [Smart Data Integration (SDI)](https://help.sap.com/docs/HANA_SMART_DATA_INTEGRATION/bf2f0282053648f8a1ef873e65ded81a/323ff4c3c12040bab8f1222a901dd95d.html) connects SAP Datasphere with Redshift via [Camel JDBC Adapter](https://help.sap.com/docs/HANA_SMART_DATA_INTEGRATION/7952ef28a6914997abc01745fef1b607/598cdd48941a41128751892fe68393f4.html?locale=en-US), enabling the creation of virtual tables and real-time or snapshot replication.

Here are the steps to integrate Redshift with SAP Datasphere:

1. Create On-Premise Agent in SAP Datasphere

1. Set Up Redshift Access

1. Configure SAP SDI DP Agent

1. Register Camel JDBC Adapter in SAP Datasphere

1. Upload Third-Party Drivers in SAP Datasphere

1. Create Local Connection to Redshift in SAP Datasphere

1. Import Remote Tables from Redshift

This setup enables live federated queries from SAP Datasphere to Redshift without replicating the data. Benefits include real-time access to Redshift data, query pushdown for performance optimization, and no data duplication in SAP Datasphere. For detailed step-by-step instructions, visit [Data Federation between SAP Datasphere and Amazon Redshift](https://github.com/SAP-samples/sap-bdc-explore-hyperscaler-data/blob/main/AWS/redshift-integration.md).

 **C. Integration with Amazon S3** 

Modes of Integration: Replicating data with Replication Flows; importing data into SAP Datasphere using Data Flows

Amazon S3 provides a highly scalable, durable, available, and secure object storage service. Non-SAP data from S3 buckets can be imported into SAP Datasphere through the Data Flow feature for use with applications such as Financial Planning or business analytics in SAP Analytics Cloud.

Here are the steps to integrate Amazon S3 with SAP Datasphere:

1. Prepare source data in an S3 bucket

1. Configure necessary IAM user and authorizations

1. Create S3 Connection in SAP Datasphere

1. Create a Data Flow

This process allows SAP Datasphere to connect to S3, access non-SAP data, and use that data in combination with internal SAP datasets via Data Flows. For detailed step-by-step instructions, visit [Data integration between SAP Datasphere and Amazon S3](https://github.com/SAP-samples/sap-bdc-explore-hyperscaler-data/blob/main/AWS/s3-integration.md).

You can find out more from SAP Architecture Center under [Integration with AWS data sources](https://architecture.learning.sap.com/docs/ref-arch/a07a316077/1).

# AI Innovation with FedML-AWS and SageMaker
<a name="rise-jra-datatovalue-fedml-aws"></a>

In today’s data-driven enterprises, machine learning models are only as powerful as the data they can access. However, business-critical data often resides within SAP systems like SAP BDC, while advanced model development typically takes place in cloud-native platforms like Amazon SageMaker.

FedML-AWS for Amazon SageMaker bridges this gap by providing a secure, efficient, and unified framework for federated model training and deployment across SAP and AWS ecosystems. By eliminating data duplication and enabling real-time access to SAP data, FedML-AWS helps accelerate AI initiatives, ensure data governance, and reduce operational complexity, all while leveraging the scalability and performance of AWS and the business context of SAP. With minimal setup, FedML-AWS enables data discovery, model training, and deployment across both SAP and AWS environments to extract value from data.

![\[FedML and Amazon Sagemaker\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-datatovalue-02.png)


FedML, a Python library, is directly imported into Amazon SageMaker notebook instances. When most training data resides in AWS, but critical SAP data with business semantics is also needed for training, it securely connects to SAP Datasphere (part of BDC) via Python/SQLDBC connectivity, enabling federated access to SAP business data required for model training in SageMaker.

For more technical details on the methods that enable training data to be read from SAP Datasphere (part of BDC) and used to train machine learning models on Amazon SageMaker, visit [FedML-AWS](https://github.com/SAP-samples/datasphere-fedml/tree/main/AWS). You can find out more from the SAP Architecture Center under [Integration with FedML-AWS for Amazon SageMaker](https://architecture.learning.sap.com/docs/ref-arch/8e1a5fbce3/1).

By combining the strengths of SAP Business Data Cloud (BDC) and AWS services, organizations can unlock the full potential of their enterprise data. From operational systems to advanced AI and analytics, whether harmonizing datasets across Amazon S3, Redshift, and Athena or enabling federated model training with FedML-AWS and Amazon SageMaker, these architectures provide a scalable and secure foundation for innovation. Together, SAP and AWS empower businesses to move from data silos to data-driven intelligence, accelerating time to insight, optimizing decision-making, and driving measurable business value across the enterprise.

# Artificial Intelligence
<a name="rise-jra-ai"></a>

 [Amazon Bedrock](https://aws.amazon.com/bedrock/) and [SAP Generative AI Hub](https://help.sap.com/docs/ai-launchpad/sap-ai-launchpad/generative-ai-hub) combine through Joint Reference Architecture (JRA) to provide enterprise-grade AI capabilities for RISE with SAP environments. This integration addresses the need for intelligent process automation while maintaining system security and clean core principles.

Amazon Bedrock serves as the foundational AI service layer, providing managed access to various foundation models including Anthropic Claude and Amazon Nova. The service enables organizations to fine-tune these models with proprietary data and implement Retrieval Augmented Generation (RAG) within a secure computing environment.

SAP Generative AI Hub complements this foundation by providing enterprise-specific governance and control mechanisms. The hub manages model selection, knowledge base indexing, and retrieval operations while enforcing necessary safety guardrails and risk controls. This ensures AI deployments remain compliant with enterprise standards and business requirements.

In this documentation, we will focus on the JRA aspect, as these components create a robust framework for implementing AI capabilities across SAP processes and AWS services, from customer order management to production design, while maintaining enterprise security and reliability standards.

 **AWS-SAP Joint Reference Architecture in Generative AI** 

![\[Joint Reference Architecture in Generative AI Hub\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-ai-genaihub.png)


Key components from the architecture:
+  [Amazon Bedrock](https://aws.amazon.com/bedrock/) is a service that provides access to various Foundational Models (FMs) through API interfaces. It features models like [Amazon Titan](https://aws.amazon.com/bedrock/amazon-models/titan/), [Amazon Nova](https://aws.amazon.com/ai/generative-ai/nova/) and [Anthropic Claude](https://www.anthropic.com/claude), which are comprehensive new generation FMs with industry leading price performance. These models are versatile and can handle many different applications.
+  [SAP AI Core](https://www.sap.com/india/products/artificial-intelligence/ai-core.html) with [Generative AI Hub](https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/generative-ai-hub-in-sap-ai-core) provides customers access to AI capabilities, including FMs, and offers standardized interfaces for SAP BTP applications. It serves as a management layer that controls access to Bedrock and creates endpoints for applications to utilize FMs. Generative AI Hub enforces centralized safety controls and risk mitigation measures to ensure secure and compliant AI in enterprise deployment. For further details on the SAP’s Generative AI Hub supported models through Bedrock, please refer to [SAP Note 3437766](https://me.sap.com/notes/3437766).
+  [SAP HANA Cloud](https://discovery-center.cloud.sap/serviceCatalog/sap-hana-cloud?region=all) as database management with [vector engine support](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/introduction?locale=en-US) for RAG implementation that can be used for grounding capabilities by efficiently finding and fetching relevant business documents that relate to specific questions or tasks. This information is then used as context for the foundational model by enhancing its ability to provide accurate and context-specific responses.
+  [SAP Cloud Application Programming (CAP)](https://pages.community.sap.com/topics/cloud-application-programming) Model is a development framework that provides a structured approach to enterprise services and applications. CAP simplifies development by providing integrated frameworks with [SAP UI5](https://sapui5.hana.ondemand.com/) frontend.
+  [SAP Identity Provisioning Services](https://help.sap.com/docs/identity-provisioning) is used for authentication and access management to secure the delivery of these AI capabilities.

The above diagram provides reference architecture for consuming generative AI capabilities of Amazon Bedrock with SAP Generative AI Hub. Using this, SAP workloads can now be supplemented with Foundational Models to harness the power of SAP data, resulting in improved business insights and operational efficiencies at a lower cost.
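As an illustrative sketch of this consumption path, SAP's `generative-ai-hub-sdk` Python package exposes a boto3-style client for Bedrock models configured in Generative AI Hub. This assumes an SAP AI Core service binding, and the model name is a placeholder that must match a deployment in your instance:

```python
def build_messages(user_prompt: str) -> list:
    """Single-turn message payload in the Bedrock Converse API shape."""
    return [{"role": "user", "content": [{"text": user_prompt}]}]

def invoke_via_genai_hub(model_name: str, prompt: str) -> str:
    # Requires the generative-ai-hub-sdk package and an SAP AI Core binding;
    # model_name (e.g. an Anthropic Claude deployment) is a placeholder.
    from gen_ai_hub.proxy.native.amazon.clients import Session
    bedrock = Session().client(model_name=model_name)
    response = bedrock.converse(messages=build_messages(prompt))
    return response["output"]["message"]["content"][0]["text"]
```

Routing the call through Generative AI Hub rather than directly to Bedrock keeps the centralized safety controls and access management described above in the loop.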

You can find out more from SAP Architecture Center under [Generative AI and SAP BTP](https://architecture.learning.sap.com/docs/ref-arch/e5eb3b9b1d).

 **AWS-SAP Joint Reference Architecture in Agent2Agent** 

![\[Joint Reference Architecture in Agent2Agent\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-ai-a2a.png)


Key components from the architecture:
+  [Amazon Bedrock Agents](https://aws.amazon.com/bedrock/) is a service that uses the reasoning of foundation models, APIs, and data to break down user requests, gather relevant information, and efficiently complete tasks. With its multi-agent collaboration, it allows developers to build, deploy, and manage multiple specialized agents that seamlessly work together to address increasingly complex business workflows.
+  [Amazon Bedrock AgentCore](https://aws.amazon.com/bedrock/agentcore/) enables you to deploy and operate highly capable AI agents securely, at scale. AgentCore services can be used together or independently and work with any framework including CrewAI, LangGraph, LlamaIndex, and Strands Agents, as well as any foundation model in or outside of Amazon Bedrock, giving you ultimate flexibility. AgentCore eliminates the undifferentiated heavy lifting of building specialized agent infrastructure, so you can accelerate agents to production.

You can find out more from SAP Architecture Center under [Agent2Agent (A2A) Interoperability in Enterprise AI](https://architecture.learning.sap.com/docs/ref-arch/e5eb3b9b1d/8).

# Integration
<a name="rise-jra-integration"></a>

In the RISE with SAP landscape, SAP Business Technology Platform (BTP), particularly the [SAP Integration Suite](https://help.sap.com/docs/integration-suite/sap-integration-suite/what-is-sap-integration-suite?locale=en-US), often facilitates integration scenarios. This service is capable of supporting integrations across cloud, on-premises, and hybrid environments within the SAP ecosystem.

There are two deployment options for SAP Integration Suite:

 **A. Standard Deployment** 

In SAP Integration Suite, integration developers create integration flows and Application Programming Interfaces (APIs). The created integration and API content is deployed to SAP’s Integration Suite runtime environment. Once deployed, the integration content (e.g., a set of integration flows) becomes operational, enabling data exchange with connected sender and receiver systems.

 **B. Hybrid Deployment Using Edge Integration Cell** 

 [Edge Integration Cell](https://help.sap.com/docs/integration-suite/sap-integration-suite/what-is-sap-integration-suite-edge-integration-cell?locale=en-US) is an optional hybrid integration runtime offered as part of SAP Integration Suite, which enables you to manage APIs and run integration scenarios within your private landscape. The hybrid deployment model of Edge Integration Cell enables you to design and monitor your integration content in the cloud. It also allows you to deploy and run your integration content in your private landscape. Its runtime environment is realized as a Kubernetes container, facilitating secure, internal data exchange.

For more detailed information, you can refer to [SAP Note 3426066 FAQ: Edge Integration Cell simple questions](https://me.sap.com/notes/3426066/E) and [SAP Note 3391207 SAP Integration Suite : restrictions for the Edge Integration Cell](https://me.sap.com/notes/3391207/E).

 **Deploy Edge Integration Cell on AWS** 

Edge Integration Cell (EIC) can be deployed on AWS to leverage its scalable infrastructure while maintaining secure and controlled execution in a customer-managed environment. This architecture combines AWS-native services with EIC’s hybrid capabilities, ensuring a seamless integration experience. Edge integration cell on AWS can be deployed in standard or High Availability (HA) architecture.

You can refer to the detailed EIC architecture, SAP prerequisites, and AWS prerequisites in [this SAP-samples GitHub repository](https://github.com/SAP-samples/btp-edge-integration-cell-aws).

![\[Joint Reference Architecture in Edge Integration Cell\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-integration.png)


Key Components
+  **Edge Integration Cell** is a unified runtime pipeline consisting of the following key components:
  +  **Worker** is a Camel-based runtime of Integration Suite that executes integration flows.
  +  **Policy Engine** is an Envoy-based runtime with SAP-built extensions for enforcing policies like security or traffic management on API proxies.
+  **The Message Service** implements an asynchronous integration pattern based on the JMS protocol. For the cloud offering, this instance is managed by SAP.
+  **The PostgreSQL database** is a relational database system for storing structured data and is managed by SAP for the public cloud offering.
+  **Redis** is an in-memory data store used for caching.

 **Edge Integration Cell Sizing** 

Detailed below is the minimum sizing for Edge Integration Cell (EIC). For more detailed sizing based on scenarios, refer to [SAP Note 3247839](https://me.sap.com/notes/3247839) and the [Sizing Guide for Edge Integration Cell](https://help.sap.com/docs/integration-suite/sap-integration-suite/sizing-guidelines).

 **Sizing of worker node** : Minimum CPU and Memory requirements for High Availability (HA) and non-HA (agent or worker nodes)


| Deployment Type | CPU/Memory | Persistence Storage | 
| --- | --- | --- | 
|  Non-HA  |  8 vCPU/32 GiB (m6a.2xlarge)  |  101 GiB of Amazon EBS GP3  | 
|  HA  |  16 vCPU/64 GiB (m6a.4xlarge)  |  204 GiB of Amazon EBS GP3  | 

A minimum of 3 worker nodes is required in both HA and non-HA configurations.

 **External Storage** : Minimum Sizing for Postgres and Redis for HA


| Database | CPU/Memory | Persistence Storage | 
| --- | --- | --- | 
|  Postgres  |  1 CPU / 2 GiB (db.t2.small)  |  50 GiB of EBS GP3  | 
|  Redis  |  1 CPU / 1 GiB (cache.t2.small)  |  N/A  | 


|  | 
| --- |
|   **Pricing example** - With the minimum configuration, we calculated indicative monthly costs in USD to deploy SAP Edge Integration Cell in the us-east-1 region: Load balancer (NLB) with 10 GB/hour of data = \$160.23; Amazon EKS cluster = \$173.00; three worker nodes with m6a.2xlarge = \$1421.75 (3-year No Upfront EC2 Instance Savings Plan); RDS PostgreSQL Multi-AZ = \$1104.21; ElastiCache Redis = \$124.82. Total cost for running EIC in HA mode is approximately \$2,984 per month, billed to the AWS account managed by the customer.  | 
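The indicative monthly total can be reproduced by summing the line items in the pricing example:

```python
# Indicative monthly cost components for EIC in HA mode (us-east-1, USD),
# taken from the pricing example above.
components = {
    "Network Load Balancer (10 GB/hour)": 160.23,
    "Amazon EKS cluster": 173.00,
    "3 x m6a.2xlarge worker nodes (3-yr Savings Plan)": 1421.75,
    "RDS PostgreSQL Multi-AZ": 1104.21,
    "ElastiCache Redis": 124.82,
}

total = round(sum(components.values()), 2)
print(f"Indicative monthly total: ${total}")  # prints "Indicative monthly total: $2984.01"
```

Actual costs vary with Region, data transfer, and pricing changes; use the AWS Pricing Calculator for a current estimate.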

You can find out more from SAP Architecture Center under [Edge Integration Cell on AWS](https://architecture.learning.sap.com/docs/ref-arch/263f576c90/1).

# Custom Application
<a name="rise-jra-customapps"></a>

Custom applications are created by customers to address their unique business needs and challenges that cannot be fully met by off-the-shelf software solutions. Organizations often require specific functionality, workflows, or integrations that align precisely with their business processes, industry regulations, or competitive advantages. By developing custom applications, companies can maintain complete control over their software’s features, security requirements, and user experience while ensuring seamless integration with their existing systems and databases. Custom applications also allow businesses to adapt quickly to changing market conditions and scale their solutions as they grow, ultimately providing them with a tailored tool that directly supports their operational efficiency and strategic objectives.

When developing custom applications that interact with SAP systems, it’s crucial to adhere to [SAP’s clean core concept](https://www.sap.com/sea/products/erp/rise/methodology/clean-core.html), which emphasizes keeping the core SAP system as clean as possible while building extensions and customizations outside the core. This approach ensures long-term maintainability and reduces the total cost of ownership by making it easier to implement SAP updates, upgrades, and innovations without disrupting custom functionality. By leveraging [SAP Business Technology Platform (BTP)](https://www.sap.com/sea/products/technology-platform.html), [AWS Cloud Services](https://aws.amazon.com/products/) and following clean core principles, organizations can create side-by-side extensions, custom applications, and integrations that preserve system stability while maintaining the agility to adapt to changing business requirements. This architectural strategy enables businesses to benefit from both customization and standardization, ensuring their applications remain sustainable and future-proof within the SAP ecosystem.

Some of the key AWS services that can help with these custom applications:
+  [Amazon Simple Notification Service (Amazon SNS)](https://aws.amazon.com/sns/) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications. For example: you can send email to notify a failed delivery of goods, trigger an event based programs, and others.
+  [Amazon Simple Queue Services (SQS)](https://aws.amazon.com/sqs/) lets you send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. For example: you can queue burst of high volume incoming messages for sequential processing.
+  [Amazon EventBridge](https://aws.amazon.com/eventbridge) is a service that provides [real-time access to changes](https://aws.amazon.com/eventbridge/integrations/) in data in AWS services, your own applications, and software as a service (SaaS) applications without writing code. For example, you can trigger near-real-time, event-based ordering from SAP through an API Gateway to an external SaaS application when an out-of-stock situation occurs in a warehouse.
+  [AWS SDK for ABAP](https://aws.amazon.com/sdk-for-sap-abap/) simplifies the use of AWS services alongside SAP applications with a client library of modules that are consistent and familiar to ABAP developers. For example, you can use it to automatically validate mailing addresses in the SAP Business Partner maintenance screen using Amazon Location Service.
+  [AWS AI Services](https://aws.amazon.com/ai/services/), such as [Amazon Polly](https://aws.amazon.com/polly/) to turn text into lifelike speech, [Amazon Transcribe](https://aws.amazon.com/transcribe/) to convert speech to text, and [Amazon Rekognition](https://aws.amazon.com/rekognition/) to extract information and insights from images and videos.
+ For more AWS services that you can use, see the [full list of AWS products](https://aws.amazon.com/products/?nc2=h_prod).
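To make the Amazon SNS pattern above concrete, here is a minimal Python (boto3) sketch. The topic ARN and message fields are hypothetical placeholders; the helper builds a failed-delivery notification, and publishing it delivers a copy to every topic subscriber (email, SMS, mobile push):

```python
import json

# Hypothetical topic ARN (replace with your own).
FAILED_DELIVERY_TOPIC = "arn:aws:sns:us-east-1:123456789012:failed-deliveries"

def build_failed_delivery_message(delivery_id: str, reason: str) -> dict:
    """Build the subject/body pair for a failed-delivery notification."""
    return {
        "Subject": f"Failed delivery {delivery_id}",
        "Message": json.dumps({"delivery_id": delivery_id, "reason": reason}),
    }

def notify_failed_delivery(delivery_id: str, reason: str) -> None:
    """Publish the notification to the SNS topic; SNS fans it out to
    all subscribers of the topic."""
    import boto3  # imported here so the pure helper stays dependency-free
    sns = boto3.client("sns")
    sns.publish(TopicArn=FAILED_DELIVERY_TOPIC,
                **build_failed_delivery_message(delivery_id, reason))
```

In a real landscape the publish call would run in, for example, an AWS Lambda function triggered by an SAP event.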

You can upskill yourself and your team with the [Build Resilient Applications on SAP BTP with Amazon Web Services](https://learning.sap.com/courses/build-resilient-applications-on-sap-btp-with-amazon-web-services) learning module, which was jointly built by AWS and SAP.

In the following sections, we will cover architectural patterns and reference architectures that leverage SAP and AWS technologies to extend SAP processes while keeping the core clean.

 **Event-Based Application** 

In traditional business process architectures, systems often operate in silos, with tightly coupled components and rigid, predefined workflows. This approach struggles to keep pace with the dynamic nature of modern business environments. Event-based architecture emerged as a solution to these limitations, addressing several critical challenges.

With event-based architectures, you can implement end-to-end business processes with decoupled system components that communicate asynchronously. This approach lets you build more resilient systems and business processes that better handle network issues, service outages, and other disruptions, following the [AWS Well-Architected Framework SAP Lens](https://docs.aws.amazon.com/wellarchitected/latest/sap-lens/sap-lens.html).

An example of event-based notification through Amazon SNS:

![\[Event-based notification with SNS\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-sns.png)


In the architecture above, when a user updates a Business Partner in SAP S/4HANA, the update event is published through SAP Event Mesh. A CAP application, enhanced with the AWS SDK for Java, triggers the Amazon SNS topic, which notifies the data owner of the change through email, text message, or mobile push notification. You can find out more in [this GitHub repository](https://github.com/SAP-samples/cloud-cap-amazon-sns-integration).

An example of event-based notification through Amazon SQS and Amazon EventBridge, together with [AWS IoT services](https://aws.amazon.com/iot/):

![\[Event-based notification with SQS and Event Bridge\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-sqs.png)


The architecture above shows an event-driven integration that leverages SAP BTP for Industry 4.0 scenarios, demonstrating the versatility of SAP-AWS integration to support a predictive maintenance scenario that reduces downtime on your manufacturing line. It uses AWS IoT services, Amazon SQS, and Amazon EventBridge to surface early sensor data such as speed, temperature, and vibration that indicate the need for maintenance before an outage or downtime occurs.
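A sketch of the decision-and-forward step in such a flow: hypothetical vibration and temperature thresholds decide whether a sensor reading warrants a maintenance event, which is then sent to a custom EventBridge event bus. The bus name, event source, and field names are illustrative assumptions:

```python
import json

VIBRATION_LIMIT_MM_S = 7.1   # hypothetical alarm threshold
TEMPERATURE_LIMIT_C = 85.0   # hypothetical alarm threshold

def needs_maintenance(reading: dict) -> bool:
    """Decide whether a sensor reading indicates maintenance is due."""
    return (reading.get("vibration_mm_s", 0) > VIBRATION_LIMIT_MM_S
            or reading.get("temperature_c", 0) > TEMPERATURE_LIMIT_C)

def to_eventbridge_entry(reading: dict, bus_name: str = "maintenance-bus") -> dict:
    """Shape a reading into an entry for the EventBridge PutEvents API."""
    return {
        "Source": "factory.sensors",
        "DetailType": "MaintenanceRequired",
        "Detail": json.dumps(reading),
        "EventBusName": bus_name,
    }

def forward_if_needed(reading: dict) -> None:
    """Put a MaintenanceRequired event on the bus for rules to route onward."""
    if not needs_maintenance(reading):
        return
    import boto3
    boto3.client("events").put_events(Entries=[to_eventbridge_entry(reading)])
```

EventBridge rules on the bus can then route the event to SQS, Lambda, or an API destination that reaches SAP.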

 **Artificial Intelligence and Machine Learning Application** 

Safety hazards in every workplace come in many different forms: sharp edges, falling objects, flying sparks, chemicals, noise, and other potentially dangerous situations. Safety regulators such as the Occupational Safety and Health Administration (OSHA) and the European Commission often require that businesses protect their employees and customers from hazards that can cause injury by providing them personal protective equipment (PPE) and ensuring its use. With Amazon Rekognition PPE detection, customers can analyze images from their on-premises cameras across all locations to automatically detect whether persons in the images are wearing the required PPE, such as face covers, hand covers, and head covers. SAP customers use the SAP Environment, Health, and Safety (EHS) module to record these detections manually as safety observations.

We provide an integration framework between [Amazon Rekognition](https://aws.amazon.com/rekognition/) and [SAP Environment, Health and Safety (EHS)](https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/1b3596cc5dd5428d887966a4193ddc29/5b22b8d6606b4d32b8af9283901d3bdc.html?locale=en-US) that adopts the open-source Events-to-Business-Actions framework to automate the creation of safety observations.

![\[Safety at scale with Amazon Rekognition PPE Detection\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-ppe.png)


In the architecture above, the information flow begins with CCTV cameras capturing images at a factory and storing them in [Amazon S3](https://aws.amazon.com/s3/). An [AWS Lambda](https://aws.amazon.com/pm/lambda/) function triggers Amazon Rekognition’s PPE detection model to inspect for safety equipment compliance. If violations are detected, the Lambda function retrieves credentials from AWS Secrets Manager and communicates with [SAP Integration Suite’s Advanced Event Mesh](https://www.sap.com/products/technology-platform/integration-suite/advanced-event-mesh.html). The event is then processed by the Event-to-Business-Action framework, which uses [SAP Build Process Automation](https://www.sap.com/sea/products/technology-platform/process-automation.html)'s Business Rules to determine appropriate actions. Finally, the system creates an EHS Incident Report Safety Observation in the SAP S/4HANA system through SAP Destination Service and Private Link Service. You can find out more in [this GitHub repository](https://github.com/SAP-samples/btp-aws-ppe-detection-ehs).
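The detection step of that Lambda function might look like the sketch below, which calls the Amazon Rekognition `detect_protective_equipment` API on a frame stored in Amazon S3. The required equipment types and confidence threshold are illustrative choices, not the framework's actual configuration:

```python
def persons_in_violation(response: dict) -> list:
    """Return the IDs of persons missing required PPE, taken from the
    Summary section of a DetectProtectiveEquipment response."""
    summary = response.get("Summary", {})
    return sorted(summary.get("PersonsWithoutRequiredEquipment", []))

def check_frame(bucket: str, key: str) -> list:
    """Run PPE detection on one CCTV frame stored in S3 (Lambda-style)."""
    import boto3
    rekognition = boto3.client("rekognition")
    response = rekognition.detect_protective_equipment(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        SummarizationAttributes={
            "MinConfidence": 80,  # illustrative threshold
            "RequiredEquipmentTypes": ["FACE_COVER", "HEAD_COVER", "HAND_COVER"],
        },
    )
    return persons_in_violation(response)
```

A non-empty result would trigger the downstream event to SAP BTP.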

# Operational Reliability
<a name="rise-jra-operational-reliability"></a>

Modern enterprises face significant hurdles in maintaining continuous availability of SAP services, particularly during regional outages or maintenance windows. Business continuity and operational reliability are critical concerns when deploying SAP Business Technology Platform (SAP BTP) and RISE with SAP.

 [Amazon Route 53](https://aws.amazon.com/route53/), a highly available, scalable, and globally distributed Domain Name System (DNS) web service, addresses these challenges effectively. It enables customers to implement [AWS multi-region architecture](https://docs.aws.amazon.com/prescriptive-guidance/latest/aws-multi-region-fundamentals/introduction.html) for their SAP environments, providing robust fault tolerance and enhanced reliability. By leveraging Route 53’s capabilities, organizations can build resilient SAP environments that meet stringent availability requirements. This DNS service seamlessly integrates with SAP BTP services, ensuring business operations continue smoothly even during regional disruptions.

 **Understanding Amazon Route 53 in the SAP Context** 

Amazon Route 53 serves as a foundational component for building resilient SAP environments by providing intelligent DNS routing capabilities. In the context of SAP BTP and RISE with SAP, Route 53 addresses critical reliability challenges that cannot be solved through standard Availability Zone (AZ) configurations alone. While SAP BTP services support multi-AZ deployments within a single region, this approach remains vulnerable to region-wide failures. Route 53 extends this resilience by enabling traffic routing across multiple geographic regions, effectively creating a global safety net for mission-critical SAP applications.

Route 53’s architecture is designed with maximum reliability in mind through the separation of control plane and data plane functions. The data plane is explicitly designed to be [statically stable](https://aws.amazon.com/builders-library/static-stability-using-availability-zones/) in the face of, for example, a control plane failure or partition event. This architectural separation ensures that DNS resolution remains highly available, making Route 53 an ideal foundation for disaster recovery scenarios in SAP environments. The service continuously monitors endpoint health and automatically redirects users to healthy resources when failures are detected.

Beyond simple failover capabilities, Route 53 offers sophisticated routing policies that can be tailored to specific business requirements. These include latency-based routing to direct users to the lowest-latency endpoint, geolocation routing to comply with data sovereignty regulations, and weighted routing to distribute traffic according to defined proportions. For global organizations using SAP services, these capabilities translate into consistent performance and availability for users across different geographic locations, enhancing the overall user experience while maintaining system reliability.

 **Amazon Route 53 Architecture for SAP BTP Multi-Region Resiliency** 

The foundation of a resilient SAP BTP environment using Amazon Route 53 is a well-designed multi-region architecture. This approach begins with geographic redundancy, where critical application components are deployed across different regions to eliminate a [single point of failure](https://en.wikipedia.org/wiki/Single_point_of_failure). Route 53 serves as the intelligent traffic director in this architecture, continuously monitoring the health of endpoints and making real-time routing decisions based on availability and performance metrics. When [integrated with SAP BTP’s Custom Domain service](https://github.com/SAP-samples/btp-services-intelligent-routing/tree/launchpad_aws/04-Map%20Custom%20Domain%20Routes), Route 53 provides a seamless user experience through consistent URLs, even as traffic is redirected between regions during failover events.

You can find out more in [SAP Architecture Center – Architecting Multi-Region Resiliency – Load Balancers](https://architecture.learning.sap.com/docs/ref-arch/81805673c0/3).

 **Amazon Route 53 Routing Options** 

Route 53 offers various [routing policies](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html) for SAP BTP implementations:
+  **Simple routing**: Directs traffic to a single resource
+  **Weighted routing**: Distributes traffic across multiple resources in specified proportions
+  **Latency-based routing**: Routes users to the region with lowest network latency
+  **Failover routing**: Automatically redirects from unhealthy primary to healthy secondary resource
+  **Geolocation routing**: Directs traffic based on users' geographic locations
+  **Geoproximity routing**: Routes based on geographic location with optional biasing
+  **Multi-value answer routing**: Responds with up to eight healthy records selected randomly

These options can be combined to create sophisticated routing strategies tailored to specific SAP environment requirements.
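For example, a failover policy is expressed as a PRIMARY/SECONDARY record pair created with a single `ChangeResourceRecordSets` call. The sketch below, with hypothetical record names and TTL, builds the change batch; note that only the primary record carries a health check:

```python
def failover_change_batch(name: str, primary: str, secondary: str,
                          health_check_id: str, ttl: int = 60) -> dict:
    """Build a Route 53 ChangeBatch defining a PRIMARY/SECONDARY failover pair."""
    def record(value, role, health_check=None):
        r = {
            "Name": name,
            "Type": "CNAME",
            "TTL": ttl,  # short TTL speeds failover, raises DNS query volume
            "SetIdentifier": f"{name}-{role.lower()}",
            "Failover": role,
            "ResourceRecords": [{"Value": value}],
        }
        if health_check:
            r["HealthCheckId"] = health_check
        return r
    return {"Changes": [
        {"Action": "UPSERT",
         "ResourceRecordSet": record(primary, "PRIMARY", health_check_id)},
        {"Action": "UPSERT",
         "ResourceRecordSet": record(secondary, "SECONDARY")},
    ]}

def apply_batch(zone_id: str, batch: dict) -> None:
    """Submit the change batch to the hosted zone."""
    import boto3
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=zone_id, ChangeBatch=batch)
```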

 **Amazon Route 53 Implementation Patterns for SAP Environments** 

Two primary implementation patterns have emerged for SAP environments: active-passive and active-active configurations.

 **Pattern 1. Active-Passive Implementation** 

In an active-passive configuration, Route 53 directs all traffic to a primary SAP BTP region during normal operations, with a secondary region serving as a standby. This approach offers simplicity and cost-effectiveness while still providing disaster recovery capabilities. The active-passive pattern works particularly well for [SAP Build Work Zone](https://discovery-center.cloud.sap/serviceCatalog/sap-build-work-zone-standard-edition?region=all) deployments where consistent user experience is critical.

You implement this by deploying the Work Zone service in the primary region with all necessary configurations, and then using [SAP Cloud Transport Management service](https://www.sap.com/sea/products/technology-platform/cloud-transport-management.html), you replicate this setup to a secondary region. Both regions are configured with identical domains using SAP BTP Custom Domain service, while Route 53 is set up with failover routing policy and health checks monitoring the primary endpoint. When issues occur in the primary region, Route 53 automatically redirects users to the secondary region with minimal disruption.

TTL optimization directly impacts failover speed and DNS query volume. Short TTL values enable fast failover but increase DNS query traffic. The specific TTL value should align with the Recovery Time Objective (RTO) requirements. For detailed implementation steps, refer to the SAP blog post [Route Multi-Region Traffic to SAP Build Work Zone using Amazon Route 53](https://community.sap.com/t5/technology-blog-posts-by-sap/route-multi-region-traffic-to-sap-build-work-zone-standard-edition-using/ba-p/13561468) and [this GitHub repository](https://github.com/SAP-samples/btp-services-intelligent-routing/tree/launchpad_aws).
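The health check monitoring the primary endpoint can be created as follows; the endpoint name, request interval, and failure threshold are illustrative values:

```python
def health_check_config(fqdn: str, path: str = "/", interval_s: int = 30,
                        failures: int = 3) -> dict:
    """HealthCheckConfig for an HTTPS check against the primary region."""
    return {
        "Type": "HTTPS",
        "FullyQualifiedDomainName": fqdn,
        "ResourcePath": path,
        "Port": 443,
        "RequestInterval": interval_s,   # seconds between checks (30 or 10)
        "FailureThreshold": failures,    # consecutive failures before unhealthy
    }

def create_primary_health_check(fqdn: str) -> str:
    """Create the health check and return its ID for use in the
    PRIMARY failover record."""
    import uuid
    import boto3
    response = boto3.client("route53").create_health_check(
        CallerReference=str(uuid.uuid4()),  # idempotency token
        HealthCheckConfig=health_check_config(fqdn))
    return response["HealthCheck"]["Id"]
```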

![\[Active-Passive Implementation\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-opsreliability-active-passive.png)


 **Pattern 2. Active-Active Implementation** 

The active-active pattern distributes traffic across multiple regions simultaneously, optimizing resource utilization and minimizing regional failure impact. This approach is ideal for global organizations with users across different geographic locations. A typical implementation for [SAP Cloud Application Programming (CAP)](https://pages.community.sap.com/topics/cloud-application-programming) involves deploying identical applications in multiple SAP BTP subaccounts across different regions, connected to an [Amazon Aurora](https://aws.amazon.com/rds/aurora/) global database, a high-performance cluster spanning multiple regions.

Data consistency is maintained by configuring Aurora for "read local/write global" operations, directing all writes to the primary region while allowing reads from any region. Route 53 implements latency-based or geolocation routing policies to direct users to the nearest healthy region. This setup not only provides resilience against regional outages but also improves performance by reducing latency for globally distributed users.
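The read local/write global rule reduces to a small endpoint-selection helper in the application's data access layer. The endpoints below are hypothetical placeholders for an Aurora global database's writer and regional reader endpoints:

```python
# Hypothetical endpoints for an Aurora global database (replace with your own).
WRITER_ENDPOINT = "appdb.cluster-primary.us-east-1.rds.amazonaws.com"
LOCAL_READERS = {
    "us-east-1": "appdb.cluster-ro-primary.us-east-1.rds.amazonaws.com",
    "eu-central-1": "appdb.cluster-ro-secondary.eu-central-1.rds.amazonaws.com",
}

def pick_endpoint(operation: str, region: str) -> str:
    """Read local / write global: writes always go to the primary writer,
    reads go to the regional reader, falling back to the writer if the
    region has no reader endpoint."""
    if operation == "write":
        return WRITER_ENDPOINT
    return LOCAL_READERS.get(region, WRITER_ENDPOINT)
```

The CAP application then opens its database connection against the endpoint returned for each request.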

For implementation details, see [Distributed Resiliency of SAP CAP applications using Amazon Aurora with Amazon Route 53](https://community.sap.com/t5/-/-/m-p/13570134) and [SAP CAP Application Dynamic Data Source Routing](https://community.sap.com/t5/-/-/m-p/13558920). You can also refer to this [GitHub repository](https://github.com/SAP-samples/cap-distributed-resiliency/tree/Data-Source-Routing/source).

![\[Active-Active Implementation\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-opsreliability-active-active.png)


 **Solution guidance and other considerations** 

Each implementation pattern requires careful consideration of data consistency, authentication mechanisms, and operational processes to ensure seamless user experiences during normal operations and failover events.

For broader architectural guidance, refer to [SAP BTP Multi-Region reference architectures for High Availability](https://community.sap.com/t5/-/-/m-p/13524196) and AWS's guide on [Creating Disaster Recovery Mechanisms Using Amazon Route 53](https://aws.amazon.com/blogs/networking-and-content-delivery/creating-disaster-recovery-mechanisms-using-amazon-route-53/).

# Internet of Things
<a name="rise-jra-iot"></a>

Internet of Things (IoT) refers to a network of interconnected physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, and network connectivity, enabling these objects to collect and exchange data. IoT allows objects to be sensed and controlled remotely across existing network infrastructure, creating opportunities for direct integration between the physical world and computer-based systems.

 AWS IoT provides a comprehensive suite of services to connect, manage, and secure IoT devices at scale. At its core, [AWS IoT Core](https://aws.amazon.com/iot-core/) serves as the foundation, enabling secure device connectivity and message routing. [AWS IoT Device Management](https://aws.amazon.com/iot-device-management/) helps register, organize, monitor, and remotely manage IoT devices throughout their lifecycle. [AWS IoT Greengrass](https://aws.amazon.com/greengrass/) extends cloud capabilities to edge devices, allowing them to act locally on data while still maintaining cloud connectivity. Other complementary services in the AWS IoT family include [IoT Events](https://aws.amazon.com/iot-events/), [IoT TwinMaker](https://aws.amazon.com/iot-twinmaker/), [IoT ExpressLink](https://aws.amazon.com/iot-expresslink/), and [IoT FleetWise](https://aws.amazon.com/iot-fleetwise/), each serving specific IoT use cases and requirements.

 **AWS IoT with SAP** 

![\[IoT with SAP\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-iot-sap.png)


The combination of AWS IoT services and SAP business applications creates a powerful platform for digital transformation, enabling organizations to implement smart solutions across various domains - from connected products to smart city applications. This integration helps organizations harness real-time data for improved operational visibility, enhanced customer experiences, and innovative business models, driving efficiency and accelerating innovation across the enterprise ecosystem.

In [Smart Products & Services](https://aws.amazon.com/industrial/smart-products-and-services/) scenarios, AWS IoT services enable intelligent operations through [AWS IoT SiteWise](https://aws.amazon.com/iot-sitewise/) and other services, delivering real-time insights that integrate seamlessly with SAP business modules. AWS IoT Device Management provides comprehensive monitoring across connected devices, with continuous data streams enriching SAP systems for informed decision-making. Edge computing capabilities through AWS IoT Greengrass ensure efficient data processing at the source, enabling rapid response times and optimal performance, particularly valuable for remote operations.

 AWS IoT services can integrate with [SAP Business Technology Platform (BTP)](https://www.sap.com/products/technology-platform.html) to create powerful end-to-end IoT solutions. Through SAP BTP event-driven architecture and Enterprise Messaging services, IoT data from AWS can be efficiently consumed by SAP applications in real-time. The [Cloud Application Programming (CAP)](https://pages.community.sap.com/topics/cloud-application-programming) model in SAP BTP enables rapid development of IoT-enabled business applications that can process and act on IoT data from AWS. The integration can be achieved through various methods, such as using [SAP Cloud Integration ](https://help.sap.com/docs/cloud-integration/sap-cloud-integration/sap-cloud-integration?locale=en-US), [API Management](https://help.sap.com/docs/sap-api-management/sap-api-management/what-is-api-management?locale=en-US), or direct REST APIs. For example, sensor data collected through AWS IoT Core can trigger events in SAP BTP, which can then be processed by CAP applications to update business processes, generate alerts, or trigger automated workflows in SAP systems.
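As an illustration, a device-side or test-harness publish to the AWS IoT Core MQTT message broker can be sketched with the boto3 `iot-data` client; the topic naming scheme and payload fields are assumptions for this example:

```python
import json
import time

def sensor_payload(device_id: str, temperature_c: float) -> str:
    """JSON payload an edge device would publish for one telemetry reading."""
    return json.dumps({
        "device_id": device_id,
        "temperature_c": temperature_c,
        "ts": int(time.time()),
    })

def publish_reading(device_id: str, temperature_c: float) -> None:
    """Publish the reading to an MQTT topic on the AWS IoT message broker;
    IoT rules can then route it onward (e.g., toward SAP BTP)."""
    import boto3
    boto3.client("iot-data").publish(
        topic=f"factory/{device_id}/telemetry",  # hypothetical topic scheme
        qos=1,
        payload=sensor_payload(device_id, temperature_c),
    )
```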

 **AWS IoT Security** 

While AWS maintains robust cloud security mechanisms to protect data movement between AWS IoT and other AWS services, customers are responsible for managing device credentials (including X.509 certificates, AWS credentials, Amazon Cognito identities, federated identities, or custom authentication tokens) and implementing appropriate access policies.

 AWS IoT implements comprehensive security measures to ensure secure device connectivity and data transmission. Devices can connect to AWS IoT using X.509 certificates or Amazon Cognito identities over Transport Layer Security (TLS) connections, with additional authentication options available for development and specific API-based applications. The AWS IoT message broker handles device authentication and manages access permissions through AWS IoT policies, while custom authentication can be implemented using custom authorizers.

Furthermore, the AWS IoT rules engine securely forwards device data to other devices or AWS services based on user-defined rules, utilizing AWS Identity and Access Management (IAM) to ensure secure data transfer to intended destinations. Customer may leverage [AWS IoT Device Defender](https://aws.amazon.com/iot-device-defender/?p=ft&c=iot&z=3&refid=a3593a2f-ae1f-4cc4-a14f-ff76e52593aa), a fully managed service that helps you secure your fleet of IoT devices.

You can find out more of [Security in AWS IoT](https://docs.aws.amazon.com/iot/latest/developerguide/security.html).

 **AWS and SAP Joint Reference Architecture for Internet of Things** 

The JRA architecture below shows how AWS IoT services and SAP BTP services combine to build loosely coupled edge-to-business-process architectures.

![\[JRA for Internet Of Things\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-iot.png)


 **IoT events** - Edge locations can be environments like factories or shop floors where IoT devices such as cameras, PLCs, SCADA systems, IoT sensors or industrial assets collect data including temperature, vibration, and other metrics. The collected data is transmitted to AWS IoT services in the cloud using appropriate connectors running on edge runtime environments like AWS IoT Greengrass, with protocols specific to each device type. Customers have the option to sanitize data at the edge using AWS Edge computing services before transmission to the cloud. AWS IoT SiteWise Edge extends cloud capabilities to industrial edge environments, while AWS IoT Greengrass serves as a general-purpose edge framework. This edge processing helps reduce noise in data, improves data quality, and optimizes costs.

 **IoT Data Processing on AWS** - Data received from edge locations is first processed by AWS services such as Amazon Rekognition for computer vision use cases or other AWS services for data analysis, where IT (Information Technology) and OT (Operational Technology) data insights are combined to trigger intelligent workflow automation. AWS Lambda then triggers an event to SAP BTP for the next course of action.
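A minimal sketch of that Lambda step, assuming a hypothetical Event Mesh REST endpoint and an illustrative event envelope (the real Events-to-Business-Actions framework defines its own field names and authentication):

```python
import json
import urllib.request

# Hypothetical Advanced Event Mesh REST endpoint (replace with your own).
EVENT_MESH_URL = "https://mesh.example.com/TOPIC/factory%2Fmaintenance"

def to_business_event(detail: dict) -> dict:
    """Wrap an IoT insight in an envelope for the BTP framework
    (field names are illustrative)."""
    return {
        "eventCategory": "IoT",
        "eventType": detail.get("type", "Unknown"),
        "payload": detail,
    }

def lambda_handler(event, context):
    """Forward one processed insight to SAP BTP over HTTPS."""
    body = json.dumps(to_business_event(event)).encode()
    request = urllib.request.Request(
        EVENT_MESH_URL, data=body,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)  # add auth headers in a real setup
    return {"statusCode": 200}
```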

 **SAP Business Workflow on BTP** - Control is transferred to SAP BTP services like [Event Mesh](https://www.sap.com/products/technology-platform/integration-suite/capabilities/event-mesh.html), which allows applications to communicate through asynchronous events and [Events-to-Business-Actions-Framework](https://github.com/SAP-samples/btp-events-to-business-actions-framework). This framework responds to and integrates events generated from different sources like industrial production processes, warehouses, etc., into enterprise business systems. Based on the events category and type, respective actions are triggered in SAP applications. The processor module leverages the [decisions](https://help.sap.com/docs/build-process-automation/sap-build-process-automation/create-decision) capability of [SAP Build Process Automation](https://www.sap.com/products/technology-platform/process-automation.html) to initiate business actions and also supported by other BTP services, such as HANA Cloud for storing application data. Customers can leverage private connectivity between SAP BTP and SAP RISE on AWS environment through [SAP Private Link](https://help.sap.com/docs/private-link/private-link1/what-is-sap-private-link-service) and [AWS PrivateLink service](https://aws.amazon.com/privatelink/).

 **Business Actions on RISE with SAP** - Finally, based on the business rules, appropriate SAP business processes are triggered on the RISE with SAP systems like creation of maintenance order for predictive maintenance or creation of a safety observation for EHS.

![\[JRA for Internet Of Things and Generative AI\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-jra-iot-genai.png)


This is an alternative architecture to the one discussed in the previous section, with the following differences.

 **IoT events** – Same as Figure 1.

 **IoT Data Processing on AWS** – Data received from edge locations is forwarded directly to the SAP BTP layer for subsequent actions, including data transformation. In this case, we are using SAP Integration Suite, [Advanced Event Mesh](https://www.sap.com/products/technology-platform/integration-suite/advanced-event-mesh.html), which has an out-of-the-box connector for Amazon S3.

 **IoT Data Processing on SAP BTP** – Control is transferred to SAP BTP services like SAP Integration Suite, Advanced Event Mesh and Events-to-Business Actions Framework. Data transformation on SAP BTP is handled using GenAI services like [Generative AI Hub](https://help.sap.com/docs/ai-launchpad/sap-ai-launchpad/generative-ai-hub), which leverages AWS Generative Foundation Models such as [Amazon Nova](https://aws.amazon.com/ai/generative-ai/nova/) to derive insights from the data for further processing. Based on the processed data, event categories and types, respective actions are triggered in SAP applications. The processor module, part of the Events-to-Business-Action framework, leverages the Decisions capability of SAP Build Process Automation to initiate business actions. Additionally, SAP HANA Cloud can be used as a vector engine for Retrieval-Augmented Generation (RAG) framework and Knowledge Graph, in addition to storing application data.
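The Generative AI Hub call happens on the BTP side, but the equivalent direct call to a Nova model through Amazon Bedrock's Converse API can be sketched as follows; the model ID and prompt wording are illustrative (check the Bedrock console for the model IDs available in your region):

```python
import json

# Illustrative model ID; availability varies by region and account.
MODEL_ID = "amazon.nova-lite-v1:0"

def build_messages(readings: list) -> list:
    """Messages for the Converse API asking the model to summarize
    anomalies in a batch of sensor readings."""
    prompt = ("Summarize anomalies in these sensor readings and suggest "
              "a maintenance action:\n" + json.dumps(readings))
    return [{"role": "user", "content": [{"text": prompt}]}]

def summarize(readings: list) -> str:
    """Invoke the model and return its text answer."""
    import boto3
    response = boto3.client("bedrock-runtime").converse(
        modelId=MODEL_ID, messages=build_messages(readings))
    return response["output"]["message"]["content"][0]["text"]
```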

This integration enables scenarios such as predictive maintenance, real-time asset monitoring, and supply chain optimization by combining AWS's robust IoT and Generative AI capabilities with SAP’s enterprise business processes and data models.

You can find out more from SAP Architecture Center under [Build Events-to-Business Actions Scenarios with SAP BTP and AWS IoT SiteWise](https://architecture.learning.sap.com/docs/ref-arch/fbdc46aaae/3).

# Extensions
<a name="extensions-rise"></a>

You can extend RISE with SAP by using AWS services to improve performance, security, agility, and reduce costs. The following table provides recommended AWS services based on use case.


| Category | Use case |  AWS services | 
| --- | --- | --- | 
|   [Performance](rise-performance.md)   |  SAP Fiori and SAP GUI access with proactive observability  |   [Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html), [Accelerated Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/accelerated-vpn.html), [AWS Internet Monitor](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-InternetMonitor.html)   | 
|   [Application integration](application-integration.md)   |  Application Integration  |   [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) and [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html)   | 
|   [Archiving and Document Management](document-management.md)   |  Archiving and Document Management  |   [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html), [Amazon S3 File Gateway](https://docs.aws.amazon.com/filegateway/latest/files3/what-is-file-s3.html), [Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html)   | 
|   [Development and Extension](development-extension.md)   |  Development, Compatibility packs and alternatives  |   [AWS SDK for SAP ABAP](https://docs.aws.amazon.com/sdk-for-sapabap/latest/developer-guide/home.html), [AWS Marketplace](https://docs.aws.amazon.com/marketplace/)   | 
|   [Security Extension](security-extension.md)   |  Single Sign On, Zero Trust Access  |   [mTLS Authentication through Amazon ALB](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/mutual-authentication.html), [AWS Verified Access for SAP](https://docs.aws.amazon.com/verified-access/latest/ug/what-is-verified-access.html)   | 
|   [Artificial Intelligence](artificial-intelligence.md)   |  Generative AI  |   [Amazon Q for Business](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/what-is.html), [Amazon QuickSight](https://docs.aws.amazon.com/quicksuite/latest/userguide/quicksight-gen-bi.html), [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html)   | 

# Performance
<a name="rise-performance"></a>

 **Enhance SAP Fiori performance with Amazon CloudFront** 

 [Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html) is a Content Delivery Network (CDN) service that increases performance and reduces the latency of the SAP Fiori launchpad in RISE with SAP. CloudFront caches static content and accelerates dynamic content through edge computing.

Global SAP systems accessed by users across multiple geographical regions can use [Amazon CloudFront VPC (Virtual Private Cloud) Origins](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-vpc-origins.html) to reduce network latency and improve the SAP end-user experience.

CloudFront VPC Origins is a feature that enhances security and streamlines operations for web applications, such as SAP Fiori, hosted in private subnets within an Amazon VPC. This architecture allows CloudFront to serve as the single entry point for SAP Fiori, eliminating the need for public exposure of the SAP servers.

CloudFront VPC Origins is deployed in the customer-managed AWS account, directing SAP users coming through CloudFront to an internal [Application Load Balancer (ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html). The ALB routes Fiori traffic to the SAP systems hosted in the SAP RISE AWS account through AWS Transit Gateway. AWS Web Application Firewall (WAF) is optional but recommended to improve your security posture.

![\[Request routing with Amazon CloudFront\]](http://docs.aws.amazon.com/sap/latest/general/images/performance.png)


Data flow

1. User accesses SAP Fiori launchpad via Internet browser or mobile device

1. The request is routed through Amazon CloudFront to the edge location closest to the user

1. Optionally, AWS Web Application Firewall (WAF) evaluates the request based on the customer’s configured rules to block malicious traffic. Additionally, [Distributed Denial of Service (DDOS) protection](https://aws.amazon.com/developer/application-security-performance/articles/ddos-protection/) is also provided by [AWS Shield Standard](https://docs.aws.amazon.com/waf/latest/developerguide/ddos-standard-summary.html) which is automatically included at no extra cost when you use CloudFront with AWS WAF

1. The request is then parsed to the AWS ALB which forwards the traffic to the SAP system hosted in the SAP managed RISE account.

This improves the security posture of SAP systems by:
+ Eliminating direct exposure of SAP servers to the public internet
+ Reducing the attack surface, as CloudFront becomes the only ingress point
+ Simplifying security management with centralized control through CloudFront
+ Integrating easily with AWS WAF and AWS Shield Standard for additional protection

Integrating CloudFront VPC Origins with SAP can lead to performance improvements:
+ Global users benefit from CloudFront’s worldwide edge locations
+ Traffic stays on the high-throughput [AWS global network backbone](https://aws.amazon.com/about-aws/global-infrastructure) all the way to your SAP servers, providing optimized performance and low latency
+ Static SAP Fiori content is cached at CloudFront edge locations and dynamic SAP Fiori content is accelerated through CloudFront’s global edge network

To implement CloudFront VPC Origins for SAP:

1. The applications in RISE with SAP are hosted by default in private VPC subnets, in an AWS account managed by SAP.

1. In the AWS account managed by the customer, create an internal ALB pointing to the SAP system in the RISE account.

1. Create a CloudFront distribution with a VPC origin pointing to the ALB.

1. Update the security group for your VPC private origin (the ALB in this case) to explicitly allow the CloudFront managed prefix list. This restricts the traffic that can reach the VPC origin.

1. Ensure the same fully qualified domain name is used by CloudFront, the ALB, and SAP.

1. Configure CloudFront to handle both static and dynamic content from the SAP systems.

1. Optionally, implement AWS WAF for additional security at the edge.

Refer to AWS documentation [Restrict access with VPC origins](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-vpc-origins.html) for more information.
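Steps 2-4 above can be sketched as boto3-style request parameters. This is a minimal, hypothetical sketch: the ARNs, names, and IDs are placeholders, and the exact `create_vpc_origin` fields should be checked against the current CloudFront API reference before use.

```python
# Placeholder names, ARNs, and IDs; replace with your own values.

def vpc_origin_request(name, alb_arn):
    """Parameters for cloudfront.create_vpc_origin (boto3)."""
    return {
        "VpcOriginEndpointConfig": {
            "Name": name,
            "Arn": alb_arn,  # internal ALB in the customer-managed account
            "HTTPPort": 80,
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "https-only",  # encrypt CloudFront-to-ALB traffic
            "OriginSslProtocols": {"Quantity": 1, "Items": ["TLSv1.2"]},
        }
    }

def distribution_origin(vpc_origin_id, alb_dns_name):
    """One entry in the distribution's Origins list referencing the VPC origin."""
    return {
        "Id": "sap-fiori-vpc-origin",
        "DomainName": alb_dns_name,
        "VpcOriginConfig": {"VpcOriginId": vpc_origin_id},
    }

req = vpc_origin_request(
    "sap-fiori-origin",
    "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/sap-alb/0123456789abcdef",
)
origin = distribution_origin("vo-0123456789abcdef", "internal-sap-alb-123.eu-west-1.elb.amazonaws.com")
```

Using `https-only` between CloudFront and the ALB keeps SAP Fiori traffic encrypted end to end inside AWS.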

 **Optimize performance with Accelerated Site-to-Site VPN connections** 

When you deploy RISE with SAP on AWS for a global roll-out, you can reduce network latency by leveraging [AWS Global Accelerator](https://aws.amazon.com/global-accelerator/)-based [Accelerated Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/accelerated-vpn.html) connections. This complements the foundational Transit Gateway and Direct Connect connectivity to address performance challenges for geographically dispersed users, while ensuring efficient and secure access to mission-critical RISE with SAP systems. It supports both SAP Fiori (HTTPS-based) traffic and SAP GUI (TCP-based) traffic.

 [AWS Global Accelerator](https://aws.amazon.com/global-accelerator/) is a service that creates accelerators to improve the performance of applications for local and global users. It operates as a Layer 4 TCP/UDP proxy, optimizing traffic routing through AWS’s global network infrastructure. It terminates client TCP connections at AWS edge locations and establishes new TCP connections to backend endpoints over AWS’s private backbone. This reduces latency (by up to 75%, varying by location) by bypassing public internet hops and ensures congestion-free routing for globally distributed users.

 [Accelerated Site-to-Site VPN connections](https://docs.aws.amazon.com/vpn/latest/s2svpn/accelerated-vpn.html) combine traditional [AWS Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) with AWS Global Accelerator to optimize traffic routing. Traffic from the on-premises network is routed to the AWS edge location closest to the customer gateway device and then carried over the AWS backbone. This can reduce latency by up to 30%-60% compared to standard VPNs.

![\[Accelerated Site-to-Site VPN\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-accelerated-s2s-vpn.png)
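Acceleration is enabled when the VPN connection is created. The sketch below builds the request parameters that would be passed to `ec2.create_vpn_connection` in boto3; the gateway IDs are placeholders, and accelerated VPNs must attach to a transit gateway rather than a virtual private gateway.

```python
# Placeholder gateway IDs; replace with your own values.

def accelerated_vpn_request(customer_gateway_id, transit_gateway_id):
    """Parameters for ec2.create_vpn_connection (boto3) with acceleration enabled."""
    return {
        "Type": "ipsec.1",
        "CustomerGatewayId": customer_gateway_id,
        "TransitGatewayId": transit_gateway_id,
        # EnableAcceleration routes the VPN tunnels through Global Accelerator
        # edge locations instead of the public internet.
        "Options": {"EnableAcceleration": True},
    }

req = accelerated_vpn_request("cgw-0123456789abcdef0", "tgw-0123456789abcdef0")
```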


 **Enhancing observability of RISE with SAP using AWS Internet Monitor** 

 [AWS Internet Monitor](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-InternetMonitor.html) continuously analyzes internet traffic between end users and AWS-hosted applications, detecting network anomalies that may impact RISE with SAP performance. It provides insights into issues like increased latency, packet loss, or regional connectivity disruptions, allowing organizations to proactively address potential outages before they affect SAP workloads.

RISE with SAP relies on stable and predictable network performance. AWS Internet Monitor helps by:
+ Identifying ISP or regional network disruptions that impact SAP response times.
+ Providing early warnings and actionable recommendations to mitigate network-related service degradation.
+ Distinguishing between AWS infrastructure issues and external internet disruptions, streamlining troubleshooting.
+ Improving observability of internet routing, which is dynamic and lacks predictable service-level agreements (SLAs).
+ Proactively managing external ISPs and transit providers, which may introduce unpredictable latency, packet loss, and congestion.

To implement this, see [Getting started with Internet Monitor](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-IM-get-started.html).
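A monitor covers the resources that front your SAP entry points, such as a CloudFront distribution or a VPC. The sketch below builds the request parameters that would be passed to `internetmonitor.create_monitor` in boto3; the monitor name and ARN are placeholders.

```python
# Placeholder names and ARNs; replace with your own values.

def monitor_request(monitor_name, resource_arns):
    """Parameters for internetmonitor.create_monitor (boto3)."""
    return {
        "MonitorName": monitor_name,
        # Resources to watch, e.g. the CloudFront distribution or VPC that
        # fronts your SAP systems.
        "Resources": resource_arns,
        "TrafficPercentageToMonitor": 100,  # sample all traffic; lower to reduce cost
    }

req = monitor_request(
    "rise-sap-monitor",
    ["arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"],
)
```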

# Application integration
<a name="application-integration"></a>

Deploy [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) to extract data out of SAP S/4HANA via HTTP APIs. API Gateway can consume data from IDoc, BAPI, and RFC interfaces, which must be translated to web service calls. For more information, see [AWS blogs](https://aws.amazon.com/blogs/awsforsap/category/application-services/amazon-api-gateway-application-services/). The following image shows this scenario.

![\[Data flow with Amazon API Gateway\]](http://docs.aws.amazon.com/sap/latest/general/images/data-integration.png)


Data flow

1. The RISE with SAP VPC is connected to your AWS account not managed by SAP, via AWS Transit Gateway.

1. Amazon API Gateway is configured to route authentication to AWS Lambda and Amazon Cognito.

1. Amazon Cognito authenticates the session.

1. Once authenticated, Amazon API Gateway routes the request to AWS Lambda.

1. AWS Lambda stores the data in an Amazon S3 bucket.
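Step 5 can be sketched as a minimal Lambda handler. The bucket name, key scheme, and payload fields are illustrative only; the S3 client is injected so the handler can be exercised locally without AWS access (in Lambda you would pass `boto3.client("s3")`).

```python
import json

def handler(event, s3_client, bucket="sap-extract-bucket"):
    """Store the SAP payload forwarded by API Gateway in an S3 bucket."""
    body = json.loads(event.get("body") or "{}")  # payload extracted from SAP
    key = "sap/{}/{}.json".format(
        body.get("object_type", "unknown"), body.get("id", "no-id")
    )
    s3_client.put_object(Bucket=bucket, Key=key, Body=json.dumps(body).encode())
    return {"statusCode": 200, "body": json.dumps({"stored": key})}

class _StubS3:
    """Minimal stand-in for boto3's S3 client, for local testing only."""
    def __init__(self):
        self.calls = []
    def put_object(self, **kwargs):
        self.calls.append(kwargs)

stub = _StubS3()
resp = handler({"body": json.dumps({"object_type": "invoice", "id": "4711"})}, stub)
```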

# Archiving and Document Management
<a name="document-management"></a>

SAP Data Archiving and Document Management System (DMS) plays a crucial role both before and after migrating to RISE with SAP. It helps businesses effectively manage database growth and optimize overall costs. Before migrating to S/4HANA, archiving reduces migration expenses, minimizes downtime, and lowers risk by decreasing data volume. After moving to S/4HANA, it helps control operational costs and ensures optimal system performance. Additionally, businesses can decommission legacy SAP ECC systems, eliminating unnecessary expenses while retaining access to historical data.

Data archiving addresses structured data. It moves closed business transaction data from a live SAP system to offline or secondary storage. The key aspect of data archiving is to establish a process and strategy that reduces manual effort while ensuring compliance with legal data retention requirements.

Document management addresses unstructured data. The difference between data and document archiving is the type of data that you are archiving. Document archiving relates to unstructured data like invoices, sales orders, and delivery notes, which usually come in formats such as PDF, Word, and Excel. This archiving occurs in real time, and the content can be stored on any content server and linked to the related SAP transactions.

The following sections discuss the available options for data archiving and document management within SAP.

 **Option 1: SAP Content Server running on MaxDB** 

Many customers migrating to RISE with SAP choose to keep their SAP Content Server on AWS until they transition to [SAP BTP Document Management System](https://help.sap.com/docs/document-management-service?locale=en-US) or [OpenText Archiving solution](https://www.sap.com/documents/2015/08/5217be37-427c-0010-82c7-eda71af511fa.html). [SAP Content Server](https://help.sap.com/docs/document-management-service/sap-document-management-service/content-server) is a standalone component designed to store large volumes of electronic documents in various formats. These documents can be securely saved in one or more SAP MaxDB instances or within the file system. Common examples of documents stored in SAP Content Server include sales invoices, purchase orders, salary slips, emails, agreements, and others. This approach ensures seamless document management integrated into SAP business processes while maintaining accessibility and compliance.

![\[SAP Content Server running on MaxDB.scaledwidth=100%\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-content-server-maxdb.png)


Architecture Description

1. The RISE with SAP VPC is connected to an AWS account that you manage, via AWS Transit Gateway.

1.  [SAP Content Server](https://help.sap.com/docs/SLTOOLSET/31c5526375554d1b9f4b339fc9012685/2548be9ba8fd4e8eb55ae6ae53b76782.html?version=CURRENT_VERSION) is set up in your AWS account and [configured](https://help.sap.com/doc/saphelp_nw73ehp1/7.31.19/en-US/4d/002cc784ed5c4be10000000a42189e/content.htm?no_cache=true) to serve as the destination for data archiving.

1. SAP MaxDB is set up in your AWS account and [configured](https://help.sap.com/doc/saphelp_nw73ehp1/7.31.19/en-US/4d/002cc784ed5c4be10000000a42189e/content.htm?no_cache=true) to run on an Amazon EC2 instance.

1. Configure [SAP Content Server high availability](https://aws.amazon.com/blogs/awsforsap/sap-content-server-high-availability-using-amazon-efs-and-suse/) using Amazon EFS. You can consider [EFS Infrequent Access](https://aws.amazon.com/efs/features/infrequent-access/) for documents that are not frequently accessed.

 **Option 2: SAP Content Server on Amazon S3** 

SAP Content Server, together with [Amazon S3](https://aws.amazon.com/s3/), can meet SAP data archiving needs by providing scalable and secure storage for archived data. They offer features like versioning, access control, immutability, and integration with SAP systems. This section is relevant for customers experiencing SAP database growth, seeking performance improvements, aiming to reduce storage costs, or needing to meet compliance requirements for long-term data retention in their SAP environment.

The following image shows an SAP Content Server integrated with Amazon S3.

![\[SAP Content Server running on Amazon S3\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-content-server-s3.png)


Architecture Description

1. The RISE with SAP VPC is connected to an AWS account that you manage, via AWS Transit Gateway.

1.  [SAP Content Server](https://help.sap.com/docs/SLTOOLSET/31c5526375554d1b9f4b339fc9012685/2548be9ba8fd4e8eb55ae6ae53b76782.html?version=CURRENT_VERSION) is set up in your AWS account and [configured](https://help.sap.com/doc/saphelp_nw73ehp1/7.31.19/en-US/4d/002cc784ed5c4be10000000a42189e/content.htm?no_cache=true) to serve as the destination for data archiving.

1. SAP Content Server integrates with [Amazon S3 File Gateway](https://aws.amazon.com/storagegateway/file/s3/), which acts as a storage gateway to facilitate file-based storage. [S3 File Gateway](https://aws.amazon.com/storagegateway/file/s3/) enables mounting [Amazon S3](https://docs.aws.amazon.com/filegateway/latest/files3/GettingStartedAccessFileShare.html) as a Network File System (NFS) share.

1. An Amazon S3 bucket stores the archive files. You can use [S3 Lifecycle configuration](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) to manage the lifecycle of the objects. For enhanced data protection or regulatory compliance, you can implement [retention policies using S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html). You can move files to different S3 storage classes using automated lifecycle management. For more information, see [Using Amazon S3 storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html).

SAP Content Server, in conjunction with Amazon S3, provides a mechanism for transferring archived data to long-term S3 storage such as [Amazon S3 Glacier](https://aws.amazon.com/s3/storage-classes/glacier/). This archived data can then be accessed using SAP’s standard archive read programs.
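The tiering described above can be expressed as an S3 Lifecycle configuration, for example the dict passed to `s3.put_bucket_lifecycle_configuration` in boto3. The prefix and day counts below are illustrative only; align them with your archive access patterns and retention policy.

```python
# Sketch: lifecycle rules that tier SAP archive files to colder storage.
lifecycle = {
    "Rules": [
        {
            "ID": "sap-archive-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": "sap-archive/"},  # placeholder prefix
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"},        # S3 Glacier Flexible Retrieval
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # S3 Glacier Deep Archive
            ],
        }
    ]
}
```

Keep retrieval times in mind: SAP archive read programs that access Glacier-class objects may need the objects restored first.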

However, if you require more extensive integration with SAP, third-party solutions like [Syntax CxLink](https://www.syntax.com/software/cxlink/) or [OpenText](https://www.sap.com/documents/2015/08/5217be37-427c-0010-82c7-eda71af511fa.html) offer additional libraries. These enhance the integration capabilities, providing more advanced functionalities for managing and accessing archived data directly within the SAP environment. For organizations employing SAP Information Lifecycle Management (ILM) to manage data retention and governance, see how [Syntax Cxlink for ILM](https://aws.amazon.com/blogs/apn/syntax-cxlink-for-ilm-simplify-sap-data-lifecycle-management-on-aws/) can enhance your ILM strategy by using Amazon S3 as a secondary storage solution for SAP ILM. This approach leverages the scalability and cost-effectiveness of cloud storage while maintaining the robust data management capabilities of SAP ILM.

 **Option 3: SAP OpenText Archiving in RISE** 

SAP OpenText Archiving enables secure document storage, compliance, and cost-efficient data management for RISE with SAP. It is a cloud-based document management and archiving solution that integrates with SAP to store, retrieve, and manage unstructured content (for example, invoices, contracts, and purchase orders). It ensures compliance with regulatory requirements, reduces the database footprint, and optimizes SAP S/4HANA performance. Within RISE with SAP, OpenText is included as an optional component in the RISE BOM.

 **Option 4: OpenText InfoArchive for RISE** 

OpenText InfoArchive is a modern archive solution and cloud-based service for compliant archiving of both structured and unstructured information that is highly accessible, scalable, and economical. It is a centralized platform that enables flexible storage options for unstructured content, including storage on [Amazon Simple Storage Service (Amazon S3)](https://aws.amazon.com/s3/). InfoArchive Cloud Edition on AWS is offered as [customer-deployed](https://aws.amazon.com/blogs/apn/manage-your-business-complete-data-with-opentext-infoarchive-and-aws/) or as a managed solution run by OpenText on AWS.

OpenText InfoArchive is a general-purpose archiving platform designed to retire legacy SAP applications and store structured and unstructured data from multiple systems. Beyond SAP S/4HANA, it supports SAP ECC, CRM, HR, and industry-specific systems (healthcare, banking, and others). OpenText InfoArchive can be used to archive inactive data and decommission retired SAP legacy applications, and it comes with pre-built SAP views.

Key Features

1. Application Decommissioning – Retires legacy applications while keeping data accessible.

1. Structured and Unstructured Data Archiving – Stores documents, emails, records, and databases.

1. Multi-System Support – Works with SAP, Oracle, Salesforce, Microsoft, and custom applications.

1. Advanced Search & Analytics – Uses AI/ML for insights into archived data.

1. Regulatory Compliance – HIPAA, GDPR, SEC 17a-4, etc.

You can deploy an OpenText InfoArchive server integrated with Amazon S3 for SAP data decommissioning. The following image shows this scenario with AWS services. OpenText InfoArchive on AWS is deployed on [Amazon Elastic Kubernetes Service](https://aws.amazon.com/eks/) (Amazon EKS) for hosting its web application, OpenText Directory Services for authentication and authorization, and the InfoArchive server. Customers can also procure it through [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-srfvrykqva2zo?sr=0-1&ref_=beagle&applicationId=AWSMPContessa).

![\[OpenText infoArchive for RISE\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-opentext-infoarchive.png)


Architecture Description

1. RISE with SAP VPC is connected to your AWS account via AWS Transit Gateway.

1. OpenText InfoArchive is deployed on [Amazon Elastic Kubernetes Service (Amazon EKS)](https://aws.amazon.com/eks/) in your AWS account and configured to serve as the destination for data archiving.

1. OpenText InfoArchive integrates with [Amazon S3 File Gateway](https://aws.amazon.com/storagegateway/file/s3/), which acts as a storage gateway to facilitate file-based storage. [S3 File Gateway](https://aws.amazon.com/storagegateway/file/s3/) enables mounting [Amazon S3](https://docs.aws.amazon.com/filegateway/latest/files3/GettingStartedAccessFileShare.html) as a Network File System (NFS) share.

1. An Amazon S3 bucket stores the necessary archive files. You can use [S3 Lifecycle configuration](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) to manage lifecycle of the objects. For enhanced data protection or regulatory compliance, you can implement [retention policies using S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html).

1. Older documents can be moved to [Amazon S3 Glacier](https://aws.amazon.com/s3/storage-classes/glacier/) for long-term archival.

1. You can move files to different Amazon S3 storage classes using automated lifecycle management. For more information, see [Using Amazon S3 storage classes](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html).

# Development and extension
<a name="development-extension"></a>

## AWS SDK for SAP ABAP
<a name="sdk-for-sap-abap"></a>

Deploy the AWS SDK for SAP ABAP in the RISE with SAP VPC to access AWS services using the ABAP language. For more information, see [What is AWS SDK for SAP ABAP?](https://docs.aws.amazon.com/sdk-for-sapabap/latest/developer-guide/home.html) 

You can authenticate the AWS SDK for SAP ABAP with an IAM access key. The following image shows this scenario.

![\[Data flow for SAP ABAP SDK\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-abap.png)


Data flow

1. The AWS SDK for SAP ABAP is installed via a set of transports in SAP S/4HANA within the RISE with SAP VPC.

1. SAP S/4HANA is configured with an IAM access key for authenticating access to AWS services. For more information, see [Managing access keys for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).

1. Access to AWS services with the AWS SDK for SAP ABAP is now established.
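Because the SDK authenticates with a long-lived access key, scope the key's IAM user to the minimum permissions your ABAP programs need. The policy below is an illustrative sketch, not a recommended baseline: the bucket name is a placeholder, and the actions should match your actual use case.

```python
# Illustrative least-privilege IAM policy for the IAM user whose access key
# the SDK uses. Grant only the actions your ABAP programs actually call.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AbapSdkS3Access",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",  # placeholder bucket
        }
    ],
}
```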

## Compatibility packs and alternatives
<a name="compatibility-packs-and-alternatives"></a>

Compatibility packs (CP) are temporary use rights to classic functionality within S/4HANA, introduced in 2016. They are part of every SAP S/4HANA contract, both on-premises and private cloud. This was done with the goal of ensuring a smooth transition for SAP installed-base customers and gaining time to finalize the new simplified application architecture.

During the transition from SAP Business Suite to S/4HANA, business functions moved through defined transition paths. You can find out more from the [presentation by Michael Deller (SAP) and Roland Hamm (SAP)](https://assets.dm.ux.sap.com/webinars/sap-user-groups-k4u/pdfs/230927_call_to_action_for_saps4hana_customers_compatibility_packs.pdf).

In [SAP Note 2269324](https://me.sap.com/notes/2269324), SAP defines categories to help organizations plan their strategy for compatibility packs. These categories guide decisions for transitioning from SAP Business Suite to SAP S/4HANA.
+ Alternative Exists
+ Alternative Exists with Roadmap - Alternative exists providing core functionality; comprehensive coverage is on roadmap
+ Alternative Planned - Planning of development scope and timeline is work in progress
+ No Alternative Planned - No intention or plan to provide an alternative beyond 2025
+ Clarification - Clarification of strategy in progress

 **How can AWS help customers find alternatives?** 

Organizations should evaluate their current SAP landscape and plan their transition strategy, considering both SAP compatibility pack expiration dates and available alternatives. When compatibility packs lack alternatives, you can leverage combined AWS and SAP services. This approach aligns with the [AWS refactor and re-architect](https://docs.aws.amazon.com/prescriptive-guidance/latest/large-migration-guide/migration-strategies.html#refactor) migration strategy, which focuses on reimagining applications and processes. Here are the details:
+  [SAP and AWS joint reference architecture](https://community.sap.com/t5/technology-blogs-by-sap/sap-and-aws-joint-reference-architectures-to-maximize-utilization-and/ba-p/13549809) was developed to address common questions from joint customers and partners on how to utilize SAP BTP and/or AWS services for different business solution scenarios. Refer also to this [blog](https://aws.amazon.com/blogs/awsforsap/amplify-the-value-of-your-sap-investment-with-aws-and-sap-joint-reference-architecture/) for more details.
+  [The AWS SDK for SAP ABAP](https://aws.amazon.com/sdk-for-sap-abap/) simplifies the use of 200 plus AWS services alongside SAP applications with a client library of modules that are consistent and familiar to ABAP developers.
+  [SAP Products and AWS Partner Solutions](https://aws.amazon.com/marketplace/search/results?searchTerms=SAP) on AWS Marketplace
+  [Contact our SAP on AWS expert team](https://aws.amazon.com/sap/) to help guide you if needed.

For example, “SAP Tax Classification and Reporting” has been tagged as “No Alternative Planned” in [SAP Note 2269324](https://me.sap.com/notes/2269324) (refer to S4HANA CompScope – Way Forward – Info – 06032025.xlsx). In this case, you can explore alternatives such as [Thomson Reuters ONESOURCE Indirect Tax Determination](https://aws.amazon.com/marketplace/seller-profile?id=14aa4071-a059-43f9-a854-968597951447) on AWS Marketplace.

# Security Extension
<a name="security-extension"></a>

## mTLS Authentication
<a name="mtls-authentication"></a>

Mutual Transport Layer Security (mTLS) Authentication establishes a secure, two-way encrypted connection between client and server. Unlike standard TLS, where only the server provides a certificate, mTLS requires both parties to present digital certificates.

The mTLS authentication process works in four steps:

1. The client requests a connection to the server

1. The server presents its certificate

1. The client verifies the server’s certificate

1. The client presents its certificate for server verification and authentication
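The four steps can be illustrated with Python's standard `ssl` module. This is a minimal sketch; the certificate file paths in the comments are placeholders.

```python
import ssl

# Server side: a TLS context that demands a client certificate (mTLS).
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # step 4: the client must present a certificate
# server_ctx.load_cert_chain("server.crt", "server.key")   # step 2: server certificate
# server_ctx.load_verify_locations("client-ca.crt")        # CA bundle to verify clients

# Client side: the default context already verifies the server (step 3).
client_ctx = ssl.create_default_context()
# client_ctx.load_cert_chain("client.crt", "client.key")   # step 4: client certificate
```

The difference from standard TLS is the server's `CERT_REQUIRED` setting: without it, only the server's certificate (steps 2-3) would be checked.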

 **Why mTLS Authentication for SAP?** 

Implementing mutual TLS (mTLS) authentication for SAP systems enhances security, improves user experience, and reduces operational overhead. It modernizes the user authentication infrastructure to support digital transformation while ensuring compliance with security standards. mTLS addresses the following security requirements in SAP environments:

1. Enhanced Security: mTLS provides two-way authentication, ensuring both the client and server verify each other’s identity. This significantly reduces the risk of unauthorized access and man-in-the-middle attacks.

1. Seamless User Experience with Single Sign On (SSO): mTLS can be integrated with SSO solutions, allowing users to access multiple SAP applications and services without repeatedly entering credentials. This creates a smoother, more efficient user experience across the SAP ecosystem.

1. Automated Certificate Rotation: mTLS allows for automated rotation of certificates, enhancing security by regularly updating authentication credentials without manual intervention. This reduces the risk of using expired or compromised certificates and minimizes administrative overhead.

1. Principal Propagation for Interfaces: mTLS enables secure principal propagation across different SAP interfaces and systems. This eliminates the need for generic and privileged accounts (like an SAP user with SAP_ALL authorization) for system-to-system communication, significantly improving security and auditability.

1. Scalability and Performance: mTLS can be implemented at the network level, offloading authentication processes from application servers. This can lead to improved performance and scalability of SAP systems.

1. Support for Zero Trust Architecture: mTLS aligns well with zero trust security models, where trust is never assumed and always verified.

 **mTLS Client Authentication with Application Load Balancer** 

 [Application Load Balancer (ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) supports mTLS authentication. It offers two modes: verify mode and passthrough mode.

 **Prerequisite** 

To ensure seamless communication, all SSL (Secure Sockets Layer) or TLS certificates used across the infrastructure, including those at the ALB, SAP Web Dispatcher, and S/4HANA systems, should originate from a single, trusted root certificate authority. This eases the implementation and maintenance of these certificates.

 **mTLS Architecture Diagram** 

The diagram below describes a basic SAP on AWS architecture that is adapted to align with the RISE with SAP SKU offering.

![\[mTLS Architecture Diagram\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-mtls-authentication.png)


 **mTLS Verify Mode** 

To enable mTLS verify mode, create a trust store containing a CA certificate bundle. This can be accomplished using [AWS Certificate Manager (ACM)](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html), AWS Private CA, or by importing your own certificates. Manage revoked certificates using Certificate Revocation Lists (CRLs) stored in Amazon S3 and linked to the trust store.

ALB handles client certificate verification against the trust store, effectively blocking unauthorized requests. This approach offloads mTLS processing from backend targets, improving overall system efficiency. ALB imports CRLs from S3 and performs checks without repeated S3 fetches, minimizing latency.

Beyond client authentication, ALB transmits client certificate metadata to the backend SAP Web Dispatcher through [HTTP headers](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/mutual-authentication.html) (for example, X-Amzn-Mtls-Clientcert-Leaf). This allows additional logic to be implemented on backend targets based on certificate details, and meets the requirement for SAP servers to preserve the original Host header information.

This enables the server to process client certificate metadata consistently, even when it originates from non-SAP sources like an AWS load balancer terminating the SSL connection. If you are implementing end-to-end encryption through ALB – SAP Web Dispatcher – SAP servers, you must configure SAP Web Dispatcher profile parameters such as icm/HTTPS/client_certificate_header_name. For more details, see [the SAP documentation](https://help.sap.com/docs/ABAP_PLATFORM_NEW/683d6a1797a34730a6e005d1e8de6f22/48477e7fe9d771b9e10000000a421937.html).
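Enabling verify mode involves two calls: creating the trust store and attaching it to the HTTPS listener. The sketch below builds the corresponding boto3-style request parameters for `elbv2.create_trust_store` and `elbv2.modify_listener`; all names and ARNs are placeholders.

```python
# Placeholder names and ARNs; replace with your own values.

def trust_store_request(name, bucket, key):
    """Parameters for elbv2.create_trust_store; the CA bundle lives in S3."""
    return {
        "Name": name,
        "CaCertificatesBundleS3Bucket": bucket,
        "CaCertificatesBundleS3Key": key,
    }

def listener_mtls_request(listener_arn, trust_store_arn):
    """Parameters for elbv2.modify_listener enabling mTLS verify mode."""
    return {
        "ListenerArn": listener_arn,
        "MutualAuthentication": {
            "Mode": "verify",  # ALB validates client certs against the trust store
            "TrustStoreArn": trust_store_arn,
            "IgnoreClientCertificateExpiry": False,
        },
    }

store = trust_store_request("sap-client-ca", "amzn-s3-demo-bucket", "ca/client-ca-bundle.pem")
listener = listener_mtls_request(
    "arn:aws:elasticloadbalancing:eu-west-1:111122223333:listener/app/sap-alb/abc/def",
    "arn:aws:elasticloadbalancing:eu-west-1:111122223333:truststore/sap-client-ca/0123456789abcdef",
)
```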

 **mTLS Passthrough Mode** 

In mTLS passthrough mode, the ALB forwards the client’s entire certificate chain to backend targets via an HTTP header named X-Amzn-Mtls-Clientcert. The chain, including the leaf certificate, is sent in URL-encoded PEM format with +, =, and / as safe characters. Consider the following while using mTLS passthrough mode:
+ ALB adds no headers if client certificates are absent; backends must handle this.
+ Backend targets are responsible for client authentication and error handling.
+ For HTTPS listeners, ALB terminates client-ALB TLS and initiates new ALB-backend TLS using target-installed certificates.
+ ALB’s TLS termination allows use of any ALB routing algorithm for load balancing.
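A backend in passthrough mode has to decode that header itself. The sketch below shows one way to do so with the Python standard library; the header value is an illustrative placeholder, not a real certificate.

```python
from urllib.parse import unquote

def decode_client_cert_header(header_value):
    """Return the list of PEM certificates carried in the header, or [] if absent."""
    if not header_value:  # ALB adds no header when the client sent no certificate
        return []
    pem = unquote(header_value)  # undo URL-encoding (e.g. %0A -> newline)
    certs = []
    for block in pem.split("-----END CERTIFICATE-----"):
        if "-----BEGIN CERTIFICATE-----" in block:
            certs.append(block.strip() + "\n-----END CERTIFICATE-----")
    return certs

# Illustrative header value; the base64 body is a placeholder, not a real cert.
encoded = ("-----BEGIN%20CERTIFICATE-----%0A"
           "MIIBplaceholder%0A"
           "-----END%20CERTIFICATE-----%0A")
chain = decode_client_cert_header(encoded)
```

Note that the empty-header case must be handled explicitly, matching the first consideration above.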

 **NLB Passthrough** 

When you have stringent security compliance rules requiring server-side termination of client TLS connections, you can utilize a [Network Load Balancer (NLB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html).

Key points to note:

1. NLB operates at the transport layer (Layer 4 of the OSI model).

1. It provides low-latency load balancing for TCP/UDP connections.

1. NLB allows the backend servers to handle TLS termination, which can be crucial for certain security compliance scenarios.

This approach ensures that sensitive decryption processes occur on your controlled server environment, potentially meeting specific security mandates while maintaining efficient traffic distribution.

 **Comparison of mTLS verify mode, mTLS passthrough mode, and NLB passthrough** 


| Considerations | ALB with mTLS Verify mode | ALB with mTLS passthrough mode | NLB | 
| --- | --- | --- | --- | 
|  OSI Layer  |  Layer 7 (Application)  |  Layer 7 (Application)  |  Layer 4 (Transport)  | 
|  Integration with AWS WAF  |  Supported  |  Supported  |  Not Supported  | 
|  Client Authentication  |  Done by ALB (AWS managed)  |  Done by backend (Customer managed)  |  Done by backend (Customer managed)  | 
|  Client SSL/TLS Termination  |  At ALB (AWS managed)  |  At ALB (AWS managed)  |  At backend target (Customer managed)  | 
|  Header Based Routing  |  Supported  |  Supported  |  Not Supported  | 
|  Trust Store  |  Required at ALB  |  Not required at ALB  |  Not required at NLB  | 
|  Certification Revocation List  |  Managed at ALB  |  Managed by backend (if required)  |  Managed by backend (if required)  | 
|  Backend Processing Load  |  Lower  |  Lower  |  Higher  | 
|  Error Handling  |  Managed by ALB  |  Managed by backend  |  Managed by backend  | 

Note: RISE with SAP on AWS supports ALB with mTLS Verify Mode.

## Zero Trust Access
<a name="zero-trust-access"></a>

 AWS Verified Access is a Zero Trust security solution that replaces traditional VPNs for corporate application security. It validates each access request by checking user identity, device health, and location. The service integrates with Okta, Azure Active Directory, and IAM Identity Center while providing detailed access logging and monitoring. See [AWS Verified Access for more information](https://docs.aws.amazon.com/verified-access/latest/ug/what-is-verified-access.html).

 **Key Features and Benefits of AWS Verified Access for SAP** 

This solution secures SAP landscapes through Zero Trust security, managing both SAP GUI and web-based (HTTPS) access through a unified framework. It encrypts SAP GUI TCP connections and HTTPS access for Fiori applications, eliminating traditional VPNs while maintaining security standards.

Users can access RISE with SAP systems faster (before VPN connectivity is set up). It also allows you to grant secure access to remote users and external consultants who do not have VPN access to your corporate network.

1. Identity-Centric Security

   Verified Access integrates with existing identity providers (IdPs), such as Microsoft Entra ID (formerly Azure AD), Okta, Ping, and others. It provides real-time user authentication and authorization, with support for SAML 2.0 and AWS IAM Identity Center.

1. Contextual Access Control

   Verified Access can enforce device security posture assessment, location-based access policies, role-based access management, and dynamic policy evaluation.

1. Enhanced Performance

   Verified Access provides direct, optimized connection paths to SAP systems, reducing network latency, improving performance, and providing a more consistent user experience for SAP users.

1. Simplified Administration

   Verified Access provides centralized policy management through the [AWS Cedar Policy Language](https://docs.aws.amazon.com/prescriptive-guidance/latest/saas-multitenant-api-access-authorization/cedar.html) and its authorization engine. It provides automated compliance reporting, real-time access monitoring, and reduced infrastructure maintenance.

 **Implementation Guide** 

 **Prerequisites** 
+ AWS IAM Identity Center enabled in your preferred AWS Region. For more information, see [Enable AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/get-set-up-for-idc.html).
+ A [security group](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) to allow network access to SAP applications.
+ An SAP application running behind an internal AWS Elastic Load Balancer, with your security group associated with the load balancer. You can use a Network Load Balancer for both SAP GUI and SAP Fiori access, or an Application Load Balancer for SAP Fiori access only.
+ A public TLS certificate in [AWS Certificate Manager](https://aws.amazon.com/certificate-manager/) when configuring AWS Verified Access for HTTPS-based access (for example, SAP Fiori). Use an RSA certificate with a key length of 1,024 or 2,048 bits.
+ A public hosted domain and the permissions required to update DNS records for the domain (for example, with Amazon Route 53).
+ An IAM policy with the permissions required to create an AWS Verified Access instance. For more information, see [Policy for creating Verified Access instances](https://docs.aws.amazon.com/verified-access/latest/ug/security_iam_id-based-policy-examples.html#security_iam_id-based-policy-examples-create-instance).
+ Set the system environment variable **SAP_IPV6_ACTIVE=1** as per [SAP note 1346768](http://me.sap.com/notes/1346768) (requires an SAP S-user ID to access). This is needed when accessing the SAP application through a Verified Access endpoint from SAP GUI.

 **How to Implement AWS Verified Access for SAP** 

1. Create a Verified Access Trust Provider. After IAM Identity Center is enabled on your AWS account, you can use the following [procedure](https://docs.aws.amazon.com/verified-access/latest/ug/user-trust.html#identity-center) to set up IAM Identity Center as your trust provider for Verified Access.

1. Create a Verified Access instance. You use a Verified Access instance to organize your trust providers and Verified Access groups. Use the following [procedures](https://docs.aws.amazon.com/verified-access/latest/ug/create-verified-access-instance.html) to create a Verified Access instance, and then attach or detach a trust provider from Verified Access.

1. Create a Verified Access group. You use Verified Access groups to organize endpoints by their security requirements. When you create a Verified Access endpoint, you associate the endpoint with a group. Use the following [procedure](https://docs.aws.amazon.com/verified-access/latest/ug/create-verified-access-group.html) to create a Verified Access group.

1. Create a load balancer endpoint for Verified Access. A Verified Access endpoint represents an application. Each endpoint is associated with a Verified Access group and inherits the access policy of the group. Use the following [procedure](https://docs.aws.amazon.com/verified-access/latest/ug/create-load-balancer-endpoint.html) to create a load balancer endpoint for Verified Access for your SAP application.

1. Configure DNS for the Verified Access endpoint. For this step, you map your SAP application’s domain name (for example, www.myapp.example.com) to the domain name of your Verified Access endpoint. To complete the DNS mapping, create a Canonical Name Record (CNAME) with your DNS provider.

1. Add a Verified Access group-level access policy. AWS Verified Access policies allow you to define rules for accessing your SAP applications hosted on AWS. Refer to the following sample [statements](https://docs.aws.amazon.com/verified-access/latest/ug/auth-policies-policy-statement-struct.html) to derive a policy for your application according to your requirements.

1. Test the connectivity to your application. For HTTPS-based access such as SAP Fiori, you can now test connectivity by entering your SAP application’s domain name into your web browser.
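
The group-level access policy above is written in the Cedar language. The following is an illustrative sketch that permits access only to users whose verified email belongs to a corporate domain; it assumes the trust provider was attached with the policy reference name `idc` and uses `example.com` as a placeholder domain, so adapt the context keys and values to your own configuration.

```
permit(principal, action, resource)
when {
    context.idc.user.email.verified == true &&
    context.idc.user.email.address like "*@example.com"
};
```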
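
The DNS mapping step above can be sketched programmatically. The following minimal sketch builds the `ChangeBatch` input that Route 53's `ChangeResourceRecordSets` API expects for the CNAME record; the domain names, TTL, and hosted zone ID are illustrative placeholders, and the boto3 call is shown only in a comment.

```python
def cname_change_batch(app_domain: str, endpoint_domain: str) -> dict:
    """Build a Route 53 ChangeBatch that maps the SAP application's
    domain name to the Verified Access endpoint's domain name.
    Both domain names below are placeholders for illustration."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": app_domain,
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": endpoint_domain}],
                },
            }
        ]
    }

# Replace the second argument with the endpoint domain shown in the
# Verified Access console, then pass the result to boto3, for example:
# route53.change_resource_record_sets(HostedZoneId="Z...", ChangeBatch=batch)
batch = cname_change_batch("www.myapp.example.com", "endpoint.example.com")
```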

![\[Verified Access for RISE\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-verified-access.png)


The preceding diagram describes how AWS Verified Access is deployed and integrated with RISE with SAP.

# Artificial Intelligence
<a name="artificial-intelligence"></a>

 **Generative AI for SAP on AWS** 

Generative AI refers to intelligent systems capable of creating new content like text, images, audio, or code based on the data they have been trained on. These systems employ machine learning techniques, particularly deep learning and neural networks, to identify patterns and relationships within the training data, and then generate novel outputs that resemble the learned information.

As organizations embrace generative AI for their employees and customers, cybersecurity practitioners must rapidly assess the risks, governance, and controls associated with this evolving technology. As security leaders working with the largest, most complex customers at [Amazon Web Services (AWS)](https://aws.amazon.com/), we’re regularly consulted on trends, best practices, and the rapidly evolving landscape of generative AI and the associated security and privacy implications. Generative AI solutions cover multiple use cases that affect your security scope. To better understand the scope and corresponding key security disciplines, see the AWS blog post [Securing generative AI: An introduction to the Generative AI Security Scoping Matrix](https://aws.amazon.com/blogs/security/securing-generative-ai-an-introduction-to-the-generative-ai-security-scoping-matrix/).

SAP and AWS have co-innovated services that help customers combine SAP’s AI innovations and enterprise expertise with Amazon’s cutting-edge AI capabilities and technological solutions, unlocking significant opportunities for business enhancement. RISE customers can accelerate their AI adoption through [SAP Business Technology Platform (BTP)](https://www.sap.com/products/technology-platform.html) AI services like the Generative AI Hub and AWS enterprise generative AI services, including [Amazon Bedrock](https://aws.amazon.com/bedrock/) and [Amazon Q](https://aws.amazon.com/q/), enabling secure, scalable AI solutions.

 **SAP Data Integration and Management on AWS** 

Data serves as the cornerstone for the success of any generative AI solution. The quality, quantity, and diversity of data directly influence the performance and efficacy of AI models. We recommend reviewing our [Guidance for SAP Data Integration and Management on AWS](https://aws.amazon.com/solutions/guidance/sap-data-integration-and-management-on-aws/), which provides the essential data foundation for building AI solutions. It shows how to integrate data from SAP ERP source systems with AWS in real-time or batch mode, with change data capture, using AWS services, SAP products, and AWS Partner Solutions. The guidance includes an overview reference architecture for ingesting SAP data into AWS, in addition to detailed architectural patterns that complement SAP-supported mechanisms.

 **Ways to implement Generative AI Solutions for RISE on AWS** 

This architectural guidance helps you build advanced AI solutions. It shows you how to effectively combine RISE with SAP and AWS's AI services to create powerful and innovative systems.

 **Amazon Q Business** 

RISE customers can leverage [Amazon Q Business](https://aws.amazon.com/q/business/) to answer questions, provide summaries, generate content, and complete tasks based on enterprise data. End users receive immediate, permission-aware responses from enterprise data sources, with citations. Amazon Q Business is a fully managed, generative AI-powered assistant with 40+ pre-built connectors to various enterprise applications and data sources.

Customers who choose to break data silos by creating data warehouse or data lake solutions can use SAP and other enterprise data as a source for Q Business to:
+ Create a unified search experience across systems and data, thereby extracting key insights
+ Create and share lightweight applications, either with select users or by adding them to an organization’s application library
+ Perform actions across popular business applications and platforms
+ Create and automate complex business workflows

![\[Amazon Q for Business\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-amazon-q-business.png)


The preceding diagram illustrates a design framework for Q Business-based search for RISE customers. It shows how SAP data can be extracted using AWS services and how, with the pre-built connectors from Q Business, organizations can create a unified search experience.

Solution Flow:

1. Establish connectivity with the RISE environment by creating an AWS Glue connection for SAP OData.

1. Ingest relevant SAP data by creating ETL jobs.

1. Utilize pre-built connectors to various data sources and applications to connect with Q Business. Ingest the relevant content while inheriting the existing identities, roles, and permissions.

1. End users can interact in natural language to derive business insights from data across multiple applications.
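
As a rough illustration of the ingestion step above, SAP OData sources are typically read in pages. The sketch below builds the paged OData query URLs that an ETL job (or any extractor) might request using `$top`/`$skip`; the host, service path, and entity set name are hypothetical examples, not values from your system.

```python
from urllib.parse import urlencode


def odata_page_urls(service_url: str, entity_set: str, page_size: int, total: int):
    """Yield OData query URLs that page through an entity set with
    $top/$skip. The service URL and entity set used below are
    illustrative placeholders only."""
    for skip in range(0, total, page_size):
        query = urlencode({"$top": page_size, "$skip": skip, "$format": "json"})
        yield f"{service_url}/{entity_set}?{query}"


# Example: 2,500 records read in pages of 1,000 -> three request URLs
urls = list(
    odata_page_urls(
        "https://my-rise-host.example.com/sap/opu/odata/sap/API_SALES_ORDER_SRV",
        "A_SalesOrder",
        page_size=1000,
        total=2500,
    )
)
```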

 **Amazon QuickSight** 

 [Amazon QuickSight](https://aws.amazon.com/quicksuite/quicksight/) revolutionizes SAP data analysis through its generative business intelligence capabilities, empowering business users with intuitive self-service reporting tools. Using natural language prompts, RISE customers can effortlessly create sophisticated visual dashboards and data narratives without requiring SQL or programming expertise.

This democratization of data analysis dramatically reduces report generation time from days to hours, eliminating dependencies on specialized ABAP developers and analytics teams. The system’s AI-driven automation intelligently generates contextual titles, organized sections, coherent story flows, and actionable insights with specific recommendations. For RISE customers, this translates into accelerated decision-making, with deeper, more accessible insights from their enterprise data.

![\[Amazon Quick Sight\]](http://docs.aws.amazon.com/sap/latest/general/images/rise-amazon-q-in-quicksight.png)


The preceding diagram illustrates a framework for using Amazon QuickSight with SAP data.

Solution Flow:

1. Run an SAP report to process business logic and upload data to [Amazon S3](https://aws.amazon.com/s3/).

1. With the [AWS SDK for SAP ABAP](https://aws.amazon.com/sdk-for-sap-abap/), create an [Amazon Athena](https://aws.amazon.com/athena/) query linked to the SAP report data on S3.

1. Create a QuickSight dataset and topic based on the Athena query.

1. Using Q in QuickSight, you can now interact with the data generated by SAP reports in natural language to get insights, build dashboards, and generate stories.
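
To make the Athena step above concrete, the following sketch generates the DDL for an external table over CSV data that an SAP report has uploaded to S3, which can then be submitted to Athena and referenced by a QuickSight dataset. The table name, columns, and bucket path are illustrative placeholders, not values from any real system.

```python
def athena_external_table_ddl(table: str, columns: list[tuple[str, str]], s3_location: str) -> str:
    """Generate CREATE EXTERNAL TABLE DDL for CSV data on S3.
    All names and the S3 path are placeholders for illustration."""
    cols = ",\n  ".join(f"{name} {col_type}" for name, col_type in columns)
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {table} (\n  {cols}\n)\n"
        "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','\n"
        f"LOCATION '{s3_location}'\n"
        "TBLPROPERTIES ('skip.header.line.count'='1');"
    )


# Hypothetical table over data an SAP report wrote to S3
ddl = athena_external_table_ddl(
    "sap_sales_report",
    [("order_id", "string"), ("net_amount", "double"), ("order_date", "date")],
    "s3://my-sap-report-bucket/sales/",
)
```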