Operating System Requirements

This section outlines the required operating system configurations for SUSE Linux Enterprise Server for SAP (SLES for SAP) cluster nodes. Note that this is not a comprehensive list of configuration requirements for running SAP HANA on AWS, but rather focuses specifically on cluster management prerequisites.

Consider using configuration management tools or automated deployment scripts to ensure accurate and repeatable setup across your cluster infrastructure.

Important

The following configurations must be performed on all cluster nodes. Ensure consistency across nodes to prevent cluster issues.

Root Access

Verify root access on both cluster nodes. Most of the setup commands in this document are performed as the root user. Assume that commands should be run as root unless explicitly stated otherwise.
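For example, you can switch to the root user and confirm the effective user before proceeding:

$ sudo -i
# whoami
root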

Install Missing Operating System Packages

This applies to all cluster nodes. You must install any missing operating system packages.

The following packages and their dependencies are required for the pacemaker setup. Depending on your baseline image (for example, SLES for SAP), some of these packages may already be installed.

| Package | Description | Category | Required | Configuration Pattern |
|---------|-------------|----------|----------|-----------------------|
| chrony | Time Synchronization | System Support | Mandatory | All |
| rsyslog | System Logging | System Support | Mandatory | All |
| pacemaker | Cluster Resource Manager | Core Cluster | Mandatory | All |
| corosync | Cluster Communication Engine | Core Cluster | Mandatory | All |
| cluster-glue | Cluster Infrastructure | Core Cluster | Mandatory | All |
| crmsh | Cluster Management CLI | Core Cluster | Mandatory | All |
| resource-agents | Basic Resource Agents | Core Cluster | Mandatory | All |
| fence-agents | Fencing Capabilities | Core Cluster | Mandatory | All |
| SAPHanaSR-angi | New Generation HANA System Replication Agent | SAP HANA HA | Mandatory* | SAPHANAScaleUp-SAPANGI, SAPHANAScaleOut-SAPANGI |
| SAPHanaSR | Previous Generation Scale-Up SR Agent | SAP HANA HA | Mandatory* | SAPHANAScaleUp-Classic |
| SAPHanaSR-doc | Documentation for Scale-Up Configuration | SAP HANA HA | Mandatory* | SAPHANAScaleUp-Classic |
| SAPHanaSR-ScaleOut | Previous Generation Scale-Out SR Agent | SAP HANA HA | Mandatory* | SAPHANAScaleOut-Classic |
| SAPHanaSR-ScaleOut-doc | Documentation for Scale-Out Configuration | SAP HANA HA | Mandatory* | SAPHANAScaleOut-Classic |
| supportutils | System Information Gathering | Support Tools | Mandatory | All |
| sysstat | Performance Monitoring Tools | Support Tools | Mandatory | All |
| zypper-lifecycle-plugin | Software Lifecycle Management | Support Tools | Recommended | All |
| supportutils-plugin-ha-sap | HA/SAP Support Data Collection | Support Tools | Recommended | All |
| supportutils-plugin-suse-public-cloud | Cloud Support Data Collection | Support Tools | Recommended | All |
| dstat | System Resource Statistics | Monitoring | Recommended | All |
| iotop | I/O Monitoring | Monitoring | Recommended | All |

Note

Refer to Vendor Support of Deployment Types for more information on Configuration Patterns. Mandatory* indicates that this package is mandatory based on the Configuration Pattern.

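The following script checks whether each of the packages above is installed and offers to install any that are missing. Before running it, select the packages assignment that matches your deployment pattern.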
#!/bin/bash
# Mandatory core packages for SAP HANA HA on AWS
mandatory_packages="corosync pacemaker cluster-glue crmsh rsyslog chrony resource-agents fence-agents"

# HANA SR packages - New Generation
hanaSR_angi="SAPHanaSR-angi"  # New generation package for both scale-up and scale-out

# HANA SR packages - Previous Generation (still in common use)
hanaSR_scaleup="SAPHanaSR SAPHanaSR-doc"                     # For scale-up deployments
hanaSR_scaleout="SAPHanaSR-ScaleOut SAPHanaSR-ScaleOut-doc"  # For scale-out deployments

# Recommended monitoring and support packages
support_packages="supportutils supportutils-plugin-ha-sap supportutils-plugin-suse-public-cloud sysstat dstat iotop zypper-lifecycle-plugin"

# Note: Choose either hanaSR_angi OR one of hanaSR_scaleup/hanaSR_scaleout
# Uncomment the appropriate line based on your deployment:
packages="${mandatory_packages} ${hanaSR_angi} ${support_packages}"
#packages="${mandatory_packages} ${hanaSR_scaleup} ${support_packages}"
#packages="${mandatory_packages} ${hanaSR_scaleout} ${support_packages}"

missingpackages=""
for package in ${packages}; do
  echo "Checking if ${package} is installed..."
  if ! rpm -q ${package} --quiet; then
    echo "  ${package} is missing and needs to be installed"
    missingpackages="${missingpackages} ${package}"
  fi
done

if [ -z "$missingpackages" ]; then
  echo "All packages are installed."
else
  echo "Missing mandatory packages: $(echo ${missingpackages} | tr ' ' '\n' | grep -E "^($(echo ${mandatory_packages} | tr ' ' '|'))$")"
  echo "Missing support packages: $(echo ${missingpackages} | tr ' ' '\n' | grep -E "^($(echo ${support_packages} | tr ' ' '|'))$")"
  echo -n "Do you want to install the missing packages (y/n)? "
  read response
  if [ "$response" = "y" ]; then
    zypper install -y $missingpackages
  fi
fi

If a package is not installed and you are unable to install it using zypper, it may be because the SUSE Linux Enterprise High Availability extension is not available as a repository in your chosen image. You can verify the availability of the extension using the following command:

$ sudo zypper repos
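If the repository is missing and your subscription includes it, you may be able to register the High Availability extension with SUSEConnect. A sketch, assuming SLES 15 on x86_64; replace the product version and architecture to match your system:

# SUSEConnect --list-extensions
# SUSEConnect -p sle-ha/15.6/x86_64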

To install or update a package or packages with confirmation, use the following command:

$ sudo zypper install <package_name(s)>

Update and Check Operating System Versions

You must update and confirm operating system versions across nodes. Apply the latest patches to your operating system so that known bugs are addressed and new features are available.

You can update the patches individually or update all system patches using the zypper update command. A clean reboot is recommended prior to setting up a cluster.

$ sudo zypper update
$ sudo reboot

Compare the operating system package versions on the two cluster nodes and ensure that the versions match.
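One way to compare is to generate a sorted package list on each node and diff the results. A sketch; the hostnames here match the examples used later in this guide:

# rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' | sort > /tmp/pkglist-$(hostname).txt

Copy one list to the other node, then:

# diff /tmp/pkglist-hanahost01.txt /tmp/pkglist-hanahost02.txt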

System Logging

Both systemd-journald and rsyslog are recommended for comprehensive logging. systemd-journald (enabled by default) provides structured, indexed logging with immediate access to events, while rsyslog is retained for backward compatibility and traditional file-based logging. This dual approach provides modern logging capabilities alongside compatibility with existing log management tools and practices.

1. Enable and start rsyslog:

# systemctl enable --now rsyslog
2. (Optional) Configure persistent logging for systemd-journald:

If you are not using a logging agent (like the AWS CloudWatch Unified Agent or Vector) to ship logs to a centralized location, you may want to configure persistent logging to retain logs after system reboots.

# mkdir -p /etc/systemd/journald.conf.d

Create /etc/systemd/journald.conf.d/99-logstorage.conf with:

[Journal]
Storage=persistent

Persistent logging requires careful storage management. Configure appropriate retention and rotation settings in journald.conf to prevent logs from consuming excessive disk space. Review man journald.conf for available options such as SystemMaxUse, RuntimeMaxUse, and MaxRetentionSec.
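For example, a drop-in that also caps disk usage and retention might look like the following (the values are illustrative; size them for your environment):

[Journal]
Storage=persistent
SystemMaxUse=1G
MaxRetentionSec=1month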

To apply the changes, restart journald:

# systemctl restart systemd-journald

After enabling persistent storage, only new logs will be stored persistently. Existing logs from the current boot session will remain in volatile storage until the next reboot.
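If you want the current boot's logs moved to persistent storage immediately rather than waiting for the next reboot, you can flush the runtime journal:

# journalctl --flush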

3. Verify services are running:

# systemctl status systemd-journald
# systemctl status rsyslog

Time Synchronization Services

Time synchronization is important for cluster operation. Ensure that the chrony RPM is installed, and configure appropriate time servers in the configuration file.

You can use the Amazon Time Sync Service, which is available on any instance running in a VPC and does not require internet access. To ensure consistent handling of leap seconds, don't mix the Amazon Time Sync Service with other NTP time sync servers or pools.

Create or check the /etc/chrony.d/ec2.conf file to define the server:

# Amazon EC2 time source config
server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4

Enable and start chronyd.service using the following commands:

# systemctl enable --now chronyd.service
# systemctl status chronyd
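To confirm that the Amazon Time Sync Service source is selected (the 169.254.169.123 entry should be marked with an asterisk), query the configured sources:

# chronyc sources -v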

For more information, see Set the time for your Linux instance.

AWS CLI Profile

The AWS cluster resource agents use the AWS Command Line Interface (AWS CLI). You need to create an AWS CLI profile for the root user.

You can either edit the config file at /root/.aws/config manually or use the aws configure AWS CLI command.

Skip providing values for the access key ID and secret access key. Permissions are provided through IAM roles attached to the Amazon EC2 instances.

# aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: <region>
Default output format [None]:

The profile name is default unless otherwise configured. If you choose to use a different name, you can specify it with --profile. The name chosen in this example is cluster; it is used later in the AWS resource agent definitions for pacemaker. The AWS Region must be the default AWS Region of the instance.

# aws configure --profile cluster
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: <region>
Default output format [None]:

On the hosts, you can verify the available profiles using the following command:

# aws configure list-profiles

Then confirm that an assumed role is associated by querying the caller identity:

# aws sts get-caller-identity --profile=<profile_name>
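The Arn in the response should show an assumed role rather than IAM user credentials. The output has roughly the following shape (the account ID, role name, and instance ID here are placeholders):

{
    "UserId": "AROAEXAMPLEROLEID:i-0123456789abcdef0",
    "Account": "111122223333",
    "Arn": "arn:aws:sts::111122223333:assumed-role/<role_name>/i-0123456789abcdef0"
}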

Pacemaker Proxy Settings (Optional)

If your Amazon EC2 instance has been configured to access the internet or the AWS Cloud through proxy servers, you need to replicate the settings in the pacemaker configuration. For more information, see Using an HTTP Proxy.

Add the following lines to /etc/sysconfig/pacemaker:

http_proxy=http://<proxyhost>:<proxyport>
https_proxy=http://<proxyhost>:<proxyport>
no_proxy=127.0.0.1,localhost,169.254.169.254,fd00:ec2::254
  • Modify proxyhost and proxyport to match your settings.

  • Ensure that you exempt the address used to access the instance metadata.

  • Configure no_proxy to include the IP addresses of the instance metadata service – 169.254.169.254 (IPv4) and fd00:ec2::254 (IPv6). These addresses do not vary.

Add Overlay IP for Initial Database Access

This step is optional and only needed if you require client connectivity to the SAP HANA database before cluster setup. The Overlay IP will later be managed automatically by the cluster resources.

To enable initial database access, manually add the Overlay IP to the primary instance (where the SAP HANA database is currently running):

# ip addr add <hana_overlayip>/32 dev eth0
  • This configuration is temporary and will be lost after instance reboot

  • Only configure this on the current primary instance

  • The cluster will take over management of this IP once configured
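You can confirm that the address is present with a quick check (eth0 is assumed to be the primary network interface, as in the command above):

# ip addr show dev eth0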

Hostname Resolution

You must ensure that all instances can resolve all hostnames in use. Add the hostnames for cluster nodes to the /etc/hosts file on all cluster nodes. This ensures that hostnames for cluster nodes can be resolved even in case of DNS issues. See the following example for a two-node cluster:

# cat /etc/hosts
10.2.10.1 hanahost01.example.com hanahost01
10.2.20.1 hanahost02.example.com hanahost02
172.16.52.1 hanahdb.example.com hanahdb

In this example, the secondary IP addresses used for the second cluster ring are not included; they are used only in the cluster configuration. You can allocate virtual hostnames for administration and identification purposes.
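You can verify resolution on each node with getent, which uses the system's resolver order (the hostnames match the example above):

# getent hosts hanahost01 hanahost02 hanahdb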

Important

The Overlay IP is outside the VPC CIDR range and cannot be reached from locations that are not associated with the route table, including on-premises networks.