Get started with EFA and NIXL for inference workloads on Amazon EC2
The NVIDIA Inference Xfer Library (NIXL) is a high-throughput, low-latency communication
library designed specifically for disaggregated inference workloads. NIXL can be used together
with EFA and Libfabric to support KV-cache transfer between prefill and decode nodes, and
it enables efficient KV-cache movement between various storage layers. For
more information, see the NIXL repository.
Requirements
- Only Ubuntu 24.04 and Ubuntu 22.04 base AMIs are supported.
- EFA supports only NIXL 1.0.0 and later.
Steps
An EFA requires a security group that allows all inbound and outbound traffic to and from the security group itself. The following procedure creates such a security group, and additionally allows inbound SSH traffic from any IPv4 address so that you can connect over SSH.
Important
This security group is intended for testing purposes only. For your production environments, we recommend that you create an inbound SSH rule that allows traffic only from the IP address from which you are connecting, such as the IP address of your computer, or a range of IP addresses in your local network.
For other scenarios, see Security group rules for different use cases.
To create an EFA-enabled security group
- Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- In the navigation pane, choose Security Groups, and then choose Create security group.
- In the Create security group window, do the following:
  - For Security group name, enter a descriptive name for the security group, such as EFA-enabled security group.
  - (Optional) For Description, enter a brief description of the security group.
  - For VPC, select the VPC into which you intend to launch your EFA-enabled instances.
  - Choose Create security group.
- Select the security group that you created, and on the Details tab, copy the Security group ID.
- With the security group still selected, choose Actions, Edit inbound rules, and then do the following:
  - Choose Add rule.
  - For Type, choose All traffic.
  - For Source type, choose Custom and paste the security group ID that you copied into the field.
  - Choose Add rule.
  - For Type, choose SSH.
  - For Source type, choose Anywhere-IPv4.
  - Choose Save rules.
- With the security group still selected, choose Actions, Edit outbound rules, and then do the following:
  - Choose Add rule.
  - For Type, choose All traffic.
  - For Destination type, choose Custom and paste the security group ID that you copied into the field.
  - Choose Save rules.
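If you prefer to script this step, the same security group can be sketched with the AWS CLI. The group name and VPC ID below are placeholders; the self-referencing rules mirror the console procedure above.

```shell
# Create the security group (placeholder VPC ID) and capture its ID.
SG_ID=$(aws ec2 create-security-group \
  --group-name EFA-enabled-security-group \
  --description "EFA-enabled security group" \
  --vpc-id vpc-0123456789abcdef0 \
  --query GroupId --output text)

# Allow all inbound traffic from the security group itself.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol all --source-group "$SG_ID"

# Allow inbound SSH from any IPv4 address (testing only; restrict in production).
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 22 --cidr 0.0.0.0/0

# Allow all outbound traffic to the security group itself.
aws ec2 authorize-security-group-egress \
  --group-id "$SG_ID" \
  --ip-permissions "IpProtocol=-1,UserIdGroupPairs=[{GroupId=$SG_ID}]"
```

This requires the AWS CLI to be configured with credentials and a default Region.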
Launch a temporary instance that you can use to install and configure the EFA software components. You use this instance to create an EFA-enabled AMI from which you can launch your EFA-enabled instances.
To launch a temporary instance
- Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- In the navigation pane, choose Instances, and then choose Launch Instances to open the new launch instance wizard.
- (Optional) In the Name and tags section, provide a name for the instance, such as EFA-instance. The name is assigned to the instance as a resource tag (Name=EFA-instance).
- In the Application and OS Images section, select an AMI for one of the supported operating systems. You can also select a supported DLAMI found on the DLAMI Release Notes page.
- In the Instance type section, select a supported instance type.
- In the Key pair section, select the key pair to use for the instance.
- In the Network settings section, choose Edit, and then do the following:
  - For Subnet, choose the subnet in which to launch the instance. If you do not select a subnet, you can't enable the instance for EFA.
  - For Firewall (security groups), choose Select existing security group, and then select the security group that you created in the previous step.
  - Expand the Advanced network configuration section. For Network interface 1, select Network card index = 0, Device index = 0, and Interface type = EFA with ENA.
  - (Optional) If you are using a multi-card instance type, such as p4d.24xlarge or p5.48xlarge, for each additional network interface required, choose Add network interface, for Network card index select the next unused index, and then select Device index = 1 and Interface type = EFA with ENA or EFA-only.
- In the Storage section, configure the volumes as needed.
  Note
  You must provision an additional 10 to 20 GiB of storage for the Nvidia CUDA Toolkit. If you do not provision enough storage, you will receive an insufficient disk space error when attempting to install the Nvidia drivers and CUDA toolkit.
- In the Summary panel on the right, choose Launch instance.
Important
Skip Step 3 if your AMI already includes Nvidia GPU drivers, the CUDA toolkit, and cuDNN, or if you are using a non-GPU instance.
To install the Nvidia GPU drivers, Nvidia CUDA toolkit, and cuDNN
- To ensure that all of your software packages are up to date, perform a quick software update on your instance.
  $ sudo apt-get update && sudo apt-get upgrade -y
- Install the utilities that are needed to install the Nvidia GPU drivers and the Nvidia CUDA toolkit.
  $ sudo apt-get install build-essential -y
- To use the Nvidia GPU driver, you must first disable the nouveau open source drivers.
  - Install the required utilities and the kernel headers package for the version of the kernel that you are currently running.
    $ sudo apt-get install -y gcc make linux-headers-$(uname -r)
  - Add nouveau to the /etc/modprobe.d/blacklist.conf deny list file.
    $ cat << EOF | sudo tee --append /etc/modprobe.d/blacklist.conf
    blacklist vga16fb
    blacklist nouveau
    blacklist rivafb
    blacklist nvidiafb
    blacklist rivatv
    EOF
  - Open /etc/default/grub using your preferred text editor and add the following.
    GRUB_CMDLINE_LINUX="rdblacklist=nouveau"
  - Rebuild the Grub configuration.
    $ sudo update-grub
- Reboot the instance and reconnect to it.
- Add the CUDA repository and install the Nvidia GPU drivers, Nvidia CUDA toolkit, and cuDNN.
  $ sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2004/x86_64/7fa2af80.pub \
    && wget -O /tmp/deeplearning.deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2004/x86_64/nvidia-machine-learning-repo-ubuntu2004_1.0.0-1_amd64.deb \
    && sudo dpkg -i /tmp/deeplearning.deb \
    && wget -O /tmp/cuda.pin https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin \
    && sudo mv /tmp/cuda.pin /etc/apt/preferences.d/cuda-repository-pin-600 \
    && sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub \
    && sudo add-apt-repository 'deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /' \
    && sudo apt update \
    && sudo apt install nvidia-dkms-535 \
    && sudo apt install -o Dpkg::Options::='--force-overwrite' cuda-drivers-535 cuda-toolkit-12-3 libcudnn8 libcudnn8-dev -y
- Reboot the instance and reconnect to it.
- (p4d.24xlarge and p5.48xlarge only) Install the Nvidia Fabric Manager.
  - You must install the version of the Nvidia Fabric Manager that matches the version of the Nvidia kernel module that you installed in the previous step. Run the following command to determine the version of the Nvidia kernel module.
    $ cat /proc/driver/nvidia/version | grep "Kernel Module"
    The following is example output.
    NVRM version: NVIDIA UNIX x86_64 Kernel Module  450.42.01  Tue Jun 15 21:26:37 UTC 2021
    In the example above, major version 450 of the kernel module was installed, so you need to install Nvidia Fabric Manager version 450.
  - Install the Nvidia Fabric Manager. Run the following command and specify the major version identified in the previous step.
    $ sudo apt install -o Dpkg::Options::='--force-overwrite' nvidia-fabricmanager-major_version_number
    For example, if major version 450 of the kernel module was installed, use the following command to install the matching version of Nvidia Fabric Manager.
    $ sudo apt install -o Dpkg::Options::='--force-overwrite' nvidia-fabricmanager-450
  - Start the service, and ensure that it starts automatically when the instance starts. Nvidia Fabric Manager is required for NVSwitch management.
    $ sudo systemctl start nvidia-fabricmanager && sudo systemctl enable nvidia-fabricmanager
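The version-matching step can also be scripted. The following sketch extracts the kernel module's major version from the NVRM version line; the field position assumes the standard line format shown in the example output above.

```shell
# Parse the Nvidia kernel module major version from an NVRM version line.
# Field 8 is the x.y.z version in the standard line format.
nv_major() {
  awk '/Kernel Module/ {print $8}' | cut -d. -f1
}

# On a GPU instance you would feed it /proc/driver/nvidia/version:
#   major=$(nv_major < /proc/driver/nvidia/version)
#   sudo apt install -o Dpkg::Options::='--force-overwrite' "nvidia-fabricmanager-${major}"

# Demonstrated here with the example output from the step above:
echo "NVRM version: NVIDIA UNIX x86_64 Kernel Module  450.42.01  Tue Jun 15 21:26:37 UTC 2021" | nv_major
# prints 450
```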
- Ensure that the CUDA paths are set each time that the instance starts.
  - For bash shells, add the following statements to /home/username/.bashrc and /home/username/.bash_profile.
    export PATH=/usr/local/cuda/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
  - For tcsh shells, add the following statements to /home/username/.cshrc.
    setenv PATH /usr/local/cuda/bin:$PATH
    setenv LD_LIBRARY_PATH /usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
- To confirm that the Nvidia GPU drivers are functional, run the following command.
  $ nvidia-smi -q | head
  The command should return information about the Nvidia GPUs, Nvidia GPU drivers, and Nvidia CUDA toolkit.
Important
Skip Step 4 if your AMI already includes GDRCopy, or if you are using a non-GPU instance.
Install GDRCopy to improve the performance of Libfabric on GPU-based platforms. For more information about GDRCopy, see the GDRCopy repository.
To install GDRCopy
- Install the required dependencies.
  $ sudo apt -y install build-essential devscripts debhelper check libsubunit-dev fakeroot pkg-config dkms
- Download and extract the GDRCopy package.
  $ wget https://github.com/NVIDIA/gdrcopy/archive/refs/tags/v2.4.tar.gz \
    && tar xf v2.4.tar.gz \
    && cd gdrcopy-2.4/packages
- Build the GDRCopy DEB packages.
  $ CUDA=/usr/local/cuda ./build-deb-packages.sh
- Install the GDRCopy DEB packages.
  $ sudo dpkg -i gdrdrv-dkms_2.4-1_amd64.*.deb \
    && sudo dpkg -i libgdrapi_2.4-1_amd64.*.deb \
    && sudo dpkg -i gdrcopy-tests_2.4-1_amd64.*.deb \
    && sudo dpkg -i gdrcopy_2.4-1_amd64.*.deb
Important
Skip Step 5 if your AMI already includes the latest EFA installer.
Install the EFA-enabled kernel, EFA drivers, and Libfabric stack that is required to support EFA on your instance.
To install the EFA software
- Connect to the instance you launched. For more information, see Connect to your Linux instance using SSH.
- Download the EFA software installation files. The software installation files are packaged into a compressed tarball (.tar.gz) file. To download the latest stable version, use the following command.
  $ curl -O https://efa-installer.amazonaws.com/aws-efa-installer-1.47.0.tar.gz
- Extract the files from the compressed .tar.gz file, delete the tarball, and navigate into the extracted directory.
  $ tar -xf aws-efa-installer-1.47.0.tar.gz && rm -rf aws-efa-installer-1.47.0.tar.gz && cd aws-efa-installer
- Run the EFA software installation script.
  $ sudo ./efa_installer.sh -y
  Libfabric is installed in the /opt/amazon/efa directory.
- If the EFA installer prompts you to reboot the instance, do so and then reconnect to the instance. Otherwise, log out of the instance and then log back in to complete the installation.
- Confirm that the EFA software components were successfully installed.
  $ fi_info -p efa -t FI_EP_RDM
  The command should return information about the Libfabric EFA interfaces. The following example shows the command output.
  - p3dn.24xlarge with a single network interface
    provider: efa
    fabric: EFA-fe80::94:3dff:fe89:1b70
    domain: efa_0-rdm
    version: 2.0
    type: FI_EP_RDM
    protocol: FI_PROTO_EFA
  - p4d.24xlarge and p5.48xlarge with multiple network interfaces
    provider: efa
    fabric: EFA-fe80::c6e:8fff:fef6:e7ff
    domain: efa_0-rdm
    version: 111.0
    type: FI_EP_RDM
    protocol: FI_PROTO_EFA
    provider: efa
    fabric: EFA-fe80::c34:3eff:feb2:3c35
    domain: efa_1-rdm
    version: 111.0
    type: FI_EP_RDM
    protocol: FI_PROTO_EFA
    provider: efa
    fabric: EFA-fe80::c0f:7bff:fe68:a775
    domain: efa_2-rdm
    version: 111.0
    type: FI_EP_RDM
    protocol: FI_PROTO_EFA
    provider: efa
    fabric: EFA-fe80::ca7:b0ff:fea6:5e99
    domain: efa_3-rdm
    version: 111.0
    type: FI_EP_RDM
    protocol: FI_PROTO_EFA
Install NIXL. For more information about NIXL, see the NIXL repository.
Install the NIXL Benchmark and run a test to ensure that your temporary instance is properly configured for EFA and NIXL. The NIXL Benchmark enables you to confirm that NIXL is properly installed and operating as expected. For more information, see the nixlbench repository.
The NIXL Benchmark (nixlbench) requires ETCD for coordination between the client and the server. To use ETCD with NIXL, you need the ETCD server, the ETCD client, and the ETCD C++ API.
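As a sketch, ETCD can be installed from the Ubuntu repositories and started locally before running nixlbench against it. The package names vary by release (etcd-server and etcd-client on Ubuntu 24.04; a single etcd package on 22.04), and the nixlbench flag names below are assumptions to verify against nixlbench --help.

```shell
# Install and start a local ETCD server (package names for Ubuntu 24.04;
# use the 'etcd' package on Ubuntu 22.04).
sudo apt-get install -y etcd-server etcd-client
sudo systemctl enable --now etcd

# Point nixlbench at the local endpoint. --etcd-endpoints and --backend are
# assumed flag names -- confirm with nixlbench --help.
nixlbench --etcd-endpoints http://127.0.0.1:2379 --backend LIBFABRIC
```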
Install the machine learning applications on the temporary instance. The installation procedure varies depending on the specific machine learning application.
Note
Refer to your machine learning application's documentation for installation instructions.
After you have installed the required software components, you create an AMI that you can reuse to launch your EFA-enabled instances.
To create an AMI from your temporary instance
- Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- In the navigation pane, choose Instances.
- Select the temporary instance that you created and choose Actions, Image, Create image.
- For Create image, do the following:
  - For Image name, enter a descriptive name for the AMI.
  - (Optional) For Image description, enter a brief description of the purpose of the AMI.
  - Choose Create image.
- In the navigation pane, choose AMIs.
- Locate the AMI that you created in the list. Wait for the status to change from pending to available before continuing to the next step.
At this point, you no longer need the temporary instance that you launched. You can terminate the instance to stop incurring charges for it.
To terminate the temporary instance
- Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- In the navigation pane, choose Instances.
- Select the temporary instance that you created and then choose Actions, Instance state, Terminate instance.
- When prompted for confirmation, choose Terminate.
Launch your EFA and NIXL-enabled instances using the EFA-enabled AMI that you created in Step 9, and the EFA-enabled security group that you created in Step 1.
To launch EFA and NIXL-enabled instances
- Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
- In the navigation pane, choose Instances, and then choose Launch Instances to open the new launch instance wizard.
- (Optional) In the Name and tags section, provide a name for the instance, such as EFA-instance. The name is assigned to the instance as a resource tag (Name=EFA-instance).
- In the Application and OS Images section, choose My AMIs, and then select the AMI that you created in the previous step.
- In the Instance type section, select a supported instance type.
- In the Key pair section, select the key pair to use for the instance.
- In the Network settings section, choose Edit, and then do the following:
  - For Subnet, choose the subnet in which to launch the instance. If you do not select a subnet, you can't enable the instance for EFA.
  - For Firewall (security groups), choose Select existing security group, and then select the security group that you created in Step 1.
  - Expand the Advanced network configuration section. For Network interface 1, select Network card index = 0, Device index = 0, and Interface type = EFA with ENA.
  - (Optional) If you are using a multi-card instance type, such as p4d.24xlarge or p5.48xlarge, for each additional network interface required, choose Add network interface, for Network card index select the next unused index, and then select Device index = 1 and Interface type = EFA with ENA or EFA-only.
- (Optional) In the Storage section, configure the volumes as needed.
- In the Summary panel on the right, for Number of instances, enter the number of EFA-enabled instances that you want to launch, and then choose Launch instance.
To enable your applications to run across all of the instances in your cluster, you must enable passwordless SSH access from the leader node to the member nodes. The leader node is the instance from which you run your applications. The remaining instances in the cluster are the member nodes.
To enable passwordless SSH between the instances in the cluster
- Select one instance in the cluster as the leader node, and connect to it.
- Disable StrictHostKeyChecking and enable ForwardAgent on the leader node. Open ~/.ssh/config using your preferred text editor and add the following.
  Host *
      ForwardAgent yes
  Host *
      StrictHostKeyChecking no
- Generate an RSA key pair.
  $ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
  The key pair is created in the $HOME/.ssh/ directory.
- Change the permissions of the private key and the SSH configuration file on the leader node.
  $ chmod 600 ~/.ssh/id_rsa
  $ chmod 600 ~/.ssh/config
- Open ~/.ssh/id_rsa.pub using your preferred text editor and copy the key.
- For each member node in the cluster, do the following:
  - Connect to the instance.
  - Open ~/.ssh/authorized_keys using your preferred text editor and add the public key that you copied earlier.
- To test that the passwordless SSH is functioning as expected, connect to your leader node and run the following command.
  $ ssh member_node_private_ip
  You should connect to the member node without being prompted for a key or password.
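With more than a few member nodes, copying the key by hand gets tedious. A loop with ssh-copy-id is one way to sketch the distribution step; MEMBER_IPS and the ubuntu user name are placeholders, and the loop assumes you can already authenticate to each node, for example through the forwarded agent.

```shell
# Placeholder list of member node private IP addresses.
MEMBER_IPS=(10.0.0.11 10.0.0.12 10.0.0.13)

# Append the leader's public key to each member's ~/.ssh/authorized_keys.
for ip in "${MEMBER_IPS[@]}"; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "ubuntu@${ip}"
done
```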
Important
Follow Step 13 only if you followed Step 7.
Run a test to ensure that your instances are properly configured for EFA and NIXL.
After NIXL is installed, you can use NIXL through LLM inference and serving frameworks such as vLLM, SGLang, and TensorRT-LLM.
To serve your inference workload using vLLM
- Install vLLM.
  $ pip install vllm
- Start the vLLM server with NIXL. The following sample commands create one prefill (producer) and one decode (consumer) instance for the NIXL handshake connection, KV connector, KV role, and transport backend. For detailed examples and scripts, see the NIXLConnector Usage Guide. To use NIXL with EFA, set the environment variables based on your setup and use case.
  - Producer (Prefiller) configuration
    $ vllm serve your-application \
        --port 8200 \
        --enforce-eager \
        --kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_both","kv_buffer_device":"cuda","kv_connector_extra_config":{"backends":["LIBFABRIC"]}}'
  - Consumer (Decoder) configuration
    $ vllm serve your-application \
        --port 8200 \
        --enforce-eager \
        --kv-transfer-config '{"kv_connector":"NixlConnector","kv_role":"kv_both","kv_buffer_device":"cuda","kv_connector_extra_config":{"backends":["LIBFABRIC"]}}'
  The preceding sample configuration sets the following:
  - kv_role to kv_both, which enables symmetric operation where the connector can act as both producer and consumer. This provides flexibility for experimental setups and scenarios where the role distinction is not predetermined.
  - kv_buffer_device to cuda, which enables using GPU memory.
  - The NIXL backend to LIBFABRIC, which routes NIXL traffic over EFA.
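The --kv-transfer-config value is a JSON document, and a malformed string is an easy way to get an opaque startup failure. A quick sanity check is to round-trip the string through a JSON parser before passing it to vllm serve; this sketch uses python3 and the config string from the sample commands above.

```shell
# The kv-transfer-config JSON from the sample commands above.
KV_CONFIG='{"kv_connector":"NixlConnector","kv_role":"kv_both","kv_buffer_device":"cuda","kv_connector_extra_config":{"backends":["LIBFABRIC"]}}'

# Parse it and print the selected transport backend; a parse error here means
# the string would also fail inside vLLM.
echo "$KV_CONFIG" | python3 -c 'import json, sys; print(json.load(sys.stdin)["kv_connector_extra_config"]["backends"][0])'
# prints LIBFABRIC
```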