

# Accessing your FSx for ONTAP data
Accessing your data

You can access your Amazon FSx file systems using a variety of supported clients and methods in both the AWS Cloud and on premises environments.

Each SVM has five endpoints that are used to access data or to manage the SVM using the NetApp ONTAP CLI or REST API:
+ `Nfs` – For connecting using the Network File System (NFS) protocol
+ `Smb` – For connecting using the Server Message Block (SMB) protocol (if your SVM is joined to an Active Directory or is using a workgroup)
+ `Iscsi` – For connecting using the Internet Small Computer Systems Interface (iSCSI) protocol for shared block storage
+ `Nvme` – For connecting using the Non-Volatile Memory Express over TCP (NVMe/TCP) protocol for shared block storage
+ `Management` – For managing SVMs using the NetApp ONTAP CLI or REST API, or the NetApp Console

**Note**  
The iSCSI protocol is available on all file systems that have 6 or fewer [high-availability (HA) pairs](HA-pairs.md). The NVMe/TCP protocol is available on second-generation file systems that have 6 or fewer HA pairs.

**Topics**
+ [Supported clients](#supported-clients-fsx)
+ [Using block storage protocols](#using-block-storage)
+ [Accessing data from within the AWS Cloud](#access-environments)
+ [Accessing data from on-premises](#accessing-data-from-on-premises)
+ [Configure routing to access Multi-AZ file systems from outside your VPC](configuring-routing-using-AWSTG.md)
+ [Configure routing to access Multi-AZ file systems from on-premises](configure-routing-maz-on-prem.md)
+ [Mounting volumes on Linux clients](attach-linux-client.md)
+ [Mounting volumes on Microsoft Windows clients](attach-windows-client.md)
+ [Mounting volumes on macOS clients](attach-mac-client.md)
+ [Provisioning iSCSI for Linux](mount-iscsi-luns-linux.md)
+ [Provisioning iSCSI for Windows](mount-iscsi-windows.md)
+ [Provisioning NVMe/TCP for Linux](provision-nvme-linux.md)
+ [Accessing your data via Amazon S3 access points](accessing-data-via-s3-access-points.md)
+ [Accessing data from other AWS services](using-fsx-with-other-AWS-services.md)

## Supported clients


FSx for ONTAP file systems support access from a wide variety of compute instances and operating systems using the Network File System (NFS) protocol (v3, v4.0, v4.1, and v4.2), all versions of the Server Message Block (SMB) protocol (including 2.0, 3.0, and 3.1.1), and the Internet Small Computer Systems Interface (iSCSI) protocol.

**Important**  
Amazon FSx doesn't support accessing file systems from the public internet. Amazon FSx automatically detaches any Elastic IP address (a public IP address reachable from the internet) that gets attached to a file system's elastic network interface.

The following AWS compute instances are supported for use with FSx for ONTAP:
+ Amazon Elastic Compute Cloud (Amazon EC2) instances running Linux with NFS or SMB support, Microsoft Windows, and macOS. For more information, see [Mounting volumes on Linux clients](attach-linux-client.md), [Mounting volumes on Microsoft Windows clients](attach-windows-client.md), and [Mounting volumes on macOS clients](attach-mac-client.md).
+ Amazon Elastic Container Service (Amazon ECS) Docker containers on Amazon EC2 Windows and Linux instances. For more information, see [Using Amazon Elastic Container Service with FSx for ONTAP](mount-ontap-ecs-containers.md).
+ Amazon Elastic Kubernetes Service – To learn more, see [Amazon FSx for NetApp ONTAP CSI driver](https://docs.aws.amazon.com/eks/latest/userguide/fsx-ontap.html) in the *Amazon EKS User Guide*.
+ Red Hat OpenShift Service on AWS (ROSA) – To learn more, see [What is Red Hat OpenShift Service on AWS?](https://docs.aws.amazon.com/ROSA/latest/userguide/what-is-rosa.html) in the Red Hat OpenShift Service on AWS User Guide.
+ Amazon WorkSpaces instances. For more information, see [Using Amazon WorkSpaces with FSx for ONTAP](using-workspaces.md).
+ Amazon AppStream 2.0 instances.
+ AWS Lambda – For more information, see the AWS blog post [Enabling SMB access for server-less workloads with Amazon FSx](https://aws.amazon.com/blogs/storage/enabling-smb-access-for-serverless-workloads/).
+ Virtual machines (VMs) running in VMware Cloud on AWS environments. For more information, see [ Configure Amazon FSx for NetApp ONTAP as External Storage](https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-D55294A3-7C40-4AD8-80AA-B33A25769CCA.html?hWord=N4IghgNiBcIGYGcAeIC+Q) and [VMware Cloud on AWS with Amazon FSx for NetApp ONTAP Deployment Guide](https://vmc.techzone.vmware.com/fsx-guide#overview).

Once mounted, FSx for ONTAP file systems appear as a local directory or drive letter over NFS and SMB, providing fully managed, shared network file storage that can be simultaneously accessed by up to thousands of clients. iSCSI LUNs are accessible as block devices when mounted over iSCSI.

## Using block storage protocols


Amazon FSx for NetApp ONTAP supports the Internet Small Computer Systems Interface (iSCSI) and Non-Volatile Memory Express (NVMe) over TCP (NVMe/TCP) block storage protocols. In Storage Area Network (SAN) environments, storage systems are targets that have storage target devices. For iSCSI, the storage target devices are referred to as logical units (LUNs). For NVMe/TCP, the storage target devices are referred to as namespaces.

You use an SVM's iSCSI logical interface (LIF) to connect to both NVMe and iSCSI block storage.

You configure storage by creating LUNs for iSCSI and by creating namespaces for NVMe. Hosts then access the LUNs and namespaces using the iSCSI or NVMe/TCP protocols.
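As a hedged sketch of what provisioning these target devices looks like in the NetApp ONTAP CLI (run against the SVM's management endpoint; the SVM, volume, LUN, and namespace names below are placeholder values):

```
# Create an iSCSI LUN (the storage target device for iSCSI hosts).
lun create -vserver svm1 -path /vol/vol1/lun1 -size 100G -ostype linux

# Create an NVMe namespace (the storage target device for NVMe/TCP hosts).
vserver nvme namespace create -vserver svm1 -path /vol/vol1/ns1 -size 100G -ostype linux
```

The provisioning topics linked below walk through the full host-side setup for each protocol.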

For more information about configuring iSCSI and NVMe/TCP block storage, see:
+ [Provisioning iSCSI for Linux](mount-iscsi-luns-linux.md)
+ [Provisioning iSCSI for Windows](mount-iscsi-windows.md)
+ [Provisioning NVMe/TCP for Linux](provision-nvme-linux.md)

## Accessing data from within the AWS Cloud


Each Amazon FSx file system is associated with a Virtual Private Cloud (VPC). You can access your FSx for ONTAP file system from anywhere in the file system's VPC, regardless of Availability Zone. You can also access your file system from other VPCs that can be in different AWS accounts or AWS Regions. In addition to the requirements described in the following sections for accessing FSx for ONTAP resources, you also need to ensure that your file system's VPC security group is configured so that data and management traffic can flow between your file system and clients. For more information about configuring security groups with the required ports, see [Amazon VPC security groups](limit-access-security-groups.md#fsx-vpc-security-groups).

### Accessing data from within the same VPC
Accessing data from the same VPC

When you create your Amazon FSx for NetApp ONTAP file system, you select the Amazon VPC in which it is located. All SVMs and volumes associated with the Amazon FSx for NetApp ONTAP file system are also located in the same VPC. When mounting a volume, if the file system and the client mounting the volume are located in the same VPC and AWS account, you can use the SVM's DNS name and volume junction or SMB share, depending on the client.

You can achieve optimal performance if the client and the volume are located in the same Availability Zone as the file system's subnet, or preferred subnet for Multi-AZ file systems. To identify a file system's subnet or preferred subnet, in the Amazon FSx console, choose **File systems**, then choose the ONTAP file system whose volume you are mounting. The subnet or preferred subnet (Multi-AZ) is displayed in the **Subnet** or **Preferred subnet** panel.

### Accessing data from outside the deployment VPC
Accessing data from a different VPC

This section describes how to access an FSx for ONTAP file system's endpoints from AWS locations outside of the file system's deployment VPC.

#### Accessing NFS, SMB, and ONTAP management endpoints on Multi-AZ file systems


The NFS, SMB, and ONTAP management endpoints on Amazon FSx for NetApp ONTAP Multi-AZ file systems use floating internet protocol (IP) addresses so that connected clients seamlessly transition between the preferred and standby file servers during a failover event. For more information about failovers, see [Failover process for FSx for ONTAP](high-availability-AZ.md#Failover).

Routes to these floating IP addresses are created in the VPC route tables that you associate with your file system, and the addresses fall within the file system's `EndpointIPv4AddressRange` or `EndpointIPv6AddressRange`, which you specify during creation. The endpoint IP address range uses the following address ranges, depending on how a file system is created:
+ Multi-AZ dual-stack file systems created with the Amazon FSx console or Amazon FSx API by default use an available /118 IP address range selected by Amazon FSx from one of the VPC's CIDR ranges. You can have overlapping endpoint IP addresses for file systems deployed in the same VPC/route tables, as long as they don't overlap with any subnet.
+ Multi-AZ IPv4-only file systems created using the Amazon FSx console use the last 64 IP addresses in the VPC's primary CIDR range for the file system's endpoint IP address range by default.

  Multi-AZ IPv4-only file systems created using the AWS CLI or Amazon FSx API use an IP address range within the `198.19.0.0/16` address block for the endpoint IP address range by default.
+ For either network type, you can also specify your own IP address range when you use the **Standard create** option. The IP address range that you choose can either be inside or outside the VPC’s IP address range, as long as it doesn't overlap with any subnet, and as long as it isn't already used by another file system with the same VPC and route tables. For this option we recommend using a range that is inside the VPC's IP address range.
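To make the console default for IPv4-only file systems concrete, here's a small shell sketch that assumes a hypothetical `10.0.0.0/16` VPC; the last 64 addresses of that CIDR form an aligned /26 block:

```shell
# A CIDR block of prefix length p contains 2^(32 - p) addresses.
vpc_cidr="10.0.0.0/16"
prefix=${vpc_cidr#*/}               # strip "10.0.0.0/" -> 16
total=$(( 1 << (32 - prefix) ))     # 65536 addresses
echo "addresses in $vpc_cidr: $total"

# The last 64 addresses of 10.0.0.0/16 run from 10.0.255.192 to
# 10.0.255.255, i.e. the 10.0.255.192/26 block -- the default
# endpoint IP address range the console would pick in this example.
echo "default endpoint range: 10.0.255.192/26"
```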

Only [AWS Transit Gateway](https://aws.amazon.com/transit-gateway/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc) supports routing to floating IP addresses, also known as transitive peering. VPC Peering, Direct Connect, and Site-to-Site VPN don't support transitive peering, so you must use Transit Gateway to access these interfaces from networks outside of your file system's VPC.

The following diagram illustrates using Transit Gateway for NFS, SMB, or management access to a Multi-AZ file system that is in a different VPC than the clients that are accessing it.

![Clients in a different VPC accessing a Multi-AZ file system through AWS Transit Gateway.](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/fsx-ontap-multi-az-access-transit-gateway.png)


**Note**  
Ensure that all of the route tables you're using are associated with your Multi-AZ file system. Doing so helps prevent unavailability during a failover. For information about associating your Amazon VPC route tables with your file system, see [Updating file systems](updating-file-system.md).

For information about when you need to use Transit Gateway to access your FSx for ONTAP file system, see [When is Transit Gateway required?](#when-is-transit-gateway-required).

Amazon FSx manages VPC route tables for Multi-AZ file systems using tag-based authentication. These route tables are tagged with `Key: AmazonFSx; Value: ManagedByAmazonFSx`. When creating or updating FSx for ONTAP Multi-AZ file systems using CloudFormation, we recommend that you add the `Key: AmazonFSx; Value: ManagedByAmazonFSx` tag manually.
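For example, in a CloudFormation template, that tag on a route table resource might look like the following sketch (the resource and VPC logical names are illustrative):

```
MyFileSystemRouteTable:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref MyVpc
    Tags:
      - Key: AmazonFSx
        Value: ManagedByAmazonFSx
```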

#### Accessing NFS, SMB, or the ONTAP CLI and API for Single-AZ file systems


The endpoints used to access FSx for ONTAP Single-AZ file systems over NFS or SMB, and for administering file systems using the ONTAP CLI or REST API, are secondary IP addresses on the ENI of the active file server. The secondary IP addresses are within the VPC’s CIDR range, so clients can access data and management ports using VPC Peering, AWS Direct Connect, or Site-to-Site VPN without requiring AWS Transit Gateway.

The following diagram illustrates using Site-to-Site VPN or Direct Connect for NFS, SMB, or management access to a Single-AZ file system that is in a different VPC than the clients accessing it.

![Clients in a different VPC accessing a Single-AZ file system using VPC peering, Direct Connect, or Site-to-Site VPN.](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/fsx-ontap-single-az-access-vpc-peering.png)


#### When is Transit Gateway required?


Whether Transit Gateway is required for your Multi-AZ file system depends on the method you use to access your file system data. Single-AZ file systems don't require Transit Gateway. The following table describes when you need to use AWS Transit Gateway to access Multi-AZ file systems.


| Data access | Requires Transit Gateway? | 
| --- | --- | 
|  Accessing FSx over NFS, SMB, or the NetApp ONTAP REST API, CLI, or NetApp Console  |  Only if: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/supported-fsx-clients.html)  | 
| Accessing data over iSCSI | No | 
| Accessing data over NVMe | No | 
| Joining an SVM to an Active Directory | No | 
| SnapMirror | No | 
| FlexCache Caching | No | 
| Global File Cache | No | 

#### Accessing NVMe, iSCSI, and inter-cluster endpoints outside of the deployment VPC


You can use either VPC Peering or AWS Transit Gateway to access your file system's NVMe, iSCSI, and inter-cluster endpoints from outside of the file system's deployment VPC. You can use VPC Peering to route NVMe, iSCSI, and inter-cluster traffic between VPCs. A VPC peering connection is a networking connection between two VPCs, and is used to route traffic between them using private IPv4 or IPv6 addresses. You can use VPC peering to connect VPCs within the same AWS Region or between different AWS Regions. For more information on VPC peering, see [What is VPC peering?](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) in the *Amazon VPC Peering Guide*.

## Accessing data from on-premises


You can access your FSx for ONTAP file systems from on-premises networks using [Site-to-Site VPN](https://aws.amazon.com/vpn/) and [Direct Connect](https://aws.amazon.com/getting-started/projects/connect-data-center-to-aws/). More specific use case guidance is available in the following sections. In addition to any requirements listed below for accessing different FSx for ONTAP resources from on-premises, you also need to ensure that your file system's VPC security group allows data to flow between your file system and clients. For a list of required ports, see [Amazon VPC security groups](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/limit-access-security-groups.html#fsx-vpc-security-groups).

### Accessing NFS, SMB, and ONTAP CLI and REST API endpoints from on-premises
Accessing NFS, SMB, ONTAP CLI and API from on-premises

This section describes how to access the NFS, SMB, and ONTAP management ports on FSx for ONTAP file systems from on-premises networks.

#### Accessing Multi-AZ file systems from on-premises


Amazon FSx requires that you use AWS Transit Gateway, or that you configure remote NetApp Global File Cache or NetApp FlexCache, to access Multi-AZ file systems from an on-premises network. To support failover across Availability Zones for Multi-AZ file systems, Amazon FSx uses floating IP addresses for the interfaces used for NFS, SMB, and ONTAP management endpoints.

Because the NFS, SMB, and management endpoints use floating IP addresses, you must use [AWS Transit Gateway](https://aws.amazon.com/transit-gateway/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc) in conjunction with AWS Direct Connect or Site-to-Site VPN to access these interfaces from an on-premises network. The floating IP addresses used for these interfaces are within the `EndpointIPv4AddressRange` or `EndpointIPv6AddressRange` you specify when creating your Multi-AZ file system. The endpoint IP address range uses the following address ranges, depending on how a file system is created:
+ Multi-AZ dual-stack file systems created with the Amazon FSx console or Amazon FSx API by default use an available /118 IP address range selected by Amazon FSx from one of the VPC's CIDR ranges. You can have overlapping endpoint IP addresses for file systems deployed in the same VPC/route tables, as long as they don't overlap with any subnet.
+ Multi-AZ IPv4-only file systems created using the Amazon FSx console use the last 64 IP addresses in the VPC's primary CIDR range for the file system's endpoint IP address range by default.

  Multi-AZ IPv4-only file systems created using the AWS CLI or Amazon FSx API use an IP address range within the `198.19.0.0/16` address block for the endpoint IP address range by default.
+ For either network type, you can also specify your own IP address range when you use the **Standard create** option. The IP address range that you choose can either be inside or outside the VPC’s IP address range, as long as it doesn't overlap with any subnet, and as long as it isn't already used by another file system with the same VPC and route tables. For this option we recommend using a range that is inside the VPC's IP address range.

The floating IP addresses are used to enable a seamless transition of your clients to the standby file system in the event a failover is required. For more information, see [Failover process for FSx for ONTAP](high-availability-AZ.md#Failover).

**Important**  
To access a Multi-AZ file system using a Transit Gateway, each of the Transit Gateway's attachments must be created in a subnet whose route table is associated with your file system.

For more information, see [Configure routing to access Multi-AZ file systems from on-premises](configure-routing-maz-on-prem.md).

#### Accessing Single-AZ file systems from on-premises


Single-AZ file systems don't require AWS Transit Gateway for access from an on-premises network. Single-AZ file systems are deployed in a single subnet, and no floating IP address is needed to provide failover between nodes. Instead, the IP addresses you access on Single-AZ file systems are implemented as secondary IP addresses within the file system’s VPC CIDR range, enabling you to access your data from another network without requiring AWS Transit Gateway.

### Accessing inter-cluster endpoints from on-premises


FSx for ONTAP’s inter-cluster endpoints are dedicated to replication traffic between NetApp ONTAP file systems, including between on-premises NetApp deployments and FSx for ONTAP. Replication traffic includes SnapMirror, FlexCache, and FlexClone relationships between storage virtual machines (SVMs) and volumes across different file systems, and NetApp Global File Cache. The inter-cluster endpoints are also used for Active Directory traffic. 

Because a file system's inter-cluster endpoints use IP addresses that are within the CIDR range of the VPC you provide when you create your FSx for ONTAP file system, you are not required to use a Transit Gateway for routing inter-cluster traffic between on-premises and the AWS Cloud. However, on-premises clients still must use Site-to-Site VPN or Direct Connect to establish a secure connection to your VPC. 

For more information, see [Configure routing to access Multi-AZ file systems from on-premises](configure-routing-maz-on-prem.md).

# Configure routing to access Multi-AZ file systems from outside your VPC


If you have a Multi-AZ file system with an `EndpointIPv4AddressRange` or `EndpointIPv6AddressRange` that's outside your VPC's IP address range, you need to set up additional routing in your AWS Transit Gateway to access your file system from peered or on-premises networks.

**Important**  
To access a Multi-AZ file system using a Transit Gateway, each of the Transit Gateway's attachments must be created in a subnet whose route table is associated with your file system.

**Note**  
No additional Transit Gateway configuration is required for Single-AZ file systems or Multi-AZ file systems with an endpoint IP address range that's within your VPC's IP address range.

**To configure routing using AWS Transit Gateway**

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Choose the FSx for ONTAP file system for which you are configuring access from a peered network.

1. In **Network & security**, copy the endpoint IP address range.

1. Add a route to Transit Gateway that routes traffic destined for this IP address range to your file system's VPC. For more information, see [Work with transit gateways](https://docs.aws.amazon.com/vpc/latest/tgw/working-with-transit-gateways.html) in the *Amazon VPC Transit Gateways* guide.

1. Confirm that you can access your FSx for ONTAP file system from the peered network.

To add the route table to your file system, see [Updating file systems](updating-file-system.md).

**Note**  
DNS records for the management, NFS, and SMB endpoints are only resolvable from within the same VPC as the file system. In order to mount a volume or connect to a management port from another network, you need to use the endpoint's IP address. These IP addresses do not change over time.
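One way to look up those endpoint IP addresses is with the AWS CLI; this is a sketch, and the SVM ID is a placeholder:

```
aws fsx describe-storage-virtual-machines \
    --storage-virtual-machine-ids svm-0123456789abcdef0 \
    --query "StorageVirtualMachines[0].Endpoints.Nfs.IpAddresses"
```

The `Endpoints` structure in the response also carries the `Smb`, `Iscsi`, and `Management` endpoint addresses.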

# Configure routing to access Multi-AZ file systems from on-premises


If you have a Multi-AZ file system with an `EndpointIPv4AddressRange` or `EndpointIPv6AddressRange` that's outside your VPC's CIDR range, you need to set up additional routing in your AWS Transit Gateway to access your file system from peered or on-premises networks.

**Note**  
No additional Transit Gateway configuration is required for Single-AZ file systems or Multi-AZ file systems with an endpoint IP address range that's within your VPC's IP address range.

**To configure AWS Transit Gateway for access to Multi-AZ file systems from on-premises**

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Choose the FSx for ONTAP file system for which you are configuring access from a peered network.

1. In **Network & security**, copy the **Endpoint IPv4 or IPv6 address range**.

1. Add a route to the Transit Gateway that routes traffic destined for this IP address range to your file system's VPC. For more information, see [Work with transit gateways](https://docs.aws.amazon.com/vpc/latest/tgw/working-with-transit-gateways.html) in the *Amazon VPC Transit Gateway User Guide*.

1. Confirm that you can access your FSx for ONTAP file system from the peered network.

**Important**  
To access a Multi-AZ file system using a Transit Gateway, each of the Transit Gateway's attachments must be created in a subnet whose route table is associated with your file system. Where you have separate Transit Gateway attachment subnets, you must also associate the route tables for those subnets with Amazon FSx so that they are updated with the Amazon FSx endpoint addresses.

To add a route table to your file system, see [Updating file systems](updating-file-system.md).

# Mounting volumes on Linux clients
Mounting on Linux clients

We recommend that the volumes you want to mount with Linux clients have a security style setting of `UNIX`. For more information, see [Managing FSx for ONTAP volumes](managing-volumes.md).

**Note**  
By default, FSx for ONTAP NFS mounts are `hard` mounts. To ensure a smooth failover in the event that one occurs, we recommend that you use the default `hard` mount option.

**To mount an ONTAP volume on a Linux client**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. Create or select an Amazon EC2 instance running Amazon Linux 2 that is in the same VPC as the file system.

   For more information on launching an EC2 Linux instance, see [ Step 1: Launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance) in the *Amazon EC2 User Guide*.

1. Connect to your Amazon EC2 Linux instance. For more information, see [ Connect to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html) in the *Amazon EC2 User Guide*.

1. Open a terminal on your EC2 instance using secure shell (SSH), and log in with the appropriate credentials.

1. Create a directory on the EC2 instance for mounting the SVM volume as follows:

   ```
   sudo mkdir /fsx
   ```

1. Mount the volume to the directory you just created using the following command:

   ```
   sudo mount -t nfs svm-dns-name:/volume-junction-path /fsx
   ```

   The following example uses sample values.

   ```
   sudo mount -t nfs svm-01234567890abdef0.fs-01234567890abcdef1.fsx.us-east-1.amazonaws.com:/vol1 /fsx
   ```

   You can also use the SVM's IP address instead of its DNS name. However, we recommend using the DNS name when mounting volumes on second-generation file systems because it helps ensure that your clients are balanced across your file system's high-availability (HA) pairs. 

   ```
   sudo mount -t nfs 198.51.100.1:/vol1 /fsx
   ```
**Note**  
For second-generation file systems, the parallel NFS (pNFS) protocol is enabled by default and is used by default for any clients mounting volumes with NFS v4.1 or greater. 
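If you want to pin the NFS version a client uses (for example, to ensure NFS v4.1 so that pNFS applies), you can pass the `nfsvers` mount option; the DNS name and junction path below are the same placeholders as in the steps above:

```
sudo mount -t nfs -o nfsvers=4.1 svm-dns-name:/volume-junction-path /fsx
```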

## Using /etc/fstab to mount automatically on instance reboot


To automatically remount your FSx for ONTAP volume when an Amazon EC2 Linux instance reboots, use the `/etc/fstab` file. The `/etc/fstab` file contains information about file systems. The command `mount -a`, which runs during instance start-up, mounts the file systems listed in `/etc/fstab`.

**Note**  
FSx for ONTAP file systems do not support automatic mounting using `/etc/fstab` on Amazon EC2 Mac instances.

**Note**  
Before you can update the `/etc/fstab` file of your EC2 instance, make sure that you already created your FSx for ONTAP file system. For more information, see [Creating file systems](creating-file-systems.md).

**To update the /etc/fstab file on your EC2 instance**

1. Connect to your EC2 instance:
   + To connect to your instance from a computer running macOS or Linux, specify the .pem file for your SSH command. To do this, use the `-i` option and the path to your private key.
   + To connect to your instance from a computer running Windows, you can either use MindTerm or PuTTY. To use PuTTY, install it and convert the .pem file to a .ppk file.

   For more information, see the following topics in the *Amazon EC2 User Guide*:
   +  [Connecting to your Linux instance using SSH](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html)
   +  [Connecting to your Linux instance from Windows using PuTTY](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html) 

1. Create a local directory that will be used to mount the SVM volume.

   ```
   sudo mkdir /fsx
   ```

1. Open the `/etc/fstab` file in an editor of your choice.

1. Add the following line to the `/etc/fstab` file. Insert a tab character between each parameter. It should appear as one line with no line breaks.

   ```
   svm-dns-name:volume-junction-path /fsx nfs nfsvers=version,defaults 0 0
   ```

   You can also use the IP address of the volume's SVM instead of its DNS name. The fourth field holds the NFS mount options (here, the NFS version plus `defaults`). The last two fields control file system dumping and file system checking, which are typically not used for NFS mounts, so both are set to `0`.
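   For example, substituting the sample values from the mounting procedure (the SVM DNS name and junction path are placeholders), you can assemble the entry and sanity-check its field count before saving:

   ```shell
   # Build the /etc/fstab entry from its parts; fields are tab-separated.
   svm_dns="svm-01234567890abdef0.fs-01234567890abcdef1.fsx.us-east-1.amazonaws.com"
   junction="/vol1"
   entry=$(printf '%s:%s\t/fsx\tnfs\tnfsvers=4.1,defaults\t0\t0' "$svm_dns" "$junction")
   echo "$entry"

   # A valid fstab line has exactly six whitespace-separated fields.
   set -- $entry
   echo "fields: $#"    # prints "fields: 6"
   ```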

1. Save the changes to the file.

1. Now mount the volume using either of the following commands; both look up the new entry in `/etc/fstab`. The next time the system starts, the volume is mounted automatically.

   ```
   sudo mount /fsx
   sudo mount svm-dns-name:volume-junction-path
   ```

Your EC2 instance is now configured to mount the ONTAP volume whenever it restarts.

# Mounting volumes on Microsoft Windows clients
Mounting on Windows clients

This section describes how to access data in your FSx for ONTAP file system with clients running the Microsoft Windows operating system. Review the following requirements, regardless of the type of client you are using.

This procedure assumes that the client and the file system are located in the same VPC and AWS account. If the client is located on premises, or in a different VPC, AWS account, or AWS Region, this procedure also assumes that you've set up AWS Transit Gateway, a dedicated network connection using AWS Direct Connect, or a private, secure tunnel using AWS Virtual Private Network. For more information, see [Accessing data from outside the deployment VPC](supported-fsx-clients.md#access-from-outside-deployment-vpc).

We recommend that you attach volumes to your Windows clients using the SMB protocol.

## Prerequisites


To access an ONTAP storage volume using a Microsoft Windows client, you have to satisfy the following prerequisites:
+ The SVM of the volume you are attaching must be joined to your organization's Active Directory, or you must be using a workgroup. For more information on joining your SVM to an Active Directory, see [Managing FSx for ONTAP storage virtual machines](managing-svms.md). For more information on using workgroups, see [Setting up an SMB server in a workgroup](smb-server-workgroup-setup.md). 
+ The volume you are attaching should have a security style setting of `NTFS`. For more information, see [Volume security style](managing-volumes.md#volume-security-style).

**To mount a volume on a Windows client using SMB and Active Directory**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. Create or select an Amazon EC2 instance running Microsoft Windows that is in the same VPC as the file system, and joined to the same Microsoft Active Directory as the volume's SVM.

   For more information on launching an instance, see [Step 1: Launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/EC2_GetStarted.html#ec2-launch-instance) in the *Amazon EC2 User Guide*.

   For more information about joining an SVM to an Active Directory, see [Managing FSx for ONTAP storage virtual machines](managing-svms.md).

1. Connect to your Amazon EC2 Windows instance. For more information, see [Connecting to your Windows instance](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/connecting_to_windows_instance.html) in the *Amazon EC2 User Guide*.

1. Open a command prompt.

1. Run the following command. Replace the following:
   + Replace `Z:` with any available drive letter.
   + Replace `DNS_NAME` with the DNS name or the IP address of the SMB endpoint for the volume's SVM.
   + Replace `SHARE_NAME` with the name of an SMB share. `C$` is the default SMB share at the root of the SVM's namespace, but you shouldn't mount it because it exposes the SVM's root volume, which can cause security issues and service disruption. Instead, provide the name of an SMB share that you've created. For more information about creating SMB shares, see [Managing SMB shares](create-smb-shares.md).

   ```
   net use Z: \\DNS_NAME\SHARE_NAME
   ```

   The following example uses sample values.

   ```
   net use Z: \\corp.example.com\group_share
   ```

   You can also use the IP address of the SVM instead of its DNS name. However, we recommend using the DNS name when mounting volumes on second-generation file systems because it helps ensure that your clients are balanced across your file system's high-availability (HA) pairs. 

   ```
   net use Z: \\198.51.100.5\group_share
   ```

# Mounting volumes on macOS clients
Mounting on macOS clients

This section describes how to access data in your FSx for ONTAP file system with clients running the macOS operating system. Review the following requirements, regardless of the type of client you are using.

This procedure assumes that the client and the file system are located in the same VPC and AWS account. If the client is located on premises, or in a different VPC, AWS account, or AWS Region, it also assumes that you've set up AWS Transit Gateway, a dedicated network connection using AWS Direct Connect, or a private, secure tunnel using AWS Virtual Private Network (VPN). For more information, see [Accessing data from outside the deployment VPC](supported-fsx-clients.md#access-from-outside-deployment-vpc).

We recommend that you attach volumes to your Mac clients using the SMB protocol.

**To mount an ONTAP volume on a macOS client using SMB**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. Create or select an Amazon EC2 Mac instance running macOS that is in the same VPC as the file system.

   For more information on launching an instance, see [ Step 1: Launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html#mac-instance-launch) in the *Amazon EC2 User Guide*.

1. Connect to your Amazon EC2 Mac instance. For more information, see [Connect to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html) in the *Amazon EC2 User Guide*.

1. Open a terminal on your EC2 instance using secure shell (SSH), and log in with the appropriate credentials.

1. Create a directory on the EC2 instance for mounting the volume as follows:

   ```
   sudo mkdir /fsx
   ```

1. Mount the volume using the following command.

   ```
   sudo mount -t smbfs filesystem-dns-name:/smb-share-name mount-point
   ```

   The following example uses sample values.

   ```
   sudo mount -t smbfs svm-01234567890abcde2.fs-01234567890abcde5.fsx.us-east-1.amazonaws.com:/C$ /fsx
   ```

   You can also use the SVM's IP address instead of its DNS name. We recommend using the DNS name to mount clients to second-generation file systems because it helps ensure that your clients are balanced across your file system's high-availability (HA) pairs. 

   ```
   sudo mount -t smbfs 198.51.100.10:/C$ /fsx
   ```

   `C$` is the default SMB share that you can mount to see the root of the SVM's namespace. If you’ve created any Server Message Block (SMB) shares in your SVM, provide the SMB share names instead of `C$`. For more information about creating SMB shares, see [Managing SMB shares](create-smb-shares.md).

**To mount an ONTAP volume on a macOS client using NFS**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. Create or select an Amazon EC2 Mac instance running macOS that is in the same VPC as the file system.

   For more information on launching an instance, see [Step 1: Launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance) in the *Amazon EC2 User Guide*.

1. Connect to your Amazon EC2 Mac instance. For more information, see [Connect to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html) in the *Amazon EC2 User Guide*.

1. Create a directory on the EC2 instance to serve as the mount point, and then mount your FSx for ONTAP volume by either using a user-data script during instance launch, or by running the following command:

   ```
   sudo mount -t nfs -o nfsvers=NFS_version svm-dns-name:/volume-junction-path /mount-point
   ```

   The following example uses sample values.

   ```
   sudo mount -t nfs -o nfsvers=4.1 svm-01234567890abdef0.fs-01234567890abcdef1.fsx.us-east-1.amazonaws.com:/vol1 /fsxontap
   ```

   You can also use the SVM's IP address instead of its DNS name. We recommend using the DNS name to mount clients to second-generation file systems because it helps ensure that your clients are balanced across your file system's HA pairs. 

   ```
   sudo mount -t nfs -o nfsvers=4.1 198.51.100.1:/vol1 /fsxontap
   ```


# Provisioning iSCSI for Linux


FSx for ONTAP supports the iSCSI protocol. You need to provision iSCSI on both the Linux client and your file system in order to use the iSCSI protocol to transport data between clients and your file system. The iSCSI protocol is available on all file systems that have 6 or fewer [high-availability (HA) pairs](HA-pairs.md).

There are three main steps to the process of configuring iSCSI on your Amazon FSx for NetApp ONTAP file system, which are covered in the following procedures:

1. Install and configure the iSCSI client on the Linux host.

1. Configure iSCSI on the file system's SVM.
   + Create an iSCSI initiator group.
   + Map the initiator group to the LUN.

1. Mount an iSCSI LUN on the Linux client.

## Before you begin


Before you begin the process of configuring your file system for iSCSI, make sure that you've completed the following items.
+ Create an FSx for ONTAP file system. For more information, see [Creating file systems](creating-file-systems.md).
+ Create an iSCSI LUN on the file system. For more information, see [Creating an iSCSI LUN](create-iscsi-lun.md).
+ Create an EC2 instance running the Amazon Linux 2 Amazon Machine Image (AMI) in the same VPC as the file system. This is the Linux host on which you will configure iSCSI and access your file data.

  Although it's beyond the scope of these procedures, if the host is located in a different VPC, you can use VPC peering or AWS Transit Gateway to grant other VPCs access to the volume's iSCSI endpoints. For more information, see [Accessing data from outside the deployment VPC](supported-fsx-clients.md#access-from-outside-deployment-vpc).
+ Configure the Linux host's VPC security groups to allow inbound and outbound traffic as described in [File System Access Control with Amazon VPC](limit-access-security-groups.md).
+ Obtain the credentials for the ONTAP user with `fsxadmin` privileges that you will use to access the ONTAP CLI. For more information, see [ONTAP roles and users](roles-and-users.md).
+ Make sure that the Linux host that you will configure for iSCSI and the FSx for ONTAP file system are located in the same VPC and AWS account.
+ We recommend that the EC2 instance be in the same availability zone as your file system's preferred subnet, as shown in the following graphic.  
![\[Image showing an Amazon FSx for NetApp ONTAP file system with an iSCSI LUN and an Amazon EC2 instance located in the same availability zone as that of the file system's preferred subnet.\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/fsx-ontap-iscsi-mnt-client.png)

If your EC2 instance runs a different Linux AMI than Amazon Linux 2, some of the utilities used in these procedures and examples might already be installed, and you might use different commands to install required packages. Aside from installing packages, the commands used in this section are valid for other EC2 Linux AMIs.

**Topics**
+ [

## Before you begin
](#iscsi-linux-byb)
+ [

## Install and configure iSCSI on the Linux host
](#configure-iscsi-on-linux-client)
+ [

## Configure iSCSI on the FSx for ONTAP file system
](#configure-iscsi-on-fsx-ontap)
+ [

## Mount an iSCSI LUN on your Linux client
](#mount-iscsi-lun-on-linux-client)

## Install and configure iSCSI on the Linux host


**To install the iSCSI client**

1. Confirm that `iscsi-initiator-utils` and `device-mapper-multipath` are installed on your Linux device. Connect to your Linux instance using an SSH client. For more information, see [ Connect to your Linux instance using SSH](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html).

1. Install `multipath` and the iSCSI client using the following command. Installing `multipath` is required if you want your client to automatically fail over between your file servers.

   ```
   ~$ sudo yum install -y device-mapper-multipath iscsi-initiator-utils
   ```

1. To facilitate a faster response when failing over between file servers with `multipath`, set the replacement timeout value in the `/etc/iscsi/iscsid.conf` file to `5` instead of the default value of `120`.

   ```
   ~$ sudo sed -i 's/node.session.timeo.replacement_timeout = .*/node.session.timeo.replacement_timeout = 5/' /etc/iscsi/iscsid.conf; sudo cat /etc/iscsi/iscsid.conf | grep node.session.timeo.replacement_timeout
   ```
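
   You can rehearse the `sed` expression above on a scratch copy before editing the real `/etc/iscsi/iscsid.conf`, to confirm exactly what it changes. This is an optional sketch; the default line below is a sample.

   ```
   # Try the timeout edit on a throwaway file instead of /etc/iscsi/iscsid.conf.
   TMP=$(mktemp)
   echo 'node.session.timeo.replacement_timeout = 120' > "$TMP"
   sed -i 's/node.session.timeo.replacement_timeout = .*/node.session.timeo.replacement_timeout = 5/' "$TMP"
   cat "$TMP"
   ```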

1. Start the iSCSI service.

   ```
   ~$ sudo service iscsid start
   ```

   Note that depending on your Linux version, you may have to use this command instead:

   ```
   ~$ sudo systemctl start iscsid
   ```

1. Confirm that the service is running using the following command.

   ```
   ~$ sudo systemctl status iscsid.service
   ```

   The system responds with the following output:

   ```
   iscsid.service - Open-iSCSI 
       Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled; vendor preset: disabled) 
       Active: active (running) since Fri 2021-09-02 00:00:00 UTC; 1min ago
       Docs: man:iscsid(8)
       man:iscsiadm(8)
       Process: 14658 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
       Main PID: 14660 (iscsid)
       CGroup: /system.slice/iscsid.service
       ├─14659 /usr/sbin/iscsid
       └─14660 /usr/sbin/iscsid
   ```

**To configure iSCSI on your Linux client**

1. To enable your clients to automatically fail over between your file servers, you must configure multipath. Use the following command:

   ```
   ~$ sudo mpathconf --enable --with_multipathd y
   ```

1. Determine the initiator name of your Linux host using the following command. The location of the initiator name depends on your iSCSI utility. If you are using `iscsi-initiator-utils`, the initiator name is located in the file `/etc/iscsi/initiatorname.iscsi`.

   ```
   ~$ sudo cat /etc/iscsi/initiatorname.iscsi
   ```

   The system responds with the initiator name.

   ```
   InitiatorName=iqn.1994-05.com.redhat:abcdef12345
   ```
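
   If you want to reuse the initiator name in later commands, you can parse it out of the file with a small helper. This is a sketch; the sample IQN below matches the output shown above, and in practice you would feed the helper the contents of `/etc/iscsi/initiatorname.iscsi`.

   ```
   # parse_initiator: print the IQN portion of an InitiatorName= line.
   parse_initiator() { printf '%s\n' "$1" | awk -F= '/^InitiatorName=/{print $2}'; }
   parse_initiator "InitiatorName=iqn.1994-05.com.redhat:abcdef12345"
   # Prints: iqn.1994-05.com.redhat:abcdef12345
   ```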

## Configure iSCSI on the FSx for ONTAP file system


1. Connect to the NetApp ONTAP CLI on the FSx for ONTAP file system on which you created the iSCSI LUN using the following command. For more information, see [Using the NetApp ONTAP CLI](managing-resources-ontap-apps.md#netapp-ontap-cli).

   ```
   ~$ ssh fsxadmin@your_management_endpoint_ip
   ```

1. Create the initiator group (`igroup`) using the NetApp ONTAP CLI [lun igroup create](https://docs.netapp.com/us-en/ontap-cli-9111/lun-igroup-create.html) command. An initiator group maps to iSCSI LUNs and controls which initiators (clients) have access to LUNs. Replace `host_initiator_name` with the initiator name from your Linux host that you retrieved in the previous procedure.

   ```
   ::> lun igroup create -vserver svm_name -igroup igroup_name -initiator host_initiator_name -protocol iscsi -ostype linux 
   ```

   If you want to make the LUNs mapped to this igroup available to multiple hosts, you can specify multiple initiator names separated with a comma. For more information, see [ lun igroup create](https://docs.netapp.com/us-en/ontap-cli-9111/lun-igroup-create.html) in the *NetApp ONTAP Documentation Center*. 

1. Confirm that the `igroup` exists using the [lun igroup show](https://docs.netapp.com/us-en/ontap-cli-9111/lun-igroup-show.html) command:

   ```
   ::> lun igroup show
   ```

   The system responds with the following output:

   ```
   Vserver   Igroup       Protocol OS Type  Initiators
   --------- ------------ -------- -------- ------------------------------------
   svm_name  igroup_name  iscsi    linux    iqn.1994-05.com.redhat:abcdef12345
   ```

1. This step assumes that you have already created an iSCSI LUN. If you have not, see [Creating an iSCSI LUN](create-iscsi-lun.md) for step-by-step instructions to do so.

   Create a mapping from the LUN you created to the igroup you created using the [lun mapping create](https://docs.netapp.com/us-en/ontap-cli-9111/lun-mapping-create.html) command, specifying the following attributes:
   + `svm_name` – The name of the storage virtual machine providing the iSCSI target. The host uses this value to reach the LUN.
   + `vol_name` – The name of the volume hosting the LUN.
   + `lun_name` – The name that you assigned to the LUN.
   + `igroup_name` – The name of the initiator group.
   + `lun_id` – The LUN ID integer is specific to the mapping, not to the LUN itself. The initiators in the igroup use this value as the Logical Unit Number when accessing the storage.

   ```
   ::> lun mapping create -vserver svm_name -path /vol/vol_name/lun_name -igroup igroup_name -lun-id lun_id
   ```

1. Use the [lun show](https://docs.netapp.com/us-en/ontap-cli-9111/lun-show.html) command to confirm that the LUN is created, online, and mapped.

   ```
   ::> lun show -path /vol/vol_name/lun_name -fields state,mapped,serial-hex
   ```

   The system responds with the following output:

   ```
    Vserver    Path                           serial-hex               state    mapped
    --------- ------------------------------- ------------------------ -------- --------
    svm_name  /vol/vol_name/lun_name          6c5742314e5d52766e796150 online   mapped
   ```

   Save the `serial_hex` value (in this example, it is `6c5742314e5d52766e796150`); you will use it in a later step to create a friendly name for the block device.
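
   The multipath WWID that you'll use in that later step is this serial-hex value with the NetApp prefix `3600a0980` prepended, as described in the friendly-name procedure below. You can construct it as follows (sample value shown):

   ```
   # Build the WWID for /etc/multipath.conf from the LUN's serial-hex value.
   SERIAL_HEX=6c5742314e5d52766e796150
   WWID="3600a0980${SERIAL_HEX}"
   echo "$WWID"
   # Prints: 3600a09806c5742314e5d52766e796150
   ```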

1. Use the [network interface show](https://docs.netapp.com/us-en/ontap-cli-9111/network-interface-show.html) command to retrieve the addresses of the `iscsi_1` and `iscsi_2` interfaces for the SVM in which you've created your iSCSI LUN.

   ```
   ::> network interface show -vserver svm_name
   ```

   The system responds with the following output:

   ```
               Logical               Status     Network            Current                    Current Is 
   Vserver     Interface             Admin/Oper Address/Mask       Node                       Port    Home
   ----------- ----------            ---------- ------------------ -------------              ------- ----
   svm_name
               iscsi_1               up/up      172.31.0.143/20    FSxId0123456789abcdef8-01  e0e     true
               iscsi_2               up/up      172.31.21.81/20    FSxId0123456789abcdef8-02  e0e     true
               nfs_smb_management_1 
                                     up/up      198.19.250.177/20  FSxId0123456789abcdef8-01  e0e     true
   3 entries were displayed.
   ```

   In this example, the IP address of `iscsi_1` is `172.31.0.143` and `iscsi_2` is `172.31.21.81`.

## Mount an iSCSI LUN on your Linux client


The process of mounting the iSCSI LUN on your Linux client involves three steps:

1. Discovering the target iSCSI nodes

1. Partitioning the iSCSI LUN

1. Mounting the iSCSI LUN on the client

These are covered in the following procedures.

**To discover the target iSCSI nodes**

1. On your Linux client, use the following command to discover the target iSCSI nodes using the IP address of `iscsi_1` (*iscsi_1_IP*).

   ```
   ~$ sudo iscsiadm --mode discovery --op update --type sendtargets --portal iscsi_1_IP
   ```

   ```
   172.31.0.143:3260,1029 iqn.1992-08.com.netapp:sn.9cfa2c41207a11ecac390182c38bc256:vs.3
   172.31.21.81:3260,1028 iqn.1992-08.com.netapp:sn.9cfa2c41207a11ecac390182c38bc256:vs.3
   ```

   In this example, `iqn.1992-08.com.netapp:sn.9cfa2c41207a11ecac390182c38bc256:vs.3` corresponds to the `target_initiator` for the iSCSI LUN in the preferred availability zone.
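
   Because the target IQN is the last whitespace-separated field on each discovery line, you can extract it with shell parameter expansion. This is an optional sketch using the sample line from the output above.

   ```
   # Pull the target IQN (target_initiator) out of a discovery output line.
   DISCOVERY_LINE="172.31.0.143:3260,1029 iqn.1992-08.com.netapp:sn.9cfa2c41207a11ecac390182c38bc256:vs.3"
   TARGET_INITIATOR=${DISCOVERY_LINE##* }
   echo "$TARGET_INITIATOR"
   # Prints: iqn.1992-08.com.netapp:sn.9cfa2c41207a11ecac390182c38bc256:vs.3
   ```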

1. (Optional) To drive higher throughput than the Amazon EC2 single client maximum of 5 Gbps (625 MBps) to your iSCSI LUN, follow the procedures described in [Amazon EC2 instance network bandwidth](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-network-bandwidth.html) in the *Amazon Elastic Compute Cloud User Guide for Linux Instances* to establish additional sessions for greater throughput.

   The following command establishes 8 sessions per initiator per ONTAP node in each availability zone, enabling the client to drive up to 40 Gbps (5,000 MBps) of aggregate throughput to the iSCSI LUN.

   ```
   ~$ sudo iscsiadm --mode node -T target_initiator --op update -n node.session.nr_sessions -v 8
   ```
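
   As a quick sanity check on the figures above, 8 sessions at the 5 Gbps single-flow maximum yields the 40 Gbps aggregate:

   ```
   # 8 sessions x 5 Gbps per session = aggregate client throughput.
   SESSIONS=8
   PER_SESSION_GBPS=5
   echo "$(( SESSIONS * PER_SESSION_GBPS )) Gbps"
   # Prints: 40 Gbps
   ```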

1. Log into the target initiators. Your iSCSI LUNs are presented as available disks.

   ```
   ~$ sudo iscsiadm --mode node -T target_initiator --login
   ```

   ```
   Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.9cfa2c41207a11ecac390182c38bc256:vs.3, portal: 172.31.14.66,3260] (multiple)
   Login to [iface: default, target: iqn.1992-08.com.netapp:sn.9cfa2c41207a11ecac390182c38bc256:vs.3, portal: 172.31.14.66,3260] successful.
   ```

   The output above is truncated; you should see one `Logging in` and one `Login successful` response for each session on each file server. In the case of 4 sessions per node, there will be 8 `Logging in` and 8 `Login successful` responses.

1. Use the following command to verify that `dm-multipath` has identified and merged the iSCSI sessions, showing a single LUN with multiple paths. There should be an equal number of devices listed as `active` and listed as `enabled`.

   ```
   ~$ sudo multipath -ll
   ```

   In the output, the disk name is formatted as `dm-xyz`, where `xyz` is an integer. If there are no other multipath disks, this value is `dm-0`.

   ```
   3600a09806c5742314e5d52766e79614f dm-xyz NETAPP  ,LUN C-Mode      
   size=10G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
   |-+- policy='service-time 0' prio=50 status=active
   | |- 0:0:0:1 sda     8:0   active ready running
   | |- 1:0:0:1 sdc     8:32  active ready running
   | |- 3:0:0:1 sdg     8:96  active ready running
   | `- 4:0:0:1 sdh     8:112 active ready running
   `-+- policy='service-time 0' prio=10 status=enabled
     |- 2:0:0:1 sdb     8:16  active ready running
     |- 7:0:0:1 sdf     8:80  active ready running
     |- 6:0:0:1 sde     8:64  active ready running
     `- 5:0:0:1 sdd     8:48  active ready running
   ```

   Your block device is now connected to your Linux client. It is located under the path `/dev/dm-xyz`. You should not use this path for administrative purposes; instead, use the symbolic link that is under the path `/dev/mapper/wwid`, where `wwid` is a unique identifier for your LUN that is consistent across devices. In the next step, you’ll provide a friendly name for the `wwid` so you can distinguish it from other multipathed disks.

**To assign the block device a friendly name**

1. To provide your device a friendly name, create an alias in the `/etc/multipath.conf` file. To do this, add the following entry to the file using your preferred text editor, replacing the following placeholders:
   + Replace `serial_hex` with the value that you saved in the [Configure iSCSI on the FSx for ONTAP file system](#configure-iscsi-on-fsx-ontap) procedure.
   + Add the prefix `3600a0980` to the `serial_hex` value as shown in the example. This is a unique preamble for the NetApp ONTAP distribution that Amazon FSx for NetApp ONTAP uses.
   + Replace `device_name` with the friendly name you want to use for your device.

   ```
   multipaths {
       multipath {
           wwid 3600a0980serial_hex
           alias device_name
       }
   }
   ```

   As an alternative, you can copy and save the following script as a bash file, such as `multipath_alias.sh`. You can run the script with sudo privileges, replacing `serial_hex` (without the 3600a0980 prefix) and `device_name` with your respective serial number and the desired friendly name. This script searches for an uncommented `multipaths` section in the `/etc/multipath.conf` file. If one exists, it appends a `multipath` entry to that section; otherwise, it will create a new `multipaths` section with a `multipath` entry for your block device.

   ```
   #!/bin/bash
   SN=serial_hex
   ALIAS=device_name
   CONF=/etc/multipath.conf
   grep -q '^multipaths {' $CONF
   UNCOMMENTED=$?
   if [ $UNCOMMENTED -eq 0 ]
   then
           sed -i '/^multipaths {/a\\tmultipath {\n\t\twwid 3600a0980'"${SN}"'\n\t\talias '"${ALIAS}"'\n\t}\n' $CONF
   else
           printf "multipaths {\n\tmultipath {\n\t\twwid 3600a0980$SN\n\t\talias $ALIAS\n\t}\n}" >> $CONF
   fi
   ```
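
   You can also dry-run the script's logic against a throwaway file to inspect the stanza it produces before editing the real `/etc/multipath.conf`. The serial number is the sample value from earlier in this section, and the alias is a hypothetical friendly name.

   ```
   # Rehearse the alias entry on a temporary file instead of /etc/multipath.conf.
   CONF=$(mktemp)
   SN=6c5742314e5d52766e796150
   ALIAS=fsx_lun0   # hypothetical friendly name
   if grep -q '^multipaths {' "$CONF"; then
       sed -i '/^multipaths {/a\\tmultipath {\n\t\twwid 3600a0980'"${SN}"'\n\t\talias '"${ALIAS}"'\n\t}\n' "$CONF"
   else
       printf "multipaths {\n\tmultipath {\n\t\twwid 3600a0980$SN\n\t\talias $ALIAS\n\t}\n}\n" >> "$CONF"
   fi
   cat "$CONF"
   ```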

1. Restart the `multipathd` service for the changes to `/etc/multipath.conf` to take effect.

   ```
   ~$ sudo systemctl restart multipathd.service
   ```

**To partition the LUN**

The next step is to partition your LUN using `fdisk` and then create a file system on the new partition.

1. Use the following command to verify that the path to your `device_name` is present.

   ```
   ~$ ls /dev/mapper/device_name
   ```

   ```
   /dev/device_name
   ```

1. Partition the disk using `fdisk`. You’ll enter an interactive prompt. Enter the options in the order shown. You can make multiple partitions by using a value smaller than the last sector (`20971519` in this example).
**Note**  
The `Last sector` value will vary depending on the size of your iSCSI LUN (10GB in this example).

   ```
   ~$ sudo fdisk /dev/mapper/device_name
   ```

   The `fdisk` interactive prompt starts.

   ```
   Welcome to fdisk (util-linux 2.30.2). 
   
   Changes will remain in memory only, until you decide to write them. 
   Be careful before using the write command. 
   
   Device does not contain a recognized partition table. 
   Created a new DOS disklabel with disk identifier 0x66595cb0. 
   
   Command (m for help): n 
   Partition type 
      p primary (0 primary, 0 extended, 4 free) 
      e extended (container for logical partitions) 
   Select (default p): p 
   Partition number (1-4, default 1): 1 
   First sector (2048-20971519, default 2048): 2048 
   Last sector, +sectors or +size{K,M,G,T,P} (2048-20971519, default 20971519): 20971519
                                       
    Created a new partition 1 of type 'Linux' and of size 10 GiB.
   Command (m for help): w
   The partition table has been altered.
   Calling ioctl() to re-read partition table. 
   Syncing disks.
   ```

   After entering `w`, your new partition `/dev/mapper/partition_name` becomes available. The partition name is the device name followed by the partition number (for example, `device_name1`). `1` was the partition number used in the `fdisk` command in the previous step.
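
   For example, with a friendly device name assigned in the previous procedure, the resulting partition path can be constructed like this (the alias is hypothetical):

   ```
   # Partition path = /dev/mapper/ + device alias + partition number.
   DEVICE_NAME=fsx_lun          # hypothetical alias from /etc/multipath.conf
   PARTITION_NUMBER=1
   echo "/dev/mapper/${DEVICE_NAME}${PARTITION_NUMBER}"
   # Prints: /dev/mapper/fsx_lun1
   ```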

1. Create your file system using `/dev/mapper/partition_name` as the path.

   ```
   ~$ sudo mkfs.ext4 /dev/mapper/partition_name
   ```

   The system responds with the following output:

   ```
   mke2fs 1.42.9 (28-Dec-2013)
   Discarding device blocks: done 
   Filesystem label=
   OS type: Linux
   Block size=4096 (log=2)
   Fragment size=4096 (log=2)
   Stride=0 blocks, Stripe width=16 blocks
   655360 inodes, 2621184 blocks
   131059 blocks (5.00%) reserved for the super user
   First data block=0
   Maximum filesystem blocks=2151677952
   80 block groups
   32768 blocks per group, 32768 fragments per group
   8192 inodes per group
   Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
   Allocating group tables: done 
   Writing inode tables: done 
   Creating journal (32768 blocks): done
   Writing superblocks and filesystem accounting information: done
   ```

**To mount the LUN on the Linux client**

1. Create a directory at *directory_path*/*mount_point* to use as the mount point for your file system.

   ```
   ~$ sudo mkdir /directory_path/mount_point
   ```

1. Mount the file system using the following command.

   ```
   ~$ sudo mount -t ext4 /dev/mapper/partition_name /directory_path/mount_point
   ```

1. (Optional) If you want to give a specific user ownership of the mount directory, replace *`username`* with the owner's username.

   ```
   ~$ sudo chown username:username /directory_path/mount_point
   ```

1. (Optional) Verify that you can read from and write data to the file system.

   ```
   ~$ echo "Hello world!" > /directory_path/mount_point/HelloWorld.txt
   ~$ cat /directory_path/mount_point/HelloWorld.txt
   Hello world!
   ```

   You have successfully created and mounted an iSCSI LUN on your Linux client.

# Provisioning iSCSI for Windows


FSx for ONTAP supports the iSCSI protocol. You need to provision iSCSI on both the Windows client and the SVM and volume in order to use the iSCSI protocol to transport data between clients and your file system. The iSCSI protocol is available on all file systems that have 6 or fewer [high-availability (HA) pairs](HA-pairs.md).

The examples presented in these procedures show how to provision the iSCSI protocol on the client and the FSx for ONTAP file system, and use the following setup:
+ The iSCSI LUN that is getting mounted to a Windows host is already created. For more information, see [Creating an iSCSI LUN](create-iscsi-lun.md).
+ The Microsoft Windows host that is mounting the iSCSI LUN is an Amazon EC2 instance running a Microsoft Windows Server 2019 Amazon Machine Image (AMI). It has VPC security groups configured to allow inbound and outbound traffic as described in [File System Access Control with Amazon VPC](limit-access-security-groups.md).

  You may be using a different Microsoft Windows AMI in your setup.
+ The client and the file system are located in the same VPC and AWS account. If the client is located in another VPC, you can use VPC peering or AWS Transit Gateway to grant other VPCs access to the iSCSI endpoints. For more information, see [Accessing data from outside the deployment VPC](supported-fsx-clients.md#access-from-outside-deployment-vpc).

  We recommend that the EC2 instance be in the same availability zone as your file system's preferred subnet, as shown in the following graphic.

![\[Image showing an Amazon FSx for NetApp ONTAP file system with an iSCSI LUN and an Amazon EC2 instance located in the same availability zone as that of the file system's preferred subnet.\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/fsx-ontap-iscsi-mnt-client.png)


**Topics**
+ [

## Configure iSCSI on the Windows client
](#configure-iscsi-win-client)
+ [

## Configure iSCSI on the FSx for ONTAP file system
](#configure-iscsi-on-ontap-win)
+ [

## Mount an iSCSI LUN on the Windows client
](#configure-iscsi-on-fsx)
+ [

## Validating your iSCSI configuration
](#validate-iscsi-windows)

## Configure iSCSI on the Windows client


1. Use Windows Remote Desktop to connect to the Windows client on which you want to mount the iSCSI LUN. For more information, see [Connect to your Windows instance using RDP](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/connecting_to_windows_instance.html#connect-rdp) in the *Amazon Elastic Compute Cloud User Guide*.

1. Open Windows PowerShell as an Administrator. Use the following commands to enable iSCSI on your Windows instance and configure the iSCSI service to start automatically.

   ```
   PS C:\> Start-Service MSiSCSI 
   PS C:\> Set-Service -Name msiscsi -StartupType Automatic
   ```

1. Retrieve the initiator name of your Windows instance. You’ll use this value in configuring iSCSI on your FSx for ONTAP file system using the NetApp ONTAP CLI.

   ```
   PS C:\> (Get-InitiatorPort).NodeAddress
   ```

   The system responds with the initiator port:

   ```
   iqn.1991-05.com.microsoft:ec2amaz-abc123d 
   ```

1. To enable your clients to automatically fail over between your file servers, you need to install `Multipath-IO` (MPIO) on your Windows instance. Use the following command:

   ```
   PS C:\> Install-WindowsFeature Multipath-IO
   ```

1. Restart your Windows instance after the `Multipath-IO` installation has completed. Keep your Windows instance open to perform steps for mounting the iSCSI LUN in a section that follows.

## Configure iSCSI on the FSx for ONTAP file system


1. To access the ONTAP CLI, establish an SSH session on the management port of the Amazon FSx for NetApp ONTAP file system or SVM by running the following command. Replace `management_endpoint_ip` with the IP address of the file system's management port.

   ```
   [~]$ ssh fsxadmin@management_endpoint_ip
   ```

   For more information, see [Managing file systems with the ONTAP CLI](managing-resources-ontap-apps.md#fsxadmin-ontap-cli). 

1. Using the ONTAP CLI [lun igroup create](https://docs.netapp.com/us-en/ontap-cli-9141/lun-igroup-create.html) command, create the initiator group, or `igroup`. An initiator group maps to iSCSI LUNs and controls which initiators (clients) have access to LUNs. Replace `host_initiator_name` with the initiator name from your Windows host that you retrieved in the previous procedure.

   ```
   ::> lun igroup create -vserver svm_name -igroup igroup_name -initiator host_initiator_name -protocol iscsi -ostype windows
   ```

   To make the LUNs mapped to this `igroup` available to multiple hosts, you can specify multiple initiator names separated with a comma. For more information, see [lun igroup create](https://docs.netapp.com/us-en/ontap-cli-9141/lun-igroup-create.html) in the *NetApp ONTAP Documentation Center*.

1. Confirm that the `igroup` was created successfully using the [lun igroup show](https://docs.netapp.com/us-en/ontap-cli-9141/lun-igroup-show.html) ONTAP CLI command:

   ```
   ::> lun igroup show
   ```

   The system responds with the following output:

   ```
   Vserver    Igroup        Protocol OS Type  Initiators 
   ---------  ------------  -------- -------- ------------------------------------ 
    svm_name   igroup_name   iscsi    windows  iqn.1991-05.com.microsoft:ec2amaz-abc123d
   ```

   With the `igroup` created, you are ready to create LUNs and map them to the `igroup`.

1. This step assumes that you have already created an iSCSI LUN. If you have not, see [Creating an iSCSI LUN](create-iscsi-lun.md) for step-by-step instructions to do so.

   Create a LUN mapping from the LUN to your new `igroup`.

   ```
   ::> lun mapping create -vserver svm_name -path /vol/vol_name/lun_name -igroup igroup_name -lun-id lun_id
   ```

1. Confirm that the LUN is created, online, and mapped with the following command:

   ```
   ::> lun show -path /vol/vol_name/lun_name 
   Vserver     Path                            State   Mapped   Type     Size 
   ---------   ------------------------------- ------- -------- -------- -------- 
   svm_name    /vol/vol_name/lun_name          online  mapped   windows  10GB
   ```

   You are now ready to add the iSCSI target on your Windows instance.

1. Retrieve the IP addresses of the `iscsi_1` and `iscsi_2` interfaces for your SVM using the following command:

   ```
   ::> network interface show -vserver svm_name
   ```

   ```
               Logical    Status     Network            Current       Current Is 
   Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home 
   ----------- ---------- ---------- ------------------ ------------- ------- ---- 
   svm_name 
               iscsi_1    up/up      172.31.0.143/20    FSxId0123456789abcdef8-01 
                                                                      e0e     true 
               iscsi_2    up/up      172.31.21.81/20    FSxId0123456789abcdef8-02 
                                                                      e0e     true 
               nfs_smb_management_1 
                          up/up      198.19.250.177/20  FSxId0123456789abcdef8-01 
                                                                      e0e     true 
   3 entries were displayed.
   ```

   In this example, the IP address of `iscsi_1` is `172.31.0.143` and `iscsi_2` is `172.31.21.81`.

## Mount an iSCSI LUN on the Windows client


1. On your Windows instance, open a PowerShell terminal as an Administrator.

1. You will create a `.ps1` script that does the following:
   + Connects to each of your file system’s iSCSI interfaces.
   + Adds and configures MPIO for iSCSI.
   + Establishes 8 sessions for each iSCSI connection, which enables the client to drive up to 40 Gbps (5,000 MBps) of aggregate throughput to the iSCSI LUN. Having 8 sessions ensures that a single client can drive the full 4,000 MBps of the highest-level FSx for ONTAP throughput capacity. You can optionally raise or lower the number of sessions (each session provides up to 625 MBps of throughput) by modifying the `RecommendedConnectionCount` variable. For more information, see [Amazon EC2 instance network bandwidth](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-instance-network-bandwidth.html) in the *Amazon Elastic Compute Cloud User Guide for Windows Instances*.

   Copy the following set of commands into a file to create the `.ps1` script.
   + Replace `iscsi_1` and `iscsi_2` with the IP addresses you retrieved in the previous step.
   + Replace `ec2_ip` with the IP address of your Windows instance.

   ```
   Write-Host "Starting iSCSI connection setup..."
        $TargetPortalAddresses = @("iscsi_1","iscsi_2"); $LocaliSCSIAddress = "ec2_ip"
        $RecommendedConnectionCount = 8
   
        Foreach ($TargetPortalAddress in $TargetPortalAddresses) {
            New-IscsiTargetPortal -TargetPortalAddress $TargetPortalAddress -TargetPortalPortNumber 3260 -InitiatorPortalAddress $LocaliSCSIAddress
        }
   
        New-MSDSMSupportedHW -VendorId MSFT2005 -ProductId iSCSIBusType_0x9
   
        $currentMPIOSettings = Get-MPIOSetting
        if ($currentMPIOSettings.PathVerificationState -ne 'Enabled') {
            Write-Host "Setting MPIO path verification state to Enabled"; Set-MPIOSetting -NewPathVerificationState Enabled
        } else { Write-Host "MPIO path verification state already Enabled" }
   
        $portalConnectionCounts = @{}
        foreach ($TargetPortalAddress in $TargetPortalAddresses) { $portalConnectionCounts[$TargetPortalAddress] = 0 }
   
        $sessions = Get-IscsiSession
        if ($sessions) {
            foreach ($session in $sessions) {
                if ($session.IsConnected) {
                    $targetPortal = (Get-IscsiTargetPortal -iSCSISession $session).TargetPortalAddress
                    if ($portalConnectionCounts.ContainsKey($targetPortal)) { $portalConnectionCounts[$targetPortal]++ }
                }
            }
        }
   
        foreach ($TargetPortalAddress in $TargetPortalAddresses) {
            $existingCount = $portalConnectionCounts[$TargetPortalAddress]; $remainingConnections = $RecommendedConnectionCount - $existingCount
            Write-Host "Portal $TargetPortalAddress has $existingCount existing connections, $remainingConnections remaining (max recommended: $RecommendedConnectionCount)"
            if ($remainingConnections -gt 0) {
                Write-Host "Creating $remainingConnections connections for portal $TargetPortalAddress"
                1..$remainingConnections | ForEach-Object {
                    Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -TargetPortalAddress $TargetPortalAddress -InitiatorPortalAddress $LocaliSCSIAddress -IsPersistent $true
                }
            } else { Write-Host "Maximum connections ($RecommendedConnectionCount) reached for portal $TargetPortalAddress" }
        }
   
        Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
   ```

1. Launch the Windows Disk Management application. Open the Windows Run dialog box, enter `diskmgmt.msc`, and press **Enter**. The Disk Management application opens.  
![\[The Windows Disk Management window is displayed.\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/DiskMgmt.png)

1. Locate the unallocated disk. This is the iSCSI LUN. In this example, Disk 1 is the iSCSI disk, and it is offline.  
![\[The panel that displays when the cursor is placed over Disk 1.\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/GoOnline.png)

   Bring the volume online by placing the cursor over **Disk 1**, right-clicking, and then choosing **Online**.
**Note**  
You can modify the storage area network (SAN) policy so that new volumes are automatically brought online. For more information, see [ SAN policies](https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/san) in the *Microsoft Windows Server Command Reference*.

1. To initialize the disk, place the cursor over **Disk 1**, right-click, and choose **Initialize**. The Initialize dialog box appears. Choose **OK** to initialize the disk.

1. Format the disk as you would normally. After formatting is complete, the iSCSI drive appears as a usable drive on the Windows client.

## Validating your iSCSI configuration


We have provided a script to check that your iSCSI setup is properly configured. The script examines parameters such as session count, node distribution, and Multipath I/O (MPIO) status. The following task explains how to install and use the script. <a name="validate-iscsi-windows-procedure"></a>

**To validate your iSCSI configuration**

1. Open a Windows PowerShell window.

1. Download the script using the following command.

   ```
   PS C:\> Invoke-WebRequest "https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/samples/CheckiSCSI.zip" -OutFile "CheckiSCSI.zip"
   ```

1. Expand the zip file using the following command.

   ```
   PS C:\> Expand-Archive -Path ".\CheckiSCSI.zip" -DestinationPath "./"
   ```

1. Run the script using the following command.

   ```
   PS C:\> ./CheckiSCSI.ps1
   ```

1. Review the output to understand your configuration's current state. The following example demonstrates a successful iSCSI configuration.

   ```
   PS C:\> ./CheckiSCSI.ps1
   
   This script checks the iSCSI configuration on the local instance.
   It will provide information about the number of connected sessions, connected file servers, and MPIO status.
                               
   MPIO is installed on this server.
   
   MPIO Load Balance Policy is set to Round Robin (RR).
   Initiator: 'iqn.1991-05.com.microsoft:ec2amaz-d2cebnb'
   to Target: 'iqn.1992-08.com.netapp:sn.13266b10e61411ee8bc0c76ad263d613:vs.3'
   has 16 total sessions (16 active, 0 non-active)
   spread across 2 node(s).
   MPIO: Yes
   ```

# Provisioning NVMe/TCP for Linux


FSx for ONTAP supports the Non-Volatile Memory Express over TCP (NVMe/TCP) block storage protocol. With NVMe/TCP, you use the ONTAP CLI to provision namespaces and subsystems and then map the namespaces to subsystems, similar to the way LUNs are provisioned and mapped to initiator groups (igroups) for iSCSI. The NVMe/TCP protocol is available on second-generation file systems that have 6 or fewer [high-availability (HA) pairs](HA-pairs.md).

**Note**  
FSx for ONTAP file systems use an SVM's iSCSI endpoints for both iSCSI and NVMe/TCP block storage protocols.

There are three main steps to the process of configuring NVMe/TCP on your Amazon FSx for NetApp ONTAP file system, which are covered in the following procedures:

1. Install and configure the NVMe client on the Linux host.

1. Configure NVMe on the file system's SVM.
   + Create an NVMe namespace.
   + Create an NVMe subsystem.
   + Map the namespace to the subsystem.
   + Add the client NQN to the subsystem.

1. Mount an NVMe device on the Linux client.

## Before you begin


Before you begin the process of configuring your file system for NVMe/TCP, you need to have completed the following items.
+ Create an FSx for ONTAP file system. For more information, see [Creating file systems](creating-file-systems.md).
+ Create an EC2 instance running Red Hat Enterprise Linux (RHEL) 9.3 in the same VPC as the file system. This is the Linux host on which you will configure NVMe and access your file data using NVMe/TCP for Linux.

  These procedures assume that the host and the file system are in the same VPC. If the host is located in another VPC, you can use VPC peering or AWS Transit Gateway to grant other VPCs access to the volume's iSCSI endpoints. For more information, see [Accessing data from outside the deployment VPC](supported-fsx-clients.md#access-from-outside-deployment-vpc).
+ Configure the Linux host's VPC security groups to allow inbound and outbound traffic as described in [File System Access Control with Amazon VPC](limit-access-security-groups.md).
+ Obtain the credentials for the ONTAP user with `fsxadmin` privileges that you will use to access the ONTAP CLI. For more information, see [ONTAP roles and users](roles-and-users.md).
+ Make sure that the Linux host that you will configure for NVMe and the FSx for ONTAP file system that it will access are located in the same VPC and AWS account.
+ We recommend that the EC2 instance be in the same Availability Zone as your file system's preferred subnet.

If your EC2 instance runs a different Linux AMI than RHEL 9.3, some of the utilities used in these procedures and examples might already be installed, and you might use different commands to install required packages. Aside from installing packages, the commands used in this section are valid for other EC2 Linux AMIs.

**Topics**
+ [

## Before you begin
](#nvme-tcp-linux-byb)
+ [

## Install and configure NVMe on the Linux host
](#configure-nvme-on-rhel93)
+ [

## Configure NVMe on the FSx for ONTAP file system
](#configure-nvme-on-svm)
+ [

## Mount an NVMe device on your Linux client
](#add-nvme-on-rhel93-host)

## Install and configure NVMe on the Linux host


**To install the NVMe client**

1. Connect to your Linux instance using an SSH client. For more information, see [Connect to your Linux instance from Linux or macOS using SSH](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-linux-inst-ssh.html).

1. Install `nvme-cli` using the following command:

   ```
   ~$ sudo yum install -y nvme-cli
   ```

1. Load the `nvme-tcp` module onto the host:

   ```
   ~$ sudo modprobe nvme-tcp
   ```

1. Get the Linux host's NVMe Qualified Name (NQN) by using the following command:

   ```
   ~$ cat /etc/nvme/hostnqn
   nqn.2014-08.org.nvmexpress:uuid:9ed5b327-b9fc-4cf5-97b3-1b5d986345d1
   ```

   Record the response for use in a later step.
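
   As an optional sanity check (this sketch is not part of the procedure itself), you can verify that the recorded value looks like a well-formed uuid-style NQN before using it. The NQN below is the example value from the output above:

   ```shell
   # Example NQN copied from the /etc/nvme/hostnqn output above
   nqn="nqn.2014-08.org.nvmexpress:uuid:9ed5b327-b9fc-4cf5-97b3-1b5d986345d1"

   # A uuid-style host NQN is the fixed prefix "nqn.2014-08.org.nvmexpress:uuid:"
   # followed by a standard 8-4-4-4-12 hexadecimal UUID
   if echo "$nqn" | grep -Eq '^nqn\.2014-08\.org\.nvmexpress:uuid:[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; then
       echo "NQN format looks valid"
   fi
   ```

   If `/etc/nvme/hostnqn` does not exist on your host, recent versions of `nvme-cli` can generate one with `sudo nvme gen-hostnqn`.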

## Configure NVMe on the FSx for ONTAP file system


**To configure NVMe on the file system**

Connect to the NetApp ONTAP CLI on the FSx for ONTAP file system on which you plan to create the NVMe device(s).

1. To access the ONTAP CLI, establish an SSH session on the management port of the Amazon FSx for NetApp ONTAP file system or SVM by running the following command. Replace `management_endpoint_ip` with the IP address of the file system's management port.

   ```
   [~]$ ssh fsxadmin@management_endpoint_ip
   ```

   For more information, see [Managing file systems with the ONTAP CLI](managing-resources-ontap-apps.md#fsxadmin-ontap-cli). 

1. Create a new volume on the SVM that you are using to access the NVMe interface.

   ```
   ::> vol create -vserver fsx -volume nvme_vol1 -aggregate aggr1 -size 1t
        [Job 597] Job succeeded: Successful
   ```

1. Create the NVMe namespace `ns_1` using the [vserver nvme namespace create](https://docs.netapp.com/us-en/ontap-cli-9141/vserver-nvme-namespace-create.html) NetApp ONTAP CLI command. A namespace maps to initiators (clients) and controls which initiators have access to NVMe devices.

   ```
   ::> vserver nvme namespace create -vserver fsx -path /vol/nvme_vol1/ns_1 -size 100g -ostype linux
   Created a namespace of size 100GB (107374182400).
   ```

1. Create the NVMe subsystem using the [vserver nvme subsystem create](https://docs.netapp.com/us-en/ontap-cli-9141/vserver-nvme-subsystem-create.html) NetApp ONTAP CLI command.

   ```
   ::> vserver nvme subsystem create -vserver fsx -subsystem sub_1 -ostype linux
   ```

1. Map the namespace to the subsystem you just created.

   ```
   ::> vserver nvme subsystem map add -vserver fsx -subsystem sub_1 -path /vol/nvme_vol1/ns_1
   ```

1. Add the client to the subsystem using the NQN that you retrieved previously.

   ```
   ::> vserver nvme subsystem host add -subsystem sub_1 -host-nqn nqn.2014-08.org.nvmexpress:uuid:ec21b083-1860-d690-1f29-44528e4f4e0e -vserver fsx
   ```

   If you want to make the devices mapped to this subsystem available to multiple hosts, you can specify multiple initiator names in a comma separated list. For more information, see [vserver nvme subsystem host add](https://docs.netapp.com/us-en/ontap-cli-9141/vserver-nvme-subsystem-host-add.html) in the NetApp ONTAP Docs.
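   For example, a sketch of the same command registering two hosts at once (both NQNs are hypothetical placeholders):

   ```
   ::> vserver nvme subsystem host add -subsystem sub_1 -host-nqn nqn.2014-08.org.nvmexpress:uuid:host-1-uuid,nqn.2014-08.org.nvmexpress:uuid:host-2-uuid -vserver fsx
   ```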

1. Confirm that the namespace exists using the [vserver nvme namespace show](https://docs.netapp.com/us-en/ontap-cli-9141/vserver-nvme-namespace-show.html) ONTAP CLI command:

   ```
   ::> vserver nvme namespace show -vserver fsx -instance
   Vserver Name: fsx
               Namespace Path: /vol/nvme_vol1/ns_1
                         Size: 100GB
                    Size Used: 90.59GB
                      OS Type: linux
                      Comment: 
                   Block Size: 4KB
                        State: online
            Space Reservation: false
   Space Reservations Honored: false
                 Is Read Only: false
                Creation Time: 5/20/2024 17:03:08
               Namespace UUID: c51793c0-8840-4a77-903a-c869186e74e3
                     Vdisk ID: 80d42c6f00000000187cca9
         Restore Inaccessible: false
      Inconsistent Filesystem: false
          Inconsistent Blocks: false
                       NVFail: false
   Node Hosting the Namespace: FsxId062e9bb6e05143fcb-01
                  Volume Name: nvme_vol1
                   Qtree Name: 
             Mapped Subsystem: sub_1
               Subsystem UUID: db526ec7-16ca-11ef-a612-d320bd5b74a9               
                 Namespace ID: 00000001h
                 ANA Group ID: 00000001h
                 Vserver UUID: 656d410a-1460-11ef-a612-d320bd5b74a9
                   Vserver ID: 3
                  Volume MSID: 2161388655
                  Volume DSID: 1029
                    Aggregate: aggr1
               Aggregate UUID: cfa8e6ee-145f-11ef-a612-d320bd5b74a9
    Namespace Container State: online
           Autodelete Enabled: false
             Application UUID: -
                  Application: -
     Has Metadata Provisioned: true
     
   1 entries were displayed.
   ```

1. Use the [network interface show](https://docs.netapp.com/us-en/ontap-cli-9141/network-interface-show.html) ONTAP CLI command to retrieve the addresses of the block storage interfaces for the SVM in which you've created your NVMe devices.

   ```
   ::> network interface show -vserver svm_name -data-protocol nvme-tcp
               Logical               Status     Network            Current                    Current Is 
   Vserver     Interface             Admin/Oper Address/Mask       Node                       Port    Home
   ----------- ----------            ---------- ------------------ -------------              ------- ----
   svm_name
               iscsi_1               up/up      172.31.16.19/20    FSxId0123456789abcdef8-01  e0e     true
               iscsi_2               up/up      172.31.26.134/20   FSxId0123456789abcdef8-02  e0e     true
   2 entries were displayed.
   ```
**Note**  
The `iscsi_1` LIF is used for both iSCSI and NVMe/TCP.

   In this example, the IP address of `iscsi_1` is `172.31.16.19` and the IP address of `iscsi_2` is `172.31.26.134`.

## Mount an NVMe device on your Linux client


The process of mounting the NVMe device on your Linux client involves three steps:

1. Discovering the NVMe nodes

1. Partitioning the NVMe device

1. Mounting the NVMe device on the client

These are covered in the following procedures.

**To discover the target NVMe nodes**

1. On your Linux client, use the following command to discover the target NVMe nodes. Replace *`iscsi_1_IP`* with `iscsi_1`’s IP address, and *`client_IP`* with the client's IP address.
**Note**  
`iscsi_1` and `iscsi_2` LIFs are used for both iSCSI and NVMe storage.

   ```
   ~$ sudo nvme discover -t tcp -w client_IP -a iscsi_1_IP
   ```

   ```
   Discovery Log Number of Records 4, Generation counter 11
   =====Discovery Log Entry 0======
   trtype:  tcp
   adrfam:  ipv4
   subtype: current discovery subsystem
   treq:    not specified
   portid:  0
   trsvcid: 8009
   subnqn:  nqn.1992-08.com.netapp:sn.656d410a146011efa612d320bd5b74a9:discovery
   traddr:  172.31.26.134
   eflags:  explicit discovery connections, duplicate discovery information
   sectype: none
   =====Discovery Log Entry 1======
   trtype:  tcp
   adrfam:  ipv4
   subtype: current discovery subsystem
   treq:    not specified
   portid:  1
   trsvcid: 8009
   subnqn:  nqn.1992-08.com.netapp:sn.656d410a146011efa612d320bd5b74a9:discovery
   traddr:  172.31.16.19
   eflags:  explicit discovery connections, duplicate discovery information
   sectype: none
   ```

1. (Optional) To drive higher throughput than the Amazon EC2 single-client maximum of 5 Gbps (625 MBps) to your NVMe device, follow the procedures described in [Amazon EC2 instance network bandwidth](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-network-bandwidth.html) in the *Amazon Elastic Compute Cloud User Guide for Linux Instances* to establish additional sessions.

1. Log into the target initiators with a controller loss timeout of at least 1800 seconds, again using `iscsi_1`’s IP address for *`iscsi_1_IP`* and the client's IP address for *`client_IP`*. Your NVMe devices are presented as available disks.

   ```
   ~$ sudo nvme connect-all -t tcp -w client_IP -a iscsi_1_IP -l 1800
   ```

1. Use the following command to verify that the NVMe stack has identified and merged the multiple sessions and configured multipathing. The command returns `Y` if the configuration was successful.

   ```
   ~$ cat /sys/module/nvme_core/parameters/multipath
   Y
   ```

1. Use the following commands to verify that the NVMe-oF setting `model` is set to `NetApp ONTAP Controller` and the load balancing `iopolicy` is set to `round-robin` for the respective ONTAP namespaces, which distributes I/O across all available paths:

   ```
   ~$ cat /sys/class/nvme-subsystem/nvme-subsys*/model
   Amazon Elastic Block Store              
   NetApp ONTAP Controller
   ~$ cat /sys/class/nvme-subsystem/nvme-subsys*/iopolicy
   numa
   round-robin
   ```

1. Use the following command to verify that the namespaces are created and correctly discovered on the host:

   ```
   ~$ sudo nvme list
   Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev  
   --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
   /dev/nvme0n1          /dev/ng0n1            vol05955547c003f0580 Amazon Elastic Block Store               0x1         25.77  GB /  25.77  GB    512   B +  0 B   1.0     
   /dev/nvme2n1          /dev/ng2n1            lWB12JWY/XLKAAAAAAAC NetApp ONTAP Controller                  0x1        107.37  GB / 107.37  GB      4 KiB +  0 B   FFFFFFFF
   ```

   The new device in the output is `/dev/nvme2n1`. This naming scheme may differ depending on your Linux installation.

1. Verify that the controller state of each path is live and has the correct Asymmetric Namespace Access (ANA) multipathing status:

   ```
   ~$ nvme list-subsys /dev/nvme2n1
   nvme-subsys2 - NQN=nqn.1992-08.com.netapp:sn.656d410a146011efa612d320bd5b74a9:subsystem.rhel
                  hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec2a70bf-3ab2-6cb0-f997-8730057ceb24
                  iopolicy=round-robin
   \
    +- nvme2 tcp traddr=172.31.26.134,trsvcid=4420,host_traddr=172.31.25.143,src_addr=172.31.25.143 live non-optimized
    +- nvme3 tcp traddr=172.31.16.19,trsvcid=4420,host_traddr=172.31.25.143,src_addr=172.31.25.143 live optimized
   ```

   In this example, the NVMe stack has automatically discovered your file system’s alternate LIF, `iscsi_2`, 172.31.26.134.

1. Verify that the NetApp plug-in displays the correct values for each ONTAP namespace device:

   ```
   ~$ sudo nvme netapp ontapdevices -o column
   Device           Vserver                   Namespace Path                                     NSID UUID                                   Size     
   ---------------- ------------------------- -------------------------------------------------- ---- -------------------------------------- ---------
   /dev/nvme2n1     fsx                       /vol/nvme_vol1/ns_1                                1    0441c609-3db1-4b0b-aa83-790d0d448ece   107.37GB
   ```

**To partition the device**

1. Use the following command to verify that the path to your device name `nvme2n1` is present.

   ```
   ~$ ls /dev/mapper/nvme2n1
   /dev/nvme2n1
   ```

1. Partition the disk using `fdisk`. You’ll enter an interactive prompt. Enter the options in the order shown. You can make multiple partitions by using a value smaller than the last sector (`20971519` in this example).
**Note**  
The `Last sector` value will vary depending on the size of your NVMe device (100 GiB in this example).

   ```
   ~$ sudo fdisk /dev/mapper/nvme2n1
   ```

   The `fdisk` interactive prompt starts.

   ```
   Welcome to fdisk (util-linux 2.37.4). 
   Changes will remain in memory only, until you decide to write them. 
   Be careful before using the write command. 
   
   Device does not contain a recognized partition table. 
   Created a new DOS disklabel with disk identifier 0x66595cb0. 
   
   Command (m for help): n
   Partition type 
      p primary (0 primary, 0 extended, 4 free) 
      e extended (container for logical partitions) 
   Select (default p): p
   Partition number (1-4, default 1): 1 
   First sector (256-26214399, default 256): 
   Last sector, +sectors or +size{K,M,G,T,P} (256-26214399, default 26214399): 20971519
                                       
   Created a new partition 1 of type 'Linux' and of size 100 GiB.
   Command (m for help): w
   The partition table has been altered.
   Calling ioctl() to re-read partition table. 
   Syncing disks.
   ```

   After entering `w`, your new partition `/dev/nvme2n1` becomes available. The *partition\_name* has the format *<device\_name>**<partition\_number>*. `1` was used as the partition number in the `fdisk` command in the previous step.

1. Create your file system using `/dev/nvme2n1` as the path.

   ```
   ~$ sudo mkfs.ext4 /dev/nvme2n1
   ```

   The system responds with the following output:

   ```
   mke2fs 1.46.5 (30-Dec-2021)
   Found a dos partition table in /dev/nvme2n1
   Proceed anyway? (y,N) y
   Creating filesystem with 26214400 4k blocks and 6553600 inodes
   Filesystem UUID: 372fb2fd-ae0e-4e74-ac06-3eb3eabd55fb
   Superblock backups stored on blocks: 
       32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
       4096000, 7962624, 11239424, 20480000, 23887872
   
   Allocating group tables: done                            
   Writing inode tables: done                            
   Creating journal (131072 blocks): done
   Writing superblocks and filesystem accounting information: done
   ```

**To mount the NVMe device on the Linux client**

1. Create a directory *`directory_path`* to use as the mount point for your file system on the Linux instance.

   ```
   ~$ sudo mkdir /directory_path/mount_point
   ```

1. Mount the file system using the following command.

   ```
   ~$ sudo mount -t ext4 /dev/nvme2n1 /directory_path/mount_point
   ```

1. (Optional) If you want to give a specific user ownership of the mount directory, replace *`username`* with the owner's username.

   ```
   ~$ sudo chown username:username /directory_path/mount_point
   ```

1. (Optional) Verify that you can read from and write data to the file system.

   ```
   ~$ echo "Hello world!" > /directory_path/mount_point/HelloWorld.txt
   ~$ cat /directory_path/mount_point/HelloWorld.txt
   Hello world!
   ```

   You have successfully created and mounted an NVMe device on your Linux client.
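
If you want the device to be remounted automatically after a reboot, you can optionally add an entry to `/etc/fstab`. The following is a sketch, not part of the procedure above, that uses the same placeholder device and mount point; `_netdev` defers the mount until networking is available (which NVMe/TCP requires), and `nofail` lets the instance boot even if the device is unavailable:

```
/dev/nvme2n1  /directory_path/mount_point  ext4  defaults,_netdev,nofail  0  0
```

Because NVMe device names can change between reboots, identifying the file system by the UUID reported by `sudo blkid` is generally more robust than using the `/dev/nvme2n1` path.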

# Accessing your data via Amazon S3 access points
Accessing data via S3 access points

You can also use S3 access points to access file data stored on Amazon FSx file systems as if it were in S3, allowing you to use it with applications and services that work with S3 without application changes or moving data out of file storage. Amazon S3 access points are S3 endpoints that attach to either S3 buckets or FSx for ONTAP and FSx for OpenZFS volumes, and they simplify managing data access for any application or AWS service that works with S3. With S3 access points, customers with shared datasets, including data lakes, media archives, and user-generated content, can easily control and scale data access for hundreds of applications, teams, or individuals by creating individualized access points with names and permissions customized for each.

S3 access points attached to Amazon FSx for NetApp ONTAP volumes support read and write access to your file data using S3 object operations (for example, `GetObject`, `PutObject`, and `ListObjectsV2`) against an Amazon S3 endpoint.

Each S3 access point attached to an FSx for ONTAP file system has an AWS Identity and Access Management (IAM) access point policy and an associated UNIX or Windows file system user that is used to authorize all requests made through the access point. For each request, S3 first evaluates all the relevant policies, including those on the user, access point, S3 VPC endpoint, and service control policies, to authorize the request. Once the request is authorized by S3, it is then authorized by the file system, which evaluates whether the file system user associated with the S3 access point has permission to access the data on the file system. You can configure an access point to accept requests only from a virtual private cloud (VPC) to restrict Amazon S3 data access to a private network. Amazon S3 enforces Block public access by default for all access points attached to an FSx for ONTAP volume, and you cannot modify or disable this setting.

You use the Amazon FSx console, CLI, and API to [create an S3 access point and attach](fsxn-creating-access-points.md) it to an FSx for ONTAP volume. The access point allows you to access your file data using the S3 API, though your data continues to reside on your FSx for ONTAP file system and you can continue using the NFS and SMB protocols to access your data alongside the S3 API.

Amazon S3 access points for FSx for ONTAP file systems deliver latency in the tens of milliseconds range, consistent with S3 bucket access. The throughput and requests per second that you can drive to an Amazon FSx file system via the S3 API depend on the file system's provisioned throughput. For more information about file system performance capabilities, see [Amazon FSx for NetApp ONTAP performance](performance.md).

**Topics**
+ [

## AWS Regions with Amazon S3 access points for FSx for ONTAP
](#access-points-for-fsx-ontap-supported-regions)
+ [

# Access points naming rules, restrictions, and limitations
](access-point-for-fsxn-restrictions-limitations-naming-rules.md)
+ [

# Referencing access points with ARNs, access point aliases, or virtual-hosted-style URIs
](referencing-access-points-for-fsxn.md)
+ [

# Access point compatibility
](access-points-for-fsxn-object-api-support.md)
+ [

# Managing access point access
](s3-ap-manage-access-fsxn.md)
+ [

# Creating an access point
](fsxn-creating-access-points.md)
+ [

# Managing Amazon S3 access points
](access-points-for-fsxn-manage.md)
+ [

# Using access points
](access-points-for-fsxn-usage-examples.md)
+ [

# Troubleshooting S3 access point issues
](troubleshooting-access-points-for-fsxn.md)

## AWS Regions with Amazon S3 access points for FSx for ONTAP


Amazon S3 access points for FSx for ONTAP are supported in the following AWS Regions: Africa (Cape Town), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Canada (Central), Canada West (Calgary), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), Israel (Tel Aviv), Middle East (Bahrain, UAE), South America (São Paulo), US East (N. Virginia, Ohio), and US West (N. California, Oregon).

# Access points naming rules, restrictions, and limitations
Naming rules, restrictions, and limitations

When creating an S3 access point, you choose a name for it. The following topics provide information about S3 access point naming rules, restrictions, and limitations.

**Topics**
+ [

## Access points naming rules
](#access-points-for-fsxn-naming-rules)
+ [

## Access points restrictions and limitations
](#access-points-for-fsxn-restrictions-limitations)

## Access points naming rules


When you create an S3 access point, you choose its name. Access point names do not need to be unique across AWS accounts or AWS Regions: the same AWS account may create access points with the same name in different AWS Regions, or two different AWS accounts may use the same access point name. However, within a single AWS Region, an AWS account may not have two identically named access points.

S3 access point names can't end with the suffix `-ext-s3alias`, which is reserved for access point aliases. For a complete list of access point naming rules, see [Naming rules for Amazon S3 access points](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-restrictions-limitations-naming-rules.html#access-points-names) in the *Amazon Simple Storage Service User Guide*.

## Access points restrictions and limitations


S3 access points attached to FSx for ONTAP volumes have the following restrictions, which do not apply to access points attached to S3 buckets:
+ You can only create an S3 access point in the same AWS Region as the FSx for ONTAP volume that you are attaching it to. 
+ The same AWS account must own the FSx for ONTAP file system and the S3 access point. You can only create S3 access points that are attached to FSx for ONTAP volumes that you own. You cannot create an S3 access point that is attached to a volume owned by another AWS account.
+ You can only create and attach S3 access points to FSx for ONTAP file systems running NetApp ONTAP version 9.17.1 and later.

For a complete list of all access point restrictions and limitations, see [Restrictions and limitations for access points](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-restrictions-limitations-naming-rules.html) in the *Amazon Simple Storage Service User Guide*.

# Referencing access points with ARNs, access point aliases, or virtual-hosted-style URIs
Referencing access points

After you create an access point attached to an FSx for ONTAP volume, you can access your data using the AWS CLI and S3 API, as well as S3-compatible AWS and third-party services and applications. When referring to an access point in an AWS service or application, you can use the Amazon Resource Name (ARN), the access point alias, or a virtual-hosted–style URI.

**Topics**
+ [

## Access point ARNs
](#access-point-arns)
+ [

## Access point aliases
](#access-point-aliases)
+ [

## Virtual-hosted–style URI
](#virtual-hosted-style-uri)

## Access point ARNs


Access points have Amazon Resource Names (ARNs). Access point ARNs are similar to S3 bucket ARNs, but they are explicitly typed and encode the access point's AWS Region and the AWS account ID of the access point's owner. For more information about ARNs, see [Identify AWS resources with Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html) in the *AWS Identity and Access Management User Guide*.

Access point ARNs have the following format:

```
arn:aws:s3:region:account-id:accesspoint/resource
```

For example, `arn:aws:s3:us-west-2:777777777777:accesspoint/test` represents the access point named *test*, owned by account 777777777777 in the Region *us-west-2*.

ARNs for objects and files accessed through an access point use the following format:

```
arn:aws:s3:region:account-id:accesspoint/access-point-name/object/resource
```

For example, `arn:aws:s3:us-west-2:111122223333:accesspoint/test/object/lions.jpg` represents the file *lions.jpg*, accessed through the access point named *test*, owned by account 111122223333 in the Region *us-west-2*.

For more information about access point ARNs, see [Access point ARNs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-naming.html#access-points-arns) in the *Amazon Simple Storage Service User Guide*.
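
Because both ARN formats are fixed strings, you can assemble them from their components. The following sketch (the helper name is ours, not part of any AWS SDK) reproduces the two formats shown above:

```python
from typing import Optional

def access_point_arn(region: str, account_id: str, name: str,
                     obj: Optional[str] = None) -> str:
    """Build an access point ARN, or the ARN of a file accessed through it."""
    arn = f"arn:aws:s3:{region}:{account_id}:accesspoint/{name}"
    if obj is not None:
        arn += f"/object/{obj}"  # file accessed through the access point
    return arn
```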

## Access point aliases


When you create an access point, Amazon S3 automatically generates an access point alias that you can use anywhere you can use S3 bucket names to access data.

An access point alias cannot be changed. For an access point attached to an FSx for ONTAP volume, the access point alias consists of the following parts:

```
access point prefix-metadata-ext-s3alias
```

The following shows the ARN and access point alias for an S3 access point attached to an FSx for ONTAP volume, returned as part of the response to a `describe-s3-access-point-attachments` FSx CLI command. The access point in this example is named `my-ontap-ap`.

```
...
        "S3AccessPoint": {
            "ResourceARN": "arn:aws:s3:us-east-1:111122223333:accesspoint/my-ontap-ap",
            "Alias": "my-ontap-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias",
...
```

**Note**  
The `-ext-s3alias` suffix is reserved for the aliases of S3 access points attached to an FSx for ONTAP volume, and can't be used for access point names.

You can use the access point alias instead of an Amazon S3 access point ARN in some S3 data plane operations. For a list of the supported operations, see [Access point compatibility](access-points-for-fsxn-object-api-support.md).

For a full set of access point alias limitations, see [Access point alias limitations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-naming.html#access-points-alias) in the *Amazon Simple Storage Service User Guide*.
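
In scripts, you can recognize these aliases by testing for the reserved suffix. A minimal sketch, assuming only the alias layout described above (access point name, generated string, then `-ext-s3alias`):

```python
ALIAS_SUFFIX = "-ext-s3alias"

def is_fsx_access_point_alias(value: str) -> bool:
    """True if the string carries the suffix reserved for FSx access point aliases."""
    return value.endswith(ALIAS_SUFFIX)

def alias_prefix(alias: str) -> str:
    """Strip the reserved suffix; the remainder is the access point name
    followed by the generated string."""
    if not is_fsx_access_point_alias(alias):
        raise ValueError("not an FSx for ONTAP access point alias")
    return alias[: -len(ALIAS_SUFFIX)]
```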

## Virtual-hosted–style URI


Access points support only virtual-hosted–style addressing. In a virtual-hosted–style URI, the access point name, AWS account, and AWS Region are part of the domain name in the URL. To view the S3 URI for an access point attached to an FSx for ONTAP volume, on the access point details page under **S3 access point details**, choose the access point name listed for **S3 access point**. This takes you to the access point details page in the Amazon S3 console. You can find the **S3 URI** under **Properties**.

For more information, see [Virtual-hosted–style URI](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-naming.html#accessing-a-bucket-through-s3-access-point) in the *Amazon Simple Storage Service User Guide*.

# Access point compatibility


You can use access points to access data stored on an FSx for ONTAP volume using the Amazon S3 APIs for data access listed below. All of the supported operations accept either access point ARNs or access point aliases.

The following table is a partial list of Amazon S3 operations that shows whether each operation is supported by access points attached to an FSx for ONTAP volume.


| S3 operation | Access point attached to an FSx for ONTAP volume | 
| --- | --- | 
|  `[AbortMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)`  |  Supported  | 
|  `[CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)`  |  Supported  | 
|  `[CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)` (same-Region copies only)  |  Supported, if source and destination are within the same access point  | 
|  `[CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)`  |  Supported  | 
|  `[DeleteObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html)`  |  Supported  | 
|  `[DeleteObjects](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html)`  |  Supported  | 
|  `[DeleteObjectTagging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjectTagging.html)`  |  Supported  | 
|  `[GetBucketAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketAcl.html)`  |  Not supported  | 
|  `[GetBucketCors](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketCors.html)`  |  Not supported  | 
|  `[GetBucketLocation](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLocation.html)`  |  Supported  | 
|  `[GetBucketNotificationConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketNotificationConfiguration.html)`  |  Not supported  | 
|  `[GetBucketPolicy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html)`  |  Not supported  | 
|  `[GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)`  |  Supported  | 
|  `[GetObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html)`  |  Not supported  | 
|  `[GetObjectAttributes](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html)`  |  Supported  | 
|  `[GetObjectLegalHold](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectLegalHold.html)`  |  Not supported  | 
|  `[GetObjectRetention](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectRetention.html)`  |  Not supported  | 
|  `[GetObjectTagging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html)`  |  Supported  | 
|  `[HeadBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html)`  |  Supported  | 
|  `[HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)`  |  Supported  | 
|  `[ListMultipartUploads](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)`  |  Supported  | 
|  `[ListObjects](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html)`  |  Supported  | 
|  `[ListObjectsV2](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)`  |  Supported  | 
|  `[ListObjectVersions](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html)`  |  Not supported  | 
|  `[ListParts](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)`  |  Supported  | 
|  `[Presign](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html)`  |  Not supported  | 
|  `[PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)`  |  Supported  | 
|  `[PutObjectAcl](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectAcl.html)`  |  Not supported  | 
|  `[PutObjectLegalHold](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectLegalHold.html)`  |  Not supported  | 
|  `[PutObjectRetention](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectRetention.html)`  |  Not supported  | 
|  `[PutObjectTagging](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html)`  |  Supported  | 
|  `[RestoreObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html)`  |  Not supported  | 
|  `[UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)`  |  Supported  | 
|  `[UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)` (same-Region copies only)  |  Supported, if source and destination are within the same access point  | 

The following limitations apply when using Amazon S3 operations with access points attached to an FSx for ONTAP volume:
+ Maximum object size is 5 GB for uploads, but you can download objects larger than that
+ `FSX_ONTAP` is the only supported storage class
+ SSE-FSX is the only supported server-side encryption mode
+ The following Amazon S3 features are not supported: access control lists (ACLs) other than `bucket-owner-full-control`, Requester Pays, Object Versioning, Object Lock, Object Lifecycle, Static Website Hosting (e.g., website redirection), multi-factor authentication (MFA), and conditional writes

For examples of using access points to perform data access operations on file data, see [Using access points](access-points-for-fsxn-usage-examples.md).

**Object ETag**  
The entity tag is a hash of the object. The ETag reflects changes only to the contents of an object, not its metadata. The ETag is not an MD5 digest of the object data.

**Object Checksums**  
You can use checksum values to verify the integrity of the data that you upload. When you upload data and specify a checksum algorithm, the AWS SDK uses your chosen checksum algorithm to compute a checksum value before data transmission. Amazon S3 then independently calculates a checksum of your data and validates it against the provided checksum value. Objects are accepted only after confirming that data integrity was maintained during transit to Amazon S3. Unlike checksums for objects in Amazon S3 general purpose buckets, the checksum value is not stored in the FSx for NetApp ONTAP volume, either as object metadata or alongside the object itself. This means that checksum values are not returned in responses and are not used to verify object integrity on download.
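
The client-side half of this flow can be sketched as follows, using CRC32, one of the checksum algorithms the SDKs support. The helper name is ours, and the base64 encoding mirrors how SDKs transmit checksum values in request headers; this illustrates the computation only and is not an S3 client.

```python
import base64
import zlib

def crc32_checksum_b64(payload: bytes) -> str:
    """Compute a CRC32 checksum of the payload before transmission,
    base64-encoded the way checksum values are sent in request headers."""
    crc = zlib.crc32(payload) & 0xFFFFFFFF  # 32-bit unsigned CRC
    return base64.b64encode(crc.to_bytes(4, "big")).decode("ascii")
```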

**Server-side encryption with Amazon FSx (SSE-FSX)**  
All Amazon FSx file systems have encryption configured by default and are encrypted at rest with keys managed using AWS Key Management Service. Data is automatically encrypted and decrypted on the file system as data is being written to and read from the file system. These processes are handled transparently by Amazon FSx.

**Multipart upload**  
Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently, and in any order. Multipart upload has the following considerations when using S3 access points with FSx for ONTAP: 
+ The parts associated with in-progress (incomplete) multipart uploads are not included in FSx for ONTAP volume backups.
+ The storage used by in-progress (incomplete) multipart upload parts is not reflected in the destination volume's `StorageUsed` storage capacity CloudWatch metric, but it is reflected in the parent file system's `StorageUsed` storage capacity CloudWatch metric.
+ After a multipart upload completes, the associated part metadata is no longer stored with the object. This means you can't retrieve object part metadata using `GetObjectAttributes` or download a single part of an object by part number.
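
The part-numbering model above can be illustrated with plain byte slicing. This sketch is not an S3 client: the part size is arbitrary, and real uploads are subject to S3's own part-size limits.

```python
def split_into_parts(data: bytes, part_size: int) -> list:
    """Split an object into numbered, contiguous parts.
    Part numbers start at 1, as in S3 multipart uploads."""
    count = (len(data) + part_size - 1) // part_size
    return [(i + 1, data[i * part_size:(i + 1) * part_size]) for i in range(count)]

def reassemble(parts: list) -> bytes:
    """Parts can arrive in any order; the object is assembled by part number."""
    return b"".join(chunk for _, chunk in sorted(parts))
```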

**Access Control List (ACL)**  
Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. S3 access points for FSx only support the `bucket-owner-full-control` ACL value. Using any other ACL value will result in an `InvalidArgument` exception.

# Managing access point access
Managing access

You can configure each S3 access point with distinct permissions and network controls that S3 applies for any request that is made using that access point. S3 access points support AWS Identity and Access Management (IAM) resource policies that you can use to control the use of the access point by resource, user, or other conditions. For an application or user to access files through an access point, both the access point and the underlying volume must permit the request. For more information, see [IAM access point policies](#access-points-for-fsxn-policies).

Amazon S3 access points for FSx for ONTAP use a dual-layer authorization model that combines AWS IAM permissions with file system-level permissions. This approach ensures that data access requests are properly authorized at both the AWS service level and the underlying file system level.

For an application or user to successfully access data through an access point, both the S3 access point policy and the underlying FSx for ONTAP volume must permit the request.

**Topics**
+ [

## File system user identity and authorization
](#fsxn-file-system-user-identity)
+ [

## S3 API request authorization
](#access-points-for-fsxn-s3-iam-auth)
+ [

## S3 Block Public Access
](#access-points-for-fsxn-bpa)
+ [

## IAM access point policies
](#access-points-for-fsxn-policies)

## File system user identity and authorization


When you create an S3 access point for an FSx for ONTAP volume, you specify a file system identity that will be used to authorize all file system requests made through that access point. This file system identity determines what level of access is granted to the underlying files and directories based on the file system's permission model. The file system user is a user account on the underlying Amazon FSx file system. If the file system user has *read-only* access, then only read requests made using the access point are authorized, and write requests are blocked. If the file system user has read-write access, then both read and write requests to the attached volume made using the access point are authorized.

The file system identity can be one of two types:
+ **UNIX identity** – Use a UNIX identity (username) when accessing volumes with UNIX security style.
+ **Windows identity** – Use a Windows identity (domain and username) when accessing volumes with NTFS security style.

When you specify a UNIX or Windows identity, all S3 API operations performed through the access point are authorized using that user's permissions on the file system.

The file system identity you associate with the access point determines the level of access to files and directories. For example, if you associate the access point with the root UNIX identity (UID 0), which typically has full file access permissions on the file system, then all file operations would be authorized. Conversely, if you associate the access point with a restricted user identity, file operations would be limited to what that user can access based on the file system's permission model.

You should use the UNIX file system identity type for volumes with UNIX security style and the Windows identity type for volumes with NTFS security style. This alignment ensures that the authorization model matches the volume's security configuration. 

For UNIX security style volumes, the file system uses mode-bits or NFSv4 ACLs to control access. For NTFS security style volumes, the file system uses Windows ACLs to control access.

**Important**  
Attaching an S3 access point to an FSx for ONTAP volume doesn't change the volume's behavior when the volume is accessed directly via NFS or SMB. All existing operations against the volume will continue to work as before. Restrictions that you include in an S3 access point policy apply only to requests made using the access point.

## S3 API request authorization


When you make an S3 API request through an access point attached to an FSx for NetApp ONTAP volume, Amazon S3 evaluates the IAM permissions of the calling principal against the access point's IAM resource policy. The IAM principal caller must have the necessary permissions granted through their identity-based policies, and the access point's resource policy must also permit the requested action.

Amazon S3 evaluates all relevant policies—including user policies, the access point policy, VPC endpoint policies, and service control policies—to determine whether to authorize the request.

You can also configure an S3 access point to only accept requests from a specific virtual private cloud (VPC) to restrict data access. For more information, see [Creating access points restricted to a virtual private cloud](access-points-for-fsxn-vpc.md).

## S3 Block Public Access


Amazon S3 access points attached to an FSx for ONTAP volume are automatically configured with block public access enabled, which you cannot change.

## IAM access point policies


Amazon S3 access points support AWS Identity and Access Management (IAM) resource policies that allow you to control the use of the access point by resource, user, or other conditions. For an application or user to be able to access objects through an access point, both the access point and the underlying data source must permit the request.

The `s3:PutAccessPointPolicy` permission is required to create an optional access point policy.

After you attach an S3 access point to an Amazon FSx volume, all existing operations against the volume will continue to work as before. Restrictions that you include in an access point policy apply only to requests made through that access point. For more information, see [Configuring IAM policies for using access points](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-policies.html) in the *Amazon Simple Storage Service User Guide*.

You can configure an access point policy when you create an access point attached to an FSx for ONTAP volume using the Amazon FSx console. To add, modify, or delete an access point policy on an existing S3 access point, you can use the S3 console, CLI, or API. 
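
As an illustration of how such a policy might look, the following sketch builds a minimal access point policy that allows one role to call `GetObject` through the access point. The account ID, role name, and access point name are placeholders; check the statement shape against the linked policy examples.

```python
import json

# All identifiers below are placeholders for illustration only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/example-reader"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:us-east-1:111122223333:accesspoint/my-ontap-ap/object/*",
        }
    ],
}
policy_json = json.dumps(policy, indent=2)
```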

# Creating an access point


You can create and manage S3 access points that attach to Amazon FSx volumes using the Amazon FSx console, CLI, API, and supported SDKs. 

**Note**  
Because you might want to publicize your S3 access point name so that other users can use the access point, avoid including sensitive information in the S3 access point name. Access point names are published in a publicly accessible database known as the Domain Name System (DNS). For more information about access point names, see [Access points naming rules](access-point-for-fsxn-restrictions-limitations-naming-rules.md#access-points-for-fsxn-naming-rules).

## Required permissions


The following permissions are required to create an S3 access point attached to an Amazon FSx volume:
+ `fsx:CreateAndAttachS3AccessPoint`
+ `s3:CreateAccessPoint`
+ `s3:GetAccessPoint`

The `s3:PutAccessPointPolicy` permission is required to create an optional access point policy using either the Amazon FSx or S3 console. For more information, see [IAM access point policies](s3-ap-manage-access-fsxn.md#access-points-for-fsxn-policies).

To create an access point, see the following topics.

**Topics**
+ [

## Required permissions
](#create-ap-permissions)
+ [

# Creating access points
](create-access-points.md)
+ [

# Creating access points restricted to a virtual private cloud
](access-points-for-fsxn-vpc.md)

# Creating access points


**Important**  
To attach an S3 access point to an FSx for ONTAP volume, the volume must be mounted (have a junction path). See [ONTAP documentation](https://docs.netapp.com/us-en/ontap/nfs-admin/mount-unmount-existing-volumes-nas-namespace-task.html) for more details.

The FSx for ONTAP volume must already exist in your account before you create an S3 access point for it.

To create the S3 access point attached to an FSx for ONTAP volume, you specify the following properties:
+ The access point name. For information about access point naming rules, see [Access points naming rules](access-point-for-fsxn-restrictions-limitations-naming-rules.md#access-points-for-fsxn-naming-rules).
+ The file system user identity to use for authorizing file access requests made using the access point. Specify either a UNIX (POSIX) username or a Windows username. For more information, see [File system user identity and authorization](s3-ap-manage-access-fsxn.md#fsxn-file-system-user-identity).
+ The access point's network configuration, which determines whether the access point is accessible from the internet or only from a specific virtual private cloud (VPC). For more information, see [Creating access points restricted to a virtual private cloud](access-points-for-fsxn-vpc.md).

## To create an S3 access point attached to an FSx volume (FSx console)


1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. In the navigation bar on the top of the page, choose the AWS Region in which you want to create an access point. The access point must be created in the same Region as the associated volume.

1. In the left navigation pane, choose **Volumes**.

1. On the **Volumes** page, choose the FSx for ONTAP volume that you want to attach the access point to.

1. Display the **Create S3 access point** page by choosing **Create S3 access point** from the **Actions** menu.

1. For **Access point name**, enter the name for the access point. For more information about guidelines and restrictions for access point names, see [Access points naming rules](access-point-for-fsxn-restrictions-limitations-naming-rules.md#access-points-for-fsxn-naming-rules).

   The **Data source details** are populated with the information of the volume that you chose in Step 4.

1. The file system user identity is used by Amazon FSx for authenticating file access requests that are made using this access point. Be sure that the file system user you specify has the correct permissions on the FSx for ONTAP volume.

   For **File system user identity type**, select either UNIX or Windows.

1. For **Username**, enter the file system user's username.

1. In the **Network configuration** panel, you choose whether the access point is accessible from the internet or only from a specific virtual private cloud.

   For **Network origin**, choose **Internet** to make the access point accessible from the internet, or choose **Virtual private cloud (VPC)**, and enter the **VPC ID** that you want to limit access to the access point from.

   For more information about network origins for access points, see [Creating access points restricted to a virtual private cloud](access-points-for-fsxn-vpc.md).

1. (Optional) Under **Access Point Policy - *optional***, specify an optional access point policy. Be sure to resolve any policy warnings, errors, and suggestions. For more information about specifying an access point policy, see [Configuring IAM policies for using access points](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-policies.html) in the *Amazon Simple Storage Service User Guide*.

1. Review the access point attachment configuration, and then choose **Create access point**.

## To create an S3 access point attached to an FSx volume (CLI)


The following example command creates an access point named *`my-ontap-ap`* that is attached to the FSx for ONTAP volume *`fsvol-0123456789abcdef9`* in the account *`111122223333`*.

```
$ aws fsx create-and-attach-s3-access-point --name my-ontap-ap --type ONTAP --ontap-configuration \
   VolumeId=fsvol-0123456789abcdef9,FileSystemIdentity='{Type=UNIX,UnixUser={Name=ec2-user}}' \
   --s3-access-point VpcConfiguration='{VpcId=vpc-0123467}',Policy=access-point-policy-json
```

If the request succeeds, the response describes the new S3 access point attachment.

```
{
    "S3AccessPointAttachment": {
        "CreationTime": 1728935791.8,
        "Lifecycle": "CREATING",
        "LifecycleTransitionReason": {
            "Message": "string"
        },
        "Name": "my-ontap-ap",
        "OntapConfiguration": {
            "VolumeId": "fsvol-0123456789abcdef9",
            "FileSystemIdentity": {
                "Type": "UNIX",
                "UnixUser": {
                    "Name": "ec2-user"
                }
            }
        },
        "S3AccessPoint": {
            "ResourceARN": "arn:aws:s3:us-east-1:111122223333:accesspoint/my-ontap-ap",
            "Alias": "my-ontap-ap-aqfqprnstn7aefdfbarligizwgyfouse1a-ext-s3alias",
            "VpcConfiguration": {
                "VpcId": "vpc-0123467"
            }
        }
    }
}
```

# Creating access points restricted to a virtual private cloud
Creating access points restricted to a VPC

When you create an access point, you can choose to make the access point accessible from the internet, or you can specify that all requests made through that access point must originate from a specific Amazon Virtual Private Cloud. An access point that's accessible from the internet is said to have a network origin of `Internet`. It can be used from anywhere on the internet, subject to any other access restrictions in place for the access point, the underlying Amazon FSx volume, and related resources, such as the requested objects. An access point that's only accessible from a specified Amazon VPC has a network origin of `VPC`, and Amazon S3 rejects any request made to the access point that doesn't originate from that Amazon VPC.

**Important**  
You can only specify an access point's network origin when you create the access point. After you create the access point, you can't change its network origin.

To restrict an access point to Amazon VPC-only access, you include the `VpcConfiguration` parameter with the request to create the access point. In the `VpcConfiguration` parameter, you specify the Amazon VPC ID that you want to be able to use the access point. If a request is made through the access point, the request must originate from the Amazon VPC or Amazon S3 will reject it. 

You can retrieve an access point's network origin using the AWS CLI, AWS SDKs, or REST APIs. If an access point has an Amazon VPC configuration specified, its network origin is `VPC`. Otherwise, the access point's network origin is `Internet`.
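
Because the origin depends only on whether a VPC configuration is present, a script inspecting the `S3AccessPoint` portion of a `describe-s3-access-point-attachments` response can derive it like this (a sketch against the response shape shown elsewhere on this page):

```python
def network_origin(s3_access_point: dict) -> str:
    """Derive the network origin: a VpcConfiguration means VPC-only access,
    otherwise the access point is internet-accessible."""
    return "VPC" if s3_access_point.get("VpcConfiguration") else "Internet"
```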

**Example: Create an access point that's restricted to Amazon VPC access**  
The following example creates an access point named `example-vpc-ap` attached to the FSx for ONTAP volume `fsvol-0123456789abcdef9` in account `111122223333` that allows access only from the `vpc-1a2b3c` Amazon VPC. The response shows that the new access point has a network origin of `VPC`.  

```
$ aws fsx create-and-attach-s3-access-point --name example-vpc-ap --type ONTAP --ontap-configuration \
   VolumeId=fsvol-0123456789abcdef9,FileSystemIdentity='{Type=UNIX,UnixUser={Name=my-unix-user}}' \
   --s3-access-point VpcConfiguration='{VpcId=vpc-1a2b3c}',Policy=access-point-policy-json
```

```
{
    "S3AccessPointAttachment": {
        "Lifecycle": "CREATING",
        "CreationTime": 1728935791.8,
        "Name": "example-vpc-ap",
        "OntapConfiguration": {
            "VolumeId": "fsvol-0123456789abcdef9",
            "FileSystemIdentity": {
                "Type": "UNIX",
                "UnixUser": {
                    "Name": "my-unix-user"
                }
            }
        },
        "S3AccessPoint": {
            "ResourceARN": "arn:aws:s3:us-east-1:111122223333:accesspoint/example-vpc-ap",
            "Alias": "access-point-abcdef0123456789ab12jj77xy51zacd4-ext-s3alias",
            "VpcConfiguration": {
                "VpcId": "vpc-1a2b3c"
            }
        }
    }
}
```

To use an access point with an Amazon VPC, you must modify the access policy for your Amazon VPC endpoint. Amazon VPC endpoints allow traffic to flow from your Amazon VPC to Amazon S3. They have access control policies that control how resources within the Amazon VPC are allowed to interact with Amazon S3. Requests from your Amazon VPC to Amazon S3 only succeed through an access point if the Amazon VPC endpoint policy grants access to both the access point and the underlying data source.

**Note**  
To make resources accessible only within an Amazon VPC, make sure to create a [private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html) for your Amazon VPC endpoint. To use a private hosted zone, [modify your Amazon VPC settings](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating) so that the [Amazon VPC network attributes](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-support) `enableDnsHostnames` and `enableDnsSupport` are set to `true`.

The following example policy statement configures an Amazon VPC endpoint to allow calls to `GetObject` and an access point named `example-vpc-ap`.


```
{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Principal": "*",
        "Action": [
            "s3:GetObject"
        ],
        "Effect": "Allow",
        "Resource": [
            "arn:aws:s3:us-east-1:123456789012:accesspoint/example-vpc-ap/object/*"
        ]
    }]
}
```


**Note**  
The `Resource` declaration in this example uses an Amazon Resource Name (ARN) to specify the access point. 

For more information about Amazon VPC endpoint policies, see [Gateway endpoints for Amazon S3](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html#vpc-endpoints-policies-s3) in the *Amazon VPC User Guide*.

# Managing Amazon S3 access points
Managing access points

This section explains how to manage and use your Amazon S3 access points using the AWS Management Console, AWS Command Line Interface, or API.

**Topics**
+ [

# Listing S3 access point attachments
](access-points-list.md)
+ [

# Viewing access point details
](access-points-details.md)
+ [

# Deleting an S3 access point attachment
](delete-access-point.md)

# Listing S3 access point attachments


This section explains how to list S3 access point attachments using the AWS Management Console, AWS Command Line Interface, or REST API.

## To list all the S3 access points attached to an FSx for ONTAP volume (Amazon FSx console)


1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. In the navigation pane on the left side of the console, choose **Volumes**.

1. On the **Volumes** page, choose the **ONTAP** volume that you want to view the access point attachments for.

1. On the Volume details page, choose **S3** to view a list of all the S3 access points attached to the volume.

## To list all the S3 access points attached to an FSx for ONTAP volume (AWS CLI)


The following [https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeS3AccessPointAttachments.html](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeS3AccessPointAttachments.html) example command shows how you can use the AWS CLI to list S3 access point attachments.

The following command lists all the S3 access points attached to volumes on the FSx for ONTAP file system `fs-0abcdef123456789`.

```
aws fsx describe-s3-access-point-attachments --filters Name=file-system-id,Values=fs-0abcdef123456789
```

The following command lists the S3 access points attached to the FSx for ONTAP volume `vol-9abcdef123456789`.

```
aws fsx describe-s3-access-point-attachments --filters Name=volume-id,Values=vol-9abcdef123456789
```

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/list-access-points.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/list-access-points.html) in the *AWS CLI Command Reference*.

# Viewing access point details


This section explains how to view the details of S3 access points using the AWS Management Console, AWS Command Line Interface, or REST API.

## To view the details of S3 access points attached to an FSx for ONTAP volume (Amazon FSx console)


1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Navigate to the volume that is attached to the access point whose details you want to view.

1. Choose **S3** to display the list of access points attached to the volume.

1. Choose the access point whose details you want to view.

1. Under **S3 access point attachment summary**, view configuration details and properties for the selected access point.

   The **File system user identity** configuration and the **S3 access point permissions** policy are also listed for the access point attachment.

1. To view the access point's S3 configuration, choose the S3 access point name displayed under **S3 access point**. This takes you to the access point's detail page in the Amazon S3 console.

# Deleting an S3 access point attachment


This section explains how to delete S3 access points using the AWS Management Console, AWS Command Line Interface, or REST API.

The `fsx:DetachAndDeleteS3AccessPoint` and `s3control:DeleteAccessPoint` permissions are required to delete an S3 access point attachment.
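As a sketch (not a definitive policy), an identity-based IAM policy granting both permissions might look like the following. The wildcard `Resource` is for illustration only; scope it down to your own file system, volume, and access point ARNs.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "fsx:DetachAndDeleteS3AccessPoint",
                "s3control:DeleteAccessPoint"
            ],
            "Resource": "*"
        }
    ]
}
```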

## To delete an S3 access point attached to an FSx for ONTAP volume (Amazon FSx console)


1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Navigate to the volume that the S3 access point attachment that you want to delete is attached to.

1. Choose **S3** to display the list of S3 access points attached to the volume.

1. Select the S3 access point attachment that you want to delete.

1. Choose **Delete**.

1. Confirm that you want to delete the S3 access point, and choose **Delete**.

## To delete an S3 access point attached to an FSx for ONTAP volume (AWS CLI)

+ To delete an S3 access point attachment, use the [detach-and-delete-s3-access-point](https://docs.aws.amazon.com/cli/latest/reference/fsx/detach-and-delete-s3-access-point.html) CLI command (or the equivalent [DetachAndDeleteS3AccessPoint](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DetachAndDeleteS3AccessPoint.html) API operation), as shown in the following example. Use the `--name` property to specify the name of the S3 access point attachment that you want to delete.

  ```
  aws fsx detach-and-delete-s3-access-point \
      --region us-east-1 \
      --name my-ontap-ap
  ```

# Using access points
Using access points

The following examples demonstrate how to use access points to access file data stored on an FSx for ONTAP volume using the S3 API. For a full list of the Amazon S3 API operations supported by access points attached to an FSx for ONTAP volume, see [Access point compatibility](access-points-for-fsxn-object-api-support.md). 

**Note**  
Files on FSx for ONTAP volumes are identified with a `StorageClass` of `FSX_ONTAP`.

**Topics**
+ [

# Downloading a file using an S3 access point
](get-object-ap.md)
+ [

# Uploading a file using an S3 access point
](put-object-ap.md)
+ [

# Listing files using an S3 access point
](list-object-ap.md)
+ [

# Tagging a file using an S3 access point
](add-tag-set-ap.md)
+ [

# Deleting a file using an S3 access point
](delete-object-ap.md)

# Downloading a file using an S3 access point
Download a file

The following `get-object` example command shows how you can use the AWS CLI to download a file through an access point. You must include an outfile, which is a file name for the downloaded object.

The example requests the file *`my-image.jpg`* through the access point *`my-ontap-ap`* and saves the downloaded file as *`download.jpg`*.

```
$ aws s3api get-object --key my-image.jpg --bucket my-ontap-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias download.jpg
{
    "AcceptRanges": "bytes",
    "LastModified": "Mon, 14 Oct 2024 17:01:48 GMT",
    "ContentLength": 141756,
    "ETag": "\"00751974dc146b76404bb7290f8f51bb-1\"",
    "ContentType": "binary/octet-stream",
    "ServerSideEncryption": "SSE_FSX",
    "Metadata": {},
    "StorageClass": "FSX_ONTAP"
}
```

You can also use the REST API to download an object through an access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) in the *Amazon Simple Storage Service API Reference*.

# Uploading a file using an S3 access point
Upload a file

The following `put-object` example command shows how you can use the AWS CLI to upload a file through an access point. You must include the `--body` parameter, which specifies the local file to upload, and the `--key` parameter, which sets the name of the uploaded object.

The example uploads the local file *`my-new-image.jpg`* through the access point *`my-ontap-ap`* and stores it with the key *`my-new-image.jpg`*.

```
$ aws s3api put-object --bucket my-ontap-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias --key my-new-image.jpg --body my-new-image.jpg
```

You can also use the REST API to upload an object through an access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) in the *Amazon Simple Storage Service API Reference*.

# Listing files using an S3 access point
List files

The following example lists files through the access point alias `my-ontap-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias` owned by account ID *`111122223333`* in Region *`us-east-2`*.

```
$ aws s3api list-objects-v2 --bucket my-ontap-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias
{
    "Contents": [
        {
            "Key": ".hidden-dir-with-data/file.txt",
            "LastModified": "2024-10-29T14:22:05.4359",
            "ETag": "\"88990077ab44cd55ef66aa77-1\"",
            "Size": 18,
            "StorageClass": "FSX_ONTAP"
        },
        {
            "Key": "documents/report.rtf",
            "LastModified": "2024-11-02T10:18:15.6621",
            "ETag": "\"ab12cd34ef56a89219zg6aa77-1\"",
            "Size": 1048576,
            "StorageClass": "FSX_ONTAP"
        }
    ]
}
```

You can also use the REST API to list your files. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html) in the *Amazon Simple Storage Service API Reference*.

# Tagging a file using an S3 access point
Add a tag-set

The following `put-object-tagging` example command shows how you can use the AWS CLI to add a tag-set through an access point. Each tag is a key-value pair. For more information, see [Categorizing your storage using tags](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-tagging.html) in the *Amazon Simple Storage Service User Guide*.

The example adds a tag-set to the existing file `my-image.jpg` using the access point *`my-ontap-ap`*.

```
$ aws s3api put-object-tagging --bucket my-ontap-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias --key my-image.jpg --tagging 'TagSet=[{Key=finance,Value=true}]'
```

You can also use the REST API to add a tag-set to an object through an access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObjectTagging.html) in the *Amazon Simple Storage Service API Reference*.

# Deleting a file using an S3 access point
Delete a file

The following `delete-object` example command shows how you can use the AWS CLI to delete a file through an access point.

```
$ aws s3api delete-object --bucket my-ontap-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-ext-s3alias --key my-image.jpg 
```

You can also use the REST API to delete an object through an access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html) in the *Amazon Simple Storage Service API Reference*.

# Troubleshooting S3 access point issues
Troubleshooting access points

This section describes symptoms, causes, and resolutions for when you encounter issues accessing your FSx data from S3 access points.

## S3 access point creation failed due to file system user identity lookup failure


When you create and attach an S3 access point, you must provide a [https://docs.aws.amazon.com/fsx/latest/APIReference/API_OntapFileSystemIdentity.html#FSx-Type-OntapFileSystemIdentity-Type](https://docs.aws.amazon.com/fsx/latest/APIReference/API_OntapFileSystemIdentity.html#FSx-Type-OntapFileSystemIdentity-Type). You are responsible for configuring the provided UNIX or Windows user within ONTAP.

If a [https://docs.aws.amazon.com/fsx/latest/APIReference/API_OntapUnixFileSystemUser.html](https://docs.aws.amazon.com/fsx/latest/APIReference/API_OntapUnixFileSystemUser.html) is provided, ONTAP must be able to map the UnixUser name to UNIX UID/GIDs. ONTAP determines how to perform this mapping using the [name service switch configuration](https://docs.netapp.com/us-en/ontap/nfs-admin/ontap-name-service-switch-config-concept.html). 

```
> vserver services name-service ns-switch show
```

```
Vserver         Database       Order
--------------- ------------   ---------
svm_1           hosts          files,
                               dns
svm_1           group          files,
                               ldap
svm_1           passwd         files,
                               ldap
svm_1           netgroup       nis,
                               files
```

Ensure that your UnixUser has an entry in the `passwd` and `group` databases using a valid source (such as `files` or `ldap`). The `files` source can be configured using the `vserver services name-service unix-user` and `vserver services name-service unix-group` commands. The `ldap` source can be configured using the `vserver services name-service ldap` command.
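For example, to add entries to the `files` source for a hypothetical user named `s3user` with UID and GID `1001` (the name and IDs here are illustrative, not values from this guide), you could run commands similar to the following in the ONTAP CLI:

```
> vserver services name-service unix-group create -vserver svm_name -name s3group -id 1001
> vserver services name-service unix-user create -vserver svm_name -user s3user -id 1001 -primary-gid 1001
```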

If a [https://docs.aws.amazon.com/fsx/latest/APIReference/API_OntapWindowsFileSystemUser.html](https://docs.aws.amazon.com/fsx/latest/APIReference/API_OntapWindowsFileSystemUser.html) is provided, ONTAP must be able to find the WindowsUser name in the joined Active Directory domain.

To confirm whether a provided UnixUser or WindowsUser is mapped correctly, run the following command as `fsxadmin` (replace `-unix-user-name` with `-win-name` for WindowsUsers):

```
> vserver services access-check authentication show-creds -node FsxId0fd48ff588b9d3eee-01 -vserver svm_name -unix-user-name root -show-partial-unix-creds true
```

Example successful output:

```
 UNIX UID: root

 GID: daemon
 Supplementary GIDs:
  daemon
```

Example unsuccessful output:

```
Error: Acquire UNIX credentials procedure failed
  [  2 ms] Entry for user-name: unmapped-user not found in the
           current source: FILES. Entry for user-name: unmapped-user
           not found in any of the available sources
**[     3] FAILURE: Unable to retrieve UID for UNIX user
**         unmapped-user

Error: command failed: Failed to resolve user name to a UNIX ID. Reason: "SecD Error: object not found".
```

An incorrect user mapping may result in `Access Denied` errors from S3. See the following example failure reasons.

**`Entry for user-name not found in the current source: LDAP`**

If your `ns-switch` is configured to use an `ldap` source, ensure that ONTAP is configured to use your LDAP server properly. For more information, see [NetApp's Technical Report for configuring LDAP](https://www.netapp.com/pdf.html?item=/media/19423-tr-4835.pdf).

**`RESULT_ERROR_DNS_CANT_REACH_SERVER` or `RESULT_ERROR_SECD_IN_DISCOVERY`**

This error indicates an issue with the vserver's DNS configuration in ONTAP. Run the following command to verify that your vserver's DNS is configured properly:

```
> dns check -vserver svm_name
```

**`NT_STATUS_PENDING`**

This error indicates an issue communicating with the domain controller. The underlying cause may be due to a lack of SMB credits. See [NetApp KB](https://kb.netapp.com/on-prem/ontap/da/NAS/NAS-KBs/How_ONTAP_implements_SMB_crediting) for more information.

## S3 access point creation failed because the volume is not mounted


S3 access points can be attached only to FSx for ONTAP volumes that are mounted (that is, volumes that have junction paths). This also applies to DP (data protection) volume types. For more information, see the [ONTAP volume mount documentation](https://docs.netapp.com/us-en/ontap/nfs-admin/mount-unmount-existing-volumes-nas-namespace-task.html).
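Using the ONTAP CLI, you can check whether a volume has a junction path, and mount it if it doesn't. The SVM name, volume name, and junction path below are placeholders:

```
> volume show -vserver svm_name -volume vol1 -fields junction-path
> volume mount -vserver svm_name -volume vol1 -junction-path /vol1
```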

## S3 access point creation failed because the S3 protocol is disabled on the SVM


S3 access points require the S3 protocol to be enabled on the Storage Virtual Machine (SVM). To enable the S3 protocol, run the following command in the ONTAP CLI using `fsxadmin`:

```
> vserver add-protocols -vserver svm_name -protocols s3
```

To verify the protocol is enabled:

```
> vserver show -vserver svm_name -fields allowed-protocols,disallowed-protocols
```

## The file system is unable to handle S3 requests


If the S3 request volume for a particular workload exceeds the file system's capacity to handle the traffic, you might experience S3 request errors (for example, `Internal Server Error`, `503 Slow Down`, and `Service Unavailable`). You can proactively monitor the performance of your file system and set alarms using Amazon CloudWatch metrics (for example, `Network throughput utilization` and `CPU utilization`). If you observe degraded performance, you can resolve the issue by increasing the file system's throughput capacity.

## Access Denied with default S3 access point permissions for automatically created service roles


Some S3-integrated AWS services create a custom service role and customize the attached permissions for your specific use case. When you specify your S3 access point alias as the S3 resource, those attached permissions may reference your access point using a bucket-style ARN format (for example, `arn:aws:s3:::my-fsx-ap-foo7detztxouyjpwtu8krroppxytruse1a-ext-s3alias`) rather than the access point ARN format (for example, `arn:aws:s3:us-east-1:1234567890:accesspoint/my-fsx-ap`). To resolve this, modify the policy to use the ARN of the access point.
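As an illustration, the following sketch builds both resource formats for a hypothetical access point, alias suffix, account ID, and Region (none of these values come from your account). The policy needs the second, access point ARN form:

```
# Hypothetical access point name, alias suffix, account ID, and Region.
AP_NAME=my-fsx-ap
AP_ALIAS=${AP_NAME}-foo7detztxouyjpwtu8krroppxytruse1a-ext-s3alias
ACCOUNT_ID=123456789012
REGION=us-east-1

# Bucket-style ARN built from the access point alias; policies written
# this way can cause Access Denied errors:
BUCKET_STYLE_ARN="arn:aws:s3:::${AP_ALIAS}"
echo "${BUCKET_STYLE_ARN}"

# Access point ARN format to use in the policy instead:
ACCESS_POINT_ARN="arn:aws:s3:${REGION}:${ACCOUNT_ID}:accesspoint/${AP_NAME}"
echo "${ACCESS_POINT_ARN}"
```

Note that the alias is a flat, bucket-like name, while the access point ARN carries the Region and account ID.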

# Accessing data from other AWS services


In addition to Amazon EC2, you can use other AWS services with your volumes to access your data.

**Topics**
+ [

# Using Amazon WorkSpaces with FSx for ONTAP
](using-workspaces.md)
+ [

# Using Amazon Elastic Container Service with FSx for ONTAP
](mount-ontap-ecs-containers.md)
+ [

# Using Amazon Elastic VMware Service with FSx for ONTAP
](evs-ontap.md)
+ [

# Using VMware Cloud with FSx for ONTAP
](vmware-cloud-ontap.md)

# Using Amazon WorkSpaces with FSx for ONTAP
Using Amazon WorkSpaces

FSx for ONTAP can be used with Amazon WorkSpaces to provide shared network-attached storage (NAS) or to store roaming profiles for Amazon WorkSpaces accounts. After connecting to an SMB file share from a WorkSpaces instance, the user can create and edit files on the file share.

The following procedures show how to use Amazon FSx with Amazon WorkSpaces to provide consistent roaming profile and home folder access, and to provide a shared team folder for Windows and Linux WorkSpaces users. If you are new to Amazon WorkSpaces, you can create your first Amazon WorkSpaces environment with the instructions in [Get started with WorkSpaces Quick Setup](https://docs.aws.amazon.com/workspaces/latest/adminguide/getting-started.html) in the *Amazon WorkSpaces Administration Guide*.

**Topics**
+ [

## Provide Roaming Profile support
](#workspace-roaming-profile)
+ [

## Provide a shared folder to access common files
](#workspace-shared-folder)

## Provide Roaming Profile support


You can use Amazon FSx to provide Roaming Profile support to users in your organization. A user will have permissions to access only their Roaming Profile. The folder will be connected automatically using Active Directory Group Policies. With a Roaming Profile, users' data and desktop settings are saved to an Amazon FSx file share when they log off, enabling documents and settings to be shared between different WorkSpaces instances and backed up automatically using Amazon FSx daily automatic backups.

**Step 1: Create a profile folder location for domain users using Amazon FSx**

1. Create an FSx for ONTAP file system using the Amazon FSx console. For more information, see [To create a file system (console)](creating-file-systems.md#create-MAZ-file-system-console).
**Important**  
Each FSx for ONTAP file system has an endpoint IP address range from which the endpoints associated with the file system are created. For multi-AZ file systems, FSx for ONTAP chooses a default unused IP address range from 198.19.0.0/16 as the endpoint IP address range. This IP address range is also used by WorkSpaces for management traffic range, as described in [IP address and port requirements for WorkSpaces](https://docs.aws.amazon.com/workspaces/latest/adminguide/workspaces-port-requirements.html) in the *Amazon WorkSpaces Administration Guide*. As a result, to access your *multi-AZ* FSx for ONTAP file system from WorkSpaces, you must select an endpoint IP address range that does not overlap with 198.19.0.0/16.

1. If you don't have a storage virtual machine (SVM) joined to an Active Directory, create one now. For example, you can provision an SVM named `fsx` and set the security style to `NTFS`. For more information, see [To create a storage virtual machine (console)](creating-svms.md#create-svm-console).

1. Create a volume for your SVM. For example, you can create a volume named `fsx-vol` which inherits the security style of your SVM's root volume. For more information, see [To create a FlexVol volume (console)](creating-volumes.md#create-volume-console).

1. Create an SMB share on your volume. For example, you can create a share called `workspace` on your volume named `fsx-vol`, in which you create a folder named `profiles`. For more information, see [Managing SMB shares](create-smb-shares.md).

1. Access your Amazon FSx SVM from an Amazon EC2 instance running Windows Server or from a WorkSpace. For more information, see [Accessing your FSx for ONTAP data](supported-fsx-clients.md).

1. Map your share to `Z:\` on your Windows WorkSpaces instance:  
![\[Shows the Windows Map Network Drive dialog for mapping an ONTAP SMB share to a letter on a WorkSpace.\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/workspace-map-drive.png)

**Step 2: Link the FSx for ONTAP file share to User Accounts**

1. On your test user's WorkSpace, choose **Windows > System > Advanced System Settings**. 

1. In **System Properties**, select the **Advanced** tab and press the **Settings** button in the **User Profiles** section. The logged-in user will have a profile type of `Local`.

1. Log out the test user from the WorkSpace.

1. Set the test user to have a roaming profile located on your Amazon FSx file system. In your administrator WorkSpace, open a PowerShell console and use a command similar to the following example (which uses the `profiles` folder that you created in Step 1):

   ```
   Set-ADUser username -ProfilePath \\filesystem-dns-name\sharename\foldername\username
   ```

   For example:

   ```
   Set-ADUser testuser01 -ProfilePath \\fsx.fsxnworkspaces.com\workspace\profiles\testuser01
   ```

1. Log on to the test user WorkSpace.

1. In **System Properties**, select the **Advanced** tab and press the **Settings** button in the **User Profiles** section. The logged-in user will have a profile type of `Roaming`.  
![\[The Windows User Profiles dialog showing a profile configured for a WorkSpace user.\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/workspace-profiles.png)

1. Browse the FSx for ONTAP shared folder. In the `profiles` folder, you'll see a folder for the user.  
![\[The Windows File Explorer dialog showing a new folder for a WorkSpace user.\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/workspace-new-folder.png)

1. Create a document in the test user's `Documents` folder.

1. Log out the test user from their WorkSpace.

1. If you log back on as the test user and browse to their profile store, you will see the document you created.  
![\[The Windows File Explorer dialog showing a new file for a WorkSpace user.\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/workspace-new-file.png)

## Provide a shared folder to access common files


You can use Amazon FSx to provide a shared folder to users in your organization. A shared folder can be used to store files used by your user community, such as demo files, code examples, and instruction manuals needed by all users. Typically, you have drives mapped for shared folders; however, because mapped drives use letters, there's a limit to the number of shares you can have. This procedure creates an Amazon FSx shared folder that's available without a drive letter, giving you greater flexibility in assigning shares to teams.

**To mount a shared folder for cross-platform access from both Linux and Windows WorkSpaces**

1. From the **Taskbar**, choose **Places > Connect to Server**.

   1. For **Server**, enter *file-system-dns-name*.

   1. Set **Type** to `Windows share`.

   1. Set **Share** to the name of the SMB share, such as `workspace`.

   1. You can leave **Folder** as `/` or set it to a folder, such as a folder named `team-shared`.

   1. For a Linux WorkSpace, you don't need to enter your user details if your Linux WorkSpace is in the same domain as the Amazon FSx share.

   1. Choose **Connect**.  
![\[The Connect to Server dialog showing a connection to an SMB share.\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/workspace-connect.png)

1. After the connection is made, you can see the shared folder (named `team-shared` in this example) in the SMB share named `workspace`.  
![\[A Windows dialog showing a shared folder.\]](http://docs.aws.amazon.com/fsx/latest/ONTAPGuide/images/workspace-mounted.png)

# Using Amazon Elastic Container Service with FSx for ONTAP
Using Amazon ECS

You can access your Amazon FSx for NetApp ONTAP file systems from an Amazon Elastic Container Service (Amazon ECS) Docker container on an Amazon EC2 Linux or Windows instance.

## Mounting on an Amazon ECS Linux container


1. Create an ECS cluster using the **EC2 Linux + Networking** cluster template for your Linux containers. For more information, see [Creating a cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html) in the *Amazon Elastic Container Service Developer Guide*.

1. Create a directory on the EC2 instance for mounting the SVM volume as follows:

   ```
   sudo mkdir /fsxontap
   ```

1. Mount your FSx for ONTAP volume on the Linux EC2 instance, either by using a user-data script during instance launch or by running the following command:

   ```
   sudo mount -t nfs -o nfsvers=NFS_version svm-dns-name:/volume-junction-path /fsxontap
   ```

   The following example uses sample values.

   ```
   sudo mount -t nfs -o nfsvers=4.1 svm-01234567890abdef0.fs-01234567890abcdef1.fsx.us-east-1.amazonaws.com:/vol1 /fsxontap
   ```

   You can also use the SVM's IP address instead of its DNS name.

   ```
   sudo mount -t nfs -o nfsvers=4.1 198.51.100.1:/vol1 /fsxontap
   ```

1. When creating your Amazon ECS task definitions, add the following `volumes` and `mountPoints` container properties in the JSON container definition. Replace the `sourcePath` with the mount point and directory in your FSx for ONTAP file system.

   ```
   {
       "volumes": [
           {
               "name": "ontap-volume",
               "host": {
                   "sourcePath": "mountpoint"
               }
           }
       ],
       "mountPoints": [
           {
               "containerPath": "containermountpoint",
               "sourceVolume": "ontap-volume"
           }
       ],
       .
       .
       .
   }
   ```

## Mounting on an Amazon ECS Windows container


1. Create an ECS cluster using the **EC2 Windows + Networking** cluster template for your Windows containers. For more information, see [Creating a cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html) in the *Amazon Elastic Container Service Developer Guide*.

1. Add a domain-joined Windows EC2 instance to the ECS Windows cluster and map an SMB share.

   Launch an ECS optimized Windows EC2 instance that is joined to your Active Directory domain and initialize the ECS agent by running the following command.

   ```
   PS C:\Users\user> Initialize-ECSAgent -Cluster windows-fsx-cluster -EnableTaskIAMRole
   ```

   You can also pass the information in a script to the user-data text field as follows.

   ```
   <powershell>
   Initialize-ECSAgent -Cluster windows-fsx-cluster -EnableTaskIAMRole
   </powershell>
   ```

1. Identify the SMB share on your file system that you want to map to a drive on the EC2 instance, replacing the NetBIOS or DNS name of your FSx for ONTAP file system and the share name with your own values. In this example, the NFS volume `vol1` that was mounted on the Linux EC2 instance is also configured as a CIFS share named `fsxontap`:

   ```
   vserver cifs share show -vserver svm08 -share-name fsxontap
   
   
                                         Vserver: svm08
                                           Share: fsxontap
                        CIFS Server NetBIOS Name: FSXONTAPDEMO
                                            Path: /vol1
                                Share Properties: oplocks
                                                  browsable
                                                  changenotify
                                                  show-previous-versions
                              Symlink Properties: symlinks
                         File Mode Creation Mask: -
                    Directory Mode Creation Mask: -
                                   Share Comment: -
                                       Share ACL: Everyone / Full Control
                   File Attribute Cache Lifetime: -
                                     Volume Name: vol1
                                   Offline Files: manual
                   Vscan File-Operations Profile: standard
               Maximum Tree Connections on Share: 4294967295
                      UNIX Group for File Create: -
   ```

1. Create the SMB global mapping on the EC2 instance using the following command:

   ```
   New-SmbGlobalMapping -RemotePath \\fsxontapdemo.fsxontap.com\fsxontap -LocalPath Z:
   ```

1. When creating your Amazon ECS task definitions, add the following `volumes` and `mountPoints` container properties in the JSON container definition. Replace the `sourcePath` with the mount point and directory in your FSx for ONTAP file system.

   ```
   {
       "volumes": [
           {
               "name": "ontap-volume",
               "host": {
                   "sourcePath": "mountpoint"
               }
           }
       ],
       "mountPoints": [
           {
               "containerPath": "containermountpoint",
               "sourceVolume": "ontap-volume"
           }
       ],
       .
       .
       .
   }
   ```

# Using Amazon Elastic VMware Service with FSx for ONTAP
Using Amazon EVS

You can use FSx for ONTAP as an external datastore for Amazon Elastic VMware Service (Amazon EVS) Software-Defined Data Centers (SDDCs). For more information, see [Run high-performance workloads with Amazon FSx for NetApp ONTAP](https://docs.aws.amazon.com/evs/latest/userguide/fsx-ontap.html). For detailed instructions, see [Configure Amazon FSx for NetApp ONTAP as an NFS datastore](https://docs.aws.amazon.com/evs/latest/userguide/config-fsx-nfs-datastore.html) and [Configure Amazon FSx for NetApp ONTAP as an iSCSI datastore](https://docs.aws.amazon.com/evs/latest/userguide/config-fsx-iscsi-datastore.html).

# Using VMware Cloud with FSx for ONTAP
Using VMware Cloud

You can use FSx for ONTAP as an external datastore for VMware Cloud on AWS Software-Defined Data Centers (SDDCs). For more information, see [Configure Amazon FSx for NetApp ONTAP as External Storage](https://docs.vmware.com/en/VMware-Cloud-on-AWS/services/com.vmware.vmc-aws-operations/GUID-D55294A3-7C40-4AD8-80AA-B33A25769CCA.html?hWord=N4IghgNiBcIGYGcAeIC+Q) and the [VMware Cloud on AWS with Amazon FSx for NetApp ONTAP Deployment Guide](https://vmc.techzone.vmware.com/fsx-guide#overview).