

# Administering file systems
<a name="administer-lustre-file-systems"></a>

FSx for Lustre provides a set of features that simplify your administrative tasks. These include the ability to take point-in-time backups, manage file system storage quotas, manage storage and throughput capacity, manage data compression, and set maintenance windows for routine software patching.

You can administer your FSx for Lustre file systems using the Amazon FSx console, AWS Command Line Interface (AWS CLI), Amazon FSx API, or AWS SDKs.

**Topics**
+ [Working with EFA-enabled file systems](efa-file-systems.md)
+ [Using Lustre storage quotas](lustre-quotas.md)
+ [Managing storage capacity](managing-storage-capacity.md)
+ [Managing provisioned SSD read cache](managing-ssd-read-cache.md)
+ [Managing metadata performance](managing-metadata-performance.md)
+ [Managing provisioned throughput capacity](managing-throughput-capacity.md)
+ [Lustre data compression](data-compression.md)
+ [Lustre root squash](root-squash.md)
+ [FSx for Lustre file system status](file-system-lifecycle-states.md)
+ [Tag your Amazon FSx for Lustre resources](tag-resources.md)
+ [Amazon FSx for Lustre maintenance windows](maintenance-windows.md)
+ [Managing Lustre versions](managing-lustre-version.md)
+ [Deleting a file system](delete-file-system.md)

# Working with EFA-enabled file systems
<a name="efa-file-systems"></a>

If you are creating a file system with over 10 GBps of throughput capacity, we recommend enabling Elastic Fabric Adapter (EFA) to optimize throughput per client instance. EFA is a high-performance network interface that uses a custom-built operating system bypass technique and the AWS Scalable Reliable Datagram (SRD) network protocol to increase performance. For information about EFA, see [Elastic Fabric Adapter for AI/ML and HPC workloads on Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html) in the *Amazon EC2 User Guide*.

EFA-enabled file systems support two additional performance features: GPUDirect Storage (GDS) and ENA Express. GDS support builds on EFA to further enhance performance by enabling direct data transfer between the file system and the GPU memory, bypassing the CPU. This direct path eliminates the need for redundant memory copies and CPU involvement in the data transfer operations. With EFA and GDS support, you can achieve higher throughput to individual EFA-enabled client instances. ENA Express provides optimized network communication for Amazon EC2 instances using an advanced path selection algorithm and enhanced congestion control mechanism. With ENA Express support, you can achieve higher throughput to individual ENA Express-enabled client instances. For information about ENA Express, see [Improve network performance between EC2 instances with ENA Express](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ena-express.html) in the *Amazon EC2 User Guide*.

**Topics**
+ [Considerations when using EFA-enabled file systems](#efa-considerations)
+ [Prerequisites for using EFA-enabled file systems](#efa-prerequisites)
+ [Creating an EFA-enabled file system](#create-efa-file-system)

## Considerations when using EFA-enabled file systems
<a name="efa-considerations"></a>

Here are a few important items to consider when creating EFA-enabled file systems:
+ **Multiple connectivity options:** EFA-enabled file systems can communicate with client instances using ENA, ENA Express, and EFA.
+ **Deployment type:** EFA is supported on Persistent 2 file systems with a metadata configuration specified, including file systems using the Intelligent-Tiering storage class.
+ **Updating EFA setting:** You can choose to enable EFA when you create a new file system, but you cannot enable or disable EFA on an existing file system.
+ **Scaling throughput with storage capacity:** You can scale storage capacity on an EFA-enabled SSD-based file system to increase throughput capacity, but you cannot change the throughput tier of an EFA-enabled file system.
+ **AWS Regions:** For a list of AWS Regions that support EFA-enabled Persistent 2 file systems, see [Deployment type availability](using-fsx-lustre.md#persistent-deployment-regions).

## Prerequisites for using EFA-enabled file systems
<a name="efa-prerequisites"></a>

The following are prerequisites for using EFA-enabled file systems:

**To create your EFA-enabled file system:**
+ Use an EFA-enabled security group. For more information, see [EFA-enabled security groups](limit-access-security-groups.md#efa-security-groups).
+ Use the same Availability Zone and /16 CIDR as your EFA-enabled client instances within your Amazon VPC.
+ On Intelligent-Tiering file systems, EFA is only supported with a throughput capacity of 4,000 MBps or increments of 4,000 MBps.
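For Intelligent-Tiering file systems, you can sanity-check a planned throughput value against the 4,000 MBps increment rule before creating the file system. A minimal sketch (the function name is illustrative, not part of any AWS tooling):

```shell
# Check that a throughput capacity value (in MBps) is a valid
# multiple of 4,000 for an EFA-enabled Intelligent-Tiering file system.
# The function name is illustrative, not part of any AWS tooling.
is_valid_efa_it_throughput() {
  local tput=$1
  if (( tput > 0 && tput % 4000 == 0 )); then
    echo "valid"
  else
    echo "invalid"
  fi
}

is_valid_efa_it_throughput 8000   # a multiple of 4,000 MBps
is_valid_efa_it_throughput 6000   # not a multiple of 4,000 MBps
```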

**To access your file system using Elastic Fabric Adapter (EFA):**
+ Use Nitro v4 (or higher) EC2 instances that support EFA, excluding the trn2 instance family. See [Supported instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html#efa-instance-types) in the *Amazon EC2 User Guide*.
+ Run AL2023, RHEL 9.5 or newer, or Ubuntu 22.04 with kernel version 6.8 or newer. For more information, see [Installing the Lustre client](install-lustre-client.md).
+ Install the EFA modules and configure EFA interfaces on your client instances. For more information, see [Configuring EFA clients](configure-efa-clients.md).

**To access your file system using GPUDirect Storage (GDS):**
+ Use an Amazon EC2 P5, P5e, P5en, or P6-B200 client instance.
+ Install the NVIDIA Compute Unified Device Architecture (CUDA) package, the open source NVIDIA driver, and the NVIDIA GPUDirect Storage Driver on your client instance. For more information, see [Install the GDS driver (optional)](configure-efa-clients.md#install-gds-driver).

**To access your file system using ENA Express:**
+ Use Amazon EC2 instances that support ENA Express. See [Supported instance types for ENA Express](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ena-express.html#ena-express-supported-instance-types) in the *Amazon EC2 User Guide*.
+ Update the settings for your Linux instance. See [Prerequisites for Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ena-express.html#ena-express-prereq-linux) in the *Amazon EC2 User Guide*.
+ Enable ENA Express on network interfaces for your client instances. For details, see [Review ENA Express settings for your EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ena-express-list-view.html) in the *Amazon EC2 User Guide*.

## Creating an EFA-enabled file system
<a name="create-efa-file-system"></a>

This section contains instructions on how to create an FSx for Lustre EFA-enabled file system using the AWS CLI. For information on how to create an EFA-enabled file system using the Amazon FSx console, see [Step 1: Create your FSx for Lustre file system](getting-started.md#getting-started-step1).

### To create an EFA-enabled file system (CLI)
<a name="create-efa-cli"></a>

Use the [create-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/create-file-system.html) CLI command (or the equivalent [CreateFileSystem](https://docs.aws.amazon.com/fsx/latest/APIReference/API_CreateFileSystem.html) API operation). The following example creates an FSx for Lustre EFA-enabled file system with a `PERSISTENT_2` deployment type.

```
aws fsx create-file-system \
   --storage-capacity 4800 \
   --storage-type SSD \
   --file-system-type LUSTRE \
   --file-system-type-version 2.15 \
   --subnet-ids subnet-01234567890 \
   --security-group-ids sg-0123456789abcdefg \
   --lustre-configuration '{"DeploymentType": "PERSISTENT_2", "EfaEnabled": true}'
```

After successfully creating the file system, Amazon FSx returns the file system's description in JSON format.

# Using Lustre storage quotas
<a name="lustre-quotas"></a>

You can create storage quotas for users, groups, and projects on FSx for Lustre file systems. With storage quotas, you can limit the amount of disk space and the number of files that a user, group, or project can consume. Storage quotas automatically track user-level, group-level, and project-level usage so you can monitor consumption whether or not you choose to set storage limits.

Amazon FSx enforces quotas and prevents users who have exceeded them from writing to the storage space. When users exceed their quotas, they must delete enough files to get under the quota limits so that they can write to the file system again.

**Topics**
+ [Quota enforcement](#quotas-enforcement)
+ [Types of quotas](#quota-types)
+ [Quota limits and grace periods](#quota-limits)
+ [Setting and viewing quotas](#setting-quotas)
+ [Quotas and Amazon S3 linked buckets](#quotas-s3)
+ [Quotas and restoring backups](#quotas-backups)

## Quota enforcement
<a name="quotas-enforcement"></a>

User, group, and project quota enforcement is automatically enabled on all FSx for Lustre file systems. You cannot disable quota enforcement.

## Types of quotas
<a name="quota-types"></a>

System administrators with AWS account root user credentials can create the following types of quotas:
+ A *user quota* applies to an individual user. A user quota for a specific user can be different from the quotas of other users.
+ A *group quota* applies to all users who are members of a specific group.
+ A *project quota* applies to all files or directories associated with a project. A project can include multiple directories or individual files located in different directories within a file system.
**Note**  
Project quotas are only supported on FSx for Lustre file systems running Lustre version 2.15.
+ A *block quota* limits the amount of disk space that a user, group, or project can consume. You configure the storage size in kilobytes.
+ An *inode quota* limits the number of files or directories that a user, group, or project can create. You configure the maximum number of inodes as an integer.

**Note**  
Default quotas aren't supported.

If you set quotas for a particular user and for a group that the user is a member of, the user's data usage counts toward, and is limited by, both quotas. If either quota limit is reached, the user is blocked from writing to the file system.

**Note**  
Quotas set for the root user are not enforced. Similarly, writing data as the root user using the `sudo` command bypasses enforcement of the quota.

## Quota limits and grace periods
<a name="quota-limits"></a>

Amazon FSx enforces user, group, and project quotas as a hard limit or as a soft limit with a configurable grace period.

The hard limit is the absolute limit. If users exceed their hard limit, a block or inode allocation fails with a `Disk quota exceeded` message. Users who have reached their quota hard limit must delete enough files or directories to get under the quota limit before they can write to the file system again. When a grace period is set, users can exceed the soft limit within the grace period, as long as they stay under the hard limit.

For soft limits, you configure a grace period in seconds. The soft limit must be smaller than the hard limit.

You can set different grace periods for inode and block quotas. You can also set different grace periods for a user quota, a group quota, and a project quota. When user, group, and project quotas have different grace periods, the soft limit is enforced as a hard limit as soon as the grace period of any of these quotas elapses.

When users exceed a soft limit, Amazon FSx allows them to continue exceeding their quota until the grace period has elapsed or until the hard limit is reached. After the grace period ends, the soft limit converts to a hard limit, and users are blocked from any further write operations until their storage usage returns below the defined block quota or inode quota limits. Users don't receive a notification or warning when the grace period begins.
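The enforcement behavior described above can be summarized as a small decision sketch (the function and values are illustrative, not Amazon FSx internals):

```shell
# Decide whether a write is allowed given current usage, a soft limit,
# a hard limit, and whether the grace period has elapsed (0 or 1).
write_allowed() {
  local usage=$1 soft=$2 hard=$3 grace_expired=$4
  if (( usage >= hard )); then
    echo "blocked"    # the hard limit is absolute
  elif (( usage >= soft && grace_expired == 1 )); then
    echo "blocked"    # the soft limit now acts as a hard limit
  else
    echo "allowed"    # under the soft limit, or within the grace period
  fi
}

write_allowed 4000 5000 8000 0   # under the soft limit
write_allowed 6000 5000 8000 0   # over the soft limit, within grace
write_allowed 6000 5000 8000 1   # over the soft limit, grace elapsed
```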

## Setting and viewing quotas
<a name="setting-quotas"></a>

You set storage quotas using Lustre file system `lfs` commands in your Linux terminal. The `lfs setquota` command sets quota limits, and the `lfs quota` command displays quota information.

For more information about Lustre quota commands, see the *Lustre Operations Manual* on the [Lustre documentation website](http://lustre.org/documentation/).

### Setting user, group, and project quotas
<a name="setting-user-quotas"></a>

The syntax of the `setquota` command for setting user, group, or project quotas is as follows.

```
lfs setquota {-u|--user|-g|--group|-p|--project} username|groupname|projectid
             [-b block_softlimit] [-B block_hardlimit]
             [-i inode_softlimit] [-I inode_hardlimit]
             /mount_point
```

Where:
+ `-u` or `--user` specifies a user to set a quota for.
+ `-g` or `--group` specifies a group to set a quota for.
+ `-p` or `--project` specifies a project to set a quota for.
+ `-b` sets a block quota with a soft limit. `-B` sets a block quota with a hard limit. Both *block_softlimit* and *block_hardlimit* are expressed in kilobytes, and the minimum value is 1024 KB.
+ `-i` sets an inode quota with a soft limit. `-I` sets an inode quota with a hard limit. Both *inode_softlimit* and *inode_hardlimit* are expressed in number of inodes, and the minimum value is 1024 inodes.
+ *mount_point* is the directory that the file system was mounted on.

**User quota example:** The following command sets a 5,000 KB soft block limit, an 8,000 KB hard block limit, a 2,000 soft inode limit, and a 3,000 hard inode limit quota for `user1` on the file system mounted to `/mnt/fsx`.

```
sudo lfs setquota -u user1 -b 5000 -B 8000 -i 2000 -I 3000 /mnt/fsx
```

**Group quota example:** The following command sets a 100,000 KB hard block limit for the group named `group1` on the file system mounted to `/mnt/fsx`.

```
sudo lfs setquota -g group1 -B 100000 /mnt/fsx
```

**Project quota example:** First make sure that you have used the `project` command to associate the desired files and directories with the project. For example, the following command associates all the files and sub-directories of the `/mnt/fsxfs/dir1` directory with the project whose project ID is `100`.

```
sudo lfs project -p 100 -r -s /mnt/fsxfs/dir1
```

Then use the `setquota` command to set the project quota. The following command sets a 307,200 KB soft block limit, a 309,200 KB hard block limit, a 10,000 soft inode limit, and an 11,000 hard inode limit for project `100` on the file system mounted to `/mnt/fsx`.

```
sudo lfs setquota -p 100 -b 307200 -B 309200 -i 10000 -I 11000 /mnt/fsx
```
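The block limits in the examples above are expressed in kilobytes. When you think of quotas in GiB, a quick conversion can prevent unit mistakes; a minimal sketch (the helper name is illustrative, and assumes binary units):

```shell
# Convert GiB to the kilobyte values that lfs setquota expects for
# -b/-B block limits (1 GiB = 1,048,576 KB in binary units).
# Helper name is illustrative, not part of Lustre or AWS tooling.
gib_to_kb() {
  echo $(( $1 * 1024 * 1024 ))
}

gib_to_kb 10   # 10 GiB expressed in KB
# For example: sudo lfs setquota -u user1 -B "$(gib_to_kb 10)" /mnt/fsx
```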

### Setting grace periods
<a name="setting-grace"></a>

The default grace period is one week. You can adjust the default grace period for users, groups, or projects using the following syntax.

```
lfs setquota -t {-u|-g|-p}
             [-b block_grace]
             [-i inode_grace]
             /mount_point
```

Where:
+ `-t` indicates that a grace time period will be set.
+ `-u` sets a grace period for all users.
+ `-g` sets a grace period for all groups.
+ `-p` sets a grace period for all projects.
+ `-b` sets a grace period for block quotas. `-i` sets a grace period for inode quotas. Both *block_grace* and *inode_grace* are expressed in integer seconds or in the `XXwXXdXXhXXmXXs` format.
+ *mount_point* is the directory that the file system was mounted on.

The following command sets grace periods of 1,000 seconds for user block quotas and 1 week and 4 days for user inode quotas.

```
sudo lfs setquota -t -u -b 1000 -i 1w4d /mnt/fsx
```
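To check what a grace value like `1w4d` amounts to, you can convert the `XXwXXdXXhXXmXXs` format to seconds. A minimal parser sketch (illustrative, and assuming a well-formed input string):

```shell
# Convert a grace period in XXwXXdXXhXXmXXs format to seconds.
# Parser name is illustrative, not part of Lustre or AWS tooling.
grace_to_seconds() {
  local s=$1 total=0
  while [[ $s =~ ^([0-9]+)([wdhms]) ]]; do
    local n=${BASH_REMATCH[1]} unit=${BASH_REMATCH[2]}
    case $unit in
      w) total=$(( total + n * 604800 ));;   # weeks
      d) total=$(( total + n * 86400 ));;    # days
      h) total=$(( total + n * 3600 ));;     # hours
      m) total=$(( total + n * 60 ));;       # minutes
      s) total=$(( total + n ));;            # seconds
    esac
    s=${s:${#BASH_REMATCH[0]}}
  done
  echo "$total"
}

grace_to_seconds 1w4d   # 1 week + 4 days
```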

### Viewing quotas
<a name="viewing-quotas"></a>

The `quota` command displays information about user quotas, group quotas, project quotas, and grace periods.


| View quota command | Quota information displayed | 
| --- | --- | 
|  `lfs quota /mount_point`  |  General quota information (disk usage and limits) for the user running the command and the user's primary group.  | 
|  `lfs quota -u username /mount_point`  |  General quota information for a specific user. Users with AWS account root user credentials can run this command for any user, but non-root users can't run this command to get quota information about other users.  | 
|  `lfs quota -u username -v /mount_point`  |  General quota information for a specific user and detailed quota statistics for each object storage target (OST) and metadata target (MDT). Users with AWS account root user credentials can run this command for any user, but non-root users can't run this command to get quota information about other users.  | 
|  `lfs quota -g groupname /mount_point`  |  General quota information for a specific group.  | 
|  `lfs quota -p projectid /mount_point`  |  General quota information for a specific project.  | 
| `lfs quota -t -u /mount_point` | Block and inode grace times for user quotas. | 
| `lfs quota -t -g /mount_point` | Block and inode grace times for group quotas. | 
| `lfs quota -t -p /mount_point` | Block and inode grace times for project quotas. | 

## Quotas and Amazon S3 linked buckets
<a name="quotas-s3"></a>

You can link your FSx for Lustre file system to an Amazon S3 data repository. For more information, see [Linking your file system to an Amazon S3 bucket](create-dra-linked-data-repo.md).

You can optionally choose a specific folder or prefix within a linked S3 bucket as an import path to your file system. When a folder in Amazon S3 is specified and imported into your file system, only the data from that folder counts toward the quota; the data of the entire bucket is not counted against the quota limits.

File metadata from a linked S3 bucket is imported into a directory structure that matches the imported folder in Amazon S3. These files count toward the inode quotas of the users and groups who own them.

When a user performs an `hsm_restore` or lazy loads a file, the file's full size counts towards the block quota associated with the owner of the file. For example, if user A lazy loads a file that is owned by user B, the amount of storage and inode usage counts towards user B's quota. Similarly, when a user uses the Amazon FSx API to release a file, the data is freed up from the block quotas of the user or group who owns the file.

Because HSM restores and lazy loading are performed with root access, they bypass quota enforcement. Once data has been imported, it counts towards the user or group based on the ownership set in S3, which can cause users or groups to exceed their block limits. If this occurs, they'll need to free up files to be able to write to the file system again.

Similarly, file systems with automatic import enabled will automatically create new inodes for objects added to S3. These new inodes are created with root access and bypass quota enforcement while they're being created. These new inodes will count towards the users and groups, based on who owns the object in S3. If those users and groups exceed their inode quotas based on automatic import activity, they'll have to delete files in order to free up additional capacity and get below their quota limits.

## Quotas and restoring backups
<a name="quotas-backups"></a>

When you restore a backup, the quota settings of the original file system are implemented in the restored file system. For example, if quotas are set in file system A, and file system B is created from a backup of file system A, file system A's quotas are enforced in file system B.

# Managing storage capacity
<a name="managing-storage-capacity"></a>

You can increase the SSD or HDD storage capacity that is configured on your FSx for Lustre file system as you need additional storage and throughput. Because the throughput of an FSx for Lustre file system scales linearly with storage capacity, you also get a comparable increase in throughput capacity. To increase the storage capacity, you can use the Amazon FSx console, the AWS Command Line Interface (AWS CLI), or the Amazon FSx API.

When you request an update to your file system's storage capacity, Amazon FSx automatically adds new network file servers and scales your metadata server. While scaling storage capacity, the file system may be unavailable for a few minutes. File operations issued by clients while the file system is unavailable will transparently retry and eventually succeed after storage scaling is complete. During the time that the file system is unavailable, the file system status is set to `UPDATING`. Once storage scaling is complete, the file system status is set to `AVAILABLE`.

Amazon FSx then runs a storage optimization process that transparently rebalances data across the existing and newly added file servers. Rebalancing is performed in the background with no impact to file system availability. During rebalancing, you might see decreased file system performance as resources are consumed for data movement. For most file systems, storage optimization takes a few hours up to a few days. You can access and use your file system during the optimization phase.

You can track the storage optimization progress at any time using the Amazon FSx console, CLI, and API. For more information, see [Monitoring storage capacity increases](monitoring-storage-capacity-increase.md).

**Topics**
+ [Considerations when increasing storage capacity](#storage-capacity-important-to-know)
+ [When to increase storage capacity](#when-to-modify-storage-capacity)
+ [How concurrent storage scaling and backup requests are handled](#storage-capacity-changes-and-backups)
+ [Increasing storage capacity](increase-storage-capacity.md)
+ [Monitoring storage capacity increases](monitoring-storage-capacity-increase.md)

## Considerations when increasing storage capacity
<a name="storage-capacity-important-to-know"></a>

Here are a few important items to consider when increasing storage capacity:
+ **Increase only** – You can only *increase* the amount of storage capacity for a file system; you cannot decrease storage capacity.
+ **Increase increments** – When you increase storage capacity, use the increments listed in the **Increase storage capacity** dialog box.
+ **Time between increases** – You can't make further storage capacity increases on a file system until 6 hours after the last increase was requested.
+ **Throughput capacity** – Increasing storage capacity also automatically increases throughput capacity. For persistent HDD file systems with an SSD cache, the read cache storage capacity is similarly increased to maintain an SSD cache sized at 20 percent of the HDD storage capacity. Amazon FSx calculates the new values for the storage and throughput capacity units and lists them in the **Increase storage capacity** dialog box.
**Note**  
You can independently modify the throughput capacity of a persistent SSD-based file system without having to update the file system's storage capacity. For more information, see [Managing provisioned throughput capacity](managing-throughput-capacity.md). 
+ **Deployment type** – You can increase the storage capacity of all deployment types except for scratch 1 file systems. 
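As a concrete example of the SSD read cache sizing mentioned above, the 20 percent rule works out as follows (values are illustrative):

```shell
# For a persistent HDD file system with an SSD cache, the read cache
# is maintained at 20 percent of the HDD storage capacity.
hdd_capacity_gib=6000
ssd_cache_gib=$(( hdd_capacity_gib * 20 / 100 ))
echo "HDD ${hdd_capacity_gib} GiB -> SSD read cache ${ssd_cache_gib} GiB"
```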

## When to increase storage capacity
<a name="when-to-modify-storage-capacity"></a>

Increase your file system's storage capacity when it's running low on free storage capacity. Use the `FreeStorageCapacity` CloudWatch metric to monitor the amount of free storage that is available on the file system. You can create an Amazon CloudWatch alarm on this metric and get notified when it drops below a specific threshold. For more information, see [Monitoring with Amazon CloudWatch](monitoring-cloudwatch.md).

You can use CloudWatch metrics to monitor your file system's ongoing throughput usage levels. If you determine that your file system needs a higher throughput capacity, you can use the metric information to help you decide how much to increase the storage capacity. For information about how to determine your file system's current throughput, see [How to use Amazon FSx for Lustre CloudWatch metrics](how_to_use_metrics.md). For information about how storage capacity affects throughput capacity, see [Amazon FSx for Lustre performance](performance.md).

You can also view your file system's storage capacity and total throughput on the **Summary** panel of the file system details page.

## How concurrent storage scaling and backup requests are handled
<a name="storage-capacity-changes-and-backups"></a>

You can request a backup just before a storage scaling workflow begins or while it is in progress. The sequence of how Amazon FSx handles the two requests is as follows:
+ If a storage scaling workflow is in progress (storage scaling status is `IN_PROGRESS` and file system status is `UPDATING`) and you request a backup, the backup request is queued. The backup task is started when storage scaling is in the storage optimization phase (storage scaling status is `UPDATED_OPTIMIZING` and file system status is `AVAILABLE`).
+ If the backup is in progress (backup status is `CREATING`) and you request storage scaling, the storage scaling request is queued. The storage scaling workflow is started when Amazon FSx is transferring the backup to Amazon S3 (backup status is `TRANSFERRING`).

If a storage scaling request is pending and a file system backup request is also pending, the backup task has higher precedence. The storage scaling task does not start until the backup task is finished.

# Increasing storage capacity
<a name="increase-storage-capacity"></a>

You can increase a file system's storage capacity using the Amazon FSx console, the AWS CLI, or the Amazon FSx API.

**To increase storage capacity for a file system (console)**

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Navigate to **File systems**, and choose the Lustre file system that you want to increase storage capacity for.

1. For **Actions**, choose **Update storage capacity**. Or, in the **Summary** panel, choose **Update** next to the file system's **Storage capacity** to display the **Increase storage capacity** dialog box.

1. For **Desired storage capacity**, provide a new storage capacity in GiB that is greater than the current storage capacity of the file system:
   + For a persistent SSD or scratch 2 file system, this value must be in multiples of 2400 GiB.
   + For a persistent HDD file system, this value must be in multiples of 6000 GiB for 12 MBps/TiB file systems and multiples of 1800 GiB for 40 MBps/TiB file systems.
   + For an EFA-enabled file system, this value must be in multiples of 38400 GiB for 125 MBps/TiB file systems, multiples of 19200 GiB for 250 MBps/TiB file systems, multiples of 9600 GiB for 500 MBps/TiB file systems, and multiples of 4800 GiB for 1000 MBps/TiB file systems.
**Note**  
You cannot increase the storage capacity of scratch 1 file systems.

1. Choose **Update** to initiate the storage capacity update.

1. You can monitor the update progress on the file systems detail page in the **Updates** tab.

**To increase storage capacity for a file system (CLI)**

1. To increase the storage capacity for an FSx for Lustre file system, use the AWS CLI command [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html). Set the following parameters:

   Set `--file-system-id` to the ID of the file system you are updating.

   Set `--storage-capacity` to an integer value that is the target storage capacity, in GiB. For a persistent SSD or scratch 2 file system, this value must be a multiple of 2400. For a persistent HDD file system, this value must be a multiple of 6000 for 12 MBps/TiB file systems and a multiple of 1800 for 40 MBps/TiB file systems. The new target value must be greater than the current storage capacity of the file system.

   This command specifies a storage capacity target value of 9600 GiB for a persistent SSD or scratch 2 file system.

   ```
   $ aws fsx update-file-system \
       --file-system-id fs-0123456789abcdef0 \
       --storage-capacity 9600
   ```

1. You can monitor the progress of the update by using the AWS CLI command [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html). Look for `AdministrativeActions` in the output.

   For more information, see [AdministrativeAction](https://docs.aws.amazon.com/fsx/latest/APIReference/API_AdministrativeAction.html).
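Before calling `update-file-system`, you can sanity-check the target value against the increment rules above. A minimal sketch (the function name and values are illustrative):

```shell
# Check that a target storage capacity (GiB) is greater than the
# current capacity and a valid multiple of the file system's increment
# (for example, 2400 for persistent SSD and scratch 2 file systems).
# Function name is illustrative, not part of any AWS tooling.
valid_target_capacity() {
  local current=$1 target=$2 increment=$3
  if (( target > current && target % increment == 0 )); then
    echo "ok"
  else
    echo "rejected"
  fi
}

valid_target_capacity 4800 9600 2400    # persistent SSD / scratch 2
valid_target_capacity 4800 4800 2400    # not greater than current
valid_target_capacity 6000 13000 6000   # 12 MBps/TiB HDD, not a multiple
```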

# Monitoring storage capacity increases
<a name="monitoring-storage-capacity-increase"></a>

You can monitor the progress of a storage capacity increase using the Amazon FSx console, the API, or the AWS CLI.

## Monitoring increases in the console
<a name="monitor-storage-action-console"></a>

On the **Updates** tab of the file system details page, you can view the 10 most recent updates of each update type.

You can view the following information:

**Update type**  
Supported types are **Storage capacity** and **Storage optimization**.

**Target value**  
The desired value to update the file system's storage capacity to.

**Status**  
The current status of the storage capacity updates. The possible values are as follows:  
+ **Pending** – Amazon FSx has received the update request, but has not started processing it.
+ **In progress** – Amazon FSx is processing the update request.
+ **Updated; Optimizing** – Amazon FSx has increased the file system's storage capacity. The storage optimization process is now rebalancing data across the file servers.
+ **Completed** – The storage capacity increase completed successfully.
+ **Failed** – The storage capacity increase failed. Choose the question mark (**?**) to see details on why the storage update failed.

**Progress %**  
Displays the progress of the storage optimization process as percent complete.

**Request time**  
The time that Amazon FSx received the update action request.

## Monitoring increases with the AWS CLI and API
<a name="monitor-storage-action-cli-api"></a>

You can view and monitor file system storage capacity increase requests using the [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) AWS CLI command and the [DescribeFileSystems](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html) API action. The `AdministrativeActions` array lists the 10 most recent update actions for each administrative action type. When you increase a file system's storage capacity, two `AdministrativeActions` are generated: a `FILE_SYSTEM_UPDATE` and a `STORAGE_OPTIMIZATION` action.

The following example shows an excerpt of the response of a **describe-file-systems** CLI command. The file system has a storage capacity of 4800 GiB, and there is a pending administrative action to increase the storage capacity to 9600 GiB.

```
{
    "FileSystems": [
        {
            "OwnerId": "111122223333",
            .
            .
            .
            "StorageCapacity": 4800,
            "AdministrativeActions": [
                {
                    "AdministrativeActionType": "FILE_SYSTEM_UPDATE",
                    "RequestTime": 1581694764.757,
                    "Status": "PENDING",
                    "TargetFileSystemValues": {
                        "StorageCapacity": 9600
                    }
                },
                {
                    "AdministrativeActionType": "STORAGE_OPTIMIZATION",
                    "RequestTime": 1581694764.757,
                    "Status": "PENDING"
                }
            ]
```

Amazon FSx processes the `FILE_SYSTEM_UPDATE` action first, adding new file servers to the file system. When the new storage is available to the file system, the `FILE_SYSTEM_UPDATE` status changes to `UPDATED_OPTIMIZING`. The storage capacity shows the new larger value, and Amazon FSx begins processing the `STORAGE_OPTIMIZATION` administrative action. This is shown in the following excerpt of the response of a **describe-file-systems** CLI command. 

The `ProgressPercent` property displays the progress of the storage optimization process. After the storage optimization process completes successfully, the status of the `FILE_SYSTEM_UPDATE` action changes to `COMPLETED`, and the `STORAGE_OPTIMIZATION` action no longer appears.

```
{
    "FileSystems": [
        {
            "OwnerId": "111122223333",
            .
            .
            .
            "StorageCapacity": 9600,
            "AdministrativeActions": [
                {
                    "AdministrativeActionType": "FILE_SYSTEM_UPDATE",
                    "RequestTime": 1581694764.757,
                    "Status": "UPDATED_OPTIMIZING",
                    "TargetFileSystemValues": {
                        "StorageCapacity": 9600
                    }
                },
                {
                    "AdministrativeActionType": "STORAGE_OPTIMIZATION",
                    "RequestTime": 1581694764.757,
                    "Status": "IN_PROGRESS",
                    "ProgressPercent": 50
                }
            ]
```
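
In a monitoring script, the parsed response can be inspected to track optimization progress. The following sketch (the helper name and the in-memory excerpt are illustrative, modeled on the response above) pulls the `STORAGE_OPTIMIZATION` progress out of the `AdministrativeActions` array:

```python
def storage_optimization_progress(file_system):
    """Return the ProgressPercent of an in-progress STORAGE_OPTIMIZATION
    action, or None if no such action is running."""
    for action in file_system.get("AdministrativeActions", []):
        if (action["AdministrativeActionType"] == "STORAGE_OPTIMIZATION"
                and action["Status"] == "IN_PROGRESS"):
            return action.get("ProgressPercent")
    return None

# Excerpt modeled on the describe-file-systems response shown above
file_system = {
    "StorageCapacity": 9600,
    "AdministrativeActions": [
        {"AdministrativeActionType": "FILE_SYSTEM_UPDATE",
         "Status": "UPDATED_OPTIMIZING"},
        {"AdministrativeActionType": "STORAGE_OPTIMIZATION",
         "Status": "IN_PROGRESS", "ProgressPercent": 50},
    ],
}
print(storage_optimization_progress(file_system))  # 50
```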



If the storage capacity increase fails, the status of the `FILE_SYSTEM_UPDATE` action changes to `FAILED`. The `FailureDetails` property provides information about the failure, shown in the following example.

```
{
    "FileSystems": [ 
        { 
            "OwnerId": "111122223333",
            .
            .
            .
            "StorageCapacity": 4800,
            "AdministrativeActions": [ 
                { 
                    "AdministrativeActionType": "FILE_SYSTEM_UPDATE",
                    "FailureDetails": { 
                        "Message": "string"
                    },
                    "RequestTime": 1581694764.757,
                    "Status": "FAILED",
                    "TargetFileSystemValues": {
                        "StorageCapacity": 9600
                    }
                }
            ]
```

# Managing provisioned SSD read cache
<a name="managing-ssd-read-cache"></a>

When you create a file system with the Intelligent-Tiering storage class, you have the option to also provision an SSD read cache, which provides SSD latencies for reads of your frequently accessed data, at up to 3 IOPS per GiB.

You can configure your SSD read cache for frequently-accessed data with one of these sizing mode options:
+ **Automatic (proportional to throughput capacity)**. With Automatic, Amazon FSx for Lustre automatically selects an SSD data read cache size based on provisioned throughput capacity.
+ **Custom (user-provisioned)**. With Custom, you can customize the size of your SSD read cache and scale it up or down at any time based on your workload's needs.
+ Choose **No Cache** if you do not want to use an SSD data read cache with your file system.

In Automatic (proportional to throughput capacity) mode, Amazon FSx automatically provisions the following default read cache size based on the throughput capacity of your file system.


| Provisioned throughput capacity (MBps) | SSD read cache size in Automatic (proportional to throughput capacity) mode (GiB) | Minimum supported SSD read cache size (GiB) | Maximum supported SSD read cache size (GiB) | 
| --- | --- | --- | --- | 
| Every 4000 | 20000 | 32 | 131072 | 

After your file system is created, you can modify your read cache's sizing mode and storage capacity at any time.

**Topics**
+ [Considerations when updating SSD read cache](#considerations-update-ssd-read-cache)
+ [Updating a provisioned SSD read cache](#update-ssd-read-cache)
+ [Monitoring SSD read cache updates](#monitoring-ssd-read-cache-update)

## Considerations when updating SSD read cache
<a name="considerations-update-ssd-read-cache"></a>

Here are a few important considerations when modifying your SSD data read cache:
+ Any time you modify the SSD read cache, all of its contents will be erased. This means that you may see a decrease in performance levels until the SSD read cache is populated again.
+ You can increase or decrease the capacity size of an SSD read cache. However, you can only do this once every six hours. There is no time restriction when adding or removing an SSD read cache from your file system.
+ You must increase or decrease the size of your SSD read cache by a minimum of 10% every time you modify it.
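
Taken together, these rules can be checked client-side before issuing a request. The following is a minimal sketch under the stated constraints; the helper name is illustrative, and the six-hour window applies only to resizing an existing cache, not to adding or removing one:

```python
MIN_CHANGE_FRACTION = 0.10   # resize by at least 10% of the current size
HOURS_BETWEEN_RESIZES = 6    # only one resize every six hours

def can_resize_read_cache(current_gib, requested_gib, hours_since_last_resize):
    """Validate an SSD read cache resize request against the documented rules.
    Assumes the cache already exists (current_gib > 0)."""
    if hours_since_last_resize < HOURS_BETWEEN_RESIZES:
        return False
    change = abs(requested_gib - current_gib) / current_gib
    return change >= MIN_CHANGE_FRACTION

print(can_resize_read_cache(1000, 1100, 8))  # True: +10%, outside the window
print(can_resize_read_cache(1000, 1050, 8))  # False: only +5%
```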

## Updating a provisioned SSD read cache
<a name="update-ssd-read-cache"></a>

You can update your SSD data read cache using the Amazon FSx console, the AWS CLI, or the Amazon FSx API.

### To update the SSD read cache for an Intelligent-Tiering file system (console)
<a name="update-sizing-mode-console"></a>

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. In the left navigation pane, choose **File systems**. In the **File systems** list, choose the FSx for Lustre file system that you want to update the SSD read cache for.

1. On the **Summary** panel, choose **Update** next to the file system's **SSD read cache** value.

   The **Update SSD read cache** dialog box appears.

1. Select the new sizing mode that you would like for your data read cache, as follows:
   + Choose **Automatic (proportional to throughput capacity)** to have your data read cache automatically sized based on your throughput capacity.
   + Choose **Custom (user-provisioned)** if you know the approximate size of your dataset and would like to customize your data read cache. If you select Custom, you will also need to specify the **Desired read cache capacity** in GiB.
   + Choose **None** if you do not want to use an SSD data read cache with your Intelligent-Tiering file system.

1. Choose **Update**.

### To update the SSD read cache for an Intelligent-Tiering file system (CLI)
<a name="update-data-read-cache-cli"></a>

To update the SSD data read cache for an Intelligent-Tiering file system, use the AWS CLI command [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html) or the equivalent UpdateFileSystem API action. Set the following parameters:
+ Set `--file-system-id` to the ID of the file system that you are updating.
+ To modify your SSD read cache, use the `--lustre-configuration DataReadCacheConfiguration` property. This property has two parameters, `SizeGiB` and `SizingMode`:
  + **SizeGiB** ‐ Sets the size of your SSD read cache in GiB when using `USER_PROVISIONED` mode.
  + **SizingMode** ‐ Sets the sizing mode of your SSD read cache.
    + Set to `NO_CACHE` if you do not want to use an SSD read cache with your Intelligent-Tiering file system.
    + Set to `USER_PROVISIONED` to specify the exact size of your SSD read cache.
    + Set to `PROPORTIONAL_TO_THROUGHPUT_CAPACITY` to have your SSD data read cache automatically sized based on your throughput capacity.

The following example updates the SSD read cache to `USER_PROVISIONED` mode and sets the size to 524288 GiB.

```
aws fsx update-file-system \
   --file-system-id fs-0123456789abcdef0 \
   --lustre-configuration 'DataReadCacheConfiguration={SizeGiB=524288,SizingMode=USER_PROVISIONED}'
```

To monitor the progress of the update, use the [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) AWS CLI command. Look for the `AdministrativeActions` section in the output.

For more information, see [AdministrativeAction](https://docs.aws.amazon.com/fsx/latest/APIReference/API_AdministrativeAction.html) in the *Amazon FSx API Reference*.

## Monitoring SSD read cache updates
<a name="monitoring-ssd-read-cache-update"></a>

You can monitor the progress of an SSD read cache update by using the Amazon FSx console, the API, or the AWS CLI.

### Monitoring updates in the console
<a name="monitor-read-cache-action-console"></a>

You can monitor file system updates in the **Updates** tab on the **File system details** page.

For SSD read cache updates, you can view the following information:

****Update type****  
Supported types are **SSD read cache sizing mode** and **SSD read cache size**.

****Target value****  
The updated value for the file system's SSD read cache sizing mode or SSD read cache size.

****Status****  
The current status of the update. The possible values are as follows:  
+ **Pending** – Amazon FSx has received the update request, but has not started processing it.
+ **In progress** – Amazon FSx is processing the update request.
+ **Completed** – The update finished successfully.
+ **Failed** – The update request failed. Choose the question mark (**?**) to see details on why the request failed.

****Request time****  
The time that Amazon FSx received the update action request.

### Monitoring SSD read cache updates with the AWS CLI and API
<a name="monitor-ssd-read-cache-update-cli-api"></a>

You can view and monitor file system SSD read cache update requests using the [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) AWS CLI command and the [DescribeFileSystems](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html) API operation. The `AdministrativeActions` array lists the 10 most recent update actions for each administrative action type. When you update a file system's SSD read cache, a `FILE_SYSTEM_UPDATE` action is added to the `AdministrativeActions` array.

The following example shows an excerpt of the response of a `describe-file-systems` CLI command. The file system has a pending administrative action to change the SSD read cache sizing mode to `USER_PROVISIONED` and the SSD read cache size to 524288.

```
"AdministrativeActions": [
    {
        "AdministrativeActionType": "FILE_SYSTEM_UPDATE",
        "RequestTime": 1586797629.095,
        "Status": "PENDING",
        "TargetFileSystemValues": {
            "LustreConfiguration": {
                "DataReadCacheConfiguration": {
                    "SizingMode": "USER_PROVISIONED",
                    "SizeGiB": 524288
                }
            }
        }
    }
]
```

When the new SSD read cache configuration is available to the file system, the `FILE_SYSTEM_UPDATE` status changes to `COMPLETED`. If the SSD read cache update request fails, the status of the `FILE_SYSTEM_UPDATE` action changes to `FAILED`.

# Managing metadata performance
<a name="managing-metadata-performance"></a>

You can update the metadata configuration of your FSx for Lustre file system without any disruption to your end users or applications by using the Amazon FSx console, Amazon FSx API, or AWS Command Line Interface (AWS CLI). The update procedure increases the number of provisioned Metadata IOPS for your file system.

**Note**  
Enhanced metadata performance is available only on Lustre version 2.15 file systems. You can increase metadata performance only on FSx for Lustre file systems created with the Persistent 2 deployment type and a metadata configuration specified at creation. You cannot add or update the metadata configuration on a file system that was created without one. This also applies to file systems restored from backups of 2.12 file systems (which did not support enhanced metadata performance) or of 2.15 file systems that had no metadata configuration specified.

The increased metadata performance of your file system is available for use within minutes. You can update the metadata performance at any time, as long as metadata performance increase requests are at least 6 hours apart. While metadata performance is scaling, the file system may be unavailable for a few minutes. File operations issued by clients while the file system is unavailable transparently retry and eventually succeed after scaling is complete. You are billed for the increased metadata performance after it becomes available to you.

You can track the progress of a metadata performance increase at any time by using the Amazon FSx console, CLI, and API. For more information, see [Monitoring metadata configuration updates](monitoring-metadata-performance-increase.md).

**Topics**
+ [Lustre metadata performance configuration](#metadata-configuration)
+ [Considerations when increasing metadata performance](#metadata-scaling-considerations)
+ [When to increase metadata performance](#when-to-modify-metadata-performance)
+ [Increasing metadata performance](modify-metadata-performance.md)
+ [Changing the metadata configuration mode](switch-provisioning-mode.md)
+ [Monitoring metadata configuration updates](monitoring-metadata-performance-increase.md)

## Lustre metadata performance configuration
<a name="metadata-configuration"></a>

The number of provisioned Metadata IOPS determines the maximum rate of metadata operations that can be supported by the file system.

When you create the file system, you choose a metadata configuration mode:
+ For SSD file systems, you can choose Automatic mode if you want Amazon FSx to automatically provision and scale the Metadata IOPS on your file system based on your file system's storage capacity. Note that Intelligent-Tiering file systems don't support Automatic mode.
+ For SSD file systems, you can choose User-provisioned if you want to specify the number of Metadata IOPS to provision for your file system.
+ For Intelligent-Tiering file systems, you must choose User-provisioned mode. With User-provisioned mode, you can specify the number of Metadata IOPS to provision for your file system.

On SSD file systems, you can switch from Automatic mode to User-provisioned mode at any time. You can also switch from User-provisioned to Automatic mode if the number of Metadata IOPS provisioned on your file system matches the default number of Metadata IOPS provisioned in Automatic mode. Intelligent-Tiering file systems only support User-provisioned mode, so you can't switch metadata configuration modes.

Valid Metadata IOPS values are as follows:
+ For SSD file systems, valid Metadata IOPS values are 1500, 3000, 6000, and multiples of 12000 up to a maximum of 192000.
+ For Intelligent-Tiering file systems, valid Metadata IOPS values are 6000 and 12000.
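
These constraints are easy to express in a client-side check. A minimal sketch (the helper name and `storage_class` labels are illustrative, not FSx API values):

```python
def is_valid_metadata_iops(iops, storage_class="SSD"):
    """Check a Metadata IOPS value against the documented valid values."""
    if storage_class == "INTELLIGENT_TIERING":
        return iops in (6000, 12000)
    # SSD: 1500, 3000, 6000, or a multiple of 12000 up to 192000
    return iops in (1500, 3000, 6000) or (
        iops % 12000 == 0 and 12000 <= iops <= 192000)

print(is_valid_metadata_iops(96000))                        # True
print(is_valid_metadata_iops(9000))                         # False
print(is_valid_metadata_iops(6000, "INTELLIGENT_TIERING"))  # True
```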

If the metadata performance of your workload exceeds the number of Metadata IOPS provisioned in Automatic mode, you can use User-provisioned mode to increase the Metadata IOPS value for your file system.

You can view the current value of the file system's metadata server configuration as follows:
+ Using the console – On the **Summary** panel of the file system details page, the **Metadata IOPS** field shows the current value of the provisioned Metadata IOPS and the current metadata configuration mode of the file system.
+ Using the CLI or API – Use the [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) CLI command or the [DescribeFileSystems](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html) API operation, and look for the `MetadataConfiguration` property.

## Considerations when increasing metadata performance
<a name="metadata-scaling-considerations"></a>

Here are a few important considerations when increasing your metadata performance:
+ **Metadata performance increase only** – You can only *increase* the number of Metadata IOPS for a file system; you cannot decrease the number of Metadata IOPS.
+ **Specifying Metadata IOPS in Automatic mode not supported** – You can't specify the number of Metadata IOPS on a file system that is in Automatic mode. You'll have to switch to User-provisioned mode and then make the request. For more information, see [Changing the metadata configuration mode](switch-provisioning-mode.md).
+ **Metadata IOPS for data written before scaling** – When scaling Metadata IOPS beyond 12000, FSx for Lustre adds new metadata servers to your file system. New metadata is automatically distributed across all servers for improved performance. However, metadata and subdirectories created before scaling remain on the original servers, and operations on them do not benefit from the increased Metadata IOPS.
+ **Time between increases** – You can't make further metadata performance increases on a file system until 6 hours after the last increase was requested.
+ **Concurrent metadata performance and SSD storage increases** – You cannot scale metadata performance and file system storage capacity concurrently.

## When to increase metadata performance
<a name="when-to-modify-metadata-performance"></a>

Increase the number of Metadata IOPS when you need to run workloads that require higher levels of metadata performance than is provisioned by default on your file system. You can monitor your metadata performance on the AWS Management Console by using the `Metadata IOPS Utilization` graph which provides the percentage of provisioned metadata server performance you are consuming on your file system.

You can also monitor your metadata performance using more granular CloudWatch metrics. CloudWatch metrics include `DiskReadOperations` and `DiskWriteOperations`, which provide the volume of metadata server operations that require disk IO, as well as granular metrics for metadata operations including file and directory creation, stats, reads, and deletes. For more information, see [FSx for Lustre metadata metrics](fs-metrics.md#fs-metadata-metrics).

# Increasing metadata performance
<a name="modify-metadata-performance"></a>

You can increase a file system's metadata performance by using the Amazon FSx console, the AWS CLI, or the Amazon FSx API.

## To increase metadata performance for a file system (console)
<a name="modify-metadata-console-ssd"></a>

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. In the left navigation pane, choose **File systems**. In the **File systems** list, choose the FSx for Lustre file system that you want to increase metadata performance for.

1. For **Actions**, choose **Update Metadata IOPS**. Or, in the **Summary** panel, choose **Update** next to the file system's **Metadata IOPS** field.

   The **Update Metadata IOPS** dialog box appears.

1. Choose **User-provisioned**.

1. For **Desired Metadata IOPS**, choose the new Metadata IOPS value. The value you enter must be greater than or equal to the current Metadata IOPS value.
   + For SSD file systems, valid values are `1500`, `3000`, `6000`, `12000`, and multiples of `12000` up to a maximum of `192000`.
   + For Intelligent-Tiering file systems, valid values are `6000` and `12000`.

1. Choose **Update**.

## To increase metadata performance for a file system (CLI)
<a name="modify-metadata-cli-ssd"></a>

To increase the metadata performance for an FSx for Lustre file system, use the AWS CLI command [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html) (UpdateFileSystem is the equivalent API action). Set the following parameters:
+ Set `--file-system-id` to the ID of the file system that you are updating.
+ To increase your metadata performance, use the `--lustre-configuration MetadataConfiguration` property. This property has two parameters, `Mode` and `Iops`.

  1. If your file system is in USER_PROVISIONED mode, using `Mode` is optional (if used, set `Mode` to `USER_PROVISIONED`).

     If your SSD file system is in AUTOMATIC mode, set `Mode` to `USER_PROVISIONED` (which switches the file system to USER_PROVISIONED mode in addition to increasing the Metadata IOPS value).

  1. For SSD file systems, set `Iops` to a value of `1500`, `3000`, `6000`, `12000`, or multiples of `12000` up to a maximum of `192000`. For Intelligent-Tiering file systems, set `Iops` to `6000` or `12000`. The value you enter must be greater than or equal to the current Metadata IOPS value.

The following example updates the provisioned Metadata IOPS to 12000.

```
aws fsx update-file-system \
    --file-system-id fs-0123456789abcdef0 \
    --lustre-configuration 'MetadataConfiguration={Mode=USER_PROVISIONED,Iops=12000}'
```

# Changing the metadata configuration mode
<a name="switch-provisioning-mode"></a>

For SSD-based file systems, you can change the metadata configuration mode of an existing file system using the AWS console and CLI, as explained in the following procedures.

When switching from Automatic mode to User-provisioned mode, you must provide a Metadata IOPS value greater than or equal to the current file system Metadata IOPS value.

If you request to switch from User-provisioned to Automatic mode and the current Metadata IOPS value is greater than the Automatic-mode default, Amazon FSx rejects the request, because scaling down Metadata IOPS is not supported. To make the switch possible, increase the file system's storage capacity until the Automatic-mode default matches your current Metadata IOPS.
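
The mode-switch rules can be summarized as two predicates. In the following sketch, the helper names are illustrative, and `automatic_default_iops` stands in for the default Metadata IOPS that Amazon FSx would provision in Automatic mode for the file system's current storage capacity:

```python
def can_switch_to_automatic(current_iops, automatic_default_iops):
    """User-provisioned -> Automatic is allowed only if the current
    Metadata IOPS do not exceed the Automatic-mode default, because
    Metadata IOPS cannot be scaled down."""
    return current_iops <= automatic_default_iops

def valid_switch_to_user_provisioned(current_iops, requested_iops):
    """Automatic -> User-provisioned requires a Metadata IOPS value
    at or above the current value."""
    return requested_iops >= current_iops

print(can_switch_to_automatic(12000, 6000))           # False: grow storage first
print(valid_switch_to_user_provisioned(6000, 12000))  # True
```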

You can change a file system's metadata configuration mode by using the Amazon FSx console, the AWS CLI, or the Amazon FSx API.

## To change the metadata configuration mode for a file system (console)
<a name="switch-provisioning-mode-console"></a>

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. In the left navigation pane, choose **File systems**. In the **File systems** list, choose the FSx for Lustre file system that you want to change the metadata configuration mode for.

1. For **Actions**, choose **Update Metadata IOPS**. Or, in the **Summary** panel, choose **Update** next to the file system's **Metadata IOPS** field.

   The **Update Metadata IOPS** dialog box appears.

1. Do one of the following.
   + To switch from User-provisioned mode to Automatic mode, choose **Automatic**.
   + To switch from Automatic mode to User-provisioned mode, choose **User-provisioned**. Then, for **Desired Metadata IOPS**, provide a Metadata IOPS value greater than or equal to the current file system Metadata IOPS value.

1. Choose **Update**.

## To change the metadata configuration mode for an SSD file system (CLI)
<a name="switch-provisioning-mode-cli"></a>

To change the metadata configuration mode for an SSD FSx for Lustre file system, use the AWS CLI command [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html) (UpdateFileSystem is the equivalent API action). Set the following parameters:
+ Set `--file-system-id` to the ID of the file system that you are updating.
+ To change the metadata configuration mode on SSD-based file systems, use the `--lustre-configuration MetadataConfiguration` property. This property has two parameters, `Mode` and `Iops`.
  + To switch your SSD file system from AUTOMATIC mode to USER_PROVISIONED mode, set `Mode` to `USER_PROVISIONED` and `Iops` to a Metadata IOPS value greater than or equal to the current file system Metadata IOPS value. For example:

    ```
    aws fsx update-file-system \
        --file-system-id fs-0123456789abcdef0 \
        --lustre-configuration 'MetadataConfiguration={Mode=USER_PROVISIONED,Iops=96000}'
    ```
  + To switch from USER_PROVISIONED mode to AUTOMATIC mode, set `Mode` to `AUTOMATIC` and do not use the `Iops` parameter. For example:

    ```
    aws fsx update-file-system \
        --file-system-id fs-0123456789abcdef0 \
        --lustre-configuration 'MetadataConfiguration={Mode=AUTOMATIC}'
    ```

# Monitoring metadata configuration updates
<a name="monitoring-metadata-performance-increase"></a>

You can monitor the progress of metadata configuration updates by using the Amazon FSx console, the API, or the AWS CLI.

## Monitoring metadata configuration updates (console)
<a name="monitor-metadata-performance-action-console"></a>

You can monitor metadata configuration updates in the **Updates** tab on the **File system details** page.

For metadata configuration updates, you can view the following information:

****Update type****  
Supported types are **Metadata IOPS** and **Metadata configuration mode**.

****Target value****  
The updated value for the file system's Metadata IOPS or Metadata configuration mode.

****Status****  
The current status of the update. The possible values are as follows:  
+ **Pending** – Amazon FSx has received the update request, but has not started processing it.
+ **In progress** – Amazon FSx is processing the update request.
+ **Completed** – The update finished successfully.
+ **Failed** – The update request failed. Choose the question mark (**?**) to see details on why the request failed.

****Request time****  
The time that Amazon FSx received the update action request.

## Monitoring metadata configuration updates (CLI)
<a name="monitor-metadata-update-action-cli-api"></a>

You can view and monitor metadata configuration update requests using the [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) AWS CLI command and the [DescribeFileSystems](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html) API operation. The `AdministrativeActions` array lists the 10 most recent update actions for each administrative action type. When you update a file system's metadata performance or metadata configuration mode, a `FILE_SYSTEM_UPDATE` action is added to the `AdministrativeActions` array.

The following example shows an excerpt of the response of a `describe-file-systems` CLI command. The file system has a pending administrative action to increase the Metadata IOPS to 96000 and change the metadata configuration mode to USER_PROVISIONED.

```
"AdministrativeActions": [
    {
        "AdministrativeActionType": "FILE_SYSTEM_UPDATE",
        "RequestTime": 1678840205.853,
        "Status": "PENDING",
        "TargetFileSystemValues": {
            "LustreConfiguration": {
                "MetadataConfiguration": {
                    "Iops": 96000,
                    "Mode": "USER_PROVISIONED"
                }
            }
        }
    }
]
```

Amazon FSx processes the `FILE_SYSTEM_UPDATE` action, modifying the file system's Metadata IOPS and metadata configuration mode. When the new metadata resources are available to the file system the `FILE_SYSTEM_UPDATE` status changes to `COMPLETED`.

If the metadata configuration update request fails, the status of the `FILE_SYSTEM_UPDATE` action changes to `FAILED`, as shown in the following example. The `FailureDetails` property provides information about the failure.

```
"AdministrativeActions": [
    {
        "AdministrativeActionType": "FILE_SYSTEM_UPDATE",
        "RequestTime": 1678840205.853,
        "Status": "FAILED",
        "TargetFileSystemValues": {
            "LustreConfiguration": {
                "MetadataConfiguration": {
                    "Iops": 96000,
                    "Mode": "USER_PROVISIONED"
                }
            }
        },
        "FailureDetails": {
            "Message": "failure-message"
        }
    }
]
```

# Managing provisioned throughput capacity
<a name="managing-throughput-capacity"></a>

Every FSx for Lustre file system has a throughput capacity that is configured when you create the file system. For file systems using SSD or HDD storage, the throughput capacity is measured in megabytes per second per tebibyte (MBps/TiB). For file systems using Intelligent-Tiering storage, the throughput capacity is measured in megabytes per second (MBps) for the file system. Throughput capacity is one factor that determines the speed at which the file server hosting the file system can serve file data. Higher levels of throughput capacity also come with higher levels of I/O operations per second (IOPS) and more memory for caching of data on the file server. For more information, see [Amazon FSx for Lustre performance](performance.md).
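
For SSD and HDD file systems, total file system throughput is the per-unit value multiplied by the storage capacity in TiB. A quick sketch of that arithmetic:

```python
def total_throughput_mbps(per_unit_mbps_per_tib, storage_tib):
    """Total throughput of an SSD/HDD file system: (MBps/TiB) x TiB."""
    return per_unit_mbps_per_tib * storage_tib

# e.g., a Persistent 2 file system provisioned at 250 MBps/TiB
# with 4.8 TiB of storage
print(total_throughput_mbps(250, 4.8))  # 1200.0
```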

You can modify the throughput tier of a persistent SSD-based file system by increasing or decreasing the value of the file system's throughput per unit of storage. Valid values depend on the deployment type of the file system, as follows:
+ For Persistent 1 SSD-based deployment types, valid values are 50, 100, and 200 MBps/TiB.
+ For Persistent 2 SSD-based deployment types, valid values are 125, 250, 500, and 1000 MBps/TiB.

You can modify the throughput capacity of an Intelligent-Tiering file system by increasing the total throughput capacity for the file system. Valid values are increments of 4,000 MBps, up to a maximum of 2,000,000 MBps.

You can view the current value of the file system's throughput capacity as follows:
+ Using the console – On the **Summary** panel of the file system details page, the **Throughput per unit of storage** field shows the current value for SSD-based file systems while the **Throughput capacity** field shows the current value for Intelligent-Tiering file systems.
+ Using the CLI or API – Use the [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) CLI command or the [DescribeFileSystems](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html) API operation, and look for the `PerUnitStorageThroughput` property.

 When you modify your file system's throughput capacity, behind the scenes, Amazon FSx switches out the file system's file servers on SSD file systems or adds new file servers on Intelligent-Tiering file systems. Your file system will be unavailable for up to an hour during throughput capacity scaling. You are billed for the new amount of throughput capacity once it is available to your file system.

**Topics**
+ [Considerations when updating throughput capacity](#throughput-capacity-considerations)
+ [When to modify throughput capacity](#when-to-modify-throughput-capacity)
+ [Modifying throughput capacity](increase-throughput-capacity.md)
+ [Monitoring throughput capacity changes](monitoring-throughput-capacity-changes.md)

## Considerations when updating throughput capacity
<a name="throughput-capacity-considerations"></a>

Here are a few important items to consider when updating throughput capacity:
+ **Increase or decrease** – You can increase or decrease the amount of throughput capacity for an SSD-based file system. You can only increase the amount of throughput capacity for an Intelligent-Tiering file system.
+ **Update increments** – When you modify throughput capacity, use the increments listed in the **Update throughput tier** dialog box for SSD-based file systems or in the **Update throughput capacity** dialog box for Intelligent-Tiering file systems.
+ **Time between increases** – You can't make further throughput capacity changes on a file system until 6 hours after the last request, or until the throughput optimization process has completed, whichever time is longer.
+ **Automatic scaling of SSD read cache** – In the default SSD read cache sizing mode (Proportional to throughput capacity), Amazon FSx automatically provisions 5 GiB of read cache storage for every MBps of throughput capacity you provision. As you scale your file system's throughput capacity, Amazon FSx automatically scales your SSD read cache by attaching additional cache storage to any newly added file servers.
+ **Deployment type** – You can only update the throughput capacity of persistent SSD-based or Intelligent-Tiering deployment types. You cannot modify the throughput capacity of EFA-enabled SSD-based file systems.
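
The proportional cache sizing in the considerations above is simple arithmetic: 5 GiB of cache per MBps of throughput capacity. A sketch (the helper name is illustrative):

```python
GIB_PER_MBPS = 5  # Proportional-to-throughput mode: 5 GiB of cache per MBps

def proportional_cache_size_gib(throughput_mbps):
    """SSD read cache size FSx provisions in the default sizing mode."""
    return GIB_PER_MBPS * throughput_mbps

print(proportional_cache_size_gib(4000))  # 20000
```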

## When to modify throughput capacity
<a name="when-to-modify-throughput-capacity"></a>

Amazon FSx integrates with Amazon CloudWatch, enabling you to monitor your file system's ongoing throughput usage levels. The performance (throughput and IOPS) that you can drive through your file system depends on your specific workload’s characteristics, in addition to your file system’s throughput capacity, storage capacity, and storage class. For information about how to determine your file system's current throughput, see [How to use Amazon FSx for Lustre CloudWatch metrics](how_to_use_metrics.md). For information about CloudWatch metrics, see [Monitoring with Amazon CloudWatch](monitoring-cloudwatch.md).

# Modifying throughput capacity
<a name="increase-throughput-capacity"></a>

You can modify an FSx for Lustre file system's throughput capacity using the Amazon FSx console, the AWS Command Line Interface (AWS CLI), or the Amazon FSx API.

## To modify an SSD file system's throughput capacity (console)
<a name="update-throughput-console"></a>

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Navigate to **File systems**, and choose the FSx for Lustre file system that you want to modify the throughput capacity for.

1. For **Actions**, choose **Update throughput tier**. Or, in the **Summary** panel, choose **Update** next to the file system's **Throughput per unit of storage**.

   The **Update throughput tier** window appears.

1. Choose the new value for **Desired throughput per unit of storage** from the list.

1. Choose **Update** to initiate the throughput capacity update.
**Note**  
Your file system may experience a very brief period of unavailability during the update.

## To modify an SSD file system's throughput capacity (CLI)
<a name="update-throughput-cli"></a>
+ To modify a file system's throughput capacity, use the [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html) CLI command (or the equivalent [UpdateFileSystem](https://docs.aws.amazon.com/fsx/latest/APIReference/API_UpdateFileSystem.html) API operation). Set the following parameters:
  + Set `--file-system-id` to the ID of the file system that you are updating.
  + Set `--lustre-configuration PerUnitStorageThroughput` to a value of `50`, `100`, or `200` MBps/TiB for Persistent 1 SSD file systems, or to a value of `125`, `250`, `500`, or `1000` MBps/TiB for Persistent 2 SSD file systems.

  This command specifies that throughput capacity be set to 1000 MBps/TiB for the file system.

  ```
  aws fsx update-file-system \
      --file-system-id fs-0123456789abcdef0 \
      --lustre-configuration PerUnitStorageThroughput=1000
  ```

## To modify an Intelligent-Tiering file system's throughput capacity (console)
<a name="update-int-throughput-console"></a>

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Navigate to **File systems**, and choose the FSx for Lustre file system that you want to modify the throughput capacity for.

1. For **Actions**, choose **Update throughput capacity**. Or, in the **Summary** panel, choose **Update** next to the file system's **Throughput capacity**.

   The **Update throughput capacity** dialog box appears.

1. Choose the new value for **Desired throughput capacity** from the list.

   Amazon FSx will automatically scale your data read cache to avoid clearing the cache contents.

1. Choose **Update** to initiate the throughput capacity update.
**Note**  
Your file system may experience a very brief period of unavailability during the update.

## To modify an Intelligent-Tiering file system's throughput capacity (CLI)
<a name="update-int-throughput-cli"></a>
+ To modify a file system's throughput capacity, use the [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html) CLI command (or the equivalent [UpdateFileSystem](https://docs.aws.amazon.com/fsx/latest/APIReference/API_UpdateFileSystem.html) API operation). Set the following parameters:
  + Set `--file-system-id` to the ID of the file system that you are updating.
  + If your data read cache is configured in proportional to throughput capacity mode, set `--lustre-configuration ThroughputCapacity` to a throughput level in increments of `4000` MBps, up to a maximum of `2000000` MBps.

    If your data read cache is configured in user-provisioned mode, you also need to use the `--lustre-configuration DataReadCacheConfiguration` property to specify the data read cache. You must maintain the same cache storage per server ratio and specify the new `SizeGiB` value, or the request will be rejected.

  This command specifies that throughput capacity be set to 8000 MBps for a file system that uses a read cache configured in proportional to throughput capacity mode.

  ```
  aws fsx update-file-system \
      --file-system-id fs-0123456789abcdef0 \
      --lustre-configuration '{
        "ThroughputCapacity": 8000
        }'
  ```

  This command specifies that throughput capacity be set to 8000 MBps for a file system that uses a read cache configured in user-provisioned mode. The new `SizeGiB` value should equal the cache storage allocated per server multiplied by the new number of file servers.

  ```
  aws fsx update-file-system \
      --file-system-id fs-0123456789abcdef0 \
      --lustre-configuration '{
        "ThroughputCapacity": 8000,
        "DataReadCacheConfiguration": {
          "SizingMode": "USER_PROVISIONED",
          "SizeGiB": 1000
        }
        }'
  ```
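For user-provisioned mode, the new `SizeGiB` value must keep the cache storage per server ratio constant. The helper below is a hedged sketch of that calculation; the per-server allocation and file server count are illustrative values that you would determine for your own file system, not values returned by any documented API:

```python
def new_cache_size_gib(cache_per_server_gib: int, file_server_count: int) -> int:
    """New SizeGiB = cache storage allocated per server multiplied by
    the number of file servers, keeping the per-server ratio constant."""
    return cache_per_server_gib * file_server_count

# Example (illustrative numbers): 500 GiB of cache per file server,
# scaling out to 4 file servers.
print(new_cache_size_gib(500, 4))  # 2000
```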

# Monitoring throughput capacity changes
<a name="monitoring-throughput-capacity-changes"></a>

You can monitor the progress of a throughput capacity modification using the Amazon FSx console, the API, and the AWS CLI.

**Monitoring throughput capacity changes (console)**
+ On the **Updates** tab in the file system details page, you can view the 10 most recent update actions for each update action type.

  For throughput capacity update actions, you can view the following information.

    
****Update type****  
Supported type is **Per unit storage throughput**.  
****Target value****  
The desired value to change the file system's throughput per unit of storage to.  
****Status****  
The current status of the update. For throughput capacity updates, the possible values are as follows:  
  + **Pending** – Amazon FSx has received the update request, but has not started processing it.
  + **In progress** – Amazon FSx is processing the update request.
+ **Updated; Optimizing** – Amazon FSx has updated the file system's network I/O, CPU, and memory resources. The new disk I/O performance level is available for write operations. Your read operations will see disk I/O performance between the previous level and the new level until your file system is no longer in this state.
  + **Completed** – The throughput capacity update completed successfully.
  + **Failed** – The throughput capacity update failed. Choose the question mark (**?**) to see details on why the throughput update failed.  
****Request time****  
The time when Amazon FSx received the update request.

**Monitoring file system updates (CLI)**
+ You can view and monitor file system throughput capacity modification requests using the [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) CLI command and the [DescribeFileSystems](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html) API action. The `AdministrativeActions` array lists the 10 most recent update actions for each administrative action type. When you modify a file system's throughput capacity, a `FILE_SYSTEM_UPDATE` administrative action is generated.

  The following example shows the response excerpt of a `describe-file-systems` CLI command. The file system has a target throughput per unit of storage of 500 MBps/TiB.

  ```
  .
  .
  .
  "AdministrativeActions": [
      {
          "AdministrativeActionType": "FILE_SYSTEM_UPDATE",
          "RequestTime": 1581694764.757,
          "Status": "PENDING",
          "TargetFileSystemValues": {
            "LustreConfiguration": {
              "PerUnitStorageThroughput": 500
            }
          }
      }
  ]
  ```

  When Amazon FSx processes the action successfully, the status changes to `COMPLETED`. The new throughput capacity is then available to the file system, and shows in the `PerUnitStorageThroughput` property.

  If the throughput capacity modification fails, the status changes to `FAILED`, and the `FailureDetails` property provides information about the failure.
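If you poll file system status from a script, a small helper can pick out the most recent throughput update. The function below works on a file system dict shaped like the `describe-file-systems` excerpt above; the `boto3` usage mentioned afterward is a hedged sketch that requires AWS credentials:

```python
def latest_file_system_update(file_system):
    """Return the most recent FILE_SYSTEM_UPDATE administrative action
    from a file system description, or None if there isn't one."""
    actions = [
        a for a in file_system.get("AdministrativeActions", [])
        if a.get("AdministrativeActionType") == "FILE_SYSTEM_UPDATE"
    ]
    if not actions:
        return None
    # Actions carry a RequestTime; the newest one reflects the current update.
    return max(actions, key=lambda a: a["RequestTime"])

# Sample shaped like the describe-file-systems excerpt above.
fs = {
    "AdministrativeActions": [
        {
            "AdministrativeActionType": "FILE_SYSTEM_UPDATE",
            "RequestTime": 1581694764.757,
            "Status": "PENDING",
            "TargetFileSystemValues": {
                "LustreConfiguration": {"PerUnitStorageThroughput": 500}
            },
        }
    ]
}
print(latest_file_system_update(fs)["Status"])  # PENDING
```

In practice, you would obtain the file system dict from `boto3.client("fsx").describe_file_systems(FileSystemIds=["fs-..."])["FileSystems"][0]` and poll until the status reaches `COMPLETED` or `FAILED`.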

# Lustre data compression
<a name="data-compression"></a>

You can use the Lustre data compression feature to achieve cost savings on your high-performance Amazon FSx for Lustre file systems and backup storage. When data compression is enabled, Amazon FSx for Lustre automatically compresses newly written files before they are written to disk and automatically uncompresses them when they are read.

Data compression uses the LZ4 algorithm, which is optimized to deliver high levels of compression without adversely impacting file system performance. LZ4 is a Lustre community-trusted and performance-oriented algorithm that provides a balance between compression speed and compressed file size. Enabling data compression does not typically have a measurable impact on latency.

Data compression reduces the amount of data that is transferred between Amazon FSx for Lustre file servers and storage. If you are not already using compressed file formats, you will see an increase in overall file system throughput when using data compression. Increases in throughput that are related to data compression will be capped after you have saturated your front-end network interface cards.

For example, if your file system is a PERSISTENT-50 SSD deployment type, your network throughput has a baseline of 250 MBps per TiB of storage. Your disk throughput has a baseline of 50 MBps per TiB. With data compression, your disk throughput could increase from 50 MBps per TiB to a maximum of 250 MBps per TiB, which is the baseline network throughput limit. For more information about network and disk throughput limits, see the file system performance tables in [Performance characteristics of SSD and HDD storage classes](ssd-storage.md). For more information about data compression performance, see the [Spend less while increasing performance with Amazon FSx for Lustre data compression](https://aws.amazon.com/blogs/storage/spend-less-while-increasing-performance-with-amazon-fsx-for-lustre-data-compression/) post on the *AWS Storage Blog*.
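As a rough model of the example above (a sketch only; the compression ratios used here are illustrative, and actual gains depend entirely on how compressible your data is):

```python
def effective_disk_throughput(disk_baseline_mbps_per_tib,
                              network_baseline_mbps_per_tib,
                              compression_ratio):
    """Effective throughput per TiB with compression: disk throughput
    scales with the compression ratio, but is capped at the front-end
    network baseline."""
    return min(disk_baseline_mbps_per_tib * compression_ratio,
               network_baseline_mbps_per_tib)

# PERSISTENT-50 example from the text: 50 MBps/TiB disk baseline,
# 250 MBps/TiB network baseline.
print(effective_disk_throughput(50, 250, 2.5))  # 125.0 - below the cap
print(effective_disk_throughput(50, 250, 6.0))  # 250   - capped at network
```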

**Topics**
+ [Managing data compression](#manage-compression)
+ [Compressing previously written files](#migrate-compression)
+ [Viewing file sizes](#view-compression)
+ [Using CloudWatch metrics](#compression-metrics)

## Managing data compression
<a name="manage-compression"></a>

You can turn data compression on or off when creating a new Amazon FSx for Lustre file system. Data compression is turned off by default when you create an Amazon FSx for Lustre file system from the console, AWS CLI, or API.

### To turn on data compression when creating a file system (console)
<a name="create-compression-fs-console"></a>

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Follow the procedure for creating a new file system described in [Step 1: Create your FSx for Lustre file system](getting-started.md#getting-started-step1) in the *Getting started* section. 

1. In the **File system details** section, for **Data compression type**, choose **LZ4**.

1. Complete the wizard as you do when you create a new file system.

1. Choose **Review and create**.

1. Review the settings you chose for your Amazon FSx for Lustre file system, and then choose **Create file system**.

When the file system is **Available**, data compression is turned on.

### To turn on data compression when creating a file system (CLI)
<a name="create-compression-fs-cli"></a>
+ To create an FSx for Lustre file system with data compression turned on, use the Amazon FSx CLI command [https://docs.aws.amazon.com/cli/latest/reference/fsx/create-file-system.html](https://docs.aws.amazon.com/cli/latest/reference/fsx/create-file-system.html) with the `DataCompressionType` parameter, as shown following. The corresponding API operation is [CreateFileSystem](https://docs.aws.amazon.com/fsx/latest/APIReference/API_CreateFileSystem.html).

  ```
  $ aws fsx create-file-system \
        --client-request-token CRT1234 \
        --file-system-type LUSTRE \
        --file-system-type-version 2.12 \
        --lustre-configuration DeploymentType=PERSISTENT_1,PerUnitStorageThroughput=50,DataCompressionType=LZ4 \
        --storage-capacity 3600 \
        --subnet-ids subnet-123456 \
        --tags Key=Name,Value=Lustre-TEST-1 \
        --region us-east-2
  ```

After successfully creating the file system, Amazon FSx returns the file system description as JSON, as shown in the following example.

```
{

    "FileSystems": [
        {
            "OwnerId": "111122223333",
            "CreationTime": 1549310341.483,
            "FileSystemId": "fs-0123456789abcdef0",
            "FileSystemType": "LUSTRE",
            "FileSystemTypeVersion": "2.12",
            "Lifecycle": "CREATING",
            "StorageCapacity": 3600,
            "VpcId": "vpc-123456",
            "SubnetIds": [
                "subnet-123456"
            ],
            "NetworkInterfaceIds": [
                "eni-039fcf55123456789"
            ],
            "DNSName": "fs-0123456789abcdef0.fsx.us-east-2.amazonaws.com",
            "ResourceARN": "arn:aws:fsx:us-east-2:123456:file-system/fs-0123456789abcdef0",
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "Lustre-TEST-1"
                }
            ],
            "LustreConfiguration": {
                "DeploymentType": "PERSISTENT_1",
                "DataCompressionType": "LZ4",
                "PerUnitStorageThroughput": 50
            }
        }
    ]
}
```

You can also change the data compression configuration of your existing file systems. When you turn data compression on for an existing file system, only newly written files are compressed, and existing files are not compressed. For more information, see [Compressing previously written files](#migrate-compression).

### To update data compression on an existing file system (console)
<a name="manage-compression-console"></a>

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Navigate to **File systems**, and choose the Lustre file system that you want to manage data compression for.

1. For **Actions**, choose **Update data compression type**.

1. On the **Update data compression type** dialog box, choose **LZ4** to turn on data compression, or choose **NONE** to turn it off.

1. Choose **Update**.

1. You can monitor the update progress on the file systems detail page in the **Updates** tab.

### To update data compression on an existing file system (CLI)
<a name="manage-compression-cli"></a>

To update the data compression configuration for an existing FSx for Lustre file system, use the AWS CLI command [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html). Set the following parameters:
+ Set `--file-system-id` to the ID of the file system that you are updating.
+ Set `--lustre-configuration DataCompressionType` to `NONE` to turn off data compression or `LZ4` to turn on data compression with the LZ4 algorithm.

This command specifies that data compression is turned on with the LZ4 algorithm.

```
$ aws fsx update-file-system \
    --file-system-id fs-0123456789abcdef0 \
    --lustre-configuration DataCompressionType=LZ4
```

### Data compression configuration when creating a file system from backup
<a name="migrate-compression-backup"></a>

You can use an available backup to create a new Amazon FSx for Lustre file system. When you create a new file system from backup, there is no need to specify the `DataCompressionType`; the setting will be applied using the backup's `DataCompressionType` setting. If you choose to specify the `DataCompressionType` when creating from backup, the value must match the backup's `DataCompressionType` setting. 

To view the settings on a backup, choose it from the **Backups** tab of the Amazon FSx console. Details of the backup will be listed on the **Summary** page for the backup. You can also run the [https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-backups.html](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-backups.html) AWS CLI command (the equivalent API action is [https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeBackups.html](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeBackups.html)).

## Compressing previously written files
<a name="migrate-compression"></a>

Files are uncompressed if they were created when data compression was turned off on the Amazon FSx for Lustre file system. Turning on data compression will not automatically compress your existing uncompressed data.

You can use the `lfs_migrate` command that is installed as part of the Lustre client installation to compress existing files. For an example, see [FSxL-Compression](https://github.com/aws-samples/fsx-solutions/blob/master/FSxL-Compression), which is available on GitHub.

## Viewing file sizes
<a name="view-compression"></a>

You can use the following commands to view the uncompressed and compressed sizes of your files and directories.
+ `du` displays compressed sizes.
+ `du --apparent-size` displays uncompressed sizes.
+ `ls -l` displays uncompressed sizes.

The following examples show the output of each command with the same file.

```
$ du -sh samplefile
272M	samplefile
$ du -sh --apparent-size samplefile
1.0G	samplefile
$ ls -lh samplefile
-rw-r--r-- 1 root root 1.0G May 10 21:16 samplefile
```

The `-h` option is useful for these commands because it prints sizes in a human-readable format.

## Using CloudWatch metrics
<a name="compression-metrics"></a>

You can use Amazon CloudWatch metrics to view your file system usage. The `LogicalDiskUsage` metric shows the total logical disk usage (without compression), and the `PhysicalDiskUsage` metric shows the total physical disk usage (with compression). These two metrics are available only if your file system has data compression enabled or previously had it enabled.

You can determine your file system's compression ratio by dividing the `Sum` of the `LogicalDiskUsage` statistic by the `Sum` of the `PhysicalDiskUsage` statistic.
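The following sketch shows that calculation, along with a hedged example of pulling the two `Sum` statistics from CloudWatch with `boto3` (the fetch function requires AWS credentials and is not runnable as written without them; the `AWS/FSx` namespace and `FileSystemId` dimension are the standard ones for FSx metrics):

```python
def compression_ratio(logical_sum, physical_sum):
    """Compression ratio = Sum(LogicalDiskUsage) / Sum(PhysicalDiskUsage)."""
    return logical_sum / physical_sum

def fetch_disk_usage_sums(file_system_id, start, end):
    """Hedged sketch: fetch the two Sum statistics from CloudWatch.
    Requires AWS credentials; shown for illustration only."""
    import boto3
    cw = boto3.client("cloudwatch")
    sums = {}
    for metric in ("LogicalDiskUsage", "PhysicalDiskUsage"):
        resp = cw.get_metric_statistics(
            Namespace="AWS/FSx",
            MetricName=metric,
            Dimensions=[{"Name": "FileSystemId", "Value": file_system_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,
            Statistics=["Sum"],
        )
        sums[metric] = sum(dp["Sum"] for dp in resp["Datapoints"])
    return sums

# Example: 1.0 GiB of logical data occupying 272 MiB on disk (the
# samplefile from the previous section) compresses at roughly 3.8x.
print(round(compression_ratio(1024, 272), 1))  # 3.8
```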

For more information about monitoring your file system’s performance, see [Monitoring Amazon FSx for Lustre file systems](monitoring_overview.md).

# Lustre root squash
<a name="root-squash"></a>

Root squash is an administrative feature that adds an additional layer of file access control on top of the current network-based access control and POSIX file permissions. Using the root squash feature, you can restrict root level access from clients that try to access your FSx for Lustre file system as root.

Root user permissions are required to perform administrative actions, such as managing permissions on FSx for Lustre file systems. However, root access provides unrestricted access to the users, allowing them to bypass permission checks to access, modify, or delete file system objects. Using the root squash feature, you can prevent unauthorized access or deletion of data by specifying a non-root user ID (UID) and group ID (GID) for your file system. Root users accessing the file system will automatically be converted to the specified less-privileged user/group with limited permissions that are set by the storage administrator.

The root squash feature also optionally allows you to provide a list of clients who are not affected by the root squash setting. These clients can access the file system as root, with unrestricted privileges.

**Topics**
+ [How root squash works](#root-squash-overview)
+ [Managing root squash](#manage-root-squash)

## How root squash works
<a name="root-squash-overview"></a>

The root squash feature works by re-mapping the user ID (UID) and group ID (GID) of the root user to a UID and GID specified by the Lustre system administrator. The root squash feature also lets you optionally specify a set of clients for which UID/GID re-mapping does not apply.

When you create a new FSx for Lustre file system, root squash is disabled by default. You enable root squash by configuring a UID and GID root squash setting for your FSx for Lustre file system. The UID and GID values are integers that can range from `0` to `4294967294`:
+ A non-zero value for UID and GID enables root squash. The UID and GID values can be different, but each must be a non-zero value.
+ A value of `0` (zero) for UID and GID indicates root, and therefore disables root squash.

During file system creation, you can use the Amazon FSx console to provide the root squash UID and GID values in the **Root Squash** property, as shown in [To enable root squash when creating a file system (console)](#create-root-squash-console). You can also use the `RootSquash` parameter with the AWS CLI or API to provide the UID and GID values, as shown in [To enable root squash when creating a file system (CLI)](#create-root-squash-cli).

Optionally, you can also specify a list of NIDs of clients for which root squash doesn't apply. A client NID is a Lustre Network Identifier used to uniquely identify a client. You can specify the NID as either a single address or a range of addresses:
+ A single address is described in standard Lustre NID format by specifying the client’s IP address followed by the Lustre network ID (for example, `10.0.1.6@tcp`).
+ An address range is described using a dash to separate the range (for example, `10.0.[2-10].[1-255]@tcp`).
+ If you don't specify any client NIDs, there will be no exceptions to root squash.
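To illustrate the notation (this is a sketch for checking your own configuration; the authoritative matching is performed by Lustre itself, and this example handles only IPv4-style NIDs with simple `[low-high]` octet ranges):

```python
def nid_matches(nid, spec):
    """Return True if a concrete client NID (e.g. "10.0.5.30@tcp") falls
    within a NID spec such as "10.0.[2-10].[1-255]@tcp"."""
    nid_addr, nid_net = nid.split("@")
    spec_addr, spec_net = spec.split("@")
    if nid_net != spec_net:
        return False  # different Lustre network IDs never match
    nid_octets = nid_addr.split(".")
    spec_octets = spec_addr.split(".")
    if len(nid_octets) != len(spec_octets):
        return False
    for value, pattern in zip(nid_octets, spec_octets):
        if pattern.startswith("[") and pattern.endswith("]"):
            low, high = pattern[1:-1].split("-")
            if not int(low) <= int(value) <= int(high):
                return False
        elif value != pattern:
            return False  # literal octet must match exactly
    return True

print(nid_matches("10.0.5.30@tcp", "10.0.[2-10].[1-255]@tcp"))  # True
print(nid_matches("10.1.5.30@tcp", "10.0.[2-10].[1-255]@tcp"))  # False
```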

When creating or updating your file system, you can use the **Exceptions to Root Squash** property in the Amazon FSx console to provide the list of client NIDs. In the AWS CLI or API, use the `NoSquashNids` parameter. For more information, see the procedures in [Managing root squash](#manage-root-squash).

## Managing root squash
<a name="manage-root-squash"></a>

During file system creation, root squash is disabled by default. You can enable root squash when creating a new Amazon FSx for Lustre file system from the Amazon FSx console, AWS CLI, or API.

### To enable root squash when creating a file system (console)
<a name="create-root-squash-console"></a>

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Follow the procedure for creating a new file system described in [Step 1: Create your FSx for Lustre file system](getting-started.md#getting-started-step1) in the *Getting started* section. 

1. Open the **Root Squash - *optional*** section.

1. For **Root Squash**, provide the user and group IDs with which the root user can access the file system. You can specify any whole number in the range of `1`–`4294967294`:

   1. For **User ID**, specify the user ID for the root user to use.

   1. For **Group ID**, specify the group ID for the root user to use.

1. (Optional) For **Exceptions to Root Squash**, do the following:

   1. Choose **Add client address**.

   1. In the **Client addresses** field, specify the IP address of a client to which root squash doesn't apply. For information on the IP address format, see [How root squash works](#root-squash-overview).

   1. Repeat as needed to add more client IP addresses.

1. Complete the wizard as you do when you create a new file system.

1. Choose **Review and create**.

1. Review the settings you chose for your Amazon FSx for Lustre file system, and then choose **Create file system**.

When the file system is **Available**, root squash is enabled.

### To enable root squash when creating a file system (CLI)
<a name="create-root-squash-cli"></a>
+ To create an FSx for Lustre file system with root squash enabled, use the Amazon FSx CLI command [https://docs.aws.amazon.com/cli/latest/reference/fsx/create-file-system.html](https://docs.aws.amazon.com/cli/latest/reference/fsx/create-file-system.html) with the `RootSquashConfiguration` parameter. The corresponding API operation is [CreateFileSystem](https://docs.aws.amazon.com/fsx/latest/APIReference/API_CreateFileSystem.html).

  For the `RootSquashConfiguration` parameter, set the following options:
  + `RootSquash` – The colon-separated UID:GID values that specify the user ID and group ID for the root user to use. You can specify any whole number in the range of `0`–`4294967294` (0 is root) for each ID (for example, `65534:65534`).
  + `NoSquashNids` – Specify the Lustre Network Identifiers (NIDs) of clients to which root squash doesn't apply. For information on the client NID format, see [How root squash works](#root-squash-overview).

  The following example creates an FSx for Lustre file system with root squash enabled:

  ```
  $ aws fsx create-file-system \
        --client-request-token CRT1234 \
        --file-system-type LUSTRE \
        --file-system-type-version 2.15 \
        --lustre-configuration \
            "DeploymentType=PERSISTENT_2,PerUnitStorageThroughput=250,DataCompressionType=LZ4,RootSquashConfiguration={RootSquash=65534:65534,NoSquashNids=[10.216.123.47@tcp,10.216.12.176@tcp]}" \
        --storage-capacity 2400 \
        --subnet-ids subnet-123456 \
        --tags Key=Name,Value=Lustre-TEST-1 \
        --region us-east-2
  ```

After successfully creating the file system, Amazon FSx returns the file system description as JSON, as shown in the following example.

```
{

    "FileSystems": [
        {
            "OwnerId": "111122223333",
            "CreationTime": 1549310341.483,
            "FileSystemId": "fs-0123456789abcdef0",
            "FileSystemType": "LUSTRE",
            "FileSystemTypeVersion": "2.15",
            "Lifecycle": "CREATING",
            "StorageCapacity": 2400,
            "VpcId": "vpc-123456",
            "SubnetIds": [
                "subnet-123456"
            ],
            "NetworkInterfaceIds": [
                "eni-039fcf55123456789"
            ],
            "DNSName": "fs-0123456789abcdef0.fsx.us-east-2.amazonaws.com",
            "ResourceARN": "arn:aws:fsx:us-east-2:123456:file-system/fs-0123456789abcdef0",
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "Lustre-TEST-1"
                }
            ],
            "LustreConfiguration": {
                "DeploymentType": "PERSISTENT_2",
                "DataCompressionType": "LZ4",
                "PerUnitStorageThroughput": 250,
                "RootSquashConfiguration": {
                    "RootSquash": "65534:65534",
                    "NoSquashNids": [
                        "10.216.123.47@tcp",
                        "10.216.12.176@tcp"
                    ]
                }
            }
        }
    ]
}

You can also update the root squash settings of your existing file system using the Amazon FSx console, AWS CLI, or API. For example, you can change the root squash UID and GID values, add or remove client NIDs, or disable root squash.

### To update root squash settings on an existing file system (console)
<a name="update-root-squash-console"></a>

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Navigate to **File systems**, and choose the Lustre file system that you want to manage root squash for.

1. For **Actions**, choose **Update root squash**. Or, in the **Summary** panel, choose **Update** next to the file system's **Root Squash** field to display the **Update Root Squash Settings** dialog box.

1. For **Root Squash**, update the user and group IDs with which the root user can access the file system. You can specify any whole number in the range of `0`–`4294967294`. To disable root squash, specify `0` (zero) for both IDs.

   1. For **User ID**, specify the user ID for the root user to use.

   1. For **Group ID**, specify the group ID for the root user to use.

1. For **Exceptions to Root Squash**, do the following:

   1. Choose **Add client address**.

   1. In the **Client addresses** field, specify the IP address of a client to which root squash doesn't apply.

   1. Repeat as needed to add more client IP addresses.

1. Choose **Update**.
**Note**  
If root squash is enabled and you want to disable it, choose **Disable** instead of performing steps 4-6.

You can monitor the update progress on the file systems detail page in the **Updates** tab.

### To update root squash settings on an existing file system (CLI)
<a name="update-root-squash-cli"></a>

To update the root squash settings for an existing FSx for Lustre file system, use the AWS CLI command [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html). The corresponding API operation is [UpdateFileSystem](https://docs.aws.amazon.com/fsx/latest/APIReference/API_UpdateFileSystem.html).

Set the following parameters:
+ Set `--file-system-id` to the ID of the file system that you are updating.
+ Set the `--lustre-configuration RootSquashConfiguration` options as follows:
  + `RootSquash` – Set the colon-separated UID:GID values that specify the user ID and group ID for the root user to use. You can specify any whole number in the range of `0`–`4294967294` (0 is root) for each ID. To disable root squash, specify `0:0` for the UID:GID values.
  + `NoSquashNids` – Specify the Lustre Network Identifiers (NIDs) of clients to which root squash doesn't apply. Use `[]` to remove all client NIDs, which means there will be no exceptions to root squash.

This command specifies that root squash is enabled using `65534` as the value for the root user's user ID and group ID.

```
$ aws fsx update-file-system \
    --file-system-id fs-0123456789abcdef0 \
    --lustre-configuration \
        "RootSquashConfiguration={RootSquash=65534:65534,NoSquashNids=[10.216.123.47@tcp,10.216.12.176@tcp]}"
```

If the command is successful, Amazon FSx for Lustre returns the response in JSON format.

You can view the root squash settings of your file system in the **Summary** panel of the file system details page on the Amazon FSx console or in the response of a [https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) CLI command (the equivalent API action is [https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html)). 

# FSx for Lustre file system status
<a name="file-system-lifecycle-states"></a>

You can view the status of an Amazon FSx file system by using the Amazon FSx console, the AWS CLI command [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html), or the API operation [DescribeFileSystems](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DescribeFileSystems.html).


| File system status  | Description | 
| --- | --- | 
| AVAILABLE | The file system is in a healthy state, and is reachable and available for use. | 
| CREATING | Amazon FSx is creating a new file system. | 
| DELETING | Amazon FSx is deleting an existing file system. | 
| UPDATING | The file system is undergoing a customer-initiated update. | 
| MISCONFIGURED | The file system is in a failed but recoverable state. | 
| FAILED |  This status can mean either of the following: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/fsx/latest/LustreGuide/file-system-lifecycle-states.html)  | 

# Tag your Amazon FSx for Lustre resources
<a name="tag-resources"></a>

To help you manage your file systems and other Amazon FSx for Lustre resources, you can assign your own metadata to each resource in the form of tags. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type—you can quickly identify a specific resource based on the tags that you've assigned to it. This topic describes tags and shows you how to create them.

**Topics**
+ [Tag basics](#tag-basics)
+ [Tagging your resources](#tagging-your-resources)
+ [Tag restrictions](#tag-restrictions)
+ [Permissions and tags](#tags-iam)

## Tag basics
<a name="tag-basics"></a>

A tag is a label that you assign to an AWS resource. Each tag consists of a key and an optional value, both of which you define.

Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For example, you could define a set of tags for your account's Amazon FSx for Lustre file systems that helps you track each instance's owner and stack level.

We recommend that you devise a set of tag keys that meets your needs for each resource type. Using a consistent set of tag keys makes it easier for you to manage your resources. You can search and filter the resources based on the tags you add.

Tags don't have any semantic meaning to Amazon FSx and are interpreted strictly as a string of characters. Also, tags are not automatically assigned to your resources. You can edit tag keys and values, and you can remove tags from a resource at any time. You can set the value of a tag to an empty string, but you can't set the value of a tag to null. If you add a tag that has the same key as an existing tag on that resource, the new value overwrites the old value. If you delete a resource, any tags for the resource are also deleted.

If you're using the Amazon FSx for Lustre API, the AWS CLI, or an AWS SDK, you can use the `TagResource` API action to apply tags to existing resources. Additionally, some resource-creating actions enable you to specify tags for a resource when the resource is created. If tags cannot be applied during resource creation, we roll back the resource creation process. This ensures that resources are either created with tags or not created at all, and that no resources are left untagged at any time. By tagging resources at the time of creation, you can eliminate the need to run custom tagging scripts after resource creation. For more information about enabling users to tag resources on creation, see [Grant permission to tag resources during creation](using-tags-fsx.md#supported-iam-actions-tagging).
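For example, with the AWS CLI, the `TagResource` action takes the resource ARN and a list of key/value pairs. The Region, account ID, and file system ID below are hypothetical placeholders:

```
aws fsx tag-resource \
    --resource-arn arn:aws:fsx:us-east-1:111122223333:file-system/fs-0123456789abcdef0 \
    --tags Key=Environment,Value=production Key=Owner,Value=data-team
```

Because each tag key is unique per resource, re-running the command with a different `Value` for `Environment` overwrites the earlier value, as described above.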

## Tagging your resources
<a name="tagging-your-resources"></a>

You can tag Amazon FSx for Lustre resources that exist in your account. If you're using the Amazon FSx console, you can apply tags to resources by using the Tags tab on the relevant resource screen. When you create resources, you can apply the Name key with a value, and you can apply tags of your choice when creating a new file system. The console may organize resources according to the Name tag, but this tag doesn't have any semantic meaning to the Amazon FSx for Lustre service.

You can apply tag-based resource-level permissions in your IAM policies to the Amazon FSx for Lustre API actions that support tagging on creation, giving you granular control over which users and groups can tag resources at creation. Because tags are applied to your resources immediately, any tag-based resource-level permissions controlling the use of those resources take effect right away, so your resources are secured from the moment of creation. This also lets you track and report on resources more accurately, enforce the use of tagging on new resources, and control which tag keys and values are set on your resources.

You can also apply resource-level permissions to the `TagResource` and `UntagResource` Amazon FSx for Lustre API actions in your IAM policies to control which tag keys and values are set on your existing resources.

For more information about tagging your resources for billing, see [Using cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html) in the *AWS Billing User Guide*.

## Tag restrictions
<a name="tag-restrictions"></a>

The following basic restrictions apply to tags:
+ Maximum number of tags per resource – 50
+ For each resource, each tag key must be unique, and each tag key can have only one value.
+ Maximum key length – 128 Unicode characters in UTF-8
+ Maximum value length – 256 Unicode characters in UTF-8
+ The allowed characters for Amazon FSx for Lustre tags are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
+ Tag keys and values are case-sensitive.
+ The `aws:` prefix is reserved for AWS use. If a tag has a tag key with this prefix, then you can't edit or delete the tag's key or value. Tags with the `aws:` prefix do not count against your tags per resource limit.

You can't delete a resource based solely on its tags; you must specify the resource identifier. For example, to delete a file system that you tagged with a tag key called `DeleteMe`, you must use the `DeleteFileSystem` action with the file system resource identifier, such as fs-1234567890abcdef0.

When you tag public or shared resources, the tags you assign are available only to your AWS account; no other AWS account will have access to those tags. For tag-based access control to shared resources, each AWS account must assign its own set of tags to control access to the resource.
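These restrictions are straightforward to check client-side before calling the API. The helper below is a sketch of our own (not an AWS tool) that enforces the length and reserved-prefix rules plus the ASCII subset of the allowed characters; it does not attempt full UTF-8 letter matching.

```
# Return 0 if KEY/VALUE satisfy the basic FSx tag restrictions, 1 otherwise.
valid_tag() {
  key="$1"; value="$2"
  [ "${#key}" -ge 1 ] && [ "${#key}" -le 128 ] || return 1   # key length 1-128
  [ "${#value}" -le 256 ] || return 1                        # value length <= 256
  case "$key" in aws:*) return 1 ;; esac                     # aws: prefix is reserved
  # Reject any character outside the ASCII allowed set.
  if printf '%s\n%s\n' "$key" "$value" | LC_ALL=C grep -Eq '[^A-Za-z0-9 +=._:/@-]'
  then return 1; fi
  return 0
}

valid_tag "Environment" "production" && echo "ok"
```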

## Permissions and tags
<a name="tags-iam"></a>

For more information about the permissions required to tag Amazon FSx resources at creation, see [Grant permission to tag resources during creation](using-tags-fsx.md#supported-iam-actions-tagging). For more information about using tags to restrict access to Amazon FSx resources in IAM policies, see [Using tags to control access to your Amazon FSx resources](using-tags-fsx.md#restrict-fsx-access-tags).

# Amazon FSx for Lustre maintenance windows
<a name="maintenance-windows"></a>

Amazon FSx for Lustre performs routine software patching for the Lustre software it manages. Patching occurs infrequently, typically once every several weeks. The maintenance window is your opportunity to control what day and time of the week this software patching occurs. You choose the maintenance window during file system creation. If you have no time preference, then a 30-minute default window is assigned.

Patching should require only a fraction of your 30-minute maintenance window. During these few minutes, your file system is temporarily unavailable. File operations issued by clients while the file system is unavailable transparently retry and eventually succeed after maintenance is complete. Note that the in-memory cache is erased during maintenance, leading to temporarily higher latencies after maintenance has completed.

FSx for Lustre allows you to adjust your maintenance window as needed to accommodate your workload and operational requirements. You can move your maintenance window as frequently as required, provided that a maintenance window is scheduled at least once every 14 days. If a patch is released and you haven’t scheduled a maintenance window within 14 days, FSx for Lustre will proceed with maintenance on the file system to ensure its security and reliability.

You can use the Amazon FSx Management Console, AWS CLI, AWS API, or one of the AWS SDKs to change the maintenance window for your file systems.

**To change the maintenance window using the console**

1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

1. Choose **File systems** in the navigation pane.

1. Choose the file system that you want to change the maintenance window for. The file system details page appears.

1. Choose the **Maintenance** tab. The maintenance window **Settings** panel appears.

1. Choose **Edit** and enter the new day and time that you want the maintenance window to start.

1. Choose **Save** to save your changes. The new maintenance start time is displayed in the **Settings** panel.

You can change the maintenance window for your file system using the [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html) CLI command. Run the following command, replacing the file system ID with the ID of your file system and the start time with when you want the window to begin. The `WeeklyMaintenanceStartTime` value uses the format `d:HH:MM` in UTC, where `d` is the day of the week from 1 (Monday) through 7 (Sunday).

```
aws fsx update-file-system --file-system-id fs-01234567890123456 --lustre-configuration WeeklyMaintenanceStartTime=1:01:30
```

# Managing Lustre versions
<a name="managing-lustre-version"></a>

FSx for Lustre currently supports multiple long-term support (LTS) Lustre versions released by the Lustre community. Newer LTS versions provide benefits such as performance enhancements, new features, and support for the latest Linux kernel versions for your client instances. You can upgrade your file systems to newer Lustre versions within minutes using the AWS Management Console, AWS CLI, or AWS SDKs.

FSx for Lustre currently supports Lustre LTS versions 2.10, 2.12, and 2.15. You can determine the LTS version of your FSx for Lustre file systems using the AWS Management Console or by using the [describe-file-systems](https://docs.aws.amazon.com/cli/latest/reference/fsx/describe-file-systems.html) AWS CLI command.
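For example, this CLI sketch lists each file system's ID alongside its Lustre version (the `FileSystemTypeVersion` field in the API response):

```
aws fsx describe-file-systems \
    --query "FileSystems[].[FileSystemId,FileSystemTypeVersion]" \
    --output table
```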

Before you perform a Lustre version upgrade, we recommend that you follow the steps described in [Best practices for Lustre version upgrades](#version-upgrade-best-practices).

**Topics**
+ [Best practices for Lustre version upgrades](#version-upgrade-best-practices)
+ [Performing the upgrade](#perform-upgrade)

## Best practices for Lustre version upgrades
<a name="version-upgrade-best-practices"></a>

We recommend following these best practices before upgrading the Lustre version of your FSx for Lustre file system:
+ **Test in a non-production environment:** Test a Lustre version upgrade on a duplicate of your production file system before upgrading your production file system. This ensures a smooth upgrade process for your production workload.
+ **Ensure client compatibility:** Verify that the Linux kernel versions running on your client instances are compatible with the Lustre version you plan to upgrade to. See [Lustre file system and client kernel compatibility](lustre-client-matrix.md) for details.
+ **Back up your data:**
  + For file systems not linked to S3: We recommend that you create an FSx backup before upgrading the Lustre version so that you have a known restore point for your file system. If automatic daily backups are enabled on your file system, Amazon FSx will automatically create a backup of your file system before upgrading.
  + For file systems linked to S3: We recommend ensuring that all changes have been exported to S3 before upgrading. If you have enabled automatic export, check that the [`AgeOfOldestQueuedMessage`](fs-metrics.md#auto-import-export-metrics) AutoExport metric is zero to confirm that all changes have been successfully exported to S3. If you have not enabled automatic export, you can run a manual data repository task (DRT) export to synchronize your file system with the S3 bucket before upgrading.

## Performing the upgrade
<a name="perform-upgrade"></a>

To upgrade your FSx for Lustre file system to a newer version, follow these steps:

1. **Unmount all clients:** Before initiating the upgrade, you must unmount the file system from all client instances accessing your file system. You can verify that all clients are successfully unmounted by using the `ClientConnections` metric on Amazon CloudWatch; this metric should display zero connections. The upgrade process will not proceed if any clients remain connected to the file system.

   You can view the list of client network identifiers (NIDs) connected to the file system in the `.fsx/clientConnections` file stored at the root of your file system. This file is updated every 5 minutes. You can use the `cat` command to display the contents of the file, as in this example:

   ```
   cat /test/.fsx/clientConnections
   ```

1. **Upgrade the Lustre version:** You can upgrade the Lustre version of your FSx for Lustre file system using the Amazon FSx console, the AWS CLI, or the Amazon FSx API. We recommend upgrading your file systems to the latest Lustre version supported by FSx for Lustre.

   **To update the Lustre version of a file system (console)**

   1. Open the Amazon FSx console at [https://console.aws.amazon.com/fsx/](https://console.aws.amazon.com/fsx/).

   1. In the left navigation pane, choose **File systems**. In the **File systems** list, choose the FSx for Lustre file system that you want to update the Lustre version for.

   1. For **Actions**, choose **Update file system Lustre version**. Or, in the **Summary** panel, choose **Update** next to the file system's **Lustre version** field. The **Update file system Lustre version** dialog box appears.

   1. For the **Select a new Lustre version** field, choose a Lustre version. The value you choose must be newer than the current Lustre version.

   1. Choose **Update**.

   **To update the Lustre version of a file system (CLI)**

   To update the Lustre version of an FSx for Lustre file system, use the AWS CLI command [update-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/update-file-system.html). (The equivalent API action is [UpdateFileSystem](https://docs.aws.amazon.com/fsx/latest/APIReference/API_UpdateFileSystem.html).) Set the following parameters:
   + Set `--file-system-id` to the ID of the file system that you are updating.
   + Set `--file-system-type-version` to a newer Lustre version for the file system that you are updating.

   The following example updates the file system's Lustre version from 2.12 to 2.15:

   ```
   aws fsx update-file-system \
       --file-system-id fs-0123456789abcdef0 \
       --file-system-type-version "2.15"
   ```

1. **Mount all clients:** You can monitor the progress of Lustre version updates by using the **Updates** tab in the Amazon FSx console or `describe-file-systems` in the AWS CLI. Once the Lustre version upgrade status shows as `Completed`, you can safely remount the file system on your client instances and resume your workload.
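   From the CLI, the version update surfaces as an administrative action on the file system. A query along these lines (the file system ID is hypothetical) returns the action's status, which reads `COMPLETED` once the upgrade finishes:

   ```
   aws fsx describe-file-systems \
       --file-system-id fs-0123456789abcdef0 \
       --query "FileSystems[0].AdministrativeActions[?AdministrativeActionType=='FILE_SYSTEM_UPDATE'].Status" \
       --output text
   ```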

# Deleting a file system
<a name="delete-file-system"></a>

You can delete an Amazon FSx for Lustre file system using the Amazon FSx console, the AWS CLI, and the Amazon FSx API. Before deleting an FSx for Lustre file system, you should [unmount](unmounting-fs.md) it from every connected Amazon EC2 instance. On S3-linked file systems, to ensure all of your data is written back to S3 before deleting your file system, you can either monitor for the [AgeOfOldestQueuedMessage](fs-metrics.md#auto-import-export-metrics) metric to be zero (if using automatic export) or you can run an [export data repository task](export-data-repo-task-dra.md). If you have automatic export enabled and want to use an export data repository task, you have to disable automatic export before executing the export data repository task.

To delete a file system after unmounting from every Amazon EC2 instance:
+ **Using the console** – Follow the procedure described in [Step 5: Clean up resources](getting-started.md#getting-started-step4).
+ **Using the API or CLI** – Use the [DeleteFileSystem](https://docs.aws.amazon.com/fsx/latest/APIReference/API_DeleteFileSystem.html) API operation or the [delete-file-system](https://docs.aws.amazon.com/cli/latest/reference/fsx/delete-file-system.html) CLI command.
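As a CLI sketch (the file system ID is hypothetical): for persistent file systems, you can also control the final backup through the `--lustre-configuration` parameter, for example keeping the default final backup while tagging it.

```
# Take a final backup (the default for persistent file systems) and tag it;
# set SkipFinalBackup=true to delete without one.
aws fsx delete-file-system \
    --file-system-id fs-0123456789abcdef0 \
    --lustre-configuration "SkipFinalBackup=false,FinalBackupTags=[{Key=Project,Value=archive}]"
```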